As public, private and hybrid clouds gained popularity, the need for standardised cloud management practices became apparent. A few standards have emerged over the years, with many more in the pipeline. Reputable cloud service developers and enablers like Asigra constantly update their software with new tools that help IT managers and CIOs retain control over their data, keep their systems secure, and manage their users efficiently. Here are a few cloud tools that help IT managers in their quest for data control.
The cloud democratises computing. While this is good for business, it creates a few headaches for management. For instance, mobile and remote users can upload or download data from wherever they are, with whatever device they have on hand, without obtaining permission from the IT administrator. The security of the information, the type of applications in use on the connecting device, and so forth can create security problems that the administrator must anticipate and provide for. It follows that there is an urgent need for policies and procedures that enforce access boundaries and user permissions, and for tools that enable the administrator to implement them.
A number of cloud service providers ship software agents with administrative dashboards that give the manager the tools to create and manage users, assign them company-defined rights and permissions to access the network, and control which operations they may perform on the data they access.
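The idea behind such dashboards can be sketched as simple role-based access control. The role names, permissions and function names below are invented for illustration; real products expose similar controls through their own interfaces.

```python
# A minimal sketch of role-based user management, with hypothetical
# company-defined roles and permissions.
ROLES = {
    "admin":  {"read", "write", "delete", "manage_users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

users = {}

def create_user(name, role):
    """Register a user with a company-defined role."""
    if role not in ROLES:
        raise ValueError(f"unknown role: {role}")
    users[name] = role

def is_allowed(name, action):
    """Check whether a user's role grants a given permission."""
    return action in ROLES.get(users.get(name, ""), set())

create_user("alice", "editor")
print(is_allowed("alice", "write"))   # True
print(is_allowed("alice", "delete"))  # False
```

The point is that permissions attach to roles, not to individual users, so the administrator can adjust one role definition and have the change apply to every user holding that role.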
The cloud entrusts data to third-party servers, so IT managers worry about security. The cloud addresses a few of these concerns by provisioning layered data security. Most cloud service providers use third-party (FIPS 140-2) certified cryptographic algorithms to encrypt data. These algorithms, often described as bank grade or military grade, generally use AES-128, AES-192, AES-256 or Blowfish, which have resisted practical attack to date. The symmetric keys are often user-defined private keys that remain secure with the data owner.
Because of this encryption, the service provider has no access to the content of the data store hosted on its cloud servers. Security and availability of data are further strengthened by the institution of "as is" replication and disaster recovery systems, and by guarantees that the information will not be accessed by the service provider or its associates at any time. Using tools provided for the purpose, managers can recover or purge the information held in the cloud stores whenever they wish to terminate the contract.
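The "user-defined key" idea is illustrated below with a standard-library sketch: an AES-256 key is derived from a passphrase the data owner chooses, so neither the passphrase nor the derived key ever needs to reach the provider. The iteration count and passphrase are illustrative assumptions; production systems would pair this with a vetted encryption library.

```python
# Sketch: deriving a user-defined 256-bit symmetric key from a passphrase.
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; dklen=32 yields a 32-byte (256-bit) key
    # suitable for AES-256.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               600_000, dklen=32)

salt = os.urandom(16)      # stored alongside the ciphertext; not secret
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32      # 256 bits
```

Because only the passphrase holder can regenerate the key, data encrypted with it remains opaque to the provider, exactly the property described above.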
Public clouds are regarded with suspicion, whether or not that suspicion is justified. Hence, adoption of the public cloud has been slow. But the change is becoming visible as more and more concerns about the public cloud are addressed, and it assumes its rightful place as a mode of computing that adds value to the business.
What is the value to be obtained from public clouds? It is in direct proportion to the organisation's commitment to managing the cloud provider and employing the cloud solution responsibly and effectively. In other words, the responsibility for the success of the public cloud rests with the organisation, not with the cloud vendor.
If this seems counter-intuitive and contrary to all that you have heard about the cloud, it is nevertheless the truth. Public clouds do decrease costs and do deliver all kinds of benefits to the end user. But they also bring a number of responsibilities:
- IT professionals within the organisation must stay engaged with the cloud and its implementation. They must make the effort to understand the terms and conditions of the contract and enforce any remedies built into it to ensure efficient performance of the contract by the cloud vendor. If public cloud performance is poor, the IT personnel within the organisation are to blame.
- The public cloud is not just about backup and recovery; a whole gamut of activities happens in between. Establishing metrics and monitoring performance is a business imperative for IT managers. Unmonitored public clouds can cause untold difficulties for end users: latency, seek-time issues, or even backup and recovery failures may plague the organisation and make the whole cloud experience unpleasant.
- Availability and security are promises of the cloud vendor, but untested security can be dangerous. IT managers will have to repeatedly test the security systems and run disaster recovery exercises to ensure that everything promised can actually be delivered, at the appropriate time and at the pace required.
- Nothing can be managed without appropriate tools. IT managers need to ensure that the cloud service provides the managers with the right tools for the right tasks. There should be tools for scheduling backups and recoveries. There should be tools for managing users, stores or archives. There should be tools for generating and analysing reports on user activity or system activity. Finally, there should be tools for verifying service level agreements (SLAs) and implementations.
It should be remembered that cloud service providers do not understand your business; they only understand their own. It is up to you to make sure that their tools are used to your benefit.
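One of the SLA-verification tools mentioned above can be sketched very simply: compare recorded backup-job durations against an agreed limit. The job records and the four-hour window here are invented for illustration.

```python
# Hypothetical check of one SLA term: nightly backups must finish
# within an agreed four-hour window.
from datetime import timedelta

SLA_MAX_DURATION = timedelta(hours=4)

backup_jobs = [
    {"name": "mon", "duration": timedelta(hours=3, minutes=20)},
    {"name": "tue", "duration": timedelta(hours=5, minutes=5)},
    {"name": "wed", "duration": timedelta(hours=2, minutes=45)},
]

violations = [job["name"] for job in backup_jobs
              if job["duration"] > SLA_MAX_DURATION]
print(violations)  # ['tue']
```

Run against real job logs, a report like this gives the manager concrete evidence when invoking the remedies written into the contract.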
The Nirvanix shutdown created panic. What if your cloud service provider follows suit? Is it safe to transfer data to the cloud? How does one recover huge amounts of data entrusted to a cloud service if the provider suddenly decides to close shop? Smart cloud adopters are never fazed by these shutdowns. They have learnt the tricks of the trade and know how to hit the ground running even when their cloud service provider goes out of business. What is their secret?
They do not wait for bandwidth constraints to jam up their systems and force them to give up on their data. Most cloud vendors can back up your data to removable media directly from their servers. This data is copied in encrypted form and can be shipped back to you. Ask your cloud vendor whether such a facility is available. Have your data shipped back on removable disk drives at frequent intervals for secure storage in your own vaults.
Alternatively, set up a remote server with a high-speed Internet connection and ample bandwidth for transferring your data at frequent intervals during the lifetime of your contract. Many service providers permit simultaneous streaming of information to local backup repositories, even while the data is being streamed to the remote cloud server owned by the provider.
A few service providers offer local backup appliances or devices (as part of their package) that connect to your network and download, replicate and mirror the information streamed from various locations to the remote server in the cloud. This appliance remains with you and will be available even if your service provider goes out of business.
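The simultaneous-streaming idea behind these appliances can be sketched as a dual write: every chunk read from the source is written to a local repository and to a (here simulated) remote target in the same pass, so a complete local copy survives a vendor shutdown. The in-memory buffers stand in for real storage endpoints.

```python
# Sketch of dual-write backup streaming: one read, two writes.
import io

def dual_write(source, local, remote, chunk_size=64 * 1024):
    """Copy source to both targets chunk by chunk in a single pass."""
    while chunk := source.read(chunk_size):
        local.write(chunk)    # on-premises repository
        remote.write(chunk)   # cloud provider's server (simulated)

src = io.BytesIO(b"backup payload " * 1000)
local, remote = io.BytesIO(), io.BytesIO()
dual_write(src, local, remote)
assert local.getvalue() == remote.getvalue() == src.getvalue()
```

Because the local copy is produced at the same time as the cloud copy, there is no large catch-up transfer to perform if the provider disappears.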
Smart users know where their data is stored and how to get their vendor to purge it before a shutdown. First, they sign up for services that let them encrypt the data with a user-defined key that remains in their custody. Un-purged data then poses no danger: it cannot be decrypted without the key, and if accessed it will be unintelligible. Second, they study the service level agreement (SLA) in detail and insist on the inclusion of purge clauses. They know exactly how their data is shared or used by the cloud vendor and retain control over it at all times. They may even insist on the facility to delete data, from their end, on all or any of the servers maintained by the vendor, and go in and delete the data themselves.
Cloud computing must necessarily take cognisance of virtualisation. Today, every organisation has virtual machines that need to be backed up and recovered. But how are these virtual machines different from physical machines? A basic understanding of the concepts will go a long way towards easing the process of backing up or recovering organisational data to or from the cloud.
What is Virtualisation?
Virtualisation has been around for many years. It was first used in the 1960s and remained a feature of mainframes for a long time. Only in the last few years, with the emergence of desktop virtualisation, network virtualisation and later the cloud, has virtualisation entered mainstream computing. A close look at the term reveals that virtualisation is an umbrella concept, used in different contexts to refer to different things in a production environment.
Desktop virtualisation is the process of running multiple virtual desktops on a single PC. Each virtual desktop will contain its own operating system and applications and can be configured to load up on appropriate user authentication. Application virtualisation is the process of separating the application from the operating system and running it within a “sandbox” that can reside on a remote server or stream in from a remote server.
Server virtualisation allows many virtual machines to run on a single server, sharing CPU, memory, storage and networking resources. Storage virtualisation is the process of grouping physical storage together using software, so that all the storage devices appear as a single drive. Note that virtual storage is not the same as a virtual machine: the virtual machine is a set of files that run in the server box to create an operating environment, while virtual storage runs in memory on the storage controller, using software to create the impression of a single storage repository. The files in storage may or may not all reside on a single server.
Cloud vendors exploit server and storage virtualisation to make their services scalable and redundant. Network virtualisation, finally, is the process of using software to perform network functions, achieved by decoupling virtual networks from the underlying network hardware and using software switches to move data around.
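The storage virtualisation idea above can be made concrete with a toy sketch: several fixed-size "physical" stores are presented as one contiguous logical drive by mapping each logical offset onto the right backing store. The class and its interface are invented purely for illustration.

```python
# Toy illustration of storage virtualisation: many physical stores,
# one logical address space.
class VirtualDrive:
    def __init__(self, stores):
        self.stores = stores                      # bytearrays standing in for devices
        self.size = sum(len(s) for s in stores)   # total logical capacity

    def _locate(self, offset):
        """Map a logical offset to (backing store, physical offset)."""
        for store in self.stores:
            if offset < len(store):
                return store, offset
            offset -= len(store)
        raise IndexError("offset beyond virtual drive")

    def write(self, offset, data):
        for i, byte in enumerate(data):
            store, pos = self._locate(offset + i)
            store[pos] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            store, pos = self._locate(offset + i)
            out.append(store[pos])
        return bytes(out)

# A write that spans two "devices" is transparent to the caller.
drive = VirtualDrive([bytearray(4), bytearray(4)])
drive.write(2, b"abcd")
print(drive.read(2, 4))  # b'abcd'
```

The caller never learns which device holds which byte, which is exactly what lets a vendor swap, grow or replicate the backing stores without disturbing clients.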
The cloud is an enabler. All kinds of workflows can be streamlined, and productivity can be enhanced. Let us look at an example of how cloud computing can help hospitals improve information flows and speed up health care operations.
Hospitals have unique issues. The number of beds available for patients is limited, and waiting lists are long. Most patients are cared for as outpatients, resulting in huge daily inflows, and the emergency and behavioural health departments are flooded with patients requiring assistance. In short, there is a high volume of activity that must be handled efficiently and effectively, with the least inconvenience to patients. The staff encounter a number of challenges, and the hurdles that impede workflow have to be addressed. How can cloud computing help?
Traditional computing systems force the staff to rely on a number of applications that run on PCs, hosted on site or off site. Interoperability challenges abound, and productivity suffers from the repeated log-on and log-off operations that slow down access. Computing and information access cause long delays, and medical staff end up grappling with technical issues rather than working with patients and their treatments.
Cloud computing systems offer sophisticated software applications that ease the work of hospitals and healthcare units. Cloud-based systems separate the underlying hardware infrastructure layer from the application layer and facilitate the creation of a single user login for all the applications deployed from the cloud server.
The desktop is separated from the operating system, and the management of technology is abstracted to the cloud. This creates a dynamic composition of user workspaces free of technical problems. Interoperability of applications becomes possible, and this improves overall clinical productivity. The anywhere, anytime accessibility of the cloud suits the fast-paced environment of hospitals and has a direct impact on how staff can work with patients. Many of the challenges they face can be eliminated, and patient care can be the focus of all healthcare workers.
The felt advantages of the cloud are:
- High speed access to information;
- Abstraction of underlying infrastructure from the application layer;
- Centralisation of technology management;
- Greater interoperability;
- Personalised user workspaces;
- Smooth workflows;
- Improved patient care.
Not being prepared for disaster is not an option. Total environmental duplication is expensive, and attempting to recover only after an incident is tempting fate. These traditional approaches to disaster recovery create a great deal of uncertainty, and even stress, for the organisation.
The growing sophistication of the cloud holds out a promise for backup and recovery of both physical and virtual machines. Cloud backup vendors are “disaster aware” and make elaborate provisions for recovery of their customer data. The disaster recovery efforts indirectly benefit customers who sign up for cloud based backup and recovery accounts.
At the core of the disaster recovery best practices instituted by cloud vendors is data replication. Replication helps the vendor rotate backed-up customer data offsite. The replication uses high-speed connections to stream the data and doubly protect it. The replication servers are geographically separated from one another, to protect the information against natural disasters that might impact any single data centre. The vendor may create a hot site and a disaster recovery site with failover provisioning to ensure that customers always have access to their data, even when the primary server experiences a shutdown.
However, it should be noted that virtual machine replication is a complex task, and not all cloud vendors have the technology required. Traditionally, the data contained in the virtual machine streams through the primary server before it is backed up to the replication server. When the virtual machine has to be restored, it is brought online by restoring the virtual disk to production. All this can take a lot of time, and time is a scarce commodity during a disaster.
Improved cloud replication management technologies allow cloud vendors to stream data from the virtual machine to both the primary and secondary servers directly. This enables instant recovery of the virtual machine (which exists, production-ready, on the secondary server) even when the primary server experiences an outage. Disaster recovery plans can be implemented instantly, and Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) can be tightly coordinated. Check with your cloud vendor on this point.
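The RPO half of that coordination can be sketched in a few lines: the worst-case data loss equals the age of the last successful replication at the moment of failure. The timestamps and the 15-minute target below are invented for illustration.

```python
# Sketch: checking whether an achieved Recovery Point Objective was met.
from datetime import datetime, timedelta

RPO_TARGET = timedelta(minutes=15)   # hypothetical contractual target

def rpo_met(last_replication: datetime, failure_time: datetime) -> bool:
    """Worst-case data loss is the gap between the last replication
    and the failure; the RPO is met if that gap is within the target."""
    return failure_time - last_replication <= RPO_TARGET

last_ok = datetime(2024, 1, 1, 12, 0)
print(rpo_met(last_ok, datetime(2024, 1, 1, 12, 10)))  # True  (10 min of loss)
print(rpo_met(last_ok, datetime(2024, 1, 1, 12, 30)))  # False (30 min exceeds target)
```

Direct streaming to the secondary server shrinks that replication gap, which is precisely why it tightens the achievable RPO.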
The integration of VM replication with the data protection solutions offered by your vendor is a bonus. It will empower your organisation and keep you prepared for all kinds of disasters.