Virtual machine sprawl is a more common problem than is generally believed. Backing up these VMs can become an expensive and time-consuming proposition. It is important to take a step back and view the entire process of VM creation and backup / recovery in a holistic manner. There is a need to rationalise VM backup for cloud computing.
Uncontrolled VM sprawl can result in haphazard use of backup resources requisitioned from cloud service providers. It will also be a burden on the IT function. Therefore, decisions will have to be taken at the management level to institute an organisation-wide policy that centralises the creation and deployment of virtual machines. A quick survey of existing VMs, and the possibility of consolidating multiple VMs, should be given serious consideration. Unnecessary and rogue VMs should be deleted before the cloud backup process is initiated. This will help the organisation avoid backing up stealth VMs and indiscriminately exhausting computing resources.
Virtual machine rationalisation policies must include strict procedures for the creation and deployment of virtual machines by anyone in the organisation, with no exceptions. There should be strict procedures preventing the implementation of production systems without appropriate approvals being sought and given. Backup needs should be clearly defined by the VM and data owners. The defined need should be subjected to an impact analysis, and there should be clarity on the consequences of backing up, or not backing up, the VM.
Rationalisation of VMs for backup and recovery in the cloud will only be possible if IT administrators are equipped with the right tools to evaluate the virtual environment and arrive at the right solutions to the problem of VM sprawl. Monitoring tools should be made available to them to ensure that the environment remains clean and that rogue VMs are not introduced at any point without appropriate approvals. All these efforts will help IT administrators keep tabs on raw and provisioned storage resources, and ensure that resources are not wasted by anyone within the organisation deploying unauthorised VMs. This will protect the organisation and help in the quick recovery of digital assets in the event of a disaster.
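As an illustration of the kind of monitoring described above, the sketch below flags VMs that were never centrally approved. The VM names, the inventory format, and the approved registry are all hypothetical; a real tool would query the hypervisor or the cloud provider's API for its inventory.

```python
# Hypothetical approved registry, maintained by the central authority
APPROVED_VMS = {"web-01", "db-01", "app-01"}

def find_rogue_vms(inventory):
    """Return names of VMs found in the environment but never approved."""
    return sorted(vm["name"] for vm in inventory if vm["name"] not in APPROVED_VMS)

# Illustrative inventory as it might be reported by a monitoring tool
inventory = [
    {"name": "web-01", "owner": "ops"},
    {"name": "test-weekend", "owner": "unknown"},  # created on the fly, never approved
    {"name": "db-01", "owner": "dba"},
]

print(find_rogue_vms(inventory))  # ['test-weekend']
```

Rogue VMs surfaced this way can then be reviewed and deleted before the backup process is initiated, so they never consume backup resources.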
Virtual machine protection is one of the most challenging tasks in cloud computing. More than backing up the data, it is recovering the data that needs focus. Defining a few metrics upfront will help organisations back up and recover their data efficiently.
The most important metric in virtual machine recovery is the Recovery Time Objective (RTO). How quickly must the virtual machine be recovered and made operational in the event of disaster or human error? This metric has to be defined by the business managers, not the IT personnel. The answer will vary with the kind of VM that needs to be recovered. If the VM is mission-critical, the RTO will be very tight, and the organisation may need to be able to switch to a hot site or disaster recovery site at the point of failure. It will have to select a continuous backup option with continuous mirroring / replication of data to an alternate site for high availability. If the VM is non-critical, the organisation can afford to go slow on the recovery and wait for the restoration of the primary server within a specified time frame.
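The mapping from a business-defined RTO to a backup option can be captured as a simple rule, sketched below. The minute thresholds are illustrative assumptions, not standards; each organisation's business managers would set their own.

```python
def backup_strategy(rto_minutes):
    """Map a business-defined RTO to a backup option (thresholds are illustrative)."""
    if rto_minutes <= 15:
        # mission-critical: must fail over almost immediately
        return "continuous replication to a hot site"
    if rto_minutes <= 240:
        return "frequent snapshots with a warm standby"
    # non-critical: waiting for the primary server is acceptable
    return "scheduled backup, restore the primary server"

print(backup_strategy(10))    # continuous replication to a hot site
print(backup_strategy(1440))  # scheduled backup, restore the primary server
```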
Closely linked with the RTO is the Recovery Point Objective (RPO). The RPO specifies the acceptable data loss window: can the organisation afford to lose a few minutes, or a few hours, of data inputs? Data entered at the point of failure may not have been saved and may not be available for recovery. If the organisation cannot afford to lose even a few minutes of data entry, it should select a continuous data backup system. If a few hours of data loss does not make much of a difference, the organisation can afford to set up scheduled backups.
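The RPO test itself is simple arithmetic: in the worst case, a failure strikes just before the next scheduled backup runs, so the potential data loss equals the full interval between backups. A minimal sketch:

```python
def meets_rpo(backup_interval_min, rpo_min):
    """Worst case, failure occurs just before the next backup,
    so the data loss window equals the whole backup interval."""
    return backup_interval_min <= rpo_min

print(meets_rpo(60, 240))    # True: hourly backups satisfy a four-hour RPO
print(meets_rpo(1440, 240))  # False: daily backups risk losing a day's input
```

An RPO of zero cannot be met by any scheduled interval, which is exactly why a continuous data backup system becomes the only option in that case.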
An unfortunate aspect of virtual machine deployment is the potential for virtual machine sprawl. Because virtual machines can be created easily on the fly, no additional hardware or software needs to be requisitioned. The resource impact of these rogue virtual machines can be immense, and organisations that have this problem on their hands will have to rationalise their VM deployments or provision for the time required to back up and recover these machines.
It should be noted at this point that there may be several adjunct systems that need to be recovered along with the VM. The Recovery Time Objective and Recovery Point Objective defined by the business must take into consideration the time required for the recovery of these systems in addition to the recovery of the VM.
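A minimal sketch of this point, assuming for simplicity that the adjunct systems are restored one after another following the VM (a parallel restore would change the arithmetic):

```python
def total_recovery_minutes(vm_minutes, adjunct_minutes):
    """Time to restore the VM plus each dependent system, in sequence."""
    return vm_minutes + sum(adjunct_minutes)

# e.g. 30 min for the VM, then 45 min for a database and 15 min for an
# authentication service (all figures illustrative)
print(total_recovery_minutes(30, [45, 15]))  # 90 - the RTO must cover all of it
```

An RTO set by looking at the VM alone (30 minutes here) would be missed by a wide margin once the adjunct systems are counted.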
Doomsayers will say what they must, but you need to prepare for the worst and plan to deliver the best of the cloud to your organisation. It is true that there are risks attached to cloud computing. But these risks are controllable if you know what they are and how they need to be handled. Let us examine some of the risks and understand how they can be mitigated.
The use of the Internet as the network makes cloud computing vulnerable. Several issues are often highlighted in this context. First, the APIs used by cloud vendors are not checked for vulnerabilities by cloud users, which creates security problems. Second, most cloud solutions are multi-tenant solutions, and vulnerabilities may be introduced at the software level, leaving user content open to attack. Third, cloud computing may create several compliance risks, whatever the vendor may claim. Finally, the proliferation of cloud subscriptions can complicate budgeting and IT spending and make it difficult to recognise patterns of spending, given that cloud billing is usage-driven.
Take a close look at the statements. You will realise that each of these problems can be effectively countered with the right tools. It is obvious that there are a number of measures that can be taken to ensure that any transactions conducted over the Internet are inviolable and cloud subscriptions do not become uncontrollable. All it requires is a little bit of common sense, tons of enforceable discipline, and plenty of planning. This is the most boring part of cloud migration, but an effort that will yield huge dividends to any organisation in the long run.
Companies embarking on their cloud venture must lay down clear ground rules. A cloud purchase policy is a good starting point. The policy must stipulate that all cloud subscriptions be authorised by a central authority, and that branches or mobile users cannot subscribe to cloud vendors of their choice. If one centralised authority does all the cloud buying, it will be possible to have a complete and holistic picture of cloud spending.
Further, the centralisation of purchases will facilitate any other risk mitigation activity that the organisation may wish to undertake. For instance, the IT Administrator can examine in some detail the software deployments offered by the cloud vendor. Any multi-tenancy environments can be vetted before they can be accepted. The compliance offerings can be tested before data is transferred to the cloud. The encryption protocols offered can be evaluated for impregnability of the information being transmitted over the Internet and stored in the cloud repository. It will be a good idea to check on the disaster recovery functionalities before the final commitment is made.
The bottom line is: you are responsible for the security of your cloud. Forget what the doomsayers have to say!
If you are migrating to the cloud, make sure your processes have been tuned up and made intelligent. Intelligent processes can make your computing experience pleasant and meaningful. They can bring significant benefits to your business. It will be possible to measure your performance and even attribute numerical values to the less tangible effects of cloud computing on your business.
The greatest contribution of the cloud is agility. The cloud uses agile tools, such as Document Management Systems (DMS) and Business Process Management Systems (BPMS), to automate processes. These tools allow organisations to input rules and procedures into the system and align the cloud with business policies on the fly. Process logs that record data as processes execute allow organisations to work with predictive and process analytical tools to understand the glitches and problems that retard the smooth execution of their processes. The results can then be used to improve the processes and reduce the latencies that may occur due to process slowdowns or network problems. Analytical outcomes can also be used to predict future “what if” scenarios and assist in enterprise decision making.
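A minimal sketch of mining such a process log to locate the step that retards execution the most. The step names and durations are hypothetical; a real BPMS would export far richer records.

```python
from collections import defaultdict

# Illustrative process log: (step name, execution time in seconds) records
process_log = [
    ("submit", 1.2), ("approve", 8.5), ("submit", 1.0),
    ("approve", 9.1), ("ship", 2.3),
]

durations = defaultdict(list)
for step, seconds in process_log:
    durations[step].append(seconds)

# the step with the highest average duration is the likely bottleneck
averages = {step: sum(v) / len(v) for step, v in durations.items()}
bottleneck = max(averages, key=averages.get)
print(bottleneck)  # approve
```

Once the slow step is identified, the organisation can decide whether the cause is the process design itself or a network latency, and improve accordingly.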
The knowledge worker regards the cloud as a boon. The anywhere, anytime, any-device access made possible by the cloud allows specialists to apply their special skills and knowledge in decision making from wherever they are. The intelligent process will allow them to see all past and current information in a single window and make informed decisions about current or future business actions at once. Since all information input is policy driven, the knowledge worker receives information in context and is empowered to make the best decision possible.
All this results in greater customer satisfaction. Greater customer satisfaction then translates into greater profitability for the business.
This brings us to the most important question: How can you make your processes more intelligent for cloud computing? Here are a few tips for starters:
1. Begin by modelling your individual processes;
2. Link all processes together into the value chain;
3. Add instrumentation to processes to collect business events information wherever possible;
4. Create analytical visualisations at runtime so that actual process activity can be observed and recorded accurately for future analysis;
5. Implement business rules based on business policies and observe how the rules impact the process flow. Do they retard the flow? Are process latencies increased by business rules?
6. Ensure process security and observe how security impacts process flow;
7. Improve process flows with process analytics and predictive analysis.
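Tip 3 above, instrumentation, can be sketched in a few lines. The decorator below records a business event (step name and duration) into an in-memory log; a real system would ship these events to a process log store. All names are illustrative.

```python
import time

EVENT_LOG = []  # stand-in for a process log store

def instrument(step_name):
    """Record a business event (step name, duration) each time the step runs."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            EVENT_LOG.append({"step": step_name,
                              "duration_s": time.perf_counter() - start})
            return result
        return inner
    return wrap

@instrument("approve_order")  # hypothetical process step
def approve_order(order_id):
    return f"order {order_id} approved"

approve_order(42)
print(EVENT_LOG[0]["step"])  # approve_order
```

Events collected this way feed directly into the runtime visualisations and analytics of tips 4 and 7, without the process code itself having to know about the logging.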
Predictive process analytics assumes greater importance in cloud computing scenarios. This is especially so, as processes are executed over the Internet and they are accessed by users from multiple locations with varying levels of bandwidth and connectivity.
It is important to find answers to all of the following questions:
• How well do these processes execute?
• What are the problems being experienced by users?
• What glitches can we expect to face in the future with these same processes in place?
• What needs to be done to ensure that the processes execute as desired?
Predictive process analysis uses statistical models to analyse how processes are executed, and uses simulations to predict how a process will behave under different constraints. Since it draws on historical data, it is necessarily a post-mortem exercise. The constraints used during the simulation are locational, time, and process constraints derived from process data available with the organisation and recorded during the execution of the process.
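A minimal Monte Carlo sketch of such a simulation, assuming (purely for illustration) that historical process durations are roughly normal and that a network constraint adds a fixed delay. The figures and the deadline are hypothetical.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def miss_rate(runs=10_000, network_delay=0.0, deadline=40.0):
    """Share of simulated runs that miss the deadline (all times in minutes)."""
    misses = 0
    for _ in range(runs):
        # historical data suggests a mean of 30 min, spread of 5 min (assumed)
        duration = random.gauss(30, 5) + network_delay
        if duration > deadline:
            misses += 1
    return misses / runs

baseline = miss_rate()                    # the process as it runs today
congested = miss_rate(network_delay=8)    # "what if" extra network latency
print(baseline < congested)               # True: added latency raises the miss rate
```

Running the same model with different constraints is exactly the "what if" exercise: the organisation can see how the miss rate grows before the latency ever occurs in production.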
Process analytics helps organisations improve process designs and reduce the latency of processes as they execute over the Internet. For instance, if deadlines are being missed due to the execution of standard processes today, predictive analytics will predict possible future misses given the current speed of execution. This is because predictive process analytics can use ‘what if’ scenarios and future-state scenarios to simulate business situations that may occur at some future date. Organisations will be able to predict whether they need to change or improve their processes to meet future demands, or fine-tune the current process to compensate for any network latencies occurring due to variations in the Internet connectivity experienced by their mobile users or branch offices.
Are predictive process analytical tools included in cloud service offerings today? Not yet. The good news is that they will be. Cloud services that are the first to provide their users with predictive process analysis tools will be able to corner a large share of the market. They will be able to distinguish themselves from the competition and help their users create processes that are cloud-ready and resource-efficient. The future of cloud computing lies in being able to predict how a process will perform when it is accessed from multiple locations and time zones by multiple users working with diverse devices.
Complete automation is a myth. Absolute agility is a dream. But the cloud makes it possible to automate those routine processes and activities that would otherwise consume a considerable amount of time and deprive the organisation of precious time that could be spent innovating, communicating, and building up the business.
The first step towards smarter computing is to spell out your rules and policies. These are the triggers and frames for intelligent process definitions. For instance, if you want only a certain section of your employees to have access to a specified set of data, it is important to have a user management policy. Each employee who can be authorised for access must be given a user ID and password that allow access to the data set. The authentication server database must contain the information required for authenticating such employees and permitting them to access the information. Any other person attempting to access the information will then be automatically rejected and denied access to the data set. Once the policy is in place and the rules of access have been spelled out, the system will take care of the process intelligently.
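The policy-driven access check described above can be sketched as a simple lookup. The roles, users, and data-set names are hypothetical; in practice the mapping would live in the authentication server's database rather than in code.

```python
# Hypothetical policy: which roles may read each data set
POLICY = {"payroll_data": {"hr_manager", "finance_lead"}}

# Hypothetical authenticated users: user ID -> role from the auth server
USERS = {"alice": "hr_manager", "bob": "engineer"}

def can_access(user_id, data_set):
    """Grant access only if the user's role appears in the policy for the data set."""
    role = USERS.get(user_id)
    return role in POLICY.get(data_set, set())

print(can_access("alice", "payroll_data"))  # True
print(can_access("bob", "payroll_data"))    # False - automatically denied
```

Note that an unknown user or an unlisted data set falls through to denial by default, which is the behaviour the policy demands: anyone not explicitly authorised is rejected.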
The cloud allows the enmeshing of heterogeneous systems into a single system to increase enterprise reach and improve the agility of the business. This may involve the transfer of data and information between these systems across time zones over the Internet. Security during the data transfer, and security at the point of data use, become major concerns. Cloud service providers use encryption and user management protocols in innovative ways to ensure the security of the information passing through the network. Data is encrypted at source and remains encrypted at rest. Only authorised users, authenticated by the authentication server, are given access to decrypted information. All others attempting to listen in will be unable to access the decrypted information in any manner. Attempts to listen in also generate alerts that can be tracked to the source.
Organisations that have migrated to the cloud can let go of their tight hold on the amount of server / storage resources consumed by individual users. Users will consume only as many resources as they need for the present. The scalability of the cloud precludes the need to provision for and hoard resources against possible future needs. Moreover, users cannot store duplicate pieces of information and indiscriminately consume space. The backup and recovery software automatically detects duplicate pieces of information and eliminates them during the data transfer to storage repositories.
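Duplicate detection of this kind is typically done by content hashing: two blobs with the same digest are treated as the same data, so only one copy needs to travel to the repository. A minimal sketch of whole-file deduplication (real products often deduplicate at the block level):

```python
import hashlib

def deduplicate(blobs):
    """Keep one copy of each unique blob, identified by its SHA-256 digest."""
    seen, unique = set(), []
    for blob in blobs:
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(blob)
    return unique

files = [b"report-q1", b"report-q1", b"report-q2"]  # a duplicate upload
print(len(deduplicate(files)))  # 2 - the duplicate never reaches storage
```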
Interesting? It seems smart! Smart organisations get smarter with cloud computing!