Tag Archives: Virtualisation

VMware for Cloud Computing

Continuous availability is the promise of the cloud. Technology refreshes without downtime are part of that promise. One of the factors that enables cloud services to deliver on it is the use of VMware as part of their service offerings.

VMware offers a number of products for cloud computing. VMware vSphere enables scalable server consolidation and functions as a high-performance virtualisation layer.

vSphere abstracts server hardware resources and makes them sharable across virtual machines. It consolidates workloads, automates business continuity, and facilitates centralised management of all applications.

VMware vCenter Site Recovery Manager is a disaster recovery solution for multi-site environments. The Distributed Resource Scheduler (DRS) aligns computing resources with business needs, orchestrates load balancing across host machines, and optimises power consumption during peaks and troughs in demand.
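
The idea behind DRS-style load balancing can be illustrated with a toy placement routine, a simplified sketch rather than the actual DRS algorithm; the host names and CPU demands below are purely illustrative:

```python
# Toy sketch of DRS-style placement: each new workload is assigned to the
# currently least-loaded host, keeping CPU demand balanced across the cluster.
# (Real DRS also weighs memory, affinity rules, and migration cost.)

def place_workloads(hosts, demands):
    """Assign each workload (by CPU demand) to the least-loaded host so far."""
    load = {h: 0 for h in hosts}
    placement = {}
    for vm, demand in demands.items():
        target = min(load, key=load.get)   # least-loaded host wins
        placement[vm] = target
        load[target] += demand
    return placement, load

if __name__ == "__main__":
    hosts = ["esx-01", "esx-02", "esx-03"]
    demands = {"web": 4, "db": 8, "cache": 2, "batch": 6}
    placement, load = place_workloads(hosts, demands)
    print(placement)
    print(load)
```

Even this greedy sketch keeps the busiest and idlest hosts within one workload's demand of each other, which is the essence of what the scheduler automates continuously.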

vMotion eliminates the need for planned downtime by allowing live migration of running virtual machines between physical hosts.
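
Live migration is commonly built on a pre-copy technique: memory pages are copied while the VM keeps running, pages dirtied during each round are re-copied, and only a small final set is moved during a brief pause. The sketch below simulates that convergence under an assumed constant dirtying fraction; it is a conceptual model, not VMware's implementation:

```python
# Toy simulation of pre-copy live migration: each round copies all
# outstanding memory pages, the running VM re-dirties a fraction of them,
# and the loop stops once the remainder is small enough for a short pause.

def precopy_rounds(total_pages, dirty_fraction, stop_threshold):
    """Return (number of copy rounds, pages left for the final pause)."""
    remaining = total_pages
    rounds = 0
    while remaining > stop_threshold:
        rounds += 1
        # Copy everything outstanding; a fraction is dirtied meanwhile.
        remaining = int(remaining * dirty_fraction)
    return rounds, remaining
```

With a million pages and a 10% dirtying rate, the dirty set shrinks geometrically, which is why the VM's final pause can be short enough to go unnoticed.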

Hadoop workloads can be run on vSphere to enhance the utilisation, reliability, and agility of the data centre.

VMware delivers greater control over the network to the end user. It allows users to define priority access to network resources in line with business policies. Network provisioning can be centralised, and administration and monitoring of the data centre can be achieved through network aggregation.

High availability is the foundation on which VMware's promises are made. Yet VMware does not confront the IT administrator with the complexities of traditional clustering; it builds in fault tolerance that promises zero data loss in the event of server failure, and it automatically detects and recovers systems even when the operating system fails. Agentless disk backup with data de-duplication reduces the disk space consumed and eliminates any third-party costs users might otherwise incur for data replication.
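
The space savings from de-duplication come from storing each unique chunk of data only once, keyed by its content hash. The following is a minimal sketch of that idea; the chunk size and the use of SHA-256 are illustrative choices, not details of VMware's backup product:

```python
import hashlib

# Toy sketch of backup de-duplication: data is split into fixed-size
# chunks, each unique chunk is stored once under its hash, and every
# backup keeps only a "recipe" of chunk hashes for reassembly.

CHUNK = 4096

def dedup_store(blobs):
    """Store named blobs; return (chunk store, per-blob recipes)."""
    store, recipes = {}, {}
    for name, data in blobs.items():
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # identical chunks stored once
            recipe.append(digest)
        recipes[name] = recipe
    return store, recipes

def restore(store, recipe):
    """Reassemble a blob from its recipe of chunk hashes."""
    return b"".join(store[d] for d in recipe)
```

Because successive backups of the same machine share most of their chunks, each additional backup adds only the chunks that actually changed.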

VMware was designed to automate common tasks. The technology allows users to deploy and patch vSphere hosts in minutes and to create and configure host profiles instantly. The Update Manager performs most tasks non-intrusively in the background.

Interestingly, security is not compromised even as performance is enhanced by eliminating the anti-virus footprint on each virtual machine. Security processing is offloaded to a virtual appliance via vShield Endpoint, which integrates with anti-virus solutions from partner organisations. vShield Endpoint is designed to eliminate anti-virus storms, implement end-user access policies, and manage custom access configurations while facilitating compliance with existing legal mandates.

Actionable Network Intelligence for Cloud Computing — The Need of the Hour

What are the factors that are forcing the direction of cloud and network technology development?

First, there is explosive growth in the mobile and computing device market. Enterprises are being forced to adopt BYOD (bring your own device) policies. The outcome is variety in the types of devices used to connect to the enterprise network, and a demand for an intelligent network management system that ensures reliability, security, and availability around the clock. In other words, there is an urgent and overwhelming need to create network “connections” that are thought-through constructs, not a set of pipes put together anyhow. The protocols must address core issues, recognising the complex and dynamic nature of the relationship between the network and the devices connecting to it. Network management must keep pace with these developments and integrate IP address management to provide the necessary network intelligence and control.

Second, the infrastructure is a moving target. Virtualisation has resulted in the continuous movement of virtual environments between data centres and networks. As a result, there is an overwhelming need to keep track of where virtual servers and devices are hosted at any given point in time and how they can be accessed. Implementing an automated network infrastructure is therefore a business imperative. Businesses can no longer afford to be bogged down by IP address conflicts, network configuration errors, or a lack of visibility into address usage. IP address management will have to be integrated with self-service portals to prevent provisioning bottlenecks for virtual and physical devices in the cloud.
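
The conflicts and visibility problems described above are exactly what automated IP address management (IPAM) tracks centrally. A minimal sketch using Python's standard `ipaddress` module follows; the subnet and device names are illustrative, and real IPAM products (and DHCP) add leases, expiry, and audit trails:

```python
import ipaddress

# Minimal IPAM sketch: allocations are recorded in one place, so a
# request for an address that is already in use is refused instead of
# silently creating a conflict on the network.

class AddressPool:
    def __init__(self, cidr):
        self.network = ipaddress.ip_network(cidr)
        self.hosts = self.network.hosts()   # iterator over usable addresses
        self.leases = {}                    # address -> device name

    def allocate(self, device):
        """Hand out the next free address and record who holds it."""
        addr = next(self.hosts)
        self.leases[addr] = device
        return addr

    def claim(self, device, addr):
        """Request a specific address; refuse it if already leased."""
        addr = ipaddress.ip_address(addr)
        if addr in self.leases:
            raise ValueError(f"conflict: {addr} already leased to {self.leases[addr]}")
        self.leases[addr] = device
        return addr
```

A self-service portal sitting in front of such a pool is what lets virtual machines be provisioned without manual address bookkeeping.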

It is no secret that connectivity is key to business competitiveness in the modern world. The network must provide the actionable intelligence that businesses need to survive and grow in an increasingly global world battered by big data and an unquenchable hunger for information. Businesses that are slow to adapt are doomed to fade away. Common sense dictates that enterprise IT system developers focus their energies on solutions that connect system resources, mobile devices, applications, virtual environments, and clouds in smarter ways, and infuse it all with a unified sense of purpose. The need is for an integrated IT system that secures mobile devices, resolves address management issues, and encourages automation and self-service, all while preserving scalability, reliability, and security, reducing costs, enhancing delivery, and optimising the user experience.

Get Yourself Scalable Storage!

If your data is burgeoning and your volumes are becoming unmanageable, it is time to get yourself some scalable storage. Virtualisation, no doubt, is a first step, but it is just that: the first step. You need to take the logical next step and move to the cloud.

The cloud is designed to handle fluctuating workloads. The peaks and troughs in your data flow need no longer bother you: you can scale your storage up when data volumes peak, scale it down when they dip, and pay only for what you use. Pause to consider the implications and you will appreciate the flexibility this brings. Legacy systems never allowed you this luxury. They were designed for steady workloads, so you had to spend hours anticipating peaks and troughs and provisioning for them, and you had to maintain redundant resources to avoid finding yourself short when a peak arrived.
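
The economic difference is easy to see in a back-of-the-envelope comparison: pay-per-use billing charges for the capacity actually consumed each day, while peak provisioning pays for the worst day every day. The price and the daily volumes below are invented for illustration:

```python
# Toy cost comparison: elastic (pay-per-use) storage versus legacy
# capacity provisioned for the peak. One flat illustrative price.

PRICE_PER_TB_DAY = 1.0

def elastic_cost(daily_tb):
    """Pay only for the capacity actually used each day."""
    return sum(v * PRICE_PER_TB_DAY for v in daily_tb)

def provisioned_cost(daily_tb):
    """Legacy model: capacity sized for the peak, paid for every day."""
    return max(daily_tb) * PRICE_PER_TB_DAY * len(daily_tb)

if __name__ == "__main__":
    week = [2, 3, 10, 4, 2, 2, 3]   # TB stored each day, with one spike
    print(elastic_cost(week), provisioned_cost(week))
```

The spikier the workload, the wider the gap between the two figures grows, which is the whole case for elastic storage.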

Die-hard fans of legacy systems will be quick to point out that there must be some kind of trade-off involved. Perhaps scalability comes at the cost of performance? Well, no. It is the legacy systems that trade off performance. Legacy systems have monolithic controller architectures: when resources are shared, all the applications try to hog the available resources, creating noise and performance dips. If resources are to be dedicated to performance-sensitive applications, a management nightmare is in the making, because resources must be hard-wired to specific applications while other applications are starved. The process can also result in storage fragmentation. Cloud architectures, by contrast, are designed to dedicate and release resources on demand. This optimises utilisation and makes resources available to other applications as and when demand arises, and fragmentation is never an issue because storage is never permanently dedicated to any one application.

Cloud resource scalability extends to security. Data is always encrypted, isolated, and aligned with standard security requirements, and comprehensive security continues to be provided as hardware resources are scaled up or down. This is in distinct contrast with legacy systems, where security cannot be scaled on demand and additional security can only be provided by adding physical resources to the data centre.

Virtualisation Basics for Cloud Computing

Cloud computing must necessarily take cognisance of virtualisation. Today, every organisation has virtual machines that need to be backed up and recovered. But how are these virtual machines different from physical machines? A basic understanding of the concepts will go a long way towards easing the process of backing up organisation data to the cloud, or recovering it from the cloud.

What is Virtualisation?

It is true that virtualisation has been around for many years. It was first used in the 1960s and remained a feature of mainframes for a long time. It is only in the last few years, with the emergence of desktop virtualisation, network virtualisation, and later the cloud, that virtualisation has entered mainstream computing. A close look at the term reveals that virtualisation is an umbrella concept: it is used in different contexts to refer to different things in a production environment.

Desktop virtualisation is the process of running multiple virtual desktops on a single PC.  Each virtual desktop will contain its own operating system and applications and can be configured to load up on appropriate user authentication. Application virtualisation is the process of separating the application from the operating system and running it within a “sandbox” that can reside on a remote server or stream in from a remote server.

Server virtualisation, as mentioned earlier, allows many virtual machines to run on a single server. The different virtual machines share CPU, memory, storage, and networking resources. Storage virtualisation is the process of grouping physical storage together using software so that all the storage devices appear as a single drive. It must be noted that virtual storage is not the same as a virtual machine: the virtual machine is a set of files that run on the server to create an operating environment, while virtual storage runs in memory on the storage controller, using software to create the impression of a single storage repository. The files in that repository may be spread across many physical devices rather than residing on a single server.
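
The "single drive" illusion can be sketched in a few lines: several physical devices sit behind one logical volume, the user sees one aggregate capacity, and writes land on whichever device has room. The device names, sizes, and first-fit placement below are illustrative simplifications:

```python
# Toy sketch of storage virtualisation: a pool of physical devices
# presented as one logical volume with a single aggregate capacity.

class VirtualVolume:
    def __init__(self, devices):
        self.free = dict(devices)   # device name -> free GB
        self.placement = {}         # file name -> device it landed on

    def free_space(self):
        """The user sees one aggregate figure, not per-device numbers."""
        return sum(self.free.values())

    def write(self, name, size_gb):
        """First-fit placement: the file lands on any device with room."""
        for dev, free in self.free.items():
            if free >= size_gb:
                self.free[dev] -= size_gb
                self.placement[name] = dev
                return dev
        raise OSError("pool exhausted")
```

The caller never chooses a device; the controller software does, which is exactly the indirection that lets cloud vendors grow or shrink the pool underneath running workloads.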

Cloud vendors exploit these two features, server and storage virtualisation, to make their services scalable and redundant. Network virtualisation is the process of using software to perform network functions. This is achieved by decoupling virtual networks from the underlying network hardware and using software switches to move data around.
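
At the heart of a software switch is a learned MAC table: the switch notes which port each source address arrives on, forwards frames to known destinations on a single port, and floods frames for unknown ones. The sketch below shows only that core learning-and-forwarding behaviour, with invented MAC addresses and port numbers:

```python
# Minimal learning switch sketch, the building block of network
# virtualisation: forwarding decisions are made in software from a
# table of which port each MAC address was last seen on.

class SoftSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}   # learned: source MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known: forward on one port
        return [p for p in self.ports if p != in_port]  # unknown: flood
```

Because the table lives in software, a virtual network built from such switches can be reconfigured, moved, or duplicated without touching any physical cabling.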

Our Customers

  • ATOS
  • Age UK
  • Alliance Pharma
  • Liverpool Football Club
  • CSC
  • Centrica
  • Citizens Advice
  • City of London
  • Fujitsu
  • Government Offices
  • HCL
  • LK Bennett
  • Lambretta Clothing
  • Leicester City
  • Lloyds Register
  • Logica
  • Meadowvale
  • National Farmers Union
  • Network Rail
  • PKR
