Tag Archives: Replication

Functionality, Quality, Price—The Evaluation Parameters for the Cloud

IT budgets do not scale in proportion to IT needs. Data growth outstrips infrastructure and headcount growth. The CIO is forced to compromise.

What if the enterprise could become instantly IT enabled with very little investment in infrastructure, software or HR?

Utility computing in the cloud serves the enterprise with what it needs, when it needs it, through any channel and on any kind of device. The technology integrates and automates the value chain, adapts easily and innovates constantly. Risk and environmental responsibilities are well orchestrated and everything is streamlined to deliver 'best fit' services. Functionality, quality and price are all attractive.

Cloud computing enhances the efficiency and functionality of the enterprise. Cloud storage systems are developed to support "on demand" utility computing models — SaaS, PaaS and IaaS — with the intent of delivering IT as a service over the Internet. Users can scale infrastructure or storage space up or down instantly and pay only for what they use. Mobile and remote computing technologies are made available to disparate parts of the business, and mobile workers can synchronise their activities with those of the parent business from wherever they are. Employees can collaborate with each other or access business applications from the central server. User and usage management policies can be implemented by exploiting the functionality built into the cloud application.

Quality of service delivery is the unique selling point (USP) of cloud vendors. QoS distinguishes them from the competition and builds trust in business relationships with their customers. Cloud vendors are conscious that their services are evaluated on the basis of qualitative factors, such as the design and delivery of security systems, compression and de-duplication of data, or speed of backup and recovery. The way the services are packaged together also makes a difference.

Economies of scale, deriving from multi-tenancy computing models, make the cloud attractive to cash-strapped enterprises. The pay-per-use model, typical of the utility services sector, enables small and medium enterprises with modest budgets to garner and use resources that were earlier available only to their larger brethren. Additionally, CAPEX vanishes and is replaced by OPEX. This makes the cloud wholly attractive to management teams who do not want to invest scarce resources in IT infrastructure to the detriment of other business activities.

Support services provided by the cloud shrink the IT expertise required within the enterprise. Hardware and software maintenance in the cloud is the responsibility of the cloud vendor. The vendor is also committed to ensuring high availability of customer information and 99.9% uptime. Responsibility for mirroring, replication, de-duplication, compression and secure storage of information is transferred to the cloud vendor. A single IT administrator can manage the database and maintain offsite copies of the data for additional data availability.

We at Backup Technology offer best-of-breed public, private and hybrid cloud services to our customers. We anticipate customers' every need and work towards providing them with the functionality they require without compromising on quality. Our pay-per-use pricing model is economical and wholly affordable. For more information, please visit our website: www.Backup-Technology.com.

How Do You Avoid Bad Backups?

Every backup process is fraught with risk. The cloud is no exception. However, unlike tape and other kinds of backup, bad backups or backup failures in the cloud can be tracked and corrected instantly.

Backup failures in the cloud can occur at source. If the backup software manager is not resilient, a power failure can disrupt a backup schedule and cause the backup to fail. Most cloud vendors are conscious of this problem: the software maintains log files that immediately record the failure, and backup reports can be set to pop up automatically on restoration of power or be called up manually by the system administrator monitoring the status of the backup.

Where the client-based cloud software provides for continuous backup, the failure is registered in the log and the backup resumes automatically when power is restored. In either case, proactive cloud vendors handle bad backups and backup failures by constantly monitoring the status of customer backups from their end of the chain and notifying the customer of a failure by email or telephone, in addition to any alerting mechanisms triggered at the client end.
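The log-and-resume behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual client: the `upload` callable, retry counts and log file name are hypothetical stand-ins for a real backup agent.

```python
import logging
import time

# Every attempt and failure is recorded, so a bad backup is visible immediately.
logging.basicConfig(filename="backup.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def run_backup(upload, max_retries=3, retry_delay=5):
    """Attempt a backup, logging failures and resuming automatically."""
    for attempt in range(1, max_retries + 1):
        try:
            upload()
            logging.info("backup completed on attempt %d", attempt)
            return True
        except OSError as exc:  # an interruption surfaces as an I/O error
            logging.error("backup attempt %d failed: %s", attempt, exc)
            time.sleep(retry_delay)
    logging.critical("backup abandoned after %d attempts", max_retries)
    return False
```

In a real client the retry would be triggered by power or connectivity restoration rather than a fixed delay, but the principle is the same: record the failure, then resume without operator intervention.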

Poor compression, de-duplication and encryption algorithms can also generate bad backups. Poorly constructed algorithms can corrupt data as it is encrypted, compressed or deduplicated, making the data unrecoverable. The same problem can arise at the destination if the data is processed on the cloud vendor's server with poorly structured algorithms. Mid-process power failures may be blamed for other types of data corruption that can occur at the client or server side of the backup process.
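One common defence against corruption introduced by the compression or encryption stages is to record a checksum of the original data and verify it on restore. A minimal sketch in Python, using gzip and SHA-256 as stand-ins for a vendor's own pipeline:

```python
import gzip
import hashlib

def pack(data: bytes):
    """Compress data and record a digest of the ORIGINAL bytes."""
    return gzip.compress(data), hashlib.sha256(data).hexdigest()

def unpack(blob: bytes, digest: str) -> bytes:
    """Decompress and verify the result matches the recorded digest."""
    data = gzip.decompress(blob)
    if hashlib.sha256(data).hexdigest() != digest:
        raise ValueError("backup corrupted: checksum mismatch")
    return data
```

Because the digest is taken before compression, any corruption introduced at either end of the chain is detected at restore time instead of being discovered when the data is needed.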

Unauthorised listeners on the network, employees with malicious intent, or even ignorant personnel can cause a good backup to go bad. While most cloud vendors attempt to prevent hijacking of data during transmission and make an all-out effort to safeguard customer data from malicious attackers, no system is absolutely hacker-proof. Wise users will insist on maintaining multiple copies of their backups as insurance against possible corruption of any one copy.

Cloud vendors perform data replication and mirroring at the server end of the chain to ensure high availability and security of customer data. Many cloud vendors encourage their customers to maintain a local copy of their data in addition to the offsite copies, and many offer local backup devices as part of their package. The client-based software creates a local copy of the data on the onsite device even as a cloud-based copy is being created on the remote server.
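The dual-write pattern described above (a local copy on an onsite device alongside the cloud copy) can be illustrated with plain file copies. The directory names here are hypothetical, and a real client would write the second copy to a remote server rather than another local folder:

```python
import shutil
from pathlib import Path

def backup_with_local_copy(source: Path, local_dir: Path, remote_dir: Path):
    """Copy a file to a local device and a (simulated) remote store in one pass."""
    local_dir.mkdir(parents=True, exist_ok=True)
    remote_dir.mkdir(parents=True, exist_ok=True)
    # copy2 preserves timestamps, which matters for backup verification
    local = shutil.copy2(source, local_dir / source.name)
    remote = shutil.copy2(source, remote_dir / source.name)
    return Path(local), Path(remote)
```

Either copy can then serve a restore: the local one for speed, the offsite one when the local device is lost along with the original.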

We, at Backup Technology, understand the security needs of our customers. Our software logs every activity that is performed, and backup failures are instantly reported. The continuous backup option enables the backup to resume automatically after power is restored, while a failed schedule can be completed by manually invoking the backup process. Our encryption, decryption and compression algorithms are well tested and proven. We replicate, mirror and maintain customer information on multiple servers that are geographically dispersed to ensure high availability and disaster recovery.

What is Your Backup and Recovery ROI?

The bottom line is profit. No organisation can afford a backup and recovery system that consumes more resources than it can justify. So calculating the Return on Investment (ROI) on backup and recovery is a business imperative. But how does one calculate it? Here is a glimpse into the factors that go into calculating the ROI on digital assets.

Start with the available data management technologies. Remember, re-engineering around existing or similar legacy technologies will cost far more than adopting the newer technologies available in the market. For instance, the cloud presents a more cost-effective method of backup and recovery than offline and tape backup and recovery methods. The difference between what you would invest in offline technologies and in online backup and recovery technologies is the first low-hanging fruit you can harvest!
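That "low-hanging fruit" reduces to simple arithmetic once annual costs are estimated. The figures used here are purely illustrative, not market prices:

```python
def backup_roi(offline_annual_cost: float, cloud_annual_cost: float):
    """First-cut ROI: the annual saving from moving offline/tape backup
    to a cloud service, expressed in currency and as a percentage of the
    cloud spend."""
    saving = offline_annual_cost - cloud_annual_cost
    roi_pct = saving / cloud_annual_cost * 100
    return saving, roi_pct

# Hypothetical example: GBP 50,000/yr on tape vs GBP 20,000/yr in the cloud
saving, roi = backup_roi(50_000, 20_000)
```

A fuller model would also count the softer factors discussed below: consolidation, compliance risk and disaster recovery cost, each of which shifts the comparison further.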

Unlike other physical assets, data has peculiar characteristics. A data asset must be accessible to your employees anywhere, anytime and on any device to be meaningful. The asset, consequently, tends to get distributed, replicated, duplicated and stored on multiple devices across the enterprise. Its vulnerability increases in proportion to the number of times it is replicated and duplicated, and the security systems protecting it act as a drain on resources. Consolidating data into a single repository that can be centrally protected and universally accessed therefore makes a lot of business sense. Costs can be brought down, while security need not be compromised and access need not be denied. Productivity can be maintained or even enhanced as more mobile workers are given access to data on the go.

Regulatory compliance is a major consideration in the management of data assets. Distributed data with minimal security results in a number of compliance headaches. Centralised data and data management systems ensure controls and data consistency across the organisation, and facilitate compliance. The risk of non-compliance is significantly reduced, and hard-dollar costs to the company can be avoided.

Digital assets must be hedged against disaster. Disaster recovery is an expensive proposition, and it is often ignored or neglected for this reason. With cloud offerings, however, disaster recovery is automatic and part of the package. No specialised effort is required and no separate teams need be deployed to manage and maintain disaster recovery protocols. The actual exercise of replicating information and keeping it highly available for business use is abstracted away to the cloud service provider. All this reduces costs and increases the profitability of the digital asset.

How Cloud Services Guarantee High Availability

As businesses become more and more dependent on access to their digital assets, there is a growing intolerance of outages and downtime. Continuous, high availability anywhere, anytime and on any kind of device is the mantra of the age. Businesses struggling to meet this demand turn to cloud services to fulfil these expectations. Cloud service providers, in keeping with their promise, are making an all-out effort to bring the right technologies to the table so that their customers are never offline and their data never becomes inaccessible, whatever the circumstances.

Continuous availability of information requires planning. Once the customer has identified the applications that are mission-critical and must be continuously available, cloud service providers will recommend continuous backup for these applications. The backup process is orchestrated quietly in the background with no disruption to the production systems. Technologies such as bandwidth throttling are used to ensure that the backup process consumes only spare or minimal bandwidth. Data transmitted to the remote server is then continuously replicated onto one or more geographically dispersed servers, creating redundant stores of the same information for future recovery.
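Bandwidth throttling of this kind is often implemented by pacing writes against a byte-per-second budget. A minimal sketch, in which the class name and rate are illustrative rather than any vendor's API:

```python
import time

class Throttle:
    """Cap upload throughput so backup traffic leaves headroom for
    production use of the link."""
    def __init__(self, max_bytes_per_sec: int):
        self.rate = max_bytes_per_sec
        self.sent = 0
        self.start = time.monotonic()

    def wait(self, nbytes: int) -> None:
        """Account for nbytes just sent, sleeping if we are ahead of budget."""
        self.sent += nbytes
        expected = self.sent / self.rate          # time the transfer *should* have taken
        elapsed = time.monotonic() - self.start
        if expected > elapsed:
            time.sleep(expected - elapsed)        # pause to stay under the cap
```

A backup loop would call `wait()` after each chunk it uploads; the production workload sees the backup as a steady trickle rather than a burst that saturates the line.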

The cloud service provider is highly disaster-conscious and has the responsibility of ensuring that disasters impacting the cloud data centre are not passed on to the individual customers using its services. As part of the data centre's disaster recovery plan, the provider links together the geographically dispersed primary and secondary servers in failover configurations. Secondary servers take over the moment the primary server develops a glitch or fails in any manner, and customers accessing or updating information are seamlessly shifted from the primary server to the secondary server. The operation is so smooth that customers may not even realise they have switched servers during a production operation. In the process, the provider creates a time pocket in which the primary server can be set right and brought back into operation. High availability is an automatic outcome of this operation.
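From the client's point of view, this failover behaviour can be approximated as "try the primary, then each secondary in turn". A simplified sketch; in practice the switch happens at the DNS or load-balancer layer rather than in application code, and the server names below are hypothetical:

```python
def fetch_with_failover(servers, fetch):
    """Try the primary first; shift to each secondary if the current
    server raises a connection error."""
    last_error = None
    for server in servers:
        try:
            return fetch(server)
        except ConnectionError as exc:
            last_error = exc          # this server glitched: fail over to the next
    raise RuntimeError("all replicas unavailable") from last_error
```

The caller never sees the primary's outage unless every replica is down at once, which is exactly the "time pocket" the provider uses to repair the primary.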

Customers with geographically dispersed activities can also take advantage of cloud services' multi-site data storage functions. If the enterprise has a branch in the local area where a replication server is housed, it can configure that local replication server to service the branch's requests. This cuts down the latency the branch would otherwise experience in accessing and working with data stored on the centralised remote primary server. Any updates or operations on the enterprise data are then continuously reverse-replicated from the secondary server to the primary server.
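The branch-office pattern, reading from the nearby replica and reverse-replicating writes to the primary, can be sketched with in-memory dictionaries standing in for the two servers:

```python
from collections import deque

class BranchStore:
    """Serve reads from the nearest replica; queue writes for
    reverse replication to the remote primary."""
    def __init__(self, local_replica: dict, primary: dict):
        self.local = local_replica        # nearby secondary server (dict as stand-in)
        self.primary = primary            # centralised remote primary server
        self.outbox = deque()             # updates awaiting reverse replication

    def read(self, key):
        return self.local.get(key)        # low-latency local read

    def write(self, key, value):
        self.local[key] = value           # apply locally first
        self.outbox.append((key, value))  # then schedule replication to the primary

    def replicate(self):
        """Flush queued updates back to the primary (runs continuously in practice)."""
        while self.outbox:
            key, value = self.outbox.popleft()
            self.primary[key] = value
```

The branch gets local-speed reads and writes, while the primary eventually holds every update, which is what keeps the multi-site copies consistent.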

It is no wonder that high availability is the guarantee of cloud service providers.

Disaster Recovery of Backed up Virtual and Physical Machines

Not being prepared for disaster is not an option. Total environmental duplication is expensive. Attempting to recover only after an incident is tempting fate. These traditional approaches to disaster recovery create a great deal of uncertainty, and even stress, for the organisation.

The growing sophistication of the cloud holds out a promise for backup and recovery of both physical and virtual machines. Cloud backup vendors are "disaster aware" and make elaborate provisions for the recovery of their customers' data. These disaster recovery efforts indirectly benefit customers who sign up for cloud-based backup and recovery accounts.

At the core of the disaster recovery best practices instituted by the cloud vendor is data replication. Replication helps the vendor rotate the backed-up customer data offsite. The replication uses high-speed connections to stream the data and double-protect it. Replication servers are geographically separated from each other to protect the information against natural disasters that might impact any single data centre. The vendor may create a hot site and a disaster recovery site with failover provisioning to ensure that customers always have access to their data even when the primary server experiences a shutdown.

However, it should be noted that virtual machine replication is a complex task, and not all cloud vendors have the technology required for the purpose. Traditionally, the data contained in the virtual machine streams through the primary server before it is backed up to the replication server. When the virtual machine has to be restored, it is brought online by restoring the virtual disk to production. All this can take a lot of time, and time is the scarcest commodity during a disaster.

Improved cloud replication management technologies allow cloud vendors to stream data from the virtual machine to both the primary and secondary servers directly. This enables instant recovery of the virtual machine (which exists in production form on the secondary server) even when the primary server experiences an outage. Disaster recovery plans can be implemented instantly, and Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) can be tightly coordinated. Check with your cloud vendor on this point.
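The improvement described here amounts to fan-out: each chunk streamed from the virtual machine goes to every target in the same pass, instead of passing through the primary first. A deliberately tiny sketch, with lists standing in for the primary and secondary servers:

```python
def stream_backup(chunks, targets):
    """Send each data chunk to every target (primary and secondary)
    in a single pass, so both copies are equally current."""
    for chunk in chunks:
        for target in targets:
            target.append(chunk)   # stand-in for a network write to that server
```

Because the secondary receives the stream at the same time as the primary, the VM can be brought up from the secondary immediately, which is what makes the tight RTO and RPO achievable.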

The integration of VM replication with the data protection solutions offered by your vendor is a bonus. It will empower your organisation and keep you prepared for all kinds of disasters.

Does Your Cloud Backup Provider have a Replication System in Place?

Backing up data to the cloud is an increasingly popular option being explored and adopted by the majority of businesses. As a result, the data backup market has become more competitive, with hundreds of companies all offering a backup service that will get your data offsite. Due to this competitiveness, some companies have cut corners to reduce their costs and therefore offer end customers a lower price. Examples include storing data on cheap hardware and re-branding home-user backup solutions as business solutions. One of the most overlooked aspects of a solution is whether a replication system is in place.

When someone is looking for a cloud backup provider, the primary objective is to find a solution that works and is within budget. In most cases, little or no consideration is given to whether the offered solution has a replica system in place.

Having a replica system in place offers both the service provider and the customer several advantages, and the peace of mind that if anything did happen to the primary system, a switch could be made to the replica, allowing backups and restores to run as if nothing had happened.

If a service provider is using cheap hardware to store data, the likelihood of hardware failure is drastically increased. In that event, with no replica system in place, you are left with a backup solution that cannot connect to a system, so backups and restores cannot run. You are then reliant on the service provider replacing the hardware as quickly as possible. This could take days or even weeks, leaving you highly exposed to potential data loss that could have a serious impact on the running of your company.

There are also other scenarios that could result in the primary system becoming unavailable, such as the hardware vendor releasing a firmware update that isn't stable, or workmen cutting through power or fibre lines. There could also be a disaster at the location of the primary system that affects the telecommunications infrastructure of the entire area.

If you are currently using a cloud backup solution without a replica system, it is worth checking what contingency plans the service provider has in place should anything happen to the primary system on which your data is kept. A number of scenarios can take the primary system offline, leaving you unable to back up your data or run restores.

To keep system downtime to a minimum, it is vital to make sure that your backup provider has a replica system in a geographically separate location. With this in place, the likelihood of being able to run backups and restore data with very little disruption is drastically increased. Without a replica system, you are putting all your eggs in one basket, where a single event or problem can take your entire backup system down and leave you unable to back up and restore your data for an extended period of time.
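A quick way to act on this advice is to probe each endpoint your provider advertises and flag single-site setups. The endpoint names and the `probe` callable below are hypothetical stand-ins for a real health check:

```python
def check_replicas(endpoints, probe):
    """Report which backup endpoints respond, and flag providers with
    no replica system at all."""
    up = [e for e in endpoints if probe(e)]
    if len(endpoints) < 2:
        return up, "WARNING: no replica system - single point of failure"
    if len(up) < len(endpoints):
        return up, "WARNING: some replicas unreachable"
    return up, "OK: primary and replica reachable"
```

Run against a provider that publishes only one endpoint, this immediately surfaces the all-eggs-in-one-basket risk discussed above.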

Do you know if your backup provider has a replication system in place? Do you consider this an important aspect of a cloud backup solution?
