All posts by Damien Garvey

Top Ten Reasons to Leave your Cloud Backup Service Provider – Part I

Business relationships are important. You did your homework: you researched and tested several solutions and settled on what you thought was a great cloud backup vendor. Before you picked this company, you considered several factors, such as technology, experience, financial status, reputation, security, compliance, support, certification, scalability, and trust. But now the vendor is taking it all for granted and providing you with substandard service, and the relationship has soured.

Is your relationship with your cloud backup vendor healthy? If the relationship starts to show signs of stress, chances are it will eventually break down. Perhaps it is time for you to gauge your business relationship. If you notice any or all of the following points, you might be in a bad relationship:

1/ Data Backup – The company doesn’t back up all of your data across all operating systems and mobile devices. Are you forced to use different backup solutions for different operating systems (iOS, Windows, etc.) and for your mobile devices?

2/ Appliance – Is the vendor appliance-centric? Do you find yourself spending more than you planned on appliances? Is the vendor asking you to acquire additional appliances to keep up with your backed-up data? Relying heavily on appliances might not be an ideal solution. Cloud-centric solutions, by contrast, offer unlimited scaling as your data grows. Is your data tethered to an appliance instead, forcing you to delete data and/or buy a bigger appliance to gain extra space for your growing data?

3/ SLA – Service Level Agreements are very important. SLAs have a purpose, and that is why a great deal of effort goes into preparing them. Does the vendor actually deliver according to the signed and approved SLA?

4/ Price – The price the vendor charges you varies all the time, the billing is complicated, and you cannot figure out how the pricing model works. Is it per GB of raw or compressed data? Do you get a credit for not recovering any data in, say, the past year?

5/ BLM – Does your vendor treat all data the same and back it all up into the same vault? Keep in mind that not all data has the same value. The older data gets, the less important it usually becomes. Mission-critical data should be stored separately, with clearly defined RTO and RPO, while less important data should be stored in less expensive vaults. Intelligent software has the ability to automatically segment data into these two tiers, as sketched below.
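As a rough illustration of that kind of age-based tiering, here is a minimal Python sketch; the tier names, the 90-day threshold and the classify_backup_item helper are all hypothetical and only stand in for whatever policy a vendor's software actually applies.

    from datetime import datetime, timedelta

    # Assumed policy: items touched within the last 90 days are treated as
    # mission-critical (fast vault, tight RTO/RPO); everything else goes to a
    # cheaper archive vault. Real products expose their own tiering rules.
    MISSION_CRITICAL_WINDOW = timedelta(days=90)

    def classify_backup_item(last_modified, now=None):
        """Return the vault tier for a backup item based on its age."""
        now = now or datetime.utcnow()
        if now - last_modified <= MISSION_CRITICAL_WINDOW:
            return "primary-vault"   # low RTO/RPO, more expensive storage
        return "archive-vault"       # relaxed RTO/RPO, cheaper storage

    # Example: a file last modified a year ago lands in the archive tier.
    print(classify_backup_item(datetime.utcnow() - timedelta(days=365)))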

If you decide to move your services to a new vendor, make sure that you don’t end up with the same problems you just switched away from. Insist that the new vendor help with the data migration, or at the very least provide consultation. Remember that choosing a cloud backup service provider is not a simple task, and the vendor you choose could end up causing you to go out of business.

In Part II, we will discuss five other factors that affect your relationship with your vendor: bandwidth throttling, data centre location, vendor lock-in, DRaaS, and periodic vendor research results.

Functionality, Quality, Price—The Evaluation Parameters for the Cloud

IT budgets do not scale in proportion to IT needs. Data growth outstrips infrastructure and headcount growth. The CIO is forced to compromise.

What if the enterprise could become instantly IT enabled with very little investment in infrastructure, software or HR?

Utility computing in the cloud serves the enterprise with what they need, when they need it, through any channel and any kind of device. The technology integrates and automates the value chain, adapts easily and innovates constantly. Risk and environmental responsibilities are well orchestrated and everything is streamlined to deliver ‘best fit’ services. Functionality, quality and price are definitely attractive.

Cloud computing enhances the efficiency and functionality of the enterprise. Cloud storage systems are developed to support “on demand” utility computing models — SaaS, PaaS and IaaS — with the intent of delivering IT as a service over the Internet. Users can scale infrastructure or space up or down instantly and pay only for what they use. Mobile and remote computing technologies are made available to disparate parts of the business, and mobile workers can synchronise their activities with those of the parent business from wherever they are. Employees can collaborate with each other or access business applications from the central server. User and usage management policies can be implemented by exploiting the functionality built into the cloud application.

Quality of service delivery is the unique selling point (USP) of cloud vendors. QoS distinguishes them from the competition and builds trust in their business relationships with customers. Cloud vendors are conscious that their services are evaluated on the basis of qualitative factors, such as the design and delivery of security systems, compression and de-duplication of data, or the speed of backup and recovery. The way the services are packaged together also makes a difference.

Economies of scale, deriving from multi-tenant computing models, make the cloud attractive to cash-strapped enterprises. The pay-per-use model, typical of the utility services sector, enables small and medium enterprises with small budgets to garner and use resources that were earlier only available to their larger brethren. Additionally, CAPEX vanishes and is replaced by OPEX. This makes the cloud wholly attractive to managements who do not want to sink scarce resources into IT infrastructure to the detriment of other business activities.

Support services provided by the cloud shrink the IT expertise required within the enterprise. Hardware and software maintenance in the cloud is the responsibility of the cloud vendor. The vendor is also committed to ensuring high availability of customer information and 99.9% uptime. Responsibility for mirroring, replication, de-duplication, compression and secure storage of information is transferred to the cloud vendor. A single IT administrator can manage the database and maintain offsite copies of the data for additional data availability.

We at Backup Technology offer best-of-breed public, private and hybrid cloud services to our customers, unfailingly. We anticipate customers’ every need and work towards providing them with the functionality they require without compromising on quality. Our pay-per-use pricing model is economical and wholly affordable. For more information, please visit our website: www.Backup-Technology.com.

Mis-Saves—Is your Data Lost Forever?

Mis-saves happen; it is a fact of the computing world. The intelligent recognise this fact and provide for it. Cloud service vendors live with this reality and understand its implications. Consequently, they go to extraordinary lengths to ensure that mis-saves do not cause data loss for their customers.

Most cloud vendors use versioning and time stamps to distinguish between backups and protect customer data against mis-saves.

Each file that is saved into the system is tagged with a unique identifier and a time stamp. The original, or first, copy of the file saved into the system is called the primary file, and any copies of the file saved from the same node or different nodes are identified as replicas of the original. These replica copies are then deleted and only one copy of the file is retained.

If changes are made to the file and it is saved back into the repository, the new version is compared with the existing version and tagged as a new version with a version identifier. The backup algorithm identifies what has changed in the file and saves only those changes as the new version, with references to the unchanged content in the original file. Users, however, will still see all of the content in the version they recall, because data from the original file is substituted for the referenced sections when the file is displayed.
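As a rough sketch of that idea (not the algorithm any particular vendor uses), the following Python fragment tags each saved copy with a content hash and a time stamp, and stores a new version only as the blocks that changed plus references to blocks already held for the previous version. The block size and the in-memory dictionaries are illustrative assumptions.

    import hashlib, time

    BLOCK_SIZE = 4096            # illustrative fixed block size
    block_store = {}             # hash -> block bytes already held in the vault
    versions = {}                # path -> list of (timestamp, [block hashes])

    def save_version(path, data):
        """Store a new version of `path` as block hashes; unchanged blocks are
        not stored again, only referenced by their hash."""
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            if h not in block_store:          # new or changed block: store it
                block_store[h] = block
            hashes.append(h)                  # unchanged block: reference only
        versions.setdefault(path, []).append((time.time(), hashes))

    def restore_version(path, index=-1):
        """Rebuild a full file by resolving the stored block references."""
        _, hashes = versions[path][index]
        return b"".join(block_store[h] for h in hashes)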

It follows that mis-saved files will not result in complete loss of data. Only the changes made in the mis-saved version will be lost, and users can recall the original version of the file from the backup repository and redo the required changes. They can even use the latest version of the file available on the system to rebuild the lost version, since most cloud vendors permit users to save and retain many versions of a file in the backup repository.

We at Backup Technology are powered by Asigra, a robust agentless cloud backup system. Our continuous backup system constantly monitors changes to files and saves each changed file as a new version. Since new versions are created by extracting the changes and creating pointers to unchanged content, they are smaller and occupy less disk space. Users can access every version of a single file from storage. Additionally, the open file driver that comes with our software automatically backs up files that have been left open for long periods in applications such as Outlook, QuickBooks and Simply Accounting, taking scheduled backup snapshots of the data to our servers. So, we invite you to try our software and experience first-hand the power of always having your files saved for you, automatically and constantly, without conscious effort on your part!

Duplication and De-duplication

Organisational databases are not created by a single individual with a single access device. These databases grow, and the growth is fed by multiple users inputting data from multiple devices in diverse locations. The data is often shared across devices by users attempting to collaborate, and as a result it is downloaded and stored on local devices for instant access and use. This results in disorganised duplication of the same, similar or slightly modified versions of information, stored at multiple locations.

The IT administrator entrusted with the task of consolidating backup and recovery of information for the organisation is often flummoxed by the sheer number of times a single piece of information is duplicated across the organisation. If each piece of information had to be checked for duplication manually and then dropped into the backup set, the task would be gruelling to say the least, and would assume nightmarish proportions over time. De-duplication technologies automate the task of identifying and eliminating duplicate information during consolidation.

Most cloud backup and recovery software comes with integrated de-duplication technology. The IT administrator begins the process of consolidation by identifying a primary backup set for seeding the backup repository. Each piece of information is run through a hash algorithm, producing a fingerprint that is unique to the file, folder or block being seeded. Data backed up from every other device connecting to the enterprise network is hashed in the same way, and the hashes are compared to identify any duplicate information in the current backup set. All duplicates are then eliminated, and references to the original information are stored in their place, so that the data can later be recovered to a new device with all duplicates intact.

De-duplication is often described as a compression function, because removing duplicates reduces the volume of information that is ultimately stored in the cloud database, and because compression, in a sense, removes repetition at a more granular level within a file or folder. For instance, compression replaces recurring patterns of bytes with short references in order to reduce the space the data occupies in the storage repository. However, the two functions differ in purpose and scope. De-duplication removes duplicate copies of information to rationalise what is stored in the database; compression is purely a space-saving function. Both de-duplication and compression have to be reversed at the time of recovery in order to obtain the complete data set from storage, as illustrated below.
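To make the distinction concrete, here is a minimal Python sketch using only the standard library; the sample blocks are invented, zlib stands in for whatever compression a real product uses, and the dictionaries are only a toy stand-in for a backup repository. De-duplication stores one copy of a repeated block and keeps hash references to it, while compression shrinks the bytes inside each stored block.

    import hashlib, zlib

    blocks = [b"quarterly report " * 200,
              b"quarterly report " * 200,      # an exact duplicate
              b"board minutes " * 200]

    # De-duplication: keep one copy per unique hash, plus a reference per occurrence.
    store, refs = {}, []
    for block in blocks:
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)             # stored once, however often it recurs
        refs.append(h)
    print(len(blocks), "blocks referenced,", len(store), "actually stored")

    # Compression: shrink the data inside each stored block.
    compressed = {h: zlib.compress(b) for h, b in store.items()}
    print(sum(len(b) for b in store.values()), "bytes before compression,",
          sum(len(b) for b in compressed.values()), "bytes after")

    # Recovery reverses both steps: decompress, then resolve the references.
    recovered = [zlib.decompress(compressed[h]) for h in refs]
    assert recovered == blocks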

How Cloud Services Guarantee High Availability

As businesses become more and more dependent on access to their digital assets, there is a growing intolerance of outages and downtime. Continuous availability and high availability anywhere, anytime, with any kind of device is the mantra of the age. Businesses struggling to meet this demand turn to cloud services to fulfil these expectations. Cloud service providers, in keeping with their promise, are making an all-out effort to bring the right technologies to the table so that their customers are never offline and their data never becomes inaccessible, whatever the circumstances.

Continuous availability of information requires planning. Once the customer has identified the applications that are mission-critical and must be continuously available, cloud service providers will recommend continuous backup of these applications. The backup process is orchestrated quietly in the background with no disruption to the production systems. Technologies such as bandwidth throttling are used to ensure that the backup process consumes only spare bandwidth, or a minimal amount of it. Data transmitted to the remote server is then continuously replicated onto one or more geographically dispersed servers to create redundant stores of the same information for future recovery.
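As a rough illustration of bandwidth throttling (the mechanism only, not any particular vendor's implementation), the following Python sketch caps the upload rate of a backup stream with a simple sleep-based pacing loop; the 1 MB/s cap, the chunk size and the read_chunk/send_chunk callbacks are assumptions chosen for the example.

    import time

    MAX_BYTES_PER_SEC = 1_000_000      # assumed throttle: roughly 1 MB/s for backup traffic
    CHUNK_SIZE = 64 * 1024             # assumed upload chunk size

    def throttled_upload(read_chunk, send_chunk):
        """Pace uploads so the backup never exceeds MAX_BYTES_PER_SEC.
        `read_chunk` yields bytes from the local snapshot; `send_chunk`
        transmits them to the remote vault (both are hypothetical callbacks)."""
        window_start, sent_in_window = time.monotonic(), 0
        while True:
            chunk = read_chunk(CHUNK_SIZE)
            if not chunk:
                break
            send_chunk(chunk)
            sent_in_window += len(chunk)
            elapsed = time.monotonic() - window_start
            expected = sent_in_window / MAX_BYTES_PER_SEC
            if expected > elapsed:         # ahead of budget: sleep off the excess
                time.sleep(expected - elapsed)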

The cloud service provider is very disaster-conscious and has the responsibility of ensuring that disasters impacting the cloud data centre do not get passed on to individual customers using services in the cloud. As part of the disaster recovery plan for the data centre, the cloud service provider links the primary and secondary servers (which are geographically dispersed) in failover configurations. Secondary servers take over the moment the primary server develops a glitch or fails in any manner. Customers accessing or updating information are seamlessly shifted from the primary server to the secondary server; the operation is so smooth that customers may not even realise they have switched servers mid-session. In the process, the cloud service provider buys a window of time in which the primary server can be set right and brought back into operation. High availability is an automatic outcome of this arrangement.
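A bare-bones sketch of that failover idea in Python follows; the server endpoints are hypothetical, the health_check body is a placeholder to be filled in, and in practice failover is normally handled by load balancers or cluster managers rather than application code.

    # Hypothetical endpoints for a primary/secondary pair.
    SERVERS = ["https://primary.example.com", "https://secondary.example.com"]

    def health_check(server):
        """Placeholder: return True if `server` answers its health probe."""
        raise NotImplementedError

    def pick_active_server():
        """Return the first healthy server, preferring the primary."""
        for server in SERVERS:
            try:
                if health_check(server):
                    return server
            except Exception:
                continue                 # probe failed: treat as unhealthy
        raise RuntimeError("no healthy server available")

    # If every client request is routed through pick_active_server(), then when
    # the primary fails its probe, traffic silently shifts to the secondary.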

Customers with geographically dispersed activities can also take advantage of the multi-site data storage functions of cloud services. If the enterprise has branches in the local area where a replication server is housed, it can configure that local replication server to service the requests of the branch. This cuts down on any latency the branch might experience in accessing and working with data stored on the centralised remote primary server. Any updates or operations on the enterprise data can be reverse-replicated from the secondary server to the primary server continuously.

It is no wonder that high availability is the guarantee of cloud service providers.

Predictive Process Analytics for Cloud Computing

Predictive process analytics assumes greater importance in cloud computing scenarios. This is especially so because processes are executed over the Internet and accessed by users from multiple locations with varying levels of bandwidth and connectivity.

It is important to find answers to all of the following questions:

• How well do these processes execute?
• What are the problems being experienced by users?
• What glitches can we expect to face in the future with these same processes in place?
• What needs to be done to ensure that the processes execute as desired?

Predictive process analysis uses statistical models to analyse how processes are executed, and uses simulations to predict how a process will behave under different constraints. Since the exercise uses historical data, it is necessarily a post-mortem exercise. The constraints used during the simulation are locational, time and process constraints derived from process data available within the organisation and recorded during execution of the process.

Process analytics helps organisations improve process designs and reduce the latency of processes as they execute over the Internet. For instance, if deadlines are being missed under the standard processes of today, predictive analytics will predict likely future misses given the current speed of execution. This is because predictive process analytics can use ‘what if’ and future-state scenarios to simulate business situations that may occur at some future date. Organisations will be able to predict whether they need to change or improve their processes to meet future demands, or fine-tune the current process to compensate for network latencies caused by the variations in Internet connectivity experienced by their mobile users or branch offices.
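As a toy illustration of that kind of ‘what if’ simulation (assumed numbers throughout, not drawn from any real process data), the Python sketch below samples historical step durations, adds an assumed extra network latency per step, and estimates how often a 60-second deadline would be missed.

    import random

    # Assumed historical step durations (seconds) recorded during past executions.
    historical_step_times = [[10, 12, 11], [20, 25, 22], [15, 14, 16]]
    DEADLINE = 60.0            # assumed process deadline in seconds
    EXTRA_LATENCY = 3.0        # 'what if' constraint: added network delay per step

    def simulate_once():
        """One simulated run: sample each step from history, add the latency."""
        return sum(random.choice(times) + EXTRA_LATENCY for times in historical_step_times)

    def predicted_miss_rate(trials=10_000):
        misses = sum(simulate_once() > DEADLINE for _ in range(trials))
        return misses / trials

    print(f"Predicted share of runs missing the deadline: {predicted_miss_rate():.1%}")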

Are predictive process analytics tools included in cloud service offerings today? Not yet, but the good news is that they will be. Cloud services that are first to provide their users with predictive process analysis tools will be able to corner a large share of the market. They will be able to distinguish themselves from the competition and help their users create processes that are cloud-ready and resource-efficient. The future of cloud computing lies in being able to predict how a process will perform when it is accessed from multiple locations and time zones by multiple users working with diverse devices.

Our Customers

  • ATOS
  • Age UK
  • Alliance Pharma
  • Liverpool Football Club
  • CSC
  • Centrica
  • Citizens Advice
  • City of London
  • Fujitsu
  • Government Offices
  • HCL
  • LK Bennett
  • Lambretta Clothing
  • Leicester City
  • Lloyds Register
  • Logica
  • Meadowvale
  • National Farmers Union
  • Network Rail
  • PKR
