All posts by Stewart Parkin

What is Pre-Process De-Duplication?

Pre-process de-duplication is more commonly known as source de-duplication. It de-duplicates data before that data is transmitted to the device where it will be stored: everything is channelled through the source de-dupe software or hardware before being sent on to the storage device. The main objective of source de-duplication is to prevent duplicated data from being sent across the network to the storage device at all. The source establishes a connection with the designated storage device and evaluates the data before initiating the de-duplication process, and synchronisation with the target disk is maintained throughout, so that files which already match on the target can be removed at the source. The main advantage of this is that it saves bandwidth for the user.
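The core idea can be sketched in a few lines. The following is a minimal, hypothetical illustration (real products typically use variable-size chunking and an index shared with the target, and the function and parameter names here are invented for the example): data is split into chunks, each chunk is hashed, and only chunks whose hashes the target has not already seen are queued for transmission.

```python
import hashlib


def dedupe_chunks(data, chunk_size=4096, seen=None):
    """Split data into fixed-size chunks and return (manifest, to_send).

    manifest: the ordered list of chunk hashes, enough to reassemble the data.
    to_send:  only the (hash, chunk) pairs the target has not seen before,
              i.e. what actually crosses the network.
    """
    if seen is None:
        seen = set()
    to_send = []
    manifest = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        manifest.append(digest)
        if digest not in seen:
            # First time this chunk is encountered: mark it and queue it.
            seen.add(digest)
            to_send.append((digest, chunk))
    return manifest, to_send
```

With three identical 4 KB chunks followed by one distinct chunk, only two chunks are queued for transmission even though the manifest lists four, which is exactly the bandwidth saving described above.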

In order to identify changed bytes, the source de-dupe software or hardware performs byte-level scans. Only the changed bytes are transferred to the destination or target device, with pointers updating the original indexes and files, which makes recovery easy for the user. The entire operation happens quickly, without compromising accuracy or efficiency, and source de-dupe is light on processing power compared to post-process de-dupe. It has also been observed that source de-duplication can categorise data in real time: policy-based device configurations can classify data at a granular level and filter it as it passes through the source de-dupe device. Files can be included or excluded on the usual basis of group, domain, user, owner, age, path, file type, or storage type, or even on the basis of RPO or retention periods.
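A policy filter of the kind described can be sketched as a simple predicate over file metadata. This is an illustrative toy only — the field names (`path`, `type`, `owner`, `age_days`) and the function name are invented for the example, not taken from any real de-dupe product:

```python
def filter_for_dedupe(files, *, exclude_types=(), max_age_days=None, owners=None):
    """Return only the files a policy allows through the source de-dupe device.

    `files` is a list of dicts with hypothetical 'path', 'type', 'owner'
    and 'age_days' fields; each keyword argument is one policy rule.
    """
    selected = []
    for f in files:
        if f["type"] in exclude_types:
            continue  # filtered out by file type
        if max_age_days is not None and f["age_days"] > max_age_days:
            continue  # filtered out by age / retention policy
        if owners is not None and f["owner"] not in owners:
            continue  # filtered out by owner
        selected.append(f)
    return selected
```

For example, a policy excluding log files and anything older than a year would let an ordinary document through while dropping the rest.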

Having covered the advantages of source de-dupe, there are some disadvantages to note. While source de-duplication reduces the bandwidth you need to transmit data or files to the destination or target, it imposes a higher processing load on the clients, since the entire process runs at the source. In addition, CPU consumption on your device can rise by roughly 25% to 50% during the source de-duplication process, which may not be acceptable to you. Source-based de-dupe nodes may also need to be deployed at every connected location. This involves more cost and will obviously be more expensive than target de-duplication techniques, where all de-duplication is carried out on a single device at a network nodal point.

Lastly, if the existing software does not support the de-duplication hardware or algorithms, it may need to be redesigned. This is not a problem with target de-duplication, where the de-dupe hardware and software are isolated from the organisation's own hardware and software, and no changes are needed at the source.

Points to Note about Bare Metal Restore

In general, the majority of data restore systems specify that they can restore data only to computing systems running similar or identical software environments (operating system, applications, and so on). Bare Metal restore is different and more flexible: it is a repair and restoration technique designed to restore data to a computing system with an entirely different hardware and software environment. The process is called "Bare Metal" because it requires nothing from the computing system on which the backup was made in order to restore the data.

Some Differences between Bare Metal and Disk Image Restore
Although Bare Metal and disk image restores work in a similar way, there are differences between them. Bare Metal restore involves installing the operating system, applications, and data components from the image file, whereas disk image restore only requires installing the restoration software and a copy of the disk image taken from the backed-up computing system. Bare metal imaging applications make the restore process easier: snapshots replicate the images of the software components, and the data present at the time of backup, to the new disk, which makes restoring the data and images via Bare Metal straightforward. The entire file systems in the disk images, as well as the disk partitions, can be copied using a Linux boot CD, and a new partition is created with the same or larger size as the disk image produced by the backup. Some servers support Bare Metal restores, and users can restore or recover such Bare Metal enabled servers into a Hyper-V virtual machine.
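The block-level copy at the heart of disk imaging can be sketched very simply. This is a minimal, hypothetical illustration of the kind of raw copy a boot-CD imaging tool performs (the function name and file paths are invented for the example; a real tool reads from a block device and also records the partition table): the source is read in fixed-size blocks, written to the image file, and checksummed so the restore can be verified later.

```python
import hashlib


def image_disk(source_path, image_path, chunk_size=1 << 20):
    """Copy a source device/file into an image 1 MiB at a time.

    Returns a SHA-256 hex digest of everything copied, which can be
    compared after a restore to verify the copy is intact.
    """
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        while True:
            block = src.read(chunk_size)
            if not block:
                break  # end of source reached
            img.write(block)
            digest.update(block)
    return digest.hexdigest()
```

Reading in fixed-size blocks rather than all at once keeps memory use constant no matter how large the disk is, which matters when imaging whole partitions.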

Indeed, with just one single restore process, the entire computing system can be restored to its last known good state with the help of Bare Metal. This makes it extremely useful for recovering a system during a disaster, because the disk image can be used to recreate the whole system at a completely new geo-location, on new hardware, using online application access. Bare Metal restoration has been described as an art rather than a science, as its implementation differs from other system restore software. Not all online backup providers offer Bare Metal services; Asigra is a powerful cloud backup software with the capability to provide Bare Metal restore services.

Before starting a Bare Metal restore, you must ensure that your system's configuration is supported, as the storage configurations of servers vary. In particular, confirm that your system's basic partition table type is supported by Bare Metal before going ahead and using it to restore your system data.

Facts Behind Re-Purposing Outdated Hardware

Hardware failure is not a day-to-day issue for most users. If, however, it happens constantly, you need to take it seriously, because hardware failure can cost you important files and folders. Surveys of IT departments have been conducted to discover how widespread the problem is:

In the past, almost 99% of IT experts have faced trouble due to hardware failure;
More than 70% of IT professionals have assisted clients at the time of data loss;
80% are of the view that the hard drive is the primary cause of hardware failure.

Hardware failure affects business continuity, and many companies are running old hardware, which makes failure more likely. It is worth asking why computer users hold on to old hardware. When a piece of equipment reaches the end of its expected life span without breaking down, why is it used until it stops working completely? How long do you expect hardware to keep working, and what do you use old equipment for?

IT professionals state that re-purposing hardware gives users a number of benefits; at the same time, there are some drawbacks. If you have made up your mind to re-purpose hardware, consider these facts:

Time Span for Hardware
IT professionals agree that it is fine to re-purpose a machine that has been in use for a few years, although parts such as the thermal paste, battery, RAM and fans should be swapped. Five-year-old hardware is good for home use, such as a media server. Conversely, a much older machine will cause trouble if re-purposed for production; IT experts recommend that old hardware not be used as a production machine.

What is the Purpose?
Most users select old hardware either for testing or for storage. As far as storage is concerned, you can do a few things with old hardware: it can be used for backups, as well as for shared storage. Whatever your goal, it is good to keep a copy of backup data for use in critical situations. Old equipment is also good for testing purposes in labs, such as virtualization, training or other uses. Old hardware is not necessarily useless hardware.

Considerations before Re-purposing Hardware
Everyone wants to benefit from hardware for as long as it is in working condition; nobody wants to throw equipment away just because it has passed its expected life span. Compared to other equipment, however, computer hardware is superseded quickly, so users should weigh whether re-purposing, repairing or refurbishing is worth the trouble. IT professionals note that repair and ownership costs may turn out to be a waste of time and effort.

Recycling is another form of re-purposing hardware. Visit a local electronics recycler that accepts old computer hardware, or try to build something new from it; you can find interesting projects for old computer components online.*

* – http://www.zdnet.com/pictures/upcycled-tech-24-new-uses-for-your-old-hardware/

Importance of Reliable Help Desk System for Managed Service Provider Business

The help desk is the backbone of IT service delivery: it streamlines a wide range of system management procedures and provides a platform for meeting customer requirements. From task management to client satisfaction, a help desk gives MSPs great benefits.

Effective Management of Tasks
An effective help desk system improves customer service by making tasks easy and uncomplicated. Task management offers a variety of features that help MSPs design and schedule tasks, share them, and assign them to others. In other words, the help desk gives insight into who is performing what kind of job, so management has visibility of all tasks and can hand workload over to staff.

Ideal Management of Time
A customer service environment becomes disorganized when a large number of inquiries arrive from customers, and neither management nor staff can answer these queries on time, which leaves clients frustrated. Through its ticketing and tracking capability, a reliable help desk tool reduces the time teammates lose discussing cases and keeping managers informed about them.

Customer Response
Every company wants great customer feedback in order to make an impact on new clients. You cannot satisfy each and every client with your services; instead of getting annoyed at negative comments, turn them into something positive. A help desk has a dedicated system for gathering customer feedback in the form of a survey, and various help desk systems can organize feedback in email or web formats. Many help desks are designed to deliver these surveys automatically when tickets are closed.

Helping Clients to Help Themselves
Self-service is one of the most remarkable features of a help desk. It gives clients an easy way in to issues and resources: before submitting a ticket, the client can check the web portal to find answers to their questions. This can be something like a Frequently Asked Questions (FAQ) page, offering self-service information that reduces the number of tickets and the workload on staff. Moreover, clients give satisfactory feedback because they get an instant response. Knowledge management is the latest and most useful approach in customer service.

Get Support in Upcoming Years
In an infrastructure based on data management, the role of the help desk system is inescapable. For instance, MSPs can collect information and past complaints in the system to build a best-practices database that guides staff in handling frequent issues. It also gives the administrator a chance to check how many resources are required to solve issues; this ability to scrutinize resources proves helpful when the company draws up the budget for the next financial year.

Impression of Clients
Clients' impression of your business matters, because you cannot ignore the power of word of mouth (WOM). Even if you deliver satisfactory managed services, delays in customer service will affect your standing in the business world. Use a help desk system to respond instantly to customers' tickets and build their confidence in your quality services.

Managed Disaster Recovery Fundamentals Are Significant For Business Continuity Vendors

Many companies face difficulties due to unsatisfactory disaster recovery solutions. Some have no appropriate strategy, while others are missing the basic elements needed to execute a recovery plan. Without proper strategies, most companies are exposed to downtime.

In order to find a satisfactory solution to downtime, IT vendors need to follow five basic rules:

Stress-free Deployment
When companies with inadequate disaster recovery plans are examined, the major drawback noticed is complexity. Planning, execution and supervision in an IT environment require substantial investment in staff, money, and time, and these resources are a luxury for many. For business continuity, a managed solution is required that makes the recovery plan simple and trouble-free. Service providers must keep ongoing management very simple, with automated processes that make data recovery more effective when disaster strikes.

Full Service
It is the responsibility of the service provider to manage everything: providing the disaster recovery plan, scheduling backups, handling execution, and guiding testing. The vendor carries the whole load of improving the DR plan and business continuity. Such a full service is also known as a "white glove service", and it frees clients to pay attention to other business objectives.

Powerful Integration with Present Infrastructure
For a managed disaster recovery system, clients must be given the correct tools. Although it is the MSP's responsibility to carry the burden, the organization should still be supported with the right tools so that clients can simplify the management side for themselves. These tools have the capability to change the company's IT environment.

Access & Control System for Clients
Managed disaster recovery is considered a very delicate task. The MSP must offer a fully managed service while at the same time allowing its clients some control over the disaster recovery process. To that end, clients are given access to certain tools to review the DR plan and recover files and folders. Even when a white glove service is selected, clients may need the expertise and the ability to carry out recovery themselves, especially in emergency situations.

Cost Efficient Disaster Recovery Solution
To lessen the burden of disaster recovery management, companies pay a regular fee, so a DR solution should be priced within range of its target market. For MSPs, selling a solution means making it affordable for customers to maintain business continuity while staying profitable themselves. This can be achieved with the help of cloud computing: through the cloud, companies can scale resources up or down as needed, paying only for what they actually use.

DRaaS is a rapidly developing segment of the disaster recovery market; researchers have reported that it will exceed $5.7 billion USD by 2018. This is very encouraging news, and MSPs should try their best to get a piece of the action in this market.

Cloud Computing & Virtualization: Counterparts in Information Technology

Virtualization has basic characteristics that help organizations reduce hardware investment, operating costs, maintenance charges, and energy consumption. It improves system performance, as well as the ability to recover data after a disaster, and it plays an important role in the growth of cloud computing.

Abstraction is important for cloud computing, and virtualization provides it by separating applications and operating systems from the underlying machines. This decoupling lets them travel across clusters, data centers, and servers. At the same time, virtualization pools memory, resources and storage into a single virtual environment, making it easy to share applications and computing capacity through the cloud.

Infrastructure Management
Cloud computing is linked with a number of notable features, including:

Scalability
Automation
IT Management
Agility
On-demand Delivery

Virtualization is recognized as a primary driver of the growth of cloud computing, and in turn the cloud offers a variety of benefits to virtual environments. When resources are liberated and organized in virtual settings, they can be dispersed and provisioned as needed. The cloud offers on-demand access, automated delivery and flexibility for particular resources, and accommodates growth requirements. As a result, complexity and cost are reduced, resources are better utilized, and more attention can be directed towards business operations.

Growth of Non-Virtual Clouds
It is a fact that the cloud does not need virtualization to flourish; however, pairing the two makes it easy to harvest the maximum benefit, which is the basic reason cloud platforms are assisted by hypervisors such as Hyper-V and VMware vSphere. At the same time, a variety of technologies aim to let the cloud grow without any assistance from virtualization.

The open source community has presented CoreOS as a simple, easy-to-use operating system. It is based on the Linux kernel, which is familiar to many around the world. Non-essential services are stripped out, and the distribution uses containers to deliver the security, performance and dependability advantages of virtual machines, relying on the same kind of abstraction for which virtualization is renowned, but with no hypervisor involved. CoreOS thus promises minimal performance overhead, as well as the ability to harness the power of the cloud using only a few machines.

Another open source alternative is Docker, which is ideal for those who want to tap into the cloud without the hassle of virtualization. It is a tool designed to containerize applications in a Linux environment without virtualizing the whole operating system. Docker empowers cloud deployments by making it easy to deliver processes, tasks and applications across a single physical machine or a group of virtual machines. It works flawlessly with CoreOS and improves the efficiency, speed and flexibility of containers, as well as of the applications inside them.

If your organization is already benefiting from a virtual environment, you have taken the first step towards a cloud-based infrastructure and an easy conversion. And although virtualization administration comes with challenges of its own, technologies such as CoreOS and Docker can be adopted as alternatives.

