
What is Post-Process De-Duplication?

Target-side de-duplication can be performed in one of two ways: as post-process de-duplication or as inline de-duplication.

Inline de-duplication removes duplicates as the data arrives at the target, before it is written to the storage device. Post-process de-duplication, by contrast, first writes the transmitted data to the storage device in full and then de-duplicates it at scheduled intervals. The de-duplication can be handled by either hardware or software, depending on the case involved, and in both forms it stays in sync with the storage disk. Whichever kind of target de-duplication is used, incoming data is evaluated against what is already on the storage disk so that duplicated data can be identified and removed.
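The core of the post-process step can be sketched in a few lines: hash each chunk already written to the target and keep only one physical copy per hash, with duplicates replaced by pointers. This is a minimal illustration of the idea, not any vendor's actual implementation; the function and variable names are invented for the example.

```python
import hashlib

def post_process_dedupe(chunks):
    """Collapse duplicate chunks already written to the target store.

    Runs at a scheduled time, after the source has finished transmitting.
    Returns the unique physical copies plus a per-chunk pointer list so
    duplicates can be rebuilt on read. Illustrative sketch only.
    """
    store = {}          # hash -> the single stored copy
    pointers = []       # one entry per logical chunk
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk      # first occurrence is kept on disk
        pointers.append(digest)        # repeats become pointers only
    return store, pointers

# Three logical chunks arrive, but only two physical copies remain.
store, pointers = post_process_dedupe([b"alpha", b"beta", b"alpha"])
```

Reading a file back is just a matter of following the pointers into the store, which is why the evaluation against data already on disk matters: it is what lets the duplicate be dropped safely.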

An enterprise with proprietary software installed in its systems will benefit enormously from post-process de-duplication, because there is no need to modify or redesign the source software to meet the needs of the de-duplication hardware or software. Compatibility is not a concern, since the source system simply pushes the data onto the network. Nor does de-duplication hardware or software have to be installed at every terminal node: because the de-dupe device sits at a central location on the network, data from all the nodes is automatically channelled through it.

Lastly, removing the de-duplication load from the client's central processing unit (CPU) frees that CPU capacity for other work, making more effective use of the enterprise's computing systems. This is where post-process de-duplication has the edge over pre-process de-duplication. Target de-dupe is also quicker than source de-duplication: the data is pushed straight onto the network, and the de-dupe process runs at the storage end, where it can match data and remove duplicates with ease.

For all its advantages, post-process de-duplication is not without flaws. It is bandwidth intensive, so if the amount of data in an enterprise is growing exponentially, target de-duplication may not be the best option. In addition, before scheduled post-process de-duplication can begin, large storage disk arrays must be provisioned to hold the transmitted data in full, and this extra capacity comes at additional expense.

On the other hand, the costs of source de-duplication, such as redesigning proprietary software to accommodate the demands of the de-duplication devices and process, and installing de-duplication hardware at all the connecting nodes, can make technologies based on target de-duplication the more cost-effective choice. Conversely, if the cloud service provider partnering with the enterprise charges fees based on bandwidth usage, source de-duplication may be the more attractive option.

Companies must therefore determine which kind of de-duplication process will work best for them. Before selecting a de-duplication process, enterprises need to consider the volume of data, the availability of bandwidth, the cost of bandwidth, and many other important factors. Determining the best fit for an enterprise is not an easy exercise.

What is Pre-Process De-Duplication?

Pre-process de-duplication is more commonly known as source de-duplication. It de-duplicates data before that data is transmitted to the storage device: all data is channelled through the source de-dupe software or hardware before being sent on to the storage device where it will be kept. The main objective of source de-duplication is to prevent duplicated data from being sent across the network to the storage device at all. A connection is established with the designated storage device, and data is evaluated before the de-duplication process begins. Synchronisation with the target disk is maintained throughout the process, so that files which match data already on the target are removed at the source. The main advantage of this is that it saves bandwidth for the user.

To identify changed bytes, the source de-dupe software or hardware performs byte-level scans. Only the changed bytes are transferred to the destination or target device, with the original indexes and files updated with pointers, which makes recovery easy for the user. The whole operation happens quickly, without compromising the accuracy or efficiency of the process, and source de-dupe is light on processing power compared with post-process de-dupe. Source de-duplication can also categorise data in real time: policy-based device configurations can classify data at granular levels and filter data as it passes through the source de-dupe device. Files can be added or removed on the basis of group, domain, user, owner, age, path, file type, or storage type, or even on the basis of RPO or retention periods.
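The source-side filtering described above can be sketched as follows: the client keeps an index of chunk hashes that is synchronised with the target, and only chunks whose hashes are not in that index cross the network; everything else is sent as a pointer. The names and the flat set-based index are assumptions for illustration, not a specific product's protocol.

```python
import hashlib

def source_dedupe(chunks, target_index):
    """Filter chunks at the source so only unseen data crosses the network.

    `target_index` is the set of chunk hashes the target already holds,
    kept in sync as described above. Returns the chunks that must be
    transmitted and the pointer list the target uses to rebuild the file.
    Illustrative sketch only.
    """
    to_send = []
    pointers = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in target_index:
            target_index.add(digest)   # keep the index in sync
            to_send.append(chunk)      # new data: must cross the wire
        pointers.append(digest)        # matches travel as pointers only
    return to_send, pointers

index = set()
sent, ptrs = source_dedupe([b"a", b"b", b"a"], index)
```

In this run only two of the three chunks are transmitted, which is exactly where the bandwidth saving comes from.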

Against these advantages, source de-dupe has some disadvantages. While source de-duplication does decrease the bandwidth needed to transmit data or files to the destination or target, it imposes a higher processing load on the clients, since the entire process runs at the source. The central processing unit (CPU) power consumption of the device can rise by about 25% to 50% during the source de-duplication process, which may not be favourable at all. It may also be necessary to incorporate source-based de-dupe nodes at each connected location, which involves more cost and will obviously be more expensive than target de-duplication techniques, where all the de-duplication is carried out on a single de-duplication device at the network's nodal point.

Lastly, if the existing software does not support the de-duplication hardware or algorithms, the software may need to be redesigned. This is not a problem in target de-duplication, where the de-dupe hardware and software are isolated from the organisation's own hardware and software, and no changes are needed at the source.

Making Use of SOX as a Regulatory Intervention

Sarbanes-Oxley (SOX) is among the regulatory intervention controls that rigorously promote internal control functions over finance, and it makes those finance controls very transparent. It stresses the connection between risk management and internal audit: the controls are responsible for providing the required assurance that management is identifying and practically justifying all the possible risks that may arise from internal systems, business operations, and organisational structure. SOX drives management readiness and the evaluation of controls in an organisation, which is why Sarbanes-Oxley is among the most recognised interventions used in organisations.

Because of these benefits, industries that act in accordance with SOX's terms put more effort into identifying the risks in their organisation and thoroughly checking them. Very little or no human intervention is required, as the processes and internal controls are automated. The process involves documenting, evaluating, and standardising controls across the entire enterprise for overall enhancement and effectiveness, and automated control measures are established to monitor continuous improvement and effectiveness.

Nevertheless, many organisations that have used the system have raised SOX's notable lack of regulatory guidance. The result is a reliance on limited, purpose-driven applications, leaving information siloed, fragmented, and scattered across organisations. This has led to deficiencies in controls, with only about 20-50% of controls automated, while manual controls have proved expensive and more labour intensive. Workflows are often uncoordinated, and poor coordination between external auditors and internal compliance teams leads to higher consulting costs.

For enablement and reliability, therefore, a content-based automated system is recommended. An enterprise's ability to standardise controls, document content, and effectively manage the compliance process is the key to a complete enhancement of controls. Collaborative content and document management systems are therefore critical sources for SOX compliance. These systems enable rule-based security models and workflow management, which monitor and sign off on assigned tasks across the organisation. User management systems also improve preventive controls and enforce segregation of duties. Together, these help guarantee the sustainability, integrity, and persistence of the data the enterprise generates, uses, and stores for SOX compliance.
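One of the automated preventive controls mentioned above, segregation of duties, amounts to checking that no single user performed both halves of a conflicting duty pair. The duty names and log format below are hypothetical, invented purely to illustrate the check; a real control catalogue would supply the conflicting pairs.

```python
def violates_segregation(task_log, conflicting_pairs):
    """Flag users who performed both halves of a conflicting duty pair.

    `task_log` maps each user to the set of duties they performed;
    `conflicting_pairs` lists duties one person must not combine
    (e.g. creating and approving the same payment). Sketch only.
    """
    violations = []
    for user, duties in task_log.items():
        for first, second in conflicting_pairs:
            if first in duties and second in duties:
                violations.append((user, first, second))
    return violations

log = {"alice": {"create_payment", "approve_payment"},
       "bob": {"approve_payment"}}
rules = [("create_payment", "approve_payment")]
# alice is flagged because she both created and approved; bob is not
```

Running this kind of rule continuously, rather than during a quarterly manual review, is what makes the control preventive instead of merely detective.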

Pulling off SOX is complicated, and it can only be achieved by integrating technologies to work in concert so that risk is managed effectively. What it requires is hard work, and nothing less than that.

What Are the Environmental Standards for a Data Centre?

If you are assigned a design project for a data centre, it is essential to follow environmental standards to develop ideal conditions in its offices and buildings. Five environmental standards are essential for data centres:

Temperature Control
Researchers at the University of Toronto* found that servers do not necessarily sweat at higher temperatures; in fact, running warmer can cut the energy consumed by cooling equipment. This finding supports the position of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)**, which widened its recommended data centre temperature range from 20-25 degrees Centigrade to 18-27 degrees Centigrade. With the right type of server hardware, the effects of temperature on equipment are easy to control: a new variety of heat-resilient hardware has recently hit the market, capable of working without problems in an environment 10 degrees warmer.

Humidity Control
Keeping humidity under control is a foremost priority for data centres. A gaggle of CRAC (computer room air conditioning) units is ideal for maintaining constant airflow through the room: the units pull in heat, cool the air, and push it out through vents towards the servers. ASHRAE suggests a maximum humidity level of 60%, with a dew point of 5 to 15 degrees Centigrade. When the humidity level is too high, condensation forms and can damage server hardware, so data centre designers need to build systems that can detect water and humidity in the area surrounding the equipment.

Monitoring of Static Electricity
Static electricity is one of the most serious threats in the data centre environment: a discharge of 25 Volts or less can partially or completely damage sensitive IT equipment***. Left unresolved, the issue can cause problems such as system crashes, dropped calls, and data corruption. To monitor static electricity, dedicated equipment is usually installed at the positions where static charges are largest.

Fire Control
Fire control is one of the essential environmental standards of a data centre. In early 2015, the e-commerce giant Amazon became a fire victim in Virginia****: the fire started on a rooftop and, as it turned out, no injuries or damage to Amazon Web Services were reported. A fire protection system is part of the data centre standards, and for peace of mind it must be checked on a regular basis to make certain it will work in an emergency.

Systems for Physical Security
Physical security is an important part of environmental standards. Operators are directed to design a plan that keeps unauthorised parties away from server rooms, buildings, and racks. Deploying security guards is not enough: there must also be IP surveillance systems and up-to-date sensors that inform the relevant personnel when an unauthorised party enters the building or reaches the server racks.

Data Centre Centralized Environment
Environmental standards make monitoring a data centre a challenging task. Centralised systems that incorporate server-management applications and software, and that can oversee all of these features from one central location, are much preferred.
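The centralised monitoring described above boils down to comparing sensor readings against allowed ranges and raising alerts. A minimal sketch, using the temperature range quoted earlier in this article and a 60% humidity ceiling; the sensor names and thresholds here are illustrative assumptions, not values from any real monitoring product.

```python
# Thresholds taken from the article: 18-27 degrees C recommended
# temperature, humidity capped at 60%. Sensor names are made up.
LIMITS = {"temp_c": (18.0, 27.0), "humidity_pct": (0.0, 60.0)}

def check_readings(readings):
    """Compare each sensor reading with its allowed range; collect alerts."""
    alerts = []
    for sensor, value in readings.items():
        low, high = LIMITS[sensor]
        if not (low <= value <= high):
            alerts.append(f"{sensor}={value} outside {low}-{high}")
    return alerts

# A hot aisle trips the temperature alert; humidity is fine.
alerts = check_readings({"temp_c": 29.5, "humidity_pct": 55.0})
```

A real centralised system would poll many such sensors per rack and forward the alerts to one console, but the per-reading check is no more complicated than this.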

* —
** —
*** —
**** —

Facts Behind Re-Purposing Outdated Hardware

Hardware failure is not a day-to-day issue for most users, but if it happens constantly, you need to think about it seriously, because hardware failure can cost you important files and folders. Surveys of hardware failure in IT departments have found that:

In the past, almost 99% of IT experts have faced trouble due to hardware failure;
At the time of data loss, over 70% of IT professionals were providing services to clients;
80% are of the view that the hard drive is the basic cause of hardware failure.

Hardware failure affects business continuity, and many companies are using old hardware, which can cause it. It is worth asking why computer users hold on to old hardware. When a piece of equipment reaches the end of its expected life span without breaking down, why is it used until it stops working completely? How long do you expect hardware to keep working? And how do you use old equipment?

IT professionals state that re-purposing hardware offers a number of benefits to users, but at the same time there are some drawbacks. If you have made up your mind to re-purpose hardware, consider these facts:

Time Span for Hardware
IT professionals agree that it is fine to re-purpose a machine that has been in use for a few years, provided that parts such as the thermal paste, battery, RAM, and fans are swapped. Hardware that is five years old is good for home use, for example as a media server. Conversely, a much older machine will create trouble if it is re-purposed for production; IT experts recommend that old hardware should not be used for production machines.

What is the Purpose?
Most users select old hardware for either testing or storage. As far as storage is concerned, old hardware can be used for backups as well as for shared storage; whatever your target, it is wise to keep a copy of backup data to use in critical situations. Old equipment is also good for testing purposes in laboratories, for virtualisation, training, or any other purpose. Hardware being old does not mean it is useless.

Consideration before Re-purposing Hardware
Everyone wants to benefit from hardware for as long as it is in working condition, and nobody wants to throw equipment away just because it has passed its expected life span. Compared with other equipment, however, computer hardware is superseded quickly, so users should weigh whether re-purposing, repairing, or refurbishing is worth the trouble. IT professionals note that repair and ownership costs can turn into a waste of time and effort.

Recycling is another form of re-purposing hardware. Visit local electronics recyclers that accept old computer hardware, or try to design something new using the old parts; you can find interesting projects online for old computer components.*

* –

Importance of a Reliable Help Desk System for a Managed Service Provider Business

The help desk is the backbone of Information Technology: it streamlines a wide range of system management procedures and provides a platform to meet customers' requirements. From task management to client satisfaction, a help desk gives great benefits to MSPs.

Effective Management of Tasks
An effective help desk system improves customer service by making tasks easy and uncomplicated. Task management offers a variety of features that help MSPs design and schedule tasks, share them, and assign them to others. In other words, the help desk gives an insight into who is performing what kind of job, so management has visibility of all tasks and can hand workload over to staff.

Ideal Management of Time
A customer service environment becomes disorganised when a lot of inquiries arrive from customers, and when management and staff cannot answer these queries on time, clients get frustrated. Through its ticketing and tracking capability, a reliable help desk tool reduces the time teammates lose discussing cases and keeping managers informed about them.
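The ticketing and tracking capability described above can be reduced to a small data model: every inquiry becomes a ticket with an id, an owner, and a status, so anyone can see who is working on what without a round of meetings. This is a toy sketch with invented class and method names, not the API of any real help desk product.

```python
from dataclasses import dataclass, field
from itertools import count
from typing import Optional

_ticket_ids = count(1)

@dataclass
class Ticket:
    subject: str
    assignee: Optional[str] = None
    status: str = "open"
    id: int = field(default_factory=lambda: next(_ticket_ids))

class HelpDesk:
    """Toy ticket queue showing assignment and tracking. Sketch only."""

    def __init__(self):
        self.tickets = []

    def open_ticket(self, subject):
        ticket = Ticket(subject)
        self.tickets.append(ticket)
        return ticket

    def assign(self, ticket, engineer):
        ticket.assignee = engineer   # management can see who owns what

    def close(self, ticket):
        ticket.status = "closed"     # closing could trigger a survey

    def workload(self, engineer):
        # open tickets currently assigned to one staff member
        return [t for t in self.tickets
                if t.assignee == engineer and t.status == "open"]

desk = HelpDesk()
ticket = desk.open_ticket("VPN down for client site")
desk.assign(ticket, "dana")
```

Because the assignment lives on the ticket itself, a manager querying `workload` gets the answer instantly instead of interrupting the team, which is the time saving the article is pointing at.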

Customer Response
Every company wants great customer feedback to make an impression on new clients, but you cannot satisfy every client with your services. Instead of getting annoyed at negative comments, turn them into something positive. A help desk has a specific system for gathering customer feedback in the form of a survey; various help desk systems can organise feedback in email or web formats, and many are designed to deliver the survey automatically when a ticket is closed.

Helping Clients to Help Themselves
Self-service is one of the most remarkable features of a help desk. It gives clients an easy way in to issues and resources: before submitting a ticket, the client can check the service provider's web portal to find answers to their questions. This can be something like a Frequently Asked Questions (FAQ) page offering self-service information, which reduces the number of tickets and the workload on staff. Clients also give more satisfactory feedback because they receive an instant response. Knowledge management is the latest useful mode in customer service.

Get Support in Upcoming Years
In an infrastructure based on data management, the role of the help desk system is inescapable. For instance, MSPs collect information and past complaints in the system to build a best-practices database that guides staff in handling frequent issues. It also gives the administrator a chance to check the number of resources required to solve issues, and the ability to scrutinise resources proves helpful when the company needs to set the budget for the next financial year.

Impression of Clients
Clients' impression of your business becomes a reality that you cannot ignore, because word of mouth (WOM) is powerful. Even if you are delivering satisfactory managed services, delays in customer service will affect your standing in the business world. Use a help desk system to respond instantly to customers' tickets and develop their confidence in your quality services.
