All posts by Callum Huddlestone

Effective Business Case for Tiered Cloud Storage

One of the most useful parts of the cloud economics discussion is weighing real money saved against assumed risk. Advocates of cloud computing will struggle to make a business case for the cloud if they cannot quantify how much real money a move to the cloud will save their enterprise; in other words, they need to know the real ROI before making a decision.

The common assumption is that cloud computing is less expensive, and in most comparisons with traditional computing it is. The difficulty is demonstrating that it is cheaper. Data centre costs fall into two categories, direct and indirect, and cover floor space, power, IT operations, storage, scalability, networking and more. Before you can price cloud computing against a data centre, you need to express each of those items in financial terms. On the other side of the coin, cloud storage services are typically priced per GB per month, with bandwidth usage as the only other associated cost; none of the data centre costs listed above are visible to the user.
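
A rough back-of-the-envelope comparison makes the point. The sketch below is a minimal illustration only: the per-GB and indirect-cost figures are hypothetical placeholders, not quoted prices, and should be replaced with your provider's actual rates and your own data centre numbers.

```python
# Back-of-the-envelope comparison of cloud storage cost against an in-house
# figure. All prices here are hypothetical placeholders.

def monthly_cloud_cost(stored_gb, egress_gb,
                       price_per_gb=0.02, price_per_gb_egress=0.09):
    """Cloud storage is priced per GB stored per month plus bandwidth used."""
    return stored_gb * price_per_gb + egress_gb * price_per_gb_egress

def monthly_on_prem_cost(stored_gb, direct_cost_per_gb=0.03,
                         indirect_monthly=4000.0):
    """In-house cost: a direct per-GB figure plus indirect costs
    (floor space, power, IT operations) spread across the month."""
    return stored_gb * direct_cost_per_gb + indirect_monthly

if __name__ == "__main__":
    stored, egress = 50_000, 2_000  # 50 TB stored, 2 TB downloaded per month
    print(f"Cloud:   ${monthly_cloud_cost(stored, egress):,.2f}/month")
    print(f"On-prem: ${monthly_on_prem_cost(stored):,.2f}/month")
```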

To build a case for saving real money in the cloud, you first need to define appropriate uses for cloud-based data storage. Some data is suitable for storage in the cloud and some is not. To make an effective business case for tiered cloud storage in monetary terms, the following questions must be answered:

What performance is needed to store each category of data?
What are the performance requirements in terms of platform construction, deployment, application behaviour, time to completion and compute rates?
What are the monetary costs and specific resource requirements of creating and deploying an application on each platform?
Can cloud computing platforms be used alongside local and offsite systems to improve cost effectiveness?

Security compliance is largely built into the cloud, and legal discovery becomes easier, so do not overlook the real money saved by avoiding legal action and by improving compliance with legal mandates. That said, cloud infrastructure and services need to be monitored constantly. Although cloud services reduce up-front cost by spreading expenditure over months or even years, CIOs need to watch for creeping charges in "pay as you go" pricing. A company will struggle to keep its cloud storage costs low if it does not remove unwanted data as it accumulates.
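
On that last point, identifying stale data is the kind of housekeeping that can be automated. The sketch below is a minimal illustration: the directory path and the 180-day threshold are assumptions for the example, not recommendations.

```python
# Walk a staging directory and flag files not modified for a given number of
# days as candidates for archival to a cheaper tier or for deletion.
import os
import time

STALE_AFTER_DAYS = 180  # illustrative threshold

def stale_files(root, stale_after_days=STALE_AFTER_DAYS):
    cutoff = time.time() - stale_after_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path

if __name__ == "__main__":
    for path in stale_files("/srv/cloud-staging"):  # hypothetical path
        print("candidate for archival/removal:", path)
```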

Making Use of SOX as a Regulatory Intervention

Sarbanes-Oxley (SOX) is a regulatory intervention that rigorously promotes internal controls over financial reporting and makes those controls transparent. It stresses the connection between risk management and internal audit, and the controls are responsible for providing assurance that management is identifying, and can practically justify its handling of, all the risks that may arise from internal systems, business operations and organisational structure. SOX drives management readiness and the evaluation of controls in an organisation, which is why it is among the most widely recognised regulatory interventions in use.

Because of these benefits, organisations that comply with SOX put more effort into identifying the risks in their business and examining them thoroughly. Little or no human intervention is required once processes and internal controls are automated. The work involved runs from documentation and evaluation through to standardisation of controls across the whole enterprise, and automated control measures are established to drive continuous improvement and overall effectiveness.

Nevertheless, organisations that have implemented SOX frequently point to its lack of detailed regulatory guidance. The result is a reliance on narrow, purpose-driven applications, leaving information siloed, fragmented and scattered across the organisation. This has led to control deficiencies, with only around 20-50% of controls automated, while the remaining manual controls have proved expensive and labour intensive. Workflows are often uncoordinated, and poor coordination between external auditors and internal compliance teams drives up consulting costs.

For reliability, then, an automated, content-based system is recommended. An enterprise's ability to standardise controls, document content and manage the compliance process effectively is central to strengthening controls overall. Collaborative content and document management systems are therefore critical for SOX compliance. These systems enable rule-based security models and workflow management that monitor and sign off on assigned tasks across the organisation. User management systems also improve preventive controls and enforce segregation of duties. Together they help guarantee the sustainability, integrity and persistence of the data the enterprise generates, uses and stores for SOX compliance.
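
To make the rule-based idea concrete, here is a minimal sketch of a segregation-of-duties check: the same user cannot both raise and approve an item. The task structure and role names are illustrative assumptions, not a SOX-mandated design.

```python
# Minimal segregation-of-duties check: the approver must not be the originator.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    item_id: str
    raised_by: str
    approved_by: Optional[str] = None

def approve(task: Task, approver: str) -> Task:
    """Refuse approval when the approver is the person who raised the item."""
    if approver == task.raised_by:
        raise PermissionError(f"{approver} cannot approve an item they raised")
    task.approved_by = approver
    return task

if __name__ == "__main__":
    t = Task(item_id="JRNL-0042", raised_by="alice")
    approve(t, "bob")                     # allowed
    try:
        approve(t, "alice")               # blocked by the rule
    except PermissionError as err:
        print("blocked:", err)
```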

It is important to recognise that pulling off SOX compliance is complicated and can only be achieved by integrating these technologies so they work together to manage risk effectively. What that takes is hard work, and nothing less.

What Are the Environmental Standards for a Data Centre?

If you are assigned a design project for a data centre, it is essential to follow environmental standards for creating ideal conditions in its offices and buildings. There are five environmental standards necessary for data centres:

Temperature Control
Researchers at the University of Toronto* found that servers do not necessarily suffer at higher temperatures; in fact, running warmer can cut the energy consumed by cooling equipment. This finding supports the decision by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)** to widen its recommended data centre temperature range from 20-25 degrees Centigrade to 18-27 degrees Centigrade. With the right server hardware, the effects of temperature on equipment are easier to manage: a new generation of heat-resilient hardware has reached the market that can run without problems in environments up to 10 degrees warmer.

Humidity Control
Keeping humidity under control is a top priority for data centres. A bank of CRAC (computer room air conditioning) units keeps air flowing constantly through the room: the units pull in hot air, cool it and push it back out through vents towards the servers. ASHRAE suggests a humidity level of around 60% for temperatures of 5 to 15 degrees Centigrade. When humidity is too high, condensation forms and the resulting water can damage server hardware, so data centre designers need systems that can detect water and excess humidity around the equipment.
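
As a simple illustration of what such a detection system checks, the sketch below tests one sensor reading against the figures quoted in this post (the 18-27 degrees Centigrade range and a 60% humidity ceiling). The alerting is just a print statement; a real system would page an operator.

```python
# Check one environmental reading against the thresholds quoted above.
TEMP_RANGE_C = (18.0, 27.0)   # ASHRAE recommended range
MAX_HUMIDITY_PCT = 60.0       # humidity ceiling quoted above

def check_reading(temp_c, humidity_pct):
    """Return a list of alert strings for one sensor reading."""
    alerts = []
    low, high = TEMP_RANGE_C
    if not low <= temp_c <= high:
        alerts.append(f"temperature {temp_c:.1f}C outside {low}-{high}C")
    if humidity_pct > MAX_HUMIDITY_PCT:
        alerts.append(f"humidity {humidity_pct:.0f}% above {MAX_HUMIDITY_PCT:.0f}%")
    return alerts

if __name__ == "__main__":
    print(check_reading(29.5, 65))   # both thresholds breached
    print(check_reading(22.0, 45))   # [] - within limits
```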

Monitoring of Static Electricity
Static electricity is one of the most serious threats in the data centre environment. A discharge of 25 volts or less can partially or completely damage sensitive IT equipment***. Left unresolved, the problem can lead to system crashes, dropped calls and data corruption. To keep static electricity in check, monitoring equipment is usually installed at the points where static charges are largest.

Fire Control
Fire control is another essential environmental standard for a data centre. At the beginning of 2015, a fire broke out on the roof of an Amazon Web Services facility in Virginia****; no injuries or damage to Amazon Web Services were reported. A fire protection system is part of the data centre standards, and for peace of mind it must be tested regularly to make certain it will work in an emergency.

Systems for Physical Security
Physical security is an important part of the environmental standards. Operators are expected to have a plan that keeps unauthorised people away from buildings, server rooms and racks. Security guards alone are not enough: there must also be IP surveillance systems and modern sensors that alert the relevant personnel when an unauthorised party enters the building or accesses a server rack.

Data Centre Centralized Environment
Environmental standards make monitoring a data centre a challenging task. A centralised system that incorporates server management applications and software, and that can oversee all of these factors from one central location, is much preferred.

* — http://goo.gl/zCC7j
** — https://goo.gl/9rzz3q
*** — http://goo.gl/ofpzrk
**** — http://goo.gl/ewUje5

Best Open Source Tools for Linux Administrators

System administrators rely on a particular set of tools to meet their requirements. In the Linux environment, there are many open source tools that help achieve those goals. Some of them are described below:

Adminer
Linux administrators often act as database administrators too. Adminer makes data management effective and stress-free. It is a user-friendly tool that performs well and offers better security than many similar tools. Another factor that has made Adminer a first choice for administrators is its compatibility with MySQL, PostgreSQL, SQL Server and Oracle Database.

BleachBit
Admins are required to keep sensitive data secure and confidential at all times. BleachBit, often described as a CCleaner alternative for Linux, is a valuable open source tool that has earned a reputation for its ease of use. It helps system administrators free disk space and wipe out sensitive data. BleachBit can thoroughly clean files, folders, web browser caches and databases without removing the important files the system needs to function.

FileZilla
It is part of a Linux administrator's job to upload files to and download files from web servers. When transferring files is a day-to-day duty, you need a dependable tool for it. Compared with other applications, FileZilla is a lightweight, robust and reliable app that supports protocols such as FTP, SFTP and HTTP and works over IPv6. Its built-in file management makes transfers trouble-free: administrators can queue files, pause and resume transfers, and edit files.

Open Source Puppet
This tool supports Linux administrators in all sorts of jobs. For instance, it makes it easy to arrange cron jobs across different Linux systems without setting them up manually. Open Source Puppet can run checks to make certain that system services such as Apache are running, and it can take the initiative to start services that have stopped. Its centralised approach is especially useful when administrators need to perform mind-numbing tasks such as running the same commands on many systems: instead of repeating a command on each machine, you declare it once and Puppet keeps applying it on every managed system. As a result, administrators save a great deal of time and are rid of the monotony as well.
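
Puppet itself expresses this desired state in its own manifest language; the sketch below only illustrates the underlying idea in plain Python: check that a service is running on a list of hosts and start it if it is not. The host names are placeholders, and it assumes passwordless SSH access and systemd on each host.

```python
# Ensure a service is running on each host, starting it over SSH if needed.
import subprocess

HOSTS = ["web01.example.com", "web02.example.com"]  # hypothetical hosts
SERVICE = "apache2"

def ensure_running(host, service):
    """Start the service if `systemctl is-active` reports it is down."""
    check = subprocess.run(
        ["ssh", host, "systemctl", "is-active", "--quiet", service])
    if check.returncode != 0:
        print(f"{host}: {service} is down, starting it")
        subprocess.run(["ssh", host, "sudo", "systemctl", "start", service],
                       check=True)
    else:
        print(f"{host}: {service} already running")

if __name__ == "__main__":
    for h in HOSTS:
        ensure_running(h, SERVICE)
```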

Webmin
Administrators turn to Webmin when managing a Linux distro in the web sphere. Webmin was originally designed for UNIX, but it is also used on BSD and Linux systems to simplify management. Its streamlined interface makes everything convenient, from user accounts to domain management, and it is often used as an alternative to Plesk and cPanel.

Though there are a number of other open source tools that are great for Linux administrators, the five above are hard to beat. Do you have a favourite Linux tool? If so, please share it with us in the comments below.

Four-Tier System to Define Data Centre Standards

A data centre is a cluster of computer servers used for the storage and distribution of data. It provides security, connectivity and centralised storage options, and it is best for businesses that need plenty of networking equipment and access to more than one server at a time. Not all data centres provide the same features. Data centre standards are described through a four-tier system, a classification method developed in 1995 by the Uptime Institute to differentiate facilities by design features such as redundancy, building features and guaranteed uptime. The classification helps potential tenants select facilities that can meet their demands, compliance needs and operational requirements. The Uptime Institute states that 500 facilities are certified across 66 countries against these standards. In this article, we describe the worldwide standards shared across all four tiers.

Tier I
Tier I data centres offer the bare minimum in terms of infrastructure: a single path for power and cooling, so the failure of one component means downtime. Tier I facilities provide a minimum of 99.671% uptime, which translates to about 29 hours of downtime per year. That is suitable for a small business, but not for an enterprise that cannot tolerate interruption to its operations.

Tier II
Just like Tier I, a Tier II data centre has a single path for power and cooling, but the facility adds backup generators and cooling systems to cover a power outage. Its uptime of 99.741% corresponds to roughly 22 hours of downtime per year, making it a more dependable system than Tier I.

Tier III
In a Tier III data centre, redundancy is the main emphasis. It is built on an N+1 infrastructure: the facility has all the equipment it needs to operate normally plus a backup of every component. Tier III uptime is 99.982%, which works out to about 1.6 hours of downtime per year, so disruption during maintenance is negligible; that makes it well suited to large companies.

Tier IV
Tier IV provides a completely redundant, fault-tolerant environment that keeps running even when a component fails. Its N+2 infrastructure means backup generators, UPS systems and two additional cooling systems. The estimated uptime is at least 99.995%, which amounts to roughly 0.4 hours (about 26 minutes) of downtime per year, even less disruption than a Tier III facility. Such facilities are usually chosen by tech giants, government bodies and other large organisations that need maximum availability.
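
The downtime figures quoted for each tier follow directly from the uptime percentages: downtime equals (1 - uptime) multiplied by the hours in a year. A quick check in Python:

```python
# Derive annual downtime hours from each tier's guaranteed uptime.
HOURS_PER_YEAR = 24 * 365  # 8760

TIER_UPTIME = {
    "Tier I": 0.99671,
    "Tier II": 0.99741,
    "Tier III": 0.99982,
    "Tier IV": 0.99995,
}

for tier, uptime in TIER_UPTIME.items():
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_hours:.1f} hours of downtime per year")

# Prints roughly 28.8, 22.7, 1.6 and 0.4 hours respectively.
```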

Conclusion
Whatever the uptime and downtime ratios, not every company needs round-the-clock availability to achieve its goals. Comparing the four tiers, Tier IV looks like the ideal, bulletproof, completely redundant system, while Tier I is the reasonably priced option. Companies should weigh their IT requirements against their budget when selecting a data centre, so they get a consistent solution at a low operational cost.

High Traffic Environments: Backing up your Server in the Cloud

Server backup can be tricky. The backup cloud must support all kinds of operating systems and applications, and it should be remembered that servers operate in high-traffic environments: they are permanently connected to the remote backup server in the cloud via the Internet while continuously serving the data requirements of end users. The server platform may include any flavour of Windows, Linux, Unix, Mac OS X, AS/400 or Solaris, and applications such as Exchange/Outlook, Lotus Notes/Domino, GroupWise, SQL Server, MySQL, PostgreSQL, Oracle, IBM DB2, SharePoint and more.

Agentless server backup is preferable. Agentless software requires only a single instance of the agent to be installed on the network; all servers connected to that network can then be identified, and the agent configured to accept backup requests from them. The entire backup process can be managed and controlled from a single console.

IT administrators can schedule backups and decide on their granularity. For instance, an administrator can decide that Windows and Linux systems must be backed up continuously while other systems are backed up on a fixed schedule. A single backup job can cover all the data on a server, a share, a volume or a directory, or just a single file or registry. Backups can also be scheduled to start and stop within a specific backup window.
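
One way to picture the choices an administrator makes is sketched below: scope, granularity and backup window. This is purely illustrative and is not the product's actual configuration format; the paths and window times are made up.

```python
# Illustrative representation of backup policies: scope, mode and window.
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    name: str
    scope: str            # e.g. a whole server, a volume, a directory, one file
    mode: str             # "continuous" or "scheduled"
    window: str = ""      # used only for scheduled jobs, e.g. "01:00-05:00"

POLICIES = [
    BackupPolicy("linux-fileservers", scope="/srv", mode="continuous"),
    BackupPolicy("windows-app-servers", scope="D:\\Data", mode="continuous"),
    BackupPolicy("legacy-unix", scope="/export/home", mode="scheduled",
                 window="01:00-05:00"),
]

for p in POLICIES:
    when = p.window if p.mode == "scheduled" else "continuous"
    print(f"{p.name}: back up {p.scope} ({when})")
```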

Multiple servers can be backed up in parallel. A multi-threaded backup application can process large volumes of data and support concurrent backup and restore requests; the only limiting factor is the amount of bandwidth available to your organisation.

The data being backed up can be compressed and deduplicated using tried and proven compression algorithms. Lossless algorithms reduce the size of the raw data transmitted from the server, and common-file elimination ensures that the same file is never transmitted twice. File versions can be stored with appropriate version numbers for ease of discovery, and retention policies can ensure that only relevant, active files remain on expensive storage media while everything else is relegated to cheaper storage to save on costs.
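
As a minimal sketch of common-file elimination, the example below hashes each file and skips any content already seen. Real backup products use more sophisticated block-level deduplication with a server-side index; the path here is a placeholder.

```python
# Skip uploading files whose content has already been backed up (by hash).
import hashlib
import os

seen_hashes = set()  # in practice this index lives on the backup server

def file_digest(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_to_upload(root):
    """Yield only files whose content has not already been backed up."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            if digest not in seen_hashes:
                seen_hashes.add(digest)
                yield path

if __name__ == "__main__":
    for path in files_to_upload("/srv/data"):  # hypothetical path
        print("upload:", path)
```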

Backup Technology's Backup for Servers is high-performance software designed to operate efficiently in high-traffic environments, with many of the features discussed above. For more information, please visit http://www.backup-technology.com
