All posts by Stewart Parkin

File-Sharing Services and Enterprise Strategies to Avoid Security Lapses

File-sharing services offer real benefits to organisations, but a single security breach can cost a company thousands of dollars. Careless employees can leak sensitive client information, leaving the company liable for penalties under regulations such as HIPAA. As a result, file-sharing services are often seen as a major threat to data security. Even companies that take security measures struggle to share files safely with end users, partly because they lack visibility into how employees actually use these services.

Why Are File-Sharing Services Used?
Companies find file-sharing services useful when they need to transfer documents to clients on a regular basis. Where email imposes size and attachment limits, content managers find file-sharing a reliable way to collaborate with developers, freelance writers, vendors and designers. These platforms are inexpensive and make it possible to communicate and share information at any time. Many services offer a free basic starter package and let you buy additional storage at affordable rates.

Drawbacks
Enterprises require security features that many file-sharing services lack. For instance, some solutions do not encrypt data, and because file-sharing is largely public in nature, confidential files can end up in the wrong hands. The convenience of file-sharing services comes at the cost of consistency, and their anywhere-anytime accessibility often leads to compliance issues.
The most disconcerting drawback is the loss of trust that end users place in a company's services. The problem may be caused by a careless, newly appointed employee who is not an expert and deletes important files or entire folders by mistake. Improper management of data is the most serious threat, as it opens the way for malware and hacker attacks.

Ways to Tackle File-sharing Issues
Because file-sharing has so many disadvantages, enterprises are taking a strict approach to handling file-sharing services. That is why many companies do not allow their employees to access mainstream file-sharing platforms at all. When companies pay penalties for compliance and security breaches caused by nothing more than an employee's slip-up, such an extreme approach becomes reasonable.

There are three basic ways to deal with file-sharing issues in the enterprise:

Careful Selection
Is it necessary for organisations to embrace the latest file-sharing services? Cloud services are not right for every organisation, and only some enterprises genuinely benefit from them. Careful selection of cloud services is essential when an organisation needs a solution that fits its business and compliance requirements. Security features, cost, privacy settings, storage space and maximum upload size are some of the factors to weigh before choosing a cloud service.
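As a rough illustration of that weighing-up, the sketch below scores hypothetical providers against weighted criteria. The criteria, weights and scores are assumptions made purely for illustration, not an assessment of any real service.

```python
# Hypothetical weighted scorecard for comparing file-sharing / cloud services.
# Criteria, weights and candidate scores are illustrative assumptions only.

WEIGHTS = {
    "security_features": 0.30,
    "privacy_settings":  0.20,
    "cost":              0.20,
    "storage_space":     0.15,
    "max_upload_size":   0.15,
}

# Scores from 1 (poor) to 5 (excellent), assigned by the evaluating team.
CANDIDATES = {
    "Provider A": {"security_features": 4, "privacy_settings": 3, "cost": 5,
                   "storage_space": 4, "max_upload_size": 3},
    "Provider B": {"security_features": 5, "privacy_settings": 5, "cost": 2,
                   "storage_space": 3, "max_upload_size": 4},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single weighted figure."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

if __name__ == "__main__":
    ranked = sorted(CANDIDATES.items(),
                    key=lambda kv: weighted_score(kv[1]),
                    reverse=True)
    for name, scores in ranked:
        print(f"{name}: {weighted_score(scores):.2f}")
```

The weights simply encode which criteria matter most to the organisation; adjusting them to match your own business and compliance priorities changes the ranking.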

Customise Business Solutions
Do you need a service like Google Drive or Dropbox, but more secure and reliable? Such a platform can be built as a customised solution. Tailored business solutions support file-sharing and teamwork while keeping mission-critical data secure within the enterprise. Not every organisation has the resources to build a customised solution in-house, but those that do not can hire a provider to deliver one on demand.

Block File Storage Platforms
To avoid security breaches, IT managers should block unauthorised file-sharing platforms. Administrators can configure network hardware and firewalls to block the platforms' IP addresses.
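As a minimal sketch of that idea, the snippet below resolves a couple of example file-sharing domains to their current IPv4 addresses and prints matching firewall rules (Linux iptables syntax is assumed). The domain list is illustrative, and in practice the rules would be applied through the organisation's own firewall or proxy management tools rather than a script like this.

```python
# Sketch: turn a blocklist of file-sharing domains into firewall rules.
# Domains are examples only; iptables syntax on a Linux gateway is assumed.
import socket

BLOCKED_DOMAINS = ["www.dropbox.com", "drive.google.com"]  # illustrative list

def resolve(domain: str) -> set[str]:
    """Return the set of IPv4 addresses a domain currently resolves to."""
    try:
        infos = socket.getaddrinfo(domain, 443, family=socket.AF_INET)
        return {info[4][0] for info in infos}
    except socket.gaierror:
        return set()  # domain did not resolve

if __name__ == "__main__":
    for domain in BLOCKED_DOMAINS:
        for ip in sorted(resolve(domain)):
            # Rules are printed for review; an administrator applies them deliberately.
            print(f"iptables -A OUTPUT -d {ip} -j DROP  # {domain}")
```

Because cloud services change their IP ranges frequently, blocking by domain or category at the DNS or proxy layer is usually more robust than maintaining static IP rules.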

 

Shall We Replace or Upgrade the Company Server?

Upgrades are necessary to prolong the lifetime of a Linux or Windows server. RAM, modems, graphics cards and network cards are considered easy-to-upgrade components. The motherboard and operating system, by contrast, are far more complicated to upgrade, because changing them usually means upgrading other parts as well; for instance, a new motherboard may also require new RAM.

Customers are often unsure when they should upgrade an existing system and when they should buy a new server to carry on with their workloads.

Here are a few conditions that help judge the right time to replace or upgrade a system:

Additional Drives

If your server was purchased only a few years ago, you can add new hard drives to meet your storage needs. To add drives, you need an open SATA port on the motherboard or an open channel on the RAID controller.
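As a quick, hedged illustration, the snippet below checks how full a few assumed mount points are; running out of space on the existing drives is usually the first sign that additional drives are needed. The mount points and the 80% threshold are assumptions, not values from the article.

```python
# Sketch: flag filesystems that are nearly full, as a prompt to add drives.
# Mount points and the 80% threshold are illustrative assumptions.
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]   # adjust to your server's layout
THRESHOLD = 0.80                        # warn above 80% used

for mount in MOUNT_POINTS:
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        continue  # mount point does not exist on this machine
    used_fraction = usage.used / usage.total
    status = "ADD CAPACITY?" if used_fraction > THRESHOLD else "ok"
    print(f"{mount}: {used_fraction:.0%} used "
          f"({usage.free // 2**30} GiB free) - {status}")
```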

Storage Requirement

When you are using a RAID system, replacement drives should be of the same brand and model as the existing ones. Your system may currently have 500 GB drives while you need 2 TB drives to meet your requirements. Depending on the controller and the type of RAID, it is possible to swap one drive at a time and rebuild the array after each replacement. Keep a backup on an external device before replacing any drives.

Improved Performance

There are various ways to improve server performance. If your requirements are straightforward, start by upgrading the primary drive: moving the OS drive to an SSD has a noticeable impact on performance. To make the conversion painless, Intel and Samsung offer SSDs bundled with migration software.

SSD capacities range from 256 GB up to 2 TB, which is ample to run Linux or Windows. Running the OS from an SSD improves responsiveness as well as boot and shutdown times, and OS patches also install faster on SSDs.

For the most demanding workloads, the latest PCI-E storage drives are the best option. These drives need PCI-Express 3.0 to run at full speed, and on X99 and Z97 chipsets they can be used as boot drives. They will work with older chipsets, but only as secondary storage. When applications demand first-rate performance, PCI-E storage delivers the fastest speeds.
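As a rough way to see what such an upgrade buys you, the sketch below times a sequential write and read of a temporary file, so the same test can be run before and after swapping drives. The 256 MB test size is an arbitrary assumption, and a synthetic test like this only approximates real application workloads.

```python
# Sketch: crude sequential write/read benchmark to compare drives.
# Test size is an arbitrary assumption; results only approximate real workloads.
import os
import tempfile
import time

TEST_SIZE_MB = 256
CHUNK = b"\0" * (1024 * 1024)  # 1 MiB of zeroes

def benchmark(directory: str) -> None:
    path = os.path.join(directory, "bench.tmp")
    try:
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(TEST_SIZE_MB):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())          # force data onto the drive
        write_secs = time.perf_counter() - start

        start = time.perf_counter()
        with open(path, "rb") as f:       # note: reads may hit the OS page cache
            while f.read(1024 * 1024):
                pass
        read_secs = time.perf_counter() - start

        print(f"write: {TEST_SIZE_MB / write_secs:.0f} MB/s, "
              f"read: {TEST_SIZE_MB / read_secs:.0f} MB/s")
    finally:
        if os.path.exists(path):
            os.remove(path)

if __name__ == "__main__":
    benchmark(tempfile.gettempdir())
```

Treat the read figure as a rough comparison rather than a precise measurement, since the operating system's page cache can inflate it.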

The age and configuration of a server are the two factors that determine which upgrades make sense, and every server has its limits. As a rule of thumb, servers that are a year or more old are worth upgrading, while servers that have been in use for more than three years should be replaced.

Server Replacement

IT managers are often asked about the right time to replace a server, and cloud services and virtualisation have made the question harder to answer. Companies now use cloud services for data storage, while virtualisation has extended server lifespans. Still, there comes a stage when the drawbacks of an ageing server outweigh its advantages and replacement becomes necessary.

Here are three conditions under which buying a new server is recommended:

  • Once a server has been in continuous use for three years, its failure rate increases significantly, so replacing the ageing machine becomes crucial for the company.
  • Commercial servers typically carry a three-year warranty, so it is wise to replace them before they crash.
  • That said, replacement is not the only cure for a troubled server: when the RAID array or Windows is causing problems, rebuilding the RAID or reinstalling the OS can keep things in order.

Backup Technology can assist you not only with your cloud backup and recovery needs; we can also advise you on when you are ready to replace or upgrade your servers.

Do Solar Storms Cause Data Disasters?

Scientists have been predicting geomagnetic storms for a while now, and have estimated the potential damage from a severe solar storm at around $2 trillion. The damage caused by electrically charged gas travelling at 5 million miles per hour is expected to disrupt both communication technology infrastructure and communication networks for many years to come.

Daniel N. Baker, PhD, a renowned expert at the University of Colorado's Laboratory for Atmospheric and Space Physics, commented, “I have come away from our recent studies more convinced than ever that earth and its inhabitants were incredibly fortunate that the 2012 eruption happened when it did. If the eruption had occurred only one week earlier, the earth would have been in the line of fire.” This is particularly worrying, as the sun has been in a dormant state for more than a century. In his comments to the press, Dr. Baker also noted that solar flares earlier this decade have disrupted ground communication.

According to Wikipedia, solar storms are classified as A, B, C, M or X, with class A the weakest and class X the strongest; like the Richter scale for earthquakes, each step up represents a large jump in power. Each letter also has its own numeric scale: X1, for instance, is less powerful than X9.
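As a small illustration, and assuming the usual convention that each letter class is ten times more powerful than the one below it and that the trailing number multiplies within a class, the helper below converts flare labels such as “M5” or “X9” into comparable relative values.

```python
# Sketch: compare solar flare class labels (A, B, C, M, X plus a number).
# Assumes the usual convention: each letter is 10x the previous class, and
# the trailing number multiplies within the class (so X2 is twice X1).

CLASS_FACTOR = {"A": 1, "B": 10, "C": 100, "M": 1_000, "X": 10_000}

def relative_power(label: str) -> float:
    """Return a relative intensity for a label like 'C3.2' or 'X9'."""
    letter, number = label[0].upper(), label[1:] or "1"
    return CLASS_FACTOR[letter] * float(number)

if __name__ == "__main__":
    for flare in ["B4", "C3.2", "M5", "X1", "X9"]:
        print(f"{flare}: {relative_power(flare):,.0f}x an A1 flare")
```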

 

The earth has experienced class X solar flares several times. A class X flare is powerful enough to send billions of electrically charged particles towards the earth. Such a discharge is known as a coronal mass ejection, and it can set off geomagnetic storms in the earth's magnetic field. Dr. Baker went on to state that “while technology we use every day will be susceptible to the impact of space weather conditions, it will help us evaluate the robustness of the systems we have built”.

Is there any relation between data backup and storage and solar storms? The energetic particles discharged by a solar storm interact with the earth's magnetic field, increasing ionisation in the ionosphere from 100 km to 1,000 km above the earth. The flares could damage equipment and induce strong electric currents in long conductors such as power lines and pipelines, which could eventually result in system outages. Technological systems could therefore fail and data could be lost, putting colossal amounts of data at risk, including cloud backups. Unlike tornadoes, floods and earthquakes, this is a kind of risk we have not yet encountered, so its level cannot be determined in advance; unfortunately, the real impact will only be known after a major solar storm actually hits the earth.

Given these circumstances, companies around the world are paying extra attention to their digital data and disaster recovery. Cloud backup and disaster recovery providers around the globe are working harder than ever to protect organisations' data redundantly at multiple levels, so that data can be recovered if a disaster strikes and an entire system is wiped out. It is the most prepared and adaptable companies that will survive a disaster, and it is far better to be safe than sorry.

If you have not prepared for disaster — whether for sun flare related disasters or otherwise — you need to start right now.

The Roles of Third Party Companies in Data Protection

Data breaches have made the protection of personal and health information a vexing issue. Numerous organisations, including those in the healthcare industry, have lost sensitive data covering vendors, patients, staff, contractors, health ID numbers and more. When such a loss happens at a hospital, the hospital in question usually apologises for the inconvenience that staff and patients have faced. In some cases, it tries to shift responsibility to another entity, claiming that the data theft was the “result of negligence by an outside contractor” that had been hired as an “expert” in handling sensitive data.

But is shifting the blame to a third party right? Third-party companies are selected precisely because they promise to store and handle sensitive data properly. They make their living handling such data, and it is not in their interest to lose any of it.

To regain the trust of affected individuals, some vendors that have lost data in a breach take responsibility by providing timely information and offering credit-monitoring services for the affected accounts. Providing these services shows that the company has accepted responsibility and acted on it, reassuring individuals who are worried about their sensitive data.

Even when the vendor meets its obligations to notify affected account holders under legal mandates and federal regulations, the fact remains that sensitive data, including identities, has been stolen. The theft will affect the victims for a long time, and the affected parties may well sue the organisation for negligence for millions of dollars. Incidents of this kind raise questions about data security and the precautions taken against breaches:
• Is it wise to share sensitive information with third parties for data storage?
• How do third parties assure organisations that data will be protected and never accessed inappropriately or misused?
• What liability does a third party carry for the data in its custody, and what penalties apply when information is misused?
Though these questions are not easy to answer, the popularity of cloud storage services as third-party providers has brought them to the forefront.

Enterprises entrusting their data to third parties must make an effort to ensure that the data is safe and secure, and should spend the time and energy to weigh up the reliability of the third party and its data-protection claims. Here are some questions that can help in finding a suitable third-party cloud storage service:
• How is data stored in the repository?
• Is the encryption methodology certified by a reliable authority?
• How is sensitive data accessed, and who has access to it?
• What are an organisation's rights, and the provider's liabilities, in the event of a data breach?
• Does the vendor share sensitive data with anyone? If so, with whom and why?
• Is the cryptographic protection of the data really as impregnable as claimed?
• Can the vendor's assurances about protecting sensitive information be independently verified?
• Does the vendor take responsibility for data protection and accept liability for breaches caused by its negligence?
Once your company has the answers to these questions, it becomes much easier to evaluate a service provider and its security protocols, to understand the level of data security on offer and to select a suitable service for protecting sensitive information.
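One safeguard worth considering alongside those questions is encrypting data before it ever reaches the third party, so the provider only handles ciphertext. The sketch below assumes the widely used Python cryptography package and hypothetical filenames; key management is deliberately left out, and in practice it is the hard part.

```python
# Sketch: client-side encryption before handing files to a third-party store.
# Requires the 'cryptography' package (pip install cryptography).
# Filenames are hypothetical; the key must be kept safely by the data owner.
from cryptography.fernet import Fernet

def encrypt_file(plain_path: str, cipher_path: str, key: bytes) -> None:
    """Encrypt plain_path with the given key and write the result to cipher_path."""
    fernet = Fernet(key)
    with open(plain_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open(cipher_path, "wb") as f:
        f.write(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()   # keep this key out of the provider's hands
    encrypt_file("patient_records.csv", "patient_records.csv.enc", key)
    print("Encrypted copy written; only the ciphertext should leave the building.")
```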

Making Administration of Compliance Easy

Multiple regulatory mandates have made compliance a frightening prospect for most organisations. Preparing a product for SOX, PCI or HIPAA compliance is now a difficult and tedious task. Manual workarounds, audit failures and the intimidating threat of disciplinary action under the various acts make piecemeal compliance unreasonable and undesirable. Enterprises are therefore longing for compliance solutions that are highly automated, tightly integrated, extensive, scalable and reliable. Cloud computing services that provide these capabilities while keeping you in control of proliferating information and compliance requirements are now in high demand.

Automated cloud services that handle data collection, benchmark mapping, change tracking and reporting are designed to make auditing simple, whether the audit comes from inside or outside the enterprise. They provide proactive controls for legal compliance, support multiple compliance regimes at once, and offer an easy, centralised structure for defining policy and automating compliance auditing across platforms and environments. Some cloud solutions also build in an extensive library of policies that set rules for common compliance problems, automatically mapping scan results on a dashboard to the specific legal mandates they relate to. The outcome is simplified compliance and data that can be reviewed at a glance.

Most cloud backup and storage systems include continuous controls that lock out unauthorised users and track user activity. File integrity monitoring software records changes to files, directories and registry keys and raises visible, immediate alerts so that incidents which could compromise data are caught early. Read and write protection and reconciliation maps in cloud backup systems follow changes back to the original system, safeguarding security. Management complexity is therefore reduced while security is made considerably stronger.
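As a bare-bones sketch of the file-integrity-monitoring idea, the snippet below hashes every file under a directory, saves the hashes as a baseline, and on later runs reports anything added, removed or changed. The watched directory and baseline filename are assumptions, and real products layer alerting, registry monitoring and tamper protection on top of this.

```python
# Sketch: minimal file integrity monitoring via SHA-256 baselines.
# Watched path and baseline filename are illustrative assumptions.
import hashlib
import json
import os

WATCHED_DIR = "/etc"               # directory to monitor (example)
BASELINE_FILE = "baseline.json"    # where the known-good hashes are kept

def snapshot(root: str) -> dict:
    """Map every readable file under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable file; skip it
    return digests

if __name__ == "__main__":
    current = snapshot(WATCHED_DIR)
    if not os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE, "w") as f:
            json.dump(current, f)
        print("Baseline created.")
    else:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        for path in sorted(set(baseline) | set(current)):
            if path not in baseline:
                print(f"ADDED    {path}")
            elif path not in current:
                print(f"REMOVED  {path}")
            elif baseline[path] != current[path]:
                print(f"CHANGED  {path}")
```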

Below are the key functions of cloud computing services that you need to know about:

– Centralise data management for policy inspection, strategy and reporting;
– Provide continuous change management and configuration integrity;
– Permit integration with host intrusion prevention, vulnerability management and application control software.

Cloud computing gives users continuous visibility into ongoing compliance expectations. It helps avoid the labour-intensive mistakes and errors that come from disconnected compliance products, reduces the cost of compliance and dramatically cuts operational complexity. In fact, the benefits of cloud compliance go well beyond what we have discussed above.

What is Post-Process De-Duplication?

Target de-duplication can be performed in two ways: as post-process de-duplication or as inline de-duplication.

De-duplication that occurs between the source and the destination (or target) is termed inline de-duplication. Post-process de-duplication, by contrast, de-duplicates data at scheduled intervals after it has been transmitted by the source but before it reaches its final place on the storage device. The data may be channelled through hardware or software depending on the setup, and both remain in sync with the storage disk. Whichever form of target de-duplication is used, incoming data is evaluated against what is already on the storage disk so that duplicates can be identified and removed.
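As a simplified sketch of that comparison step, the snippet below scans files that have landed in a staging directory, hashes fixed-size chunks and reports how much duplicate data a post-process de-duplication pass could reclaim. The directory name, the 4 MiB chunk size and SHA-256 hashing are simplifying assumptions; commercial products use more sophisticated, often variable-size, chunking.

```python
# Sketch: estimate savings from post-process de-duplication of a staging area.
# Fixed 4 MiB chunks and SHA-256 hashing are simplifying assumptions.
import hashlib
import os

STAGING_DIR = "staging"          # where transmitted backup data lands (example)
CHUNK_SIZE = 4 * 1024 * 1024     # 4 MiB chunks

def dedupe_report(root: str) -> None:
    seen = set()                 # hashes of chunks already stored
    total = duplicate = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            with open(os.path.join(dirpath, name), "rb") as f:
                while chunk := f.read(CHUNK_SIZE):
                    digest = hashlib.sha256(chunk).hexdigest()
                    total += len(chunk)
                    if digest in seen:
                        duplicate += len(chunk)  # would become a reference
                    else:
                        seen.add(digest)
    saved = duplicate / total if total else 0.0
    print(f"{total / 2**20:.1f} MiB scanned, {saved:.0%} duplicate data reclaimable")

if __name__ == "__main__":
    dedupe_report(STAGING_DIR)
```

In a real post-process system the duplicate chunks would be replaced by references to the stored copy rather than merely counted.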

Enterprises with proprietary software installed on their systems benefit enormously from post-process de-duplication, because the source software at the organisation's end does not need to be modified or redesigned to suit the de-duplication hardware or software. There is no need to worry about compatibility issues, as the source system simply pushes the data into transmission, and there is no need to install de-duplication hardware or software at every terminal node. With the de-duplication software or hardware in a central location, data from all the nodes is automatically channelled through the de-dupe device on the network.

Lastly, removing the de-duplication load from the client's CPU frees up processing power for more effective use of the enterprise computing system. This is where post-process de-duplication beats source-side (pre-process) de-duplication. There is no doubt that target de-dupe is quicker than source de-duplication: the data is pushed straight onto the network, and the de-dupe process operates at the storage end, where it can match data quickly and remove duplicates with ease.

For all its advantages, post-process de-duplication is not without flaws. It is bandwidth-intensive, so if the amount of data in an enterprise is growing exponentially, target de-duplication will not be the best option. In addition, large arrays of storage disks are needed to hold the transmitted data before the scheduled post-process de-duplication run starts, and this extra capacity involves additional expense. That additional cost is one of the main drawbacks of post-process de-duplication.

Even allowing for the need to redesign proprietary software to meet the demands of the de-duplication process and to install de-duplication software at every connecting node, source de-duplication may still work out more cost-effective than technologies based on target de-duplication. If the cloud service provider partnering with the enterprise charges fees based on bandwidth usage, source de-duplication becomes even more attractive.

Companies must therefore determine which kind of de-duplication process will work best for them. Factors to consider before selecting a de-duplication process include the volume of data, the availability of bandwidth, the cost of bandwidth and many others. Determining the best fit for an enterprise is not an easy exercise.

Our Customers

  • ATOS
  • Age UK
  • Alliance Pharma
  • Liverpool Football Club
  • CSC
  • Centrica
  • Citizens Advice
  • City of London
  • Fujitsu
  • Government Offices
  • HCL
  • LK Bennett
  • Lambretta Clothing
  • Leicester City
  • Lloyds Register
  • Logica
  • Meadowvale
  • National Farmers Union
  • Network Rail
  • PKR
