One of the main reasons for this level of growth is that it is ‘as a service’ and not the complex and expensive ‘roll your own’ environment that it used to be. This has made DRaaS much more accessible to the SMB market, as well as enterprise customers. But, as the list of DRaaS solutions grows along with adoption rates, it’s important for VMware vSphere customers to carefully consider how their existing infrastructure should influence their choice of cloud provider, to avoid technical challenges down the line.
When choosing a DR solution, what are the considerations?
As mentioned above, in the past customers would usually have to resort to building out a secondary data centre, complete with a suitably sized stack of infrastructure, to support their key production servers in the event of a disaster. They could either build with new infrastructure, or eke out a few more years from older servers and networking equipment. Often, they would also have to buy similar storage technology at both sites to support replication.
More recently, software-based replication technologies have enabled a more heterogeneous set-up, but one that still requires a significant investment in the secondary data centre.
Nor should the power and cooling required in the secondary DC be forgotten, along with ongoing maintenance of the hardware, all of which add to the overall cost and management burden of the DR strategy.
Even recent announcements such as VMware Cloud on AWS are effectively managed co-location offerings, involving a large financial commitment to physical servers and storage that will be running 24/7.
So, should customers be looking to develop their own DR solutions, or would it be easier and more cost-effective to buy a service offering?
With DRaaS, customers only have to pay for the storage associated with the virtual machines being replicated and protected, and pay for CPU and RAM only when there is a DR test or a real failover.
So, what are the benefits for VMware on-premises customers of working with a VMware-based DRaaS service provider?
Clearly, one of the main benefits is that the VMs will not need to be converted to a different hypervisor platform such as Hyper-V, KVM or Xen. Conversion can cause problems: VMware Tools must be removed (deleting any drivers) and the equivalent tools installed for the new hypervisor, while NICs are deleted and new ones must be configured. This results in significantly longer on-boarding times as well as ongoing DR management challenges, all of which increase the overall TCO of the DRaaS solution.
In the case of the hyperscale cloud providers, there is also the need to align the VM configuration to the nearest instance size of CPU, RAM and storage that those providers support. If you have several virtual disks, this may mean that you need more CPU and RAM in order to attach more disks (the maximum number of disks is usually a function of the number of CPU cores). Again, this can significantly drive up the cost of your DRaaS solution.
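As a rough illustration of this sizing effect, the sketch below matches a VM against a purely hypothetical instance catalogue (the names, sizes and prices are invented for this example and do not reflect any real provider's offerings). A VM whose CPU and RAM would fit the smallest instance can be forced onto a larger, more expensive one purely by its disk count:

```python
# Hypothetical instance catalogue, cheapest first:
# (name, vCPUs, RAM in GB, max attachable disks, hourly price in USD).
# All figures are illustrative only, not any real provider's sizes or prices.
CATALOGUE = [
    ("small",  2,  8,  4, 0.10),
    ("medium", 4, 16,  8, 0.20),
    ("large",  8, 32, 16, 0.40),
]

def smallest_fit(vcpus, ram_gb, disks):
    """Return (name, price) of the cheapest instance that satisfies
    all three constraints, or None if nothing in the catalogue fits."""
    for name, c, r, d, price in CATALOGUE:
        if c >= vcpus and r >= ram_gb and d >= disks:
            return name, price
    return None

# A 2 vCPU / 8 GB VM with 2 disks fits the cheapest instance...
print(smallest_fit(2, 8, 2))   # → ('small', 0.1)

# ...but the same VM with 6 disks exceeds "small"'s 4-disk limit,
# so it is pushed to "medium" at double the hourly cost.
print(smallest_fit(2, 8, 6))   # → ('medium', 0.2)
```

The point is that the disk-attachment limit, not the workload's actual CPU or RAM demand, ends up determining the price.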
In some hyperscale cloud providers, the performance of the virtual disks is limited to a certain number of IOPS. For typical VMware VM implementations, with a C: drive and a data disk or two, this can result in very slow performance.
A VMware-based cloud provider such as iland allows virtual machines to retain exactly the same configuration they had on-premises, while customers still have the ability to alter the VMs as required.
Over the past few years, iland has developed a highly functional web-based console that gives DRaaS customers the same VMware functionality they were used to on-premises: launching remote consoles, reconfiguring VMs, viewing detailed performance data, taking snapshots while running in DR and, importantly, performing test failovers, among other functions.
For VMware customers, leveraging a VMware-based cloud provider for Disaster Recovery as a Service delivers rapid on-boarding, cost-effectiveness, ease of ongoing management and a more flexible and reliable solution to protect your business.
For more information and to sign up for a DRaaS demo, click here.