
Using Zerto for Disaster Recovery and Workload Migration to a VMware Cloud – Part 2

By Daniel Stasinski | March 21, 2018
In my previous blog, I covered the pre-migration steps necessary to move your VMs to the cloud with Zerto. In this second part of the series, we will cover the steps that should be performed after your VMs have been successfully migrated to the iland Secure Cloud platform.

Adjusting VM Components
After moving your workloads to our vCloud Director platform, consider replacing any E1000 NICs in your VMs with VMXNET3 adapters, which allow for higher network bandwidth between your VMs. The one thing the E1000 adapter has going for it is that most operating systems ship a native driver for it; even so, use it in production only if you absolutely have to. Additionally, you can set up affinity rules between your front-end and back-end servers, e.g. a web and database server pair, to take full advantage of the increased network throughput.

The adapter type selection field is not displayed by default; make sure to select the correct type of adapter
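
If you prefer to script the swap rather than click through the UI, the change boils down to removing the E1000 device and adding a VMXNET3 device that reuses the same network backing. The snippet below is a minimal, hypothetical sketch using the pyvmomi SDK against the vSphere API; it assumes you already have a vim.VirtualMachine object called vm (ideally powered off), and note that in a vCloud Director tenancy you would normally make this change through the provider's UI instead.

```python
# Hypothetical pyvmomi sketch: replace every E1000 NIC on a VM with a VMXNET3
# adapter that reuses the same network backing and MAC address. 'vm' is assumed
# to be a vim.VirtualMachine object obtained elsewhere, ideally powered off.
from pyVmomi import vim

def swap_e1000_for_vmxnet3(vm):
    device_changes = []
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualE1000):
            # Remove the old E1000 adapter
            remove = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
                device=device)
            # Add a VMXNET3 adapter on the same network, keeping the MAC address
            new_nic = vim.vm.device.VirtualVmxnet3(
                backing=device.backing,
                addressType='manual',
                macAddress=device.macAddress)
            add = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
                device=new_nic)
            device_changes.extend([remove, add])
    if device_changes:
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=device_changes))
```

Keeping the original MAC address helps with things like DHCP reservations, but the guest OS will still see a brand-new device, so be prepared to re-apply static IP settings inside the VM afterwards.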

Staying on the subject of performance, some of the VMs you migrate using Zerto might use older virtual SCSI controllers. While in most cases changing the SCSI controller setup is unnecessary, to get maximum performance from the underlying storage you would need to use an LSI Logic SAS or a Paravirtual SCSI controller. This is especially true for workloads with high IOPS requirements running on datastores backed by all-flash storage. Fortunately, in most cases you will either be able to change the controller type (for data drives only; changing it for the boot disk might leave the OS unable to boot), or simply add new drives on the appropriate controller and migrate your data. As usual, proceed with caution and make sure you have a backup copy you can revert to if needed.
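
As an illustration, the hypothetical pyvmomi sketch below adds a Paravirtual SCSI controller to an existing VM so that new data disks can be attached to it; vm is assumed to be a vim.VirtualMachine object and bus number 1 is assumed to be free. Remember the caveat above about the boot disk, and confirm the guest OS actually has the PVSCSI driver before relying on it.

```python
# Hypothetical pyvmomi sketch: add a Paravirtual SCSI controller to an existing
# VM so new data disks can be attached to it. 'vm' is assumed to be a
# vim.VirtualMachine object; SCSI bus number 1 is assumed to be unused.
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    controller = vim.vm.device.ParaVirtualSCSIController(
        busNumber=bus_number,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing,
        key=-101)  # temporary negative key; vCenter assigns the real one
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=controller)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```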

Affinity and Anti-Affinity Rules
Both the iland Secure Cloud Console and vCloud Director now allow the configuration of affinity and anti-affinity rules. It's worth noting that, unlike in your own vSphere environment, you can't pin a VM to a specific host; host pinning is mostly used for licensing purposes, as some applications must always run on the same physical CPU or they stop working. The rules available on our platform let you determine which VMs in your environment should always run on the same host and which should always run on separate hosts. Keeping VMs together can improve the performance of applications that rely on heavy communication between servers, as traffic no longer needs to flow across the physical network. Conversely, you might want to spread clustered application stacks across multiple hosts: anti-affinity rules ensure that no more than one node reboots should an HA event occur on the platform.

Adding affinity and anti-affinity rules is very simple in the vCloud Director interface
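
For the curious, these rules map to vSphere DRS rules under the covers. The sketch below is purely illustrative and assumes direct vCenter access to the cluster object, which tenants of a vCloud Director platform do not have; in practice you create the equivalent rule with a few clicks in the iland console or vCloud Director.

```python
# Illustrative pyvmomi sketch of the vSphere-level equivalent of an
# anti-affinity rule. 'cluster' is a vim.ClusterComputeResource and
# vm_a / vm_b are vim.VirtualMachine objects -- all hypothetical handles.
from pyVmomi import vim

def keep_vms_apart(cluster, vm_a, vm_b, name='clustered-app-anti-affinity'):
    rule = vim.cluster.AntiAffinityRuleSpec(name=name, enabled=True, vm=[vm_a, vm_b])
    rule_op = vim.cluster.RuleSpec(
        operation=vim.option.ArrayUpdateSpec.Operation.add, info=rule)
    spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_op])
    # modify=True merges the new rule into the existing cluster configuration
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```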

Streamlining VMware Tools Updates
You might have been keeping VMware Tools up to date by using automatic updates on VM reboot or by updating them in bulk via vSphere Update Manager. Neither of those is available in vCloud Director; however, the iland console allows you to select an automatic update policy. All you need to do is right-click a VM within the vApp view and select Manage VMware Tools Upgrade Policy. Note that this option is recommended only for test/dev VMs; no one wants an unexpected additional reboot after restarting a production VM.

The iland console exposes the functionality to automatically update VMware Tools on VM reboot
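
Behind that menu option sits the VM's VMware Tools upgrade policy in vSphere. As a hypothetical pyvmomi sketch (again assuming direct API access and a vim.VirtualMachine object named vm), switching a VM to upgrade-on-power-cycle looks like this:

```python
# Minimal pyvmomi sketch: set the VMware Tools upgrade policy so Tools are
# upgraded automatically whenever the VM is power-cycled. 'vm' is assumed to
# be a vim.VirtualMachine handle obtained elsewhere.
from pyVmomi import vim

def upgrade_tools_on_power_cycle(vm):
    tools_spec = vim.vm.ToolsConfigInfo(toolsUpgradePolicy='upgradeAtPowerCycle')
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(tools=tools_spec))

# Reverting to manual updates is the same call with toolsUpgradePolicy='manual'.
```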

The Linux admins among you might have heard of Operating System Specific Packages (OSPs) and Open VM Tools. The latter should be the preferred method for all systems that support it; the benefit is that you essentially manage VMware Tools updates as part of your regular server patching process. When you query whatever package update solution you are using (yum, for example, on Red Hat Enterprise Linux), you will also see updates for the Open VM Tools package, which means you are always running a version the operating system vendor supports. Aside from the technical benefits, you are most likely already taking precautions when patching your systems, e.g. VM snapshots, and have a testing process to confirm that systems operate as expected after patching is complete, so Open VM Tools lets you fold the VMware Tools update process seamlessly into your existing patching strategy. Bear in mind that while OSPs are mainly targeted at Linux distributions that do not support Open VM Tools (RHEL 6, for example), the VMware website hosts tools for Windows operating systems as well.

Summary
In these two blog posts we have touched on some key recommendations for migrating and running workloads in a VMware cloud environment:
1. Keep VMware Tools in your VMs up to date, preferably at version 10.
2. Update your virtual hardware version, preferably to version 9 or above.
3. Make sure you are using high-performance virtual hardware: VMXNET3 network adapters and LSI Logic SAS or Paravirtual SCSI controllers.
4. Use affinity rules for performance gains and anti-affinity rules to keep clustered application nodes on separate physical hosts.

Caveats:
1. Always stay cautious! Take VM snapshots before making changes and make sure you always have a backup copy you can revert to.
2. Don’t make changes to VMs that are appliances, virtual firewalls or load balancers (e.g. Citrix NetScalers). Those VMs are deployed from templates provided by the manufacturers and should not be subject to VMware Tools updates, nor should you make any changes to their virtual hardware.

I hope these blog posts on pre- and post-migration tasks for workload migration to a VMware cloud have been useful to you – don’t hesitate to reach out for a personalized demo of iland Secure DRaaS with Zerto. With over a decade of experience delivering cloud backup and DR solutions, we’ve got a lot of expertise to share!
Daniel Stasinski

Daniel is a Cloud Deployment Engineer at iland. He’s experienced in deploying business continuity solutions, running disaster recovery tests as well as migrating on-premise workloads to the cloud. He holds current certifications from VMware, Zerto and Cisco. Daniel works closely with iland customers to implement cloud solutions that meet their business requirements.