In my previous blog post, I covered the pre-migration steps needed to move your VMs to the cloud with Zerto. In this second part of the series, we will cover the steps you should perform after your VMs have been successfully migrated to the iland Secure Cloud platform.
Adjusting VM Components
After moving your workloads to our vCloud Director platform, we advise replacing any E1000 NICs in your VMs with VMXNet3 adapters. This will allow you to utilize higher network bandwidth between your VMs. The one good thing that can be said about the E1000 network adapter is that most operating systems ship a native driver for it; even so, you should only use it in production environments if you absolutely have to. Additionally, you could set up affinity rules between your front-end and back-end servers, i.e. a web server and database pair, to take full advantage of the increased network throughput.
The adapter type selection field is not displayed by default; make sure to select the correct adapter type
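Spotting the VMs that still carry legacy NICs is easy to script. The following is a minimal sketch, assuming you have exported a simple inventory of VM names and their adapter types (the inventory data here is made up for illustration):

```python
# Hypothetical inventory export: VM name -> list of NIC adapter type labels.
inventory = {
    "web01": ["VMXNET3"],
    "db01": ["E1000", "VMXNET3"],
    "app01": ["E1000E"],
}

def vms_needing_nic_upgrade(inv):
    """Return the sorted names of VMs with any Intel-emulated NIC
    (E1000/E1000E) that is a candidate for replacement with VMXNet3."""
    legacy = {"E1000", "E1000E"}
    return sorted(vm for vm, nics in inv.items()
                  if legacy.intersection(n.upper() for n in nics))

print(vms_needing_nic_upgrade(inventory))  # ['app01', 'db01']
```

In a live environment you would feed this from an actual inventory report rather than a hard-coded dictionary, but the filtering logic stays the same.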
Touching on the subject of performance, it is worth noting that some of the VMs you migrate using Zerto might use older vSCSI controllers. While in most cases making changes to the SCSI controller setup will be unnecessary, to gain maximum performance from the underlying storage you would need to use an LSI Logic SAS or a Paravirtual SCSI controller. This is especially true for high-IOPS workloads running on datastores backed by all-flash storage. Fortunately, in most cases you will either be able to change the controller type (data drives only; changing this for the boot disk might leave the OS unable to boot), or simply add new drives attached to the appropriate controller and migrate your data. As usual, please proceed with caution and make sure you have a backup copy you can revert to if needed.
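The same audit approach works for SCSI controllers. Here is a short sketch that flags controllers outside the two recommended types; the controller labels match the ones vSphere displays, and the sample data is an assumption for illustration:

```python
# Recommended controller types per the guidance above.
RECOMMENDED = {"LSI Logic SAS", "VMware Paravirtual"}

def controllers_to_review(vm_controllers):
    """vm_controllers: hypothetical export of VM name -> list of SCSI
    controller type labels. Returns (vm, controller) pairs that are
    candidates for an upgrade (remember: data-drive controllers only)."""
    return [(vm, c) for vm, ctrls in vm_controllers.items()
            for c in ctrls if c not in RECOMMENDED]

sample = {
    "db01": ["LSI Logic Parallel", "VMware Paravirtual"],
    "web01": ["LSI Logic SAS"],
}
print(controllers_to_review(sample))  # [('db01', 'LSI Logic Parallel')]
```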
Affinity and Anti-Affinity Rules
Both the iland Secure Cloud Console and vCloud Director now allow the configuration of affinity and anti-affinity rules. It’s worth noting that, unlike in your own vSphere environment, you can’t tie a VM to a specific host. That option is often used for licensing purposes, as some applications must run on the same CPU at all times or they will stop working. The rules available in our platform let you determine which VMs in your environment should always run on the same host and which should always run on separate hosts. Keeping VMs together can increase the performance of applications that rely on communication between multiple servers, as it removes the need for traffic to flow across the network. Alternatively, you might need to spread clustered application stacks across multiple hosts: using anti-affinity rules ensures that no more than one node would reboot in case of an HA event on the platform.
Adding affinity and anti-affinity rules is very simple in the vCloud Director interface
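After an HA event or a maintenance window, you may want to confirm that your anti-affinity expectations still hold. A minimal sketch, assuming you have a placement report of VM-to-host assignments (the VM and host names here are made up):

```python
def anti_affinity_ok(placement, group):
    """True if no two VMs in `group` currently share a host.
    placement: hypothetical report of VM name -> host name."""
    hosts = [placement[vm] for vm in group if vm in placement]
    return len(hosts) == len(set(hosts))

# Example: two clustered SQL nodes that should never share a host.
placement = {"sql-node1": "host-a", "sql-node2": "host-b", "web01": "host-a"}
print(anti_affinity_ok(placement, ["sql-node1", "sql-node2"]))  # True
```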
You might have been keeping VMware Tools up to date by using automatic updates on VM reboot or by updating them in bulk via vSphere Update Manager. Neither of those is available in vCloud Director; however, the iland console allows you to select an automatic update policy. All you need to do is right-click a VM within the vApp view and select Manage VMware Tools Upgrade Policy. It’s worth noting that this option is recommended only for test/dev VMs. No one wants to experience an additional reboot after rebooting a production VM.
The iland console exposes the functionality to automatically update VMware Tools on VM reboot
In these two blog posts we have touched on some key recommendations for migrating and running workloads in a VMware cloud environment:
1. Keep VMware Tools updated in your VMs, preferably using version 10.
2. Update your virtual hardware version, preferably to version 9 or above.
3. Make sure you are using high-performance virtual hardware: VMXNet3 network adapters and LSI Logic SAS or Paravirtual SCSI controllers.
4. Use affinity rules for performance gains and anti-affinity rules to keep clustered application nodes on separate physical hosts.
5. Make sure you always stay cautious! Take VM snapshots before making changes and make sure you always have a backup copy you can revert to.
6. Don’t make changes to VMs that are appliances, such as virtual firewalls or load balancers (e.g. Citrix NetScalers). Those VMs are deployed from templates provided by the manufacturers and should not be subject to VMware Tools updates, nor should you make any changes to their virtual hardware.