Hot migration of data centers – we did it!
There are so many possible reasons for moving workloads between data centers: upgrading from versions that are no longer supported, extending or replacing a data center, implementing a disaster recovery plan, and more. The good news is that it has never been so quick and easy to switch from the US West Coast to the East Coast, or from Amsterdam to Limburg. In just a couple of clicks, workloads can be sent between data centers via secure HCX tunnels.
A few figures: it took one customer five weeks to move 300TB of VMs, including planning, installation, replication and switchover. At peak, this customer transferred 23TB in a single day between two data centers in Germany, or around 1TB per hour. Another customer moved more than 200TB, spread over 750 VMs, from their data center without downtime.
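As a quick sanity check on those figures, the peak daily throughput works out as follows (assuming the 23TB was spread over a full 24-hour window):

```python
# Back-of-the-envelope check on the reported peak throughput.
tb_per_day = 23   # peak daily transfer between the two German data centers
hours = 24        # assumption: the transfer ran over a full day

rate_tb_per_hour = tb_per_day / hours
print(f"{rate_tb_per_hour:.2f} TB/hour")  # ≈ 0.96, i.e. "around 1TB per hour"
```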
Until a year ago, no one would have imagined that these “hot” migrations were possible. Even the idea of moving workloads between two data centers was a fantasy. Doing it hot was just a pipe dream.
The transfer of workloads is based on a VMware technology called HCX, aimed at the Private Cloud platform. As well as managing the migration of workloads securely, this technology allows for a seamless transition by providing a network connection between the source and destination data centers via a Layer 2 stretched network. A virtual machine that is sent “hot” to the Private Cloud does not lose connectivity with the other machines it normally operates with.
HCX uses three appliances. The first is the Cloud Gateway (CGW), which manages the transfer of virtual machines from one data center to another. The second, the WAN Accelerator, works in conjunction with the CGW. The third serves the stretched network (L2C). These appliances are automatically deployed on the Private Cloud side, and require a fourth appliance on-premises to control the deployment and configuration of these three essential elements.
Note: the CGW also appears in the inventory as a registered host.
In brief: at least two tunnels are built between the source data center and the destination data center, the OVH Private Cloud. One tunnel runs between the CGWs to transfer the VMs. One tunnel goes between the L2Cs to create a stretched network in case it is necessary to extend a subnetwork. It goes without saying that it is possible to deploy several L2Cs, depending on the number of networks that need extending.
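The tunnel topology described above can be summarized as a small data model. This is purely an illustrative sketch for clarity (the class and field names are my own, not VMware HCX API objects):

```python
# Illustrative model of the HCX tunnel topology described above.
# Class and field names are invented for this sketch, not part of any VMware API.
from dataclasses import dataclass, field

@dataclass
class Tunnel:
    kind: str         # "CGW" (VM transfer) or "L2C" (stretched network)
    source: str
    destination: str

@dataclass
class HcxDeployment:
    tunnels: list = field(default_factory=list)

    def stretch_network(self, subnet: str, destination: str):
        # One L2C tunnel per subnetwork that needs extending.
        self.tunnels.append(Tunnel("L2C", subnet, destination))

# At minimum: one CGW tunnel for VM transfer, plus one L2C tunnel
# per subnetwork that needs extending.
deployment = HcxDeployment([Tunnel("CGW", "on-premises", "OVH Private Cloud")])
deployment.stretch_network("vlan-10", "OVH Private Cloud")
deployment.stretch_network("vlan-20", "OVH Private Cloud")
print(len(deployment.tunnels))  # 3: one CGW tunnel + two L2C tunnels
```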
Once the architecture is deployed, a dashboard summarizes the migration possibilities and provides a history.
There are several ways to move virtual machines: a "hot" migration, a so-called "warm" migration and a "cold" migration.
Hot migration is by far the most impressive. In just a few clicks, a virtual machine appears in the destination data center without losing its state, connectivity or context. The process is similar to vMotion, which VMware users already know, and is called vMotion Migration. The idea is that the VM's datastore is first sent to the destination data center. Once the storage is fully synchronized, the memory and CPU state are synchronized in turn, and the destination data center takes over. This method has a limitation: because VMs move one at a time, the sequencing can impact workloads that are distributed over several VMs and require low latency between them. After one VM has been migrated, it communicates with its peers across the stretched network, and the latency between data centers becomes noticeable.
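The sequencing can be sketched as a simplified simulation of the three phases (this is a conceptual illustration, not the actual HCX implementation):

```python
# Simplified simulation of the hot (vMotion-style) migration phases
# described above. Not real HCX code; a conceptual sketch only.
def hot_migrate(vm: dict) -> dict:
    # Phase 1: replicate the VM's datastore to the destination.
    vm["datastore_synced"] = True
    # Phase 2: once storage is in sync, synchronize memory and CPU state.
    vm["state_synced"] = vm["datastore_synced"]
    # Phase 3: the destination data center takes over; the VM keeps its
    # state, connectivity and context throughout.
    if vm["state_synced"]:
        vm["location"] = "destination"
    return vm

vm = {"name": "app-01", "location": "source"}
print(hot_migrate(vm)["location"])  # "destination"
```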
We address this issue with "Bulk Migration", which also adds features to help with controlled migration. The aim is to synchronize one or more VMs with the destination data center while they are still running on the source data center, and to maintain this synchronization over time. The switchover of all the VMs takes place at a time chosen by the administrator who initiated the migration, in the timeslot most favorable for the switchover. The switchover simply involves switching off each VM in the source data center and starting it in the destination data center. As well as controlling the switchover window, it is possible to customize the VMs (update VMware Tools, upgrade virtual hardware, etc.). Since all the VMs move at the same time, there are no latency issues across the stretched network.
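The Bulk Migration flow can be sketched as follows: continuous replication while the VMs keep running at the source, then a simultaneous switchover at the administrator's chosen time (again, an illustrative simulation, not HCX code):

```python
# Sketch of the Bulk Migration flow: VMs stay synchronized while running
# at the source, then all switch over together at an admin-chosen time.
# Illustrative only; not real HCX code.
import datetime

def bulk_migrate(vms: list, switchover_at: datetime.datetime,
                 now: datetime.datetime) -> list:
    # Continuous replication while the VMs keep running at the source.
    for vm in vms:
        vm["replicating"] = True
    # At the chosen timeslot, switch all VMs over together, so no VM is
    # ever split from its peers across the stretched network.
    if now >= switchover_at:
        for vm in vms:
            vm["powered_off_at"] = "source"       # stop at the source
            vm["powered_on_at"] = "destination"   # start at the destination
    return vms

window = datetime.datetime(2019, 3, 2, 2, 0)  # e.g. a Saturday 02:00 slot
vms = [{"name": f"vm-{i}"} for i in range(3)]
bulk_migrate(vms, window, now=datetime.datetime(2019, 3, 2, 2, 5))
print(all(vm["powered_on_at"] == "destination" for vm in vms))  # True
```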
The last migration method is disruptive for production machines, and is therefore better suited to migrating templates, archives and backups. It is done cold, with the VM powered off: the data is synchronized, and the VM switches over automatically once everything has arrived at the destination data center.
We have more than three years’ experience in secure workload migration, initially with vCloud Air data centers, then with OVH's own data centers. During this period we have migrated exabytes of data all over the world.
HCX is a tool designed to address several migration-related issues: keeping virtual machines operational during transfer, switching over sets of VMs, and connecting the networks of different data centers. It does require some work on the architecture. A migration is prepared upstream, with sufficient sizing of the destination data center, which can be adapted over time on the OVH Private Cloud. It is also necessary to estimate the switchover duration and define a switchover strategy for the different workloads in the source data center. As for the rest, it's just a few clicks in HCX.