1. Welcome to OS Migrate
OS Migrate provides a framework and toolsuite for exporting and importing resources between two clouds. It’s a collection of Ansible playbooks that provide the basic functionality, but may not fit each use case out of the box. You can craft custom playbooks using the OS Migrate collection pieces (roles and modules) as building blocks.
At present OS Migrate supports migration from VMware clouds to OpenStack, and OpenStack to OpenStack.
OS Migrate strictly uses the official OpenStack API and does not utilize direct database access or other methods to export or import data. The Ansible playbooks contained in OS Migrate are idempotent. If a command fails, you can retry with the same command.
2. VMware to OpenStack Guide
An important function included in OS Migrate is our VMware tooling, which allows you to migrate virtual machines from an ESXi/vCenter environment to OpenStack environments.
The kit uses the os-migrate Ansible collection to deploy a conversion host and correctly set up the prerequisites in the OpenStack destination cloud. It also uses the VMware community collection to gather information from the source VMware environment.
The Ansible collection provides different phases to scale your migration from VMware to OpenStack:
- A discovery phase that analyzes the VMware source environment and provides the collected data to help plan the migration.
- A pre-migration phase that makes sure the destination cloud is ready to perform the migration, for example by creating the conversion host or the required network.
- A migration phase with different workflows: you can scale the migration with a large number of virtual machines as the entry point, or migrate sensitive virtual machines in two steps with near-zero downtime by using the VMware Changed Block Tracking (CBT) option. The migration can also be done without a conversion host.
2.1. Pre-requisites and Validation Checks
Before migrating VMware workloads to OpenStack, ensure the following pre-requisites are met. These checks are based on issues encountered in real-world migration engagements.
2.1.1. Network settings and checks
Before attempting a migration, make sure you have the correct network settings and that host name resolution works.
Network checks
Make sure the vCenter FQDN and the OpenStack endpoints can be resolved from the conversion host. Refer to this section to perform the checks if needed: Connectivity from Conversion Host to VMware.
Network requirements
Ensure that you meet all of the following network requirements:
| Port / Protocol | Direction | Source / Destination | Purpose |
|---|---|---|---|
| 443/TCP | Egress | VMware vCenter | Main VMware communication used for authentication, VM metadata, snapshots, and VDDK operations. |
| 902/TCP | Egress | VMware ESXi hosts | Direct disk access used to read VM disk data via NFC/NBD protocols. |
| 22/TCP | Ingress | Ansible Controller / Admin | Remote management of the conversion host over SSH. |
| 10809/TCP | Internal to host | Conversion host | Local NBDKit server used to stream disk data during conversion (no firewall rule required). |
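As a quick sanity check, you can probe the two egress ports from the conversion host before starting (a sketch; the hostnames are placeholders for your environment):
# Verify vCenter API reachability (443/TCP):
curl -kv https://vcenter.domain.local:443 >/dev/null
# Verify the ESXi NFC/NBD disk access port (902/TCP):
nc -zv esxi01.domain.local 902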
2.1.2. On the VMware side
Disk Consolidation
Ensure that all virtual machine disks are consolidated before migration.
Virtual machines with unconsolidated disks may fail during migration or result in data inconsistencies. Disk consolidation merges redundant delta disks created by snapshots back into the base disk.
To check if disks need consolidation:
- In vSphere Client, check the VM summary tab for "Consolidation needed" warnings
- Via PowerCLI:
Get-VM | Where-Object {$_.ExtensionData.Runtime.ConsolidationNeeded}
To consolidate disks:
- In vSphere Client: Right-click the VM → Snapshots → Consolidate
- Ensure the operation completes successfully before proceeding with migration
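If many machines report the warning, a PowerCLI sketch along these lines can trigger consolidation in bulk (ConsolidateVMDisks() is the vSphere API call behind the vSphere Client action; treat this as an illustration and test it on a single VM first):
# Consolidate disks on every VM that reports "Consolidation needed":
Get-VM | Where-Object {$_.ExtensionData.Runtime.ConsolidationNeeded} | ForEach-Object { $_.ExtensionData.ConsolidateVMDisks() }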
Snapshot Hierarchy Depth
Verify that the snapshot hierarchy is not too deep before migration:
- In vSphere Client: Right-click the VM → Snapshots → Manage Snapshots
- Review the snapshot tree depth
- If the hierarchy is too deep, consider consolidating or removing unnecessary snapshots before migration
VMware User Access Control Lists (ACLs)
To avoid using the Administrator role while still being able to connect, browse the vCenter datastore, manipulate snapshots, and migrate virtual machines, OS-Migrate needs the following ACLs for the vCenter user:
| Category | Privilege Group | Privileges |
|---|---|---|
| Datastore | — | Browse datastore |
| Virtual Machine | Guest operations | All |
| Virtual Machine | Provisioning | Allow disk access |
| Virtual Machine | Service configuration | Allow notifications |
| Virtual Machine | Snapshot management | Create snapshot |
To verify permissions:
- In vCenter: Administration → Access Control → Roles
- Review the assigned role for the migration user
- Ensure all required privileges are granted
Change Block Tracking (CBT)
Change Block Tracking is recommended for near-zero downtime migrations with large disks.
| CBT allows incremental data transfer by tracking changed disk blocks between snapshots, significantly reducing downtime during the final synchronization phase. |
Enabling CBT
CBT must be enabled on the virtual machine before migration.
Prerequisites for enabling CBT:
- VMware Tools must be installed on the VM (see below)
- The VM must be powered off, or a snapshot must be taken, for changes to take effect
- vSphere 4.0 or later
To enable CBT:
- Power off the virtual machine (or plan for a snapshot)
- Edit the VM settings and add/modify the following advanced parameters:
  - ctkEnabled = TRUE
  - scsi0:0.ctkEnabled = TRUE (for each disk, e.g., scsi0:1, scsi0:2, etc.)
- Power on the virtual machine
Verify CBT is enabled:
Via the vSphere Web Client, check the VM configuration for the changeTrackingEnabled parameter.
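The same flag can be read via PowerCLI, mirroring the other checks in this guide (a sketch; replace <vm-name>):
Get-VM <vm-name> | Select Name, @{N='CBT Enabled';E={$_.ExtensionData.Config.ChangeTrackingEnabled}}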
| If CBT is not enabled, OS Migrate will perform a full disk copy in a single pass, which may result in longer downtime for VMs with large disks. |
VMware Tools Installation
VMware Tools installation status should be verified before migration.
Importance:
- For standard migrations: recommended but not mandatory
  - Improves guest OS detection and metadata gathering
  - Enables graceful shutdown capabilities
  - Provides better VM customization options post-migration
- For CBT-based migrations: mandatory
  - CBT functionality requires VMware Tools to be installed
  - Without VMware Tools, CBT cannot be enabled or used
To check VMware Tools status:
Via vSphere Client:
- Select the VM and check the Summary tab
- Look for "VMware Tools" status: it should show "Running" or "OK"
- A status of "Not installed" or "Not running" indicates action is needed
Via PowerCLI:
Get-VM <vm-name> | Select Name, @{N='Tools Status';E={$_.ExtensionData.Guest.ToolsStatus}}
To install VMware Tools:
- In vSphere Client: Right-click the VM → Guest OS → Install VMware Tools
- Follow the guest OS-specific installation process
- Verify the installation completes successfully
| Attempting a CBT-based migration without VMware Tools installed will fail. Ensure VMware Tools are installed and running before enabling CBT. |
2.2. Workflow
There are different ways to run the migration from VMware to OpenStack.
- The default uses an nbdkit server with a conversion host (an OpenStack instance hosted in the destination cloud). This approach allows you to use the CBT option and approach zero downtime. It can also run the migration in a single cycle.
- The second uses the virt-v2v binding with a conversion host. Here you can use an already deployed conversion host (OpenStack instance), or you can let OS-Migrate deploy one for you.
- A third way skips the conversion host and performs the migration on a Linux machine; the migrated and converted volume is uploaded as a Glance image, or can be used later as a Cinder volume. This way is not recommended if you have large disks or a huge number of VMs to migrate: performance is much slower than with the other ways.
All of these are configurable with Ansible boolean variables.
2.3. Features and supported OS
2.3.1. Features
The following features are available:
- Discovery mode
- Network mapping
- Port creation and MAC address mapping
- OpenStack flavor mapping and creation
- Migration with an nbdkit server with the Changed Block Tracking (CBT) feature
- Migration with virt-v2v
- Upload of migrated volumes via Glance
- Multi-disk migration
- Multi-NIC support
- Parallel migrations on the same conversion host
- Ansible Automation Platform (AAP) support
2.3.2. Supported OS
The VMware Migration Toolkit uses virt-v2v for conversion. For a list of supported guest operating systems for virt-v2v, see the Red Hat Knowledgebase article: Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, RHEL 9, and RHEL 10.
RHOSO uses Kernel-based Virtual Machine (KVM) for hypervisors. For a list of certified guest operating systems for KVM, see the Red Hat Knowledgebase article: Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat OpenShift Virtualization and Red Hat Enterprise Linux with KVM.
2.3.4. Nbdkit migration example with the Change Block Tracking
Step 1: The data is copied, and the change ID of the VMware disk is set on the Cinder volume as metadata.
| The conversion cannot be made at this moment, and the OS instance is not created. This functionality can be used for large disks with a lot of data to transfer. It helps avoid a prolonged service interruption. |
Step 2: OSM compares the change IDs of the source (VMware disk) and the destination (OpenStack volume).
| If the change IDs are not equal, the changed blocks between the source and destination are synced. Then, the conversion to libvirt/KVM is triggered, and the OpenStack instance is created. This allows for minimal downtime for the VMs. |
2.3.5. Migration demo from an AEE
The content of the Ansible Execution Environment can be found here:
And the live demo here:
2.3.6. Running migration
Conversion host
You can use the os_migrate.os_migration collection to deploy a conversion host, but you can also easily create your conversion host manually.
A conversion host is basically an OpenStack instance.
| Important: To benefit from the currently supported guest OSes, it is highly recommended to use a CentOS 10 release, or RHEL 9.5 and later. If you want to use another Linux distribution, make sure the virtio-win package version is 1.40 or higher. |
curl -O -k https://cloud.centos.org/centos/10-stream/x86_64/images/CentOS-Stream-GenericCloud-10-20250217.0.x86_64.qcow2
# Create OpenStack image:
openstack image create --disk-format qcow2 --file CentOS-Stream-GenericCloud-10-20250217.0.x86_64.qcow2 CentOS-Stream-GenericCloud-10-20250217.0.x86_64.qcow2
# Create flavor, security group and network if needed
openstack server create --flavor x.medium --image 14b1a895-5003-4396-888e-1fa55cd4adf8 \
--key-name default --network private vmware-conv-host
openstack server add floating ip vmware-conv-host 192.168.18.205
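Before moving on, it is worth confirming that the instance is ACTIVE and reachable over SSH; a quick check using the names from the example above (the SSH user matches the inventory example below):
# Confirm the conversion host is up:
openstack server show vmware-conv-host -f value -c status
# Confirm SSH access via the floating IP:
ssh cloud-user@192.168.18.205 'echo conversion host reachable'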
VMware VDDK setup
Download and extract the VMware VDDK
- In a browser, navigate to the VMware VDDK download page.
- Select version 8.0.1 and download the archive.
- Save the archive in a temporary directory and extract it:
tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
You can then specify the library path via:
conversion_host_vmware_vix_disklib: /usr/lib/vmware-vix-disklib
If you want to skip the conversion_host role entirely, specify the library path on the migrator instead:
import_workloads_libdir: /usr/lib/vmware-vix-disklib
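For reference, the VDDK tarball extracts to a vmware-vix-disklib-distrib directory; moving it to the path used above is one way to match these variables (a sketch, assuming the default extraction name):
# Place the extracted library where the variables above expect it:
sudo mv vmware-vix-disklib-distrib /usr/lib/vmware-vix-disklib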
Inventory, Variables files and Ansible command:
inventory.yml
migrator:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ ansible_playbook_python }}"
conversion_host:
  hosts:
    192.168.18.205:
      ansible_ssh_user: cloud-user
      ansible_ssh_private_key_file: key
myvars.yml:
# if you run the migration from an Ansible Execution Environment (AEE)
# set this to true:
runner_from_aee: true
# osm working directory:
os_migrate_vmw_data_dir: /opt/os-migrate
copy_openstack_credentials_to_conv_host: false
# Re-use an already deployed conversion host:
already_deploy_conversion_host: true
# If no mapped network then set the openstack network:
openstack_private_network: 81cc01d2-5e47-4fad-b387-32686ec71fa4
# Security groups for the instance:
security_groups: ab7e2b1a-b9d3-4d31-9d2a-bab63f823243
use_existing_flavor: true
# key pair name, could be left blank
ssh_key_name: default
# network settings for openstack:
os_migrate_create_network_port: true
copy_metadata_to_conv_host: true
used_mapped_networks: false
vms_list:
- rhel-9.4-1
secrets.yml:
# VMware parameters:
esxi_hostname: 10.0.0.7
vcenter_hostname: 10.0.0.7
vcenter_username: root
vcenter_password: root
vcenter_datacenter: Datacenter
os_cloud_environ: psi-rhos-upgrades-ci
dst_cloud:
  auth:
    auth_url: https://keystone-public-openstack.apps.ocp-4-16.standalone
    username: admin
    project_id: xyz
    project_name: admin
    user_domain_name: Default
    password: openstack
  region_name: regionOne
  interface: public
  insecure: true
  identity_api_version: 3
Ansible command:
ansible-playbook -i inventory.yml os_migrate.vmware_migration_kit.migration -e @secrets.yml -e @myvars.yml
2.4. Usage
You can find a "how to" here to start from scratch with a container: https://gist.github.com/matbu/003c300fd99ebfbf383729c249e9956f
Clone repository or install from ansible galaxy
git clone https://github.com/os-migrate/vmware-migration-kit
ansible-galaxy collection install os_migrate.vmware_migration_kit
2.4.1. Nbdkit (default)
Edit the vars.yaml file and add your own settings:
esxi_hostname: ********
vcenter_hostname: *******
vcenter_username: root
vcenter_password: *****
vcenter_datacenter: Datacenter
If you already have a conversion host, or if you want to re-use a previously deployed one:
already_deploy_conversion_host: true
Then specify the OpenStack credentials:
# OpenStack destination cloud auth parameters:
dst_cloud:
  auth:
    auth_url: https://openstack.dst.cloud:13000/v3
    username: tenant
    project_id: xyz
    project_name: migration
    user_domain_name: osm.com
    password: password
  region_name: regionOne
  interface: public
  identity_api_version: 3
# OpenStack migration parameters:
# Use mapped networks or not:
used_mapped_networks: true
network_map:
  VM Network: private
# If no mapped network then set the openstack network:
openstack_private_network: 81cc01d2-5e47-4fad-b387-32686ec71fa4
# Security groups for the instance:
security_groups: 4f077e64-bdf6-4d2a-9f2c-c5588f4948ce
use_existing_flavor: true
os_migrate_create_network_port: false
# OS-migrate parameters:
# osm working directory:
os_migrate_vmw_data_dir: /opt/os-migrate
# Set this to true if the OpenStack "dst_cloud" is a clouds.yaml file;
# otherwise, if dst_cloud is a dict of authentication parameters, set
# this to false:
copy_openstack_credentials_to_conv_host: false
# Teardown
# Set to true if you want osm to delete everything on the destination cloud.
os_migrate_tear_down: true
# VMs list
vms_list:
- rhel-1
- rhel-2
2.4.2. OpenStack Flavor
When using VMware as a source, there are several ways to handle the flavor for the resulting OpenStack instance. VMware has no native flavor concept, so OS-Migrate supports:
- Find the closest matching flavor
  - Enable: use_existing_flavor: true
  - If no flavor matches, OS-Migrate will create one automatically.
- Create a new flavor for each VM
  - The created flavor name follows: osm-vmware-<vm_name>-<random_id> (example: osm-vmware-myvm-9999).
- Provide a specific flavor UUID
  - Force usage of an existing flavor: flavor_uuid: <your_flavor_uuid>
  - Useful to define custom properties for host aggregation or targeted placement (see the lookup sketch after this list).
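For the third option, a quick way to obtain the UUID is the OpenStack CLI (a sketch; m1.large is a placeholder flavor name):
# Look up the UUID of the flavor you want to force:
openstack flavor show m1.large -f value -c id
# Then set it in your vars file:
# flavor_uuid: <uuid returned above>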
2.4.3. Virt-v2v
Provide the following additional information when using virt-v2v:
# virt-v2v parameters
vddk_thumbprint: XX:XX:XX
vddk_libdir: /usr/lib/vmware-vix-disklib
Generate the thumbprint of your VMware source cloud with:
openssl s_client -connect ESXI_SERVER_NAME:443 </dev/null | \
openssl x509 -in /dev/stdin -fingerprint -sha1 -noout
2.4.4. Running migration from local shared NFS
OS-Migrate can migrate directly from a local shared directory mounted on the conversion host. If the VMware virtual machines are located on an NFS datastore that is accessible to the conversion host, you can mount the NFS storage on the conversion host and provide the path to the NFS mount point.
OS-Migrate will then directly consume the disks of the virtual machines located on the NFS mount point. Configure the Ansible variable to specify your mount point as follows:
import_workloads_local_disk_path: "/srv/nfs"
| In this mode, only cold migration is supported. |
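As an illustration, mounting such a datastore could look like this (the server name and export path are placeholders for your environment):
# Mount the NFS datastore on the conversion host:
sudo mkdir -p /srv/nfs
sudo mount -t nfs nfs.example.com:/vmware_datastore /srv/nfs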
2.4.5. Ansible configuration
Create an inventory file, and replace conv_host_ip with the IP address of your conversion host:
migrator:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ ansible_playbook_python }}"
conversion_host:
  hosts:
    conv_host_ip:
      ansible_ssh_user: cloud-user
      ansible_ssh_private_key_file: /home/stack/.ssh/conv-host
Then run the migration with:
ansible-playbook -i localhost_inventory.yml os_migrate.vmware_migration_kit.migration -e @vars.yaml
2.4.6. Running Migration outside of Ansible
You can also run the migration outside of Ansible because the Ansible modules are written in Go. The binaries are located in the plugins directory.
From your conversion host (or an OpenStack instance inside the destination cloud) you need to export the OpenStack variables:
export OS_AUTH_URL=https://keystone-public-openstack.apps.ocp-4-16.standalone
export OS_PROJECT_NAME=admin
export OS_PASSWORD=admin
export OS_USERNAME=admin
export OS_DOMAIN_NAME=Default
export OS_PROJECT_ID=xyz
Then create the argument json file, for example:
cat <<EOF > args.json
{
  "user": "root",
  "password": "root",
  "server": "10.0.0.7",
  "vmname": "rhel-9.4-3",
  "cbtsync": false,
  "dst_cloud": {
    "auth": {
      "auth_url": "https://keystone-public-openstack.apps.ocp-4-16.standalone",
      "username": "admin",
      "project_id": "xyz",
      "project_name": "admin",
      "user_domain_name": "Default",
      "password": "admin"
    },
    "region_name": "regionOne",
    "interface": "public",
    "identity_api_version": 3
  }
}
EOF
Then execute the migrate binary:
pushd vmware-migration-kit/vmware_migration_kit
./plugins/modules/migrate/migrate
You can watch the logs with:
tail -f /tmp/osm-nbdkit.log
2.5. Troubleshooting
2.5.1. Connectivity from Conversion Host to VMware
Ensure network and name resolution are properly configured before running migrations.
- Port 902 must be reachable from the conversion host:
curl -v telnet://<vcenter_ip>:902 # or nc -zv <vcenter_ip> 902
The connection should succeed.
- vCenter FQDN resolution: ensure the vCenter hostname resolves from the conversion host. If necessary, update /etc/hosts:
echo "<vcenter_ip> vcenter.domain.local" | sudo tee -a /etc/hosts
2.5.2. OpenStack Metadata service
If the metadata service is not reachable you may see errors like:
Failed to fetch metadata: Get "http://169.254.169.254/openstack/latest/meta_data.json": dial tcp 169.254.169.254:80: connect: no route to host
As a workaround you can set a manual instance UUID in the import playbook:
import_workloads_instance_uuid: <uuid>
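Assuming the expected value is the UUID of the conversion host instance itself (an assumption here), you can look it up from the destination cloud:
# Hypothetical lookup of the conversion host UUID:
openstack server show vmware-conv-host -f value -c id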
2.5.3. Enable Debugging Flags During Migration
Increase verbosity and capture detailed debug information by setting:
import_workloads_debug: true
OS-Migrate creates a unique log file per migration on the conversion host under /tmp, and in case of failure pulls it back to the OS-Migrate work directory (default /opt/os-migrate) under a folder named after the VM. The log naming format is:
osm-nbdkit-<vm-name>-<random-id>.log.
tail -f /tmp/osm-nbdkit-<vm-name>-<random-id>.log
2.5.4. NBDKit errors
If you encounter:
nbdkit: error: server has no export named '': No such file or directory
Common causes:
- Port 902 not open between conversion host and vCenter.
- vCenter FQDN not resolvable.
- Malformed nbdkit command (invalid characters or parameters).
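To inspect what the NBD endpoint actually exports, libnbd's nbdinfo tool can be used (a sketch, assuming nbdkit is still running on its default port 10809 on the conversion host):
# List the exports offered by the local nbdkit server:
nbdinfo --list nbd://localhost:10809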
2.5.5. Manual debug procedure
You can replay the commands manually for troubleshooting.
Step 1 – Run nbdkit manually
Run the command shown in the logs with --verbose and wrap the VMDK path in double quotes:
nbdkit --verbose vddk ".../guest-00001.vmdk"
If the migration snapshot has been deleted, remove the snapshot option and use the base disk instead.
Step 2 – Run nbdcopy in another shell
Run the nbdcopy command as shown in the logs and observe nbdkit output. You should see:
vddk: config_complete.
Step 3 – Analyze authentication and paths
At this point, authentication was already verified by the migration process. The VMDK path is returned by the VMware API, typically:
[Datastore 1] path/to/the/guest-00001.vmdk.
2.6. Ansible Execution Environment (AEE) Images
vmware-migration-kit provides a containerized Ansible Execution Environment (AEE) image that encapsulates all the necessary dependencies for running migration playbooks in a consistent, isolated environment. The images include:
- Ansible Core
- os-migrate Ansible collection
- OpenStack SDK and related Python packages
- All required dependencies for OpenStack resource migration
This approach ensures consistent behavior across different environments and simplifies deployment and maintenance.
2.6.1. Building AEE Images
Prerequisites
Before building AEE images, ensure you have the following tools installed:
- ansible-builder - Tool for building execution environments
- podman or docker - Container runtime
- git - Version control system
- python3 - Python runtime (version 3.8 or higher)
Installing Dependencies
Install the required dependencies using the project-specific requirements files:
For vmware-migration-kit:
# Clone the repository
git clone https://github.com/os-migrate/vmware-migration-kit.git
cd vmware-migration-kit
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install build dependencies
pip install -r requirements-build.txt
Collection Requirements in AEE
The AEE images use requirements.yml files to specify which Ansible collections to install. The collection installation method depends on the build context:
For main branch builds (development):
Install collections directly from Git repositories using the main branch:
# requirements.yml for main branch builds
collections:
  - name: https://github.com/os-migrate/vmware-migration-kit.git
    type: git
    version: main
For stable/tagged builds (production):
Install collections from Ansible Galaxy using specific version tags:
# requirements.yml for stable/tagged builds
collections:
  - name: vmware.vmware
    version: 2.4.0
  - name: vmware.vmware_rest
    version: 4.9.0
  - name: os_migrate.vmware_migration_kit
    type: file
    source: tmp/os_migrate-vmware_migration_kit-2.0.9.tar.gz
Benefits of this approach:
- Main branch builds: Always get the latest development code with the latest features and fixes
- Stable builds: Use tested, released versions for production stability
- Version consistency: AEE image tags match the collection versions they contain
- Reproducible builds: Same collection versions produce identical AEE images
Execution Environment Definition
AEE images are defined using execution-environment.yml files that specify:
- Base image (typically quay.io/centos/centos:stream10-minimal)
- Python dependencies
- Ansible collections
- Additional image configurations
Example structure:
version: 1
images:
  base_image:
    name: quay.io/centos/centos:stream10-minimal
options:
  package_manager_path: /usr/bin/microdnf
dependencies:
  ansible_runner:
    package_pip: ansible-runner
  ansible_core:
    package_pip: ansible-core
  python: requirements.txt
  system: binddep.txt
  galaxy: requirements.yml
  python_interpreter:
    package_system: "python3"
    python_path: "/usr/bin/python3"
additional_build_files:
  - src: ../os_migrate-vmware_migration_kit-2.0.9.tar.gz
    dest: tmp/
additional_build_steps:
  prepend_base:
    - "RUN mkdir -p /etc/sudoers.d"
    - "RUN echo 'cloud-user ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/cloud-user"
Customizing AEE Images
To customize AEE images for specific requirements:
- Modify the execution-environment.yml file
- Add custom requirements or collections
- Rebuild the image using ansible-builder
Building vmware-migration-kit AEE
Navigate to the vmware-migration-kit repository and build the AEE:
# Navigate to the repository
cd /path/to/vmware-migration-kit
# Activate virtual environment (if using one)
source .venv/bin/activate
# Navigate to AEE directory
cd aee
# Build the AEE image
ansible-builder build --tag vmware-migration-kit:latest
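After the build finishes, you can confirm the image exists and smoke-test it (assuming podman as the container runtime, as in the prerequisites):
# Verify the image was built:
podman images vmware-migration-kit
# Smoke-test the environment:
podman run --rm vmware-migration-kit:latest ansible --version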
Automated Build Process
The repository includes a GitHub Actions workflow that automatically builds and tests AEE images:
- vmware-migration-kit/.github/workflows/build-aee.yml
The workflow:
- Triggers on pushes to the main branch and on pull requests
- Builds the AEE image using ansible-builder
- Runs basic validation tests
- Pushes images to container registries (when configured)
2.6.2. Using AEE Images
Running Playbooks with AEE
AEE images are distributed on Quay; you can build your own, or pull the latest stable image with the following command:
podman pull quay.io/os-migrate/vmware-migration-kit:stable
Set runner_from_aee: true in your vars file to prevent redundant installation of requirements already included in the migration container. To execute migration playbooks inside the AEE container, use the following command:
podman run --rm -it \
-v $(pwd):/runner:z \
-v ~/.ssh:/home/runner/.ssh:ro \
quay.io/os-migrate/vmware-migration-kit:stable \
ansible-playbook -i /runner/inventory \
-e @/runner/vars.yaml \
os_migrate.vmware_migration_kit.migration
Interactive Shell Access
To open a shell inside the AEE container for troubleshooting or inspecting the environment, run:
podman run --rm -it \
-v $(pwd):/runner:z \
-v ~/.ssh:/home/runner/.ssh:ro \
quay.io/os-migrate/vmware-migration-kit:stable \
/bin/bash
Volume Mounts
Common volume mounts for AEE usage:
- $(pwd):/runner:z - Mount the current directory as the working directory, with SELinux context adjustment for accessing the files
- ~/.ssh:/home/runner/.ssh:ro - Mount SSH keys (read-only)
- ~/.config/openstack:/home/runner/.config/openstack:ro - Mount OpenStack credentials
- /path/to/inventory:/runner/inventory:ro - Mount inventory files
2.6.3. Using AEE Images with AWX platform
This allows you to leverage the benefits of AEE (consistent execution environments, isolation from host dependencies, and simplified dependency management) while using AWX for orchestration and management. To configure AWX to launch a migration:
- Create the inventory and add the conversion host
- Create the execution environment using the desired AEE image
- Create a project that points to the repository containing the migration playbooks
- Create the job template, adding the inventory created before, the project, the execution environment, and the variables file as extra variables
- Run the migration
3. VMware to OpenStack OS Migrate Performance Expectations
This document outlines some performance expectations for VMware to OpenStack transfers via the os-migrate tooling. For the purposes of these metrics, we are considering the following hardware for a Conversion Host:
- 6 vCPU
- 8GB RAM
- 20GB disk unless otherwise stated
All of the below metrics apply to VMware to OpenStack migrations via the Ansible Automation Platform.
3.1. Table 1
Time of execution to completion including instance boot & ping for RHEL machines in our lab.
| VMs | Conversion Hosts | Threads | Time |
|---|---|---|---|
| 1 | 1 | 1 | 2 minutes |
| 20 | 1 | 1 | 31 minutes |
| 20 | 1 | 2 | 15 minutes |
| 30 | 1 | 10 | 8 minutes |
| 100 | 1 | 10 | 20 minutes |
Note: The average time for 1 virtual machine is around 2 minutes. The migration can be parallelized on the same conversion host (threads) or on multiple conversion hosts.
3.2. Table 2
Time for RHEL machines with a larger disk for a full migration run with and without CBT[1] options:
| Disk size | CBT enabled | Time total | Cutover time | Expected downtime |
|---|---|---|---|---|
| 100 GB | yes | 8 minutes | 2 minutes | 2 minutes |
| 100 GB | no | 7 minutes | 7 minutes | |
| 200 GB | yes | 10 minutes 30 seconds | 2 minutes | 2 minutes |
| 200 GB | no | 9 minutes 30 seconds | 9 minutes 30 seconds | |
| 300 GB | yes | 17 minutes | 2 minutes | 2 minutes |
| 300 GB | no | 15 minutes | 15 minutes | |
| 1 TB | yes | 39 minutes | 2 minutes | 2 minutes |
| 1 TB | no | 35 minutes | 35 minutes | |
Note 1: The disks are 99% full of random data during the test.
Note 2: The CBT option takes a bit more time overall but the downtime is actually lower due to the smaller data cutover time.
3.3. Table 3
Example of a migration plan for 55 VMs, each with a full 200 GB disk (estimation):
| Disk size | Conversion Hosts | Threads | CBT enabled | Migration time | Sync time |
|---|---|---|---|---|---|
| 200 GB | 1 | 1 | yes | 115 minutes | |
| 200 GB | 1 | 5 | no | 105 minutes | |
| 200 GB | 1 | 5 | yes | 22 minutes | 110 minutes |
For this example, the best plan is to parallelize the migration on a single Conversion Host with 5 threads and use the CBT option to pre-migrate the volume data; this will shorten the cutover time (2 minutes in our lab environment), reducing the overall downtime.
3.4. Table 4
Migration of 1500 VMs: time for RHEL machines in our lab with a disk capacity of 20GB, with a full migration run which comprises migration of data, conversion, instance creation, boot time, and the ability to ping.
| VMs | Conversion Hosts | Threads | Time |
|---|---|---|---|
| 250 | 2 | 6 | 60 minutes (without the preparation steps) |
| 1000 | 2 | 6 | 5 hours |
| 1500 | 2 | 6 | |
Note 1: The conversion flavor was set to 12GB of RAM and 6 vCPUs, which allows OS-Migrate to comfortably run 6 migrations in parallel on the same host. There is room to execute more parallel migrations with this configuration, but it is best practice to allow 2GB of RAM and 1 vCPU for each migration.
3.5. Conversion host requirements and recommendations
Recommended guidelines:
- For 1 migration, allocate 2GB of RAM and 1 vCPU on your Conversion Host
- RHEL 9.5 or CentOS 10, to benefit from the latest drivers (virtio-win) for converting recent Windows distributions
- Fedora (38 and later) if you want to convert btrfs file systems
Minimum baseline for small concurrency: 2 vCPUs, 4GB RAM, 16GB disk on the conversion host.
3.6. How to proceed with a large workload
It’s important to first focus on sizing your target OpenStack environment adequately. It will receive a large number of instances, ports, volumes, floating IPs, and all the other OpenStack resources that the instances will require, so set the OpenStack quotas to the correct values.
Second, split your workload according to the number of conversion hosts you will create.
For example, with a workload of 1000 VMs, you can use two conversion hosts which will run the migration in parallel and execute 5 or 6 migrations simultaneously on each conversion host.
If you followed the requirements for the conversion host above, the migration time will be linear:
If 1 virtual machine takes 5 minutes to migrate, then the 1000 VMs will take:
- 5*1000/12 minutes (where 12 is 6 parallel migrations x 2 conversion hosts): about 6 hours and 56 minutes
The more you divide your workload, the faster you will move it to OpenStack.
In the example above, if we decide to add 2 more conversion hosts, then:
- 5*1000/24 minutes = about 3 hours and 28 minutes.
Finally, here are some known issues that may happen during a large-scale migration.
- First, when the conversion hosts are over-solicited. For example, a single conversion host that has performed more than 5,000 migrations in a very short time may have issues with the device mount mechanism in the OS itself. When this behavior appears, a simple reboot may help to clean up the /dev/ devices.
- Another issue may appear on the VMware side with the message "snapshot hierarchy is too deep", because OS-Migrate works with snapshots. If this error appears, clean up the guest snapshot hierarchy and re-run the migration.
Note 1 - Make sure all the OpenStack services are configured to support a large number of requests. OS-Migrate is driven by Ansible, but the core of the migration is a binary which does not consume many resources. The more you run the binary in parallel, the more requests the OpenStack API will receive; for example, RabbitMQ, Galera, and also Nova or Cinder will be impacted.
4. Community
For issue reports, please use the GitHub issue tracker.