VMware Virtual Disk Development Kit (VDDK) provides optimized disk transfer capabilities for VMware vSphere migrations. This chapter covers creating VDDK container images and configuring them for maximum performance.
VDDK provides significant performance improvements over standard disk transfer methods:
- Optimized Data Transfer: Direct access to VMware’s optimized disk I/O APIs
- Reduced Network Overhead: Efficient data streaming and compression
- Better Throughput: Can achieve 2-5x faster transfer speeds compared to standard methods
- Resource Efficiency: Lower CPU and memory usage during transfers
Technical Advantages
- Native VMware Integration: Uses VMware’s official SDK for optimal compatibility
- Advanced Features: Support for changed block tracking (CBT) and incremental transfers
- Error Handling: Better error detection and recovery mechanisms
- Storage Array Integration: Support for storage array offloading when available
When to Use VDDK
- Production Migrations: Always recommended for production VMware environments
- Large VMs: Essential for VMs with large disk sizes (>100GB)
- Performance-Critical: When migration time is a critical factor
- Storage Array Offloading: When using compatible storage arrays with offloading capabilities
Prerequisites for Building the Image
System Requirements
Before building VDDK images, ensure you have:
- Container Runtime: Podman or Docker installed and working
- Kubernetes Registry Access: Access to a container registry (internal or external)
- File System: A file system that preserves symbolic links (symlinks)
- Network Access: If using external registries, ensure KubeVirt can access them
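A quick way to check the runtime and symlink prerequisites before building (a sketch; the temporary test path is illustrative):

```shell
# Check that a supported container runtime is available
command -v podman || command -v docker || echo "Install Podman or Docker first"

# Verify the working filesystem preserves symlinks (FAT/exFAT mounts do not);
# the extracted VDDK tarball contains symlinked shared libraries
ln -s /etc/hostname /tmp/vddk-symlink-test
[ -L /tmp/vddk-symlink-test ] && echo "symlinks preserved"
rm /tmp/vddk-symlink-test
```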
VMware License Compliance
Important License Notice
Storing VDDK images in public registries might violate VMware license terms. Always use private registries and ensure compliance with VMware licensing requirements.
- `--dockerfile PATH`: Path to custom Dockerfile (uses default if not specified)
- `--push`: Push image to registry after successful build
- `--push-insecure-skip-tls`: Skip TLS verification when pushing to the registry (Podman only; Docker requires daemon configuration)
- `--set-controller-image`: Configure the pushed image as the global `vddk_image` in ForkliftController (requires `--push`)
Detailed Build Examples
Standard Production Build
```shell
# Production VDDK image with push
kubectl mtv create vddk-image \
  --tar ~/downloads/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1 \
  --runtime podman \
  --platform amd64 \
  --push
```
Build, Push, and Configure ForkliftController
The `--set-controller-image` flag automatically configures the ForkliftController CR with the pushed VDDK image, setting it as the global default for all vSphere providers:
```shell
# Build, push, and configure as global VDDK image
kubectl mtv create vddk-image \
  --tar ~/downloads/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1 \
  --runtime podman \
  --push \
  --set-controller-image
```
This single command:
1. Builds the VDDK container image
2. Pushes it to the registry
3. Patches the ForkliftController CR to set `spec.vddk_image` to the pushed image
The global `vddk_image` setting applies to all vSphere providers unless they have a per-provider `vddkInitImage` override configured.
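The equivalent manual patch looks roughly like this (a sketch; the CR name `forklift-controller` and namespace `openshift-mtv` are assumptions, so adjust both to your installation):

```shell
# Manually set spec.vddk_image on the ForkliftController CR
# (CR name and namespace are illustrative assumptions)
PATCH='{"spec":{"vddk_image":"quay.io/company/vddk:8.0.1"}}'
kubectl patch forkliftcontroller forklift-controller \
  -n openshift-mtv --type merge -p "$PATCH"
```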
Custom Build Directory
```shell
# Use specific build directory for large environments
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag harbor.company.com/migration/vddk:latest \
  --build-dir /tmp/vddk-build \
  --runtime podman \
  --push
```
Multi-Architecture Build
```shell
# Build for ARM64 architecture
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1-arm64 \
  --platform arm64 \
  --runtime podman \
  --push

# Build for AMD64 architecture (default)
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1-amd64 \
  --platform amd64 \
  --runtime podman \
  --push
```
Custom Dockerfile Build
```shell
# Create custom Dockerfile with additional tools
cat > custom-vddk.dockerfile <<'EOF'
FROM registry.redhat.io/ubi8/ubi:latest

# Install additional debugging tools (net-tools provides netstat)
RUN dnf install -y tcpdump net-tools && dnf clean all

# Copy VDDK libraries (will be handled by kubectl-mtv)
# Additional customizations can be added here
EOF

# Build with custom Dockerfile
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1-custom \
  --dockerfile custom-vddk.dockerfile \
  --push
```
```shell
# Force use of Docker
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag localhost:5000/vddk:8.0.1 \
  --runtime docker \
  --push

# Force use of Podman
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1 \
  --runtime podman \
  --push

# Auto-detect runtime (default)
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1 \
  --runtime auto \
  --push
```
Insecure Registry Push
For registries with self-signed certificates or internal registries without valid TLS certificates:
```shell
# Push to insecure registry with Podman (recommended)
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag internal-registry.local:5000/vddk:8.0.1 \
  --runtime podman \
  --push \
  --push-insecure-skip-tls
```
Note: The `--push-insecure-skip-tls` flag works natively with Podman by adding `--tls-verify=false` to the push command. Docker does not support per-command TLS skip and requires daemon configuration instead.
Docker Configuration for Insecure Registries:
If using Docker, configure your daemon before pushing:
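One common daemon-side setup looks like this (a sketch; the registry host matches the example above, and the `/etc/docker/daemon.json` path assumes a default Linux install — merge with any existing settings rather than overwriting them):

```shell
# Allow Docker to push to the internal registry without valid TLS,
# then restart the daemon so the change takes effect
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["internal-registry.local:5000"]
}
EOF
sudo systemctl restart docker
```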
```shell
# Verify image was built successfully
podman images | grep vddk
# or
docker images | grep vddk

# Test image functionality
podman run --rm quay.io/company/vddk:8.0.1 /usr/bin/vmware-vdiskmanager -h

# Verify image layers and size
podman inspect quay.io/company/vddk:8.0.1
```
VDDK Configuration Hierarchy
VDDK images can be configured at multiple levels. Understanding the hierarchy helps you choose the right approach:
Configuration Levels (in order of precedence)
1. Per-Provider Setting (highest priority): Set via the `--vddk-init-image` flag when creating a provider
2. ForkliftController Global Setting: Set via the `--set-controller-image` flag or by patching the ForkliftController CR
3. Environment Variable Default: Set via `MTV_VDDK_INIT_IMAGE` (used as default when creating providers)
When to Use Each Level
| Level | Use Case |
| --- | --- |
| Per-Provider | Different VDDK versions for specific vCenters, testing new versions |
| ForkliftController | Organization-wide default, single source of truth for all migrations |
| Environment Variable | Local development, CLI defaults for provider creation |
Configuring ForkliftController Global VDDK Image
The ForkliftController CR can be configured with a global VDDK image that applies to all vSphere providers:
```shell
# Option 1: Set during VDDK image build (recommended)
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1 \
  --push \
  --set-controller-image

# Option 2: Set the global VDDK image via settings
kubectl mtv settings set --setting vddk_image \
  --value quay.io/company/vddk:8.0.1

# Verify the configuration
kubectl mtv settings get --setting vddk_image
```
Setting the MTV_VDDK_INIT_IMAGE Environment Variable
Setting the Default VDDK Image
The MTV_VDDK_INIT_IMAGE environment variable provides a default for vSphere provider creation with kubectl mtv:
```shell
# Set the default VDDK image
export MTV_VDDK_INIT_IMAGE=quay.io/your-registry/vddk:8.0.1

# Verify the environment variable
echo "$MTV_VDDK_INIT_IMAGE"
```
Persistent Configuration
Shell Profile Configuration
```shell
# Add to ~/.bashrc for bash users
echo 'export MTV_VDDK_INIT_IMAGE=quay.io/your-registry/vddk:8.0.1' >> ~/.bashrc
source ~/.bashrc

# Add to ~/.zshrc for zsh users
echo 'export MTV_VDDK_INIT_IMAGE=quay.io/your-registry/vddk:8.0.1' >> ~/.zshrc
source ~/.zshrc
```
System-wide Configuration
```shell
# Create system-wide environment file
sudo tee /etc/environment.d/mtv-vddk.conf <<EOF
MTV_VDDK_INIT_IMAGE=quay.io/your-registry/vddk:8.0.1
EOF

# Or add to /etc/profile.d/
sudo tee /etc/profile.d/mtv-vddk.sh <<EOF
export MTV_VDDK_INIT_IMAGE=quay.io/your-registry/vddk:8.0.1
EOF
```
Container/Pod Environment
```shell
# For containerized kubectl-mtv usage
docker run -e MTV_VDDK_INIT_IMAGE=quay.io/your-registry/vddk:8.0.1 \
  kubectl-mtv-image create provider --name vsphere-prod --type vsphere
```

```yaml
# In Kubernetes pods
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kubectl-mtv
    env:
    - name: MTV_VDDK_INIT_IMAGE
      value: "quay.io/your-registry/vddk:8.0.1"
```
Environment Variable Validation
```shell
# Verify environment variable is set
if [ -z "$MTV_VDDK_INIT_IMAGE" ]; then
  echo "MTV_VDDK_INIT_IMAGE is not set"
else
  echo "MTV_VDDK_INIT_IMAGE is set to: $MTV_VDDK_INIT_IMAGE"
fi

# Test with provider creation (should use default image)
kubectl mtv create provider --name vsphere-test --type vsphere \
  --url https://vcenter.test.com/sdk \
  --username admin \
  --password password123 \
  --dry-run
```
Using the VDDK Image in Provider Creation
Setting the Global VDDK Image (Recommended)
The recommended way to configure VDDK is to set the image globally using the settings command. This ensures all vSphere providers use the VDDK image automatically, without specifying it on every provider:
```shell
# Set the global VDDK image
kubectl mtv settings set --setting vddk_image \
  --value quay.io/company/vddk:8.0.1

# Verify the setting
kubectl mtv settings get --setting vddk_image
```
Once the global image is configured, create providers without the --vddk-init-image flag:
```shell
# Provider automatically uses the global VDDK image
kubectl mtv create provider --name vsphere-auto --type vsphere \
  --url https://vcenter.example.com/sdk \
  --username administrator@vsphere.local \
  --password YourPassword
```
When the MTV_VDDK_INIT_IMAGE environment variable is set, providers also pick up the VDDK image automatically:
```shell
# This will automatically use the VDDK image from MTV_VDDK_INIT_IMAGE
kubectl mtv create provider --name vsphere-auto --type vsphere \
  --url https://vcenter.example.com/sdk \
  --username administrator@vsphere.local \
  --password YourPassword
```
Per-Provider VDDK Image (Fallback)
If you do not have permission to modify ForkliftController settings, you can specify the VDDK image directly on the provider:
```shell
# Use specific VDDK image for this provider
kubectl mtv create provider --name vsphere-custom --type vsphere \
  --url https://vcenter.example.com/sdk \
  --username administrator@vsphere.local \
  --password YourPassword \
  --vddk-init-image quay.io/company/vddk:8.0.2
```
VDDK Performance Optimization
Enable advanced VDDK optimization features. When the VDDK image is set globally, you only need to add the tuning flags:
Buffer Size Tuning
The `--vddk-buf-size-in-64k` parameter controls the buffer size in 64KB units:
```shell
# Small VMs (default - automatic sizing)
--vddk-buf-size-in-64k 0

# Medium VMs (8MB buffer)
--vddk-buf-size-in-64k 128

# Large VMs (16MB buffer)
--vddk-buf-size-in-64k 256

# Very large VMs (32MB buffer)
--vddk-buf-size-in-64k 512
```
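The unit math is straightforward: the effective buffer size is the flag value multiplied by 64 KB. For example:

```shell
# Effective buffer size in KB for --vddk-buf-size-in-64k 128
echo $((128 * 64))   # 8192 KB = 8 MB
```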
Buffer Count Tuning
The `--vddk-buf-count` parameter controls the number of parallel buffers:
```shell
# Low concurrency (default)
--vddk-buf-count 0

# Medium concurrency
--vddk-buf-count 8

# High concurrency
--vddk-buf-count 16

# Maximum concurrency (use with caution)
--vddk-buf-count 32
```
```shell
# Test container runtime
podman --version
podman run hello-world

# Check registry connectivity
podman login quay.io
podman pull registry.redhat.io/ubi8/ubi:latest
```
Build Directory Problems
```shell
# Check available disk space
df -h /tmp

# Use custom build directory with more space
mkdir -p /data/vddk-build
kubectl mtv create vddk-image \
  --tar ~/VMware-vix-disklib-distrib-8.0.1.tar.gz \
  --tag quay.io/company/vddk:8.0.1 \
  --build-dir /data/vddk-build
```