Storage array offloading represents one of the most advanced optimization techniques available in kubectl-mtv, capable of dramatically reducing migration time and network overhead by leveraging direct storage array capabilities. This chapter provides comprehensive coverage of storage offloading concepts, supported platforms, configuration, and best practices.

For foundational information about migration performance, see the official Forklift performance recommendations.

Overview of Storage Array Offloading

What is Storage Array Offloading?

Storage array offloading is an advanced migration optimization technique that delegates the actual disk copying operation from the standard virt-v2v convertor pods directly to the storage infrastructure itself. Instead of streaming data through the Kubernetes cluster network, the storage array performs the copy operation internally, resulting in significant performance improvements.

Key Benefits

  • Significant Speed Improvement: Disk copies complete much faster than traditional network-based transfers
  • Reduced Network Overhead: Eliminates large data transfers over the management and migration networks
  • Lower CPU Usage: Reduces load on Kubernetes nodes and convertor pods
  • Improved Reliability: Direct storage operations are less prone to network-related failures
  • Scalability: Enables concurrent migrations without saturating network bandwidth

How It Works

Storage array offloading operates through the vSphere XCopy Volume Populator, which integrates with VMware’s vSphere and compatible storage arrays:

  1. Feature Flag Activation: The Forklift controller must have the feature_copy_offload flag enabled
  2. Storage Map Consultation: The controller consults the StorageMap’s offload plugin configuration to determine if VM disks can be offloaded
  3. Volume Populator Creation: When creating a PVC for the v2v pod, the controller also creates a VSphereXcopyVolumePopulator resource
  4. PVC Data Source Reference: The PVC’s dataSourceRef field is set to reference the volume populator (see the sketch after this list)
  5. XCOPY Execution: The populator uses the storage array API to map the PVC to an ESXi host, then uses the vSphere API to invoke vmkfstools XCOPY operations
  6. VAAI Integration: Requires vSphere APIs for Array Integration (VAAI) and storage acceleration to be enabled on the ESXi host
  7. Fallback Handling: Non-compatible volumes automatically fall back to standard virt-v2v processing
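
To illustrate steps 3 and 4, the sketch below shows a PVC whose dataSourceRef points at a populator resource. The dataSourceRef mechanics are standard Kubernetes; the API group, kind, and names shown here are assumptions based on the description above and may differ in your Forklift release.

# Hedged sketch: PVC wired to a vSphere XCopy volume populator.
# The apiGroup/kind and resource names are illustrative assumptions, not a verified schema.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-0
  namespace: openshift-mtv
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  storageClassName: flashsystem-gold
  resources:
    requests:
      storage: 100Gi
  dataSourceRef:                      # tells the volume-populator machinery to fill this PVC
    apiGroup: forklift.konveyor.io    # assumed group for the populator CRD
    kind: VSphereXcopyVolumePopulator
    name: vm-disk-0-populator         # populator resource created by the controller (step 3)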

Supported Storage Vendors and Arrays

kubectl-mtv supports offloading for the following enterprise storage platforms:

IBM Storage Solutions

  • IBM FlashSystem (flashsystem): All-flash and hybrid arrays

Hitachi Vantara Solutions

  • Hitachi Vantara (vantara): Enterprise storage platform (formerly Hitachi Data Systems)

NetApp Solutions

  • NetApp ONTAP (ontap): Both AFF (All Flash FAS) and FAS hybrid arrays

HPE Solutions

  • HPE Primera (primera3par): Mission-critical storage arrays
  • HPE 3PAR (included in primera3par): Traditional enterprise arrays

Pure Storage Solutions

  • Pure Storage FlashArray (pureFlashArray): All-flash arrays with DirectFlash technology

Dell Technologies Solutions

  • Dell PowerFlex (powerflex): Software-defined storage platform
  • Dell PowerMax (powermax): High-end enterprise arrays
  • Dell PowerStore (powerstore): Mid-range unified storage

Infinidat Solutions

  • Infinidat InfiniBox (infinibox): Petascale storage platforms

Prerequisites and Requirements

Forklift Controller Configuration

  • Feature Flag: The feature_copy_offload flag must be enabled on the Forklift controller
  • Volume Populator Image: Ensure the vSphere XCopy volume populator image is properly configured

vSphere Environment Requirements

  • VMware vSphere: Version 6.7 or higher
  • VAAI Support: vSphere APIs for Array Integration (VAAI) must be enabled on ESXi hosts
  • Storage Acceleration: Array-based acceleration must be enabled
  • ESXi Host Access: Either VIB installation or SSH access to ESXi hosts
  • Storage Array Compatibility: Source datastores must reside on supported storage arrays
  • Network Connectivity: Direct network access between Kubernetes cluster and storage array management interfaces

Storage Array Requirements

  • Management API Access: RESTful API endpoint accessible from Kubernetes cluster
  • XCopy Support: Storage array must support SCSI Extended Copy (XCopy) operations
  • Compatible Protocols: iSCSI, Fibre Channel, or NFS connectivity to both vSphere and Kubernetes environments

Kubernetes Cluster Requirements

  • Network Access: Ability to reach both vCenter and storage array management interfaces
  • Storage Classes: Pre-configured storage classes targeting the same physical storage arrays
  • CSI Drivers: Compatible Container Storage Interface drivers for the target storage platform
  • Feature Flag: Copy offload must be enabled in the Forklift controller configuration
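
A quick cluster-side check of the storage class and CSI driver prerequisites above (the storage class name is an example, not a required value):

# List storage classes and confirm one targets the same physical array as the source datastores
kubectl get storageclass

# Confirm the CSI driver for the target storage platform is registered
kubectl get csidrivers

# Inspect a candidate storage class's provisioner and parameters
kubectl get storageclass flashsystem-gold -o yaml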

Critical Limitations and Constraints

Migration Plan Constraints

There are important limitations to understand:

VDDK and Copy-Offload Mutual Exclusivity

  • A migration plan cannot mix VDDK mappings with copy-offload mappings
  • The migration controller copies disks either through CDI volumes (VDDK) or through Volume Populators (copy-offload)
  • Either all storage pairs in a plan include copy-offload details, or none of them do
  • If you mix the two approaches, the plan will fail

Technical Constraints

  • Same Storage Array Requirement: For XCOPY to work, the LUN (iSCSI or FC) backing the source VMDK disk and the LUN backing the target PVC must reside on the same storage array
  • Volume Mode Dependency: Works with vVol, RDM, and traditional VMFS-backed disks
  • ESXi Host Access: Requires either VIB installation or SSH access with proper authentication

Clone Methods: VIB vs SSH Implementation

The vSphere XCopy volume populator supports two methods for executing vmkfstools clone operations on ESXi hosts:

VIB Method (Default)

Uses a custom VIB (vSphere Installation Bundle) installed on ESXi hosts to expose vmkfstools operations via the vSphere API.

Advantages:

  • Native vSphere API integration
  • No SSH service required
  • More secure (no SSH keys to manage)

Requirements:

  • Custom VIB installation on all target ESXi hosts
  • Administrative access to install VIBs

SSH Method (Alternative)

Uses SSH to execute vmkfstools commands directly on ESXi hosts, with execution restricted to the required commands.

Advantages:

  • No VIB installation required
  • Uses standard ESXi SSH service
  • Easier troubleshooting and monitoring
  • Works with any ESXi version supporting SSH

Requirements:

  • SSH service enabled on ESXi hosts
  • SSH key authentication with command restrictions
  • Secure script deployment for restricted operations

Configuring Clone Method

The clone method is configured in the Provider settings using the esxiCloneMethod key:

apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: my-vsphere-provider
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.company.com
  secret:
    name: vsphere-secret
    namespace: openshift-mtv
  settings:
    esxiCloneMethod: "ssh"  # Options: "vib" (default) or "ssh"
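
For an existing provider, the same setting can be applied with a merge patch. This is a minimal sketch assuming the Provider shown above; the fully qualified resource name is used to avoid ambiguity, adjust it if your cluster resolves the short name differently.

kubectl patch providers.forklift.konveyor.io my-vsphere-provider \
  -n openshift-mtv \
  --type merge \
  -p '{"spec":{"settings":{"esxiCloneMethod":"ssh"}}}'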

Configuration and Setup

Step 1: Enable Copy Offload Feature

First, enable the copy offload feature in the Forklift controller:

# Enable the copy offload feature flag
kubectl patch forkliftcontrollers.forklift.konveyor.io forklift-controller \
  --type merge \
  -p '{"spec": {"feature_copy_offload": "true"}}' \
  -n openshift-mtv

# Set the volume-populator image (if needed)
kubectl set env -n openshift-mtv deployment forklift-volume-populator-controller \
  --all VSPHERE_XCOPY_VOLUME_POPULATOR_IMAGE=quay.io/kubev2v/vsphere-xcopy-volume-populator

Step 2: Configure vSphere User Privileges

The vSphere user requires a role with the following privileges (a role named StorageOffloader is recommended; a hedged govc sketch for creating such a role follows the permission lists below):

Global Permissions:

  • Settings

Datastore Permissions:

  • Browse datastore
  • Low level file operations

Host Configuration Permissions:

  • Advanced settings
  • Query patch
  • Storage partition configuration
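
As referenced above, the role can be created with govc. The privilege identifiers below are the usual IDs corresponding to the permissions listed above; verify them against your vCenter version before use.

# Hedged sketch: create a StorageOffloader role with the privileges listed above
export GOVC_URL=https://vcenter.company.com
export GOVC_USERNAME=administrator@vsphere.local
export GOVC_PASSWORD='VCenterPassword123'
govc role.create StorageOffloader \
  Global.Settings \
  Datastore.Browse \
  Datastore.FileManagement \
  Host.Config.AdvancedConfig \
  Host.Config.Patch \
  Host.Config.Storage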

Step 3: Create Storage Array Credentials Secret

Create a secret with storage provider credentials. All providers require these mandatory fields:

kubectl create secret generic storage-array-creds \
  --from-literal=STORAGE_HOSTNAME="storage-array.company.com" \
  --from-literal=STORAGE_USERNAME="admin" \
  --from-literal=STORAGE_PASSWORD="StoragePassword123" \
  --from-literal=STORAGE_SKIP_SSL_VERIFICATION="false" \
  -n openshift-mtv

Step 4: Verify Storage Array Compatibility

Before configuring offloading, ensure your environment meets the requirements:

# Check vSphere datastore details
kubectl mtv get inventory storage my-vsphere-provider -o json | jq '.[] | select(.name == "production-datastore") | {name, type, freeSpace, totalSpace}'

# Verify storage array API connectivity (example for FlashSystem)
curl -k -u admin:password https://flashsystem.company.com:7443/rest/auth
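
On the ESXi side, VAAI and hardware-accelerated move can be checked with esxcli; a short sketch (run on the ESXi host or over SSH):

# Confirm hardware acceleration (VAAI) is enabled for data movement (1 = enabled)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Check per-device VAAI status (look for "Clone Status: supported" on the relevant LUNs)
esxcli storage core device vaai status get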

Step 5: Create Storage Mapping with Offloading

The most straightforward approach is to specify offload parameters directly in storage mapping pairs:

kubectl mtv create mapping storage offload-flashsystem \
  --source my-vsphere-provider \
  --target my-openshift-provider \
  --storage-pairs "production-ssd:flashsystem-gold;offloadPlugin=vsphere;offloadVendor=flashsystem" \
  --offload-vsphere-username "svc-migration@vsphere.local" \
  --offload-vsphere-password "VCenterPassword123" \
  --offload-vsphere-url "https://vcenter.company.com/sdk" \
  --offload-storage-username "flashsystem-admin" \
  --offload-storage-password "FlashSystemPassword123" \
  --offload-storage-endpoint "https://flashsystem.company.com:7443"

Step 6: Advanced Configuration with Multiple Vendors

For environments with multiple storage array vendors:

kubectl mtv create mapping storage multi-vendor-offload \
  --source vsphere-datacenter \
  --target openshift-production \
  --storage-pairs "flashsystem-tier1:premium-flash;offloadPlugin=vsphere;offloadVendor=flashsystem,ontap-tier2:standard-ssd;offloadPlugin=vsphere;offloadVendor=ontap,pure-tier0:ultra-performance;offloadPlugin=vsphere;offloadVendor=pureFlashArray" \
  --default-offload-plugin vsphere \
  --offload-vsphere-username vcenter-migration@company.local \
  --offload-vsphere-password $(cat /secure/vcenter-pass) \
  --offload-vsphere-url https://vcenter.prod.company.com \
  --offload-storage-username storage-admin \
  --offload-storage-password $(cat /secure/storage-pass) \
  --offload-storage-endpoint https://storage-mgmt.company.com

Step 7: Plan Creation with Offload-Optimized Settings

When creating migration plans, leverage offloading configurations:

# The --vms query selects VMs with more than 100 GB of total disk capacity
kubectl mtv create plan high-performance-migration \
  --source vsphere-datacenter \
  --target openshift-production \
  --storage-mapping multi-vendor-offload \
  --vms "where sum(disks.capacityInBytes) > 107374182400" \
  --migration-type warm \
  --convertor-affinity "REQUIRE nodes(node-role.kubernetes.io/storage=true) on node"

Vendor-Specific Secret Requirements

Each storage vendor requires specific additional fields in their secrets:

NetApp ONTAP

  • ONTAP_SVM (string): The SVM to use in all client interactions. It can be taken from the trident.netapp.io/v1 TridentBackend resource field config.ontap_config.svm.
kubectl create secret generic ontap-offload-creds \
  --from-literal=STORAGE_HOSTNAME="ontap-cluster.company.com" \
  --from-literal=STORAGE_USERNAME="ontap-admin" \
  --from-literal=STORAGE_PASSWORD="ONTAPPassword123" \
  --from-literal=ONTAP_SVM="production-svm" \
  --from-literal=STORAGE_SKIP_SSL_VERIFICATION="false" \
  -n openshift-mtv
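
If NetApp Trident manages the backend, the SVM referenced above can often be read from the TridentBackend resource. The sketch below is hedged: the field path follows the description above, while the namespace and item index are assumptions.

# Read the SVM name from the first TridentBackend (namespace and index assumed)
kubectl get tridentbackends.trident.netapp.io -n trident \
  -o jsonpath='{.items[0].config.ontap_config.svm}'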

Pure Storage FlashArray

  • PURE_CLUSTER_PREFIX (string): Cluster prefix set in the StorageCluster resource. Get it with: printf "px_%.8s" $(oc get storagecluster -A -o=jsonpath='{.items[?(@.spec.cloudStorage.provider=="pure")].status.clusterUid}')
kubectl create secret generic pure-offload-creds \
  --from-literal=STORAGE_HOSTNAME="pure-array.company.com" \
  --from-literal=STORAGE_USERNAME="pureuser" \
  --from-literal=STORAGE_PASSWORD="PureStorageAPIKey" \
  --from-literal=PURE_CLUSTER_PREFIX="px_12345678" \
  --from-literal=STORAGE_SKIP_SSL_VERIFICATION="false" \
  -n openshift-mtv

Dell PowerMax

  • POWERMAX_SYMMETRIX_ID (string): The Symmetrix ID of the storage array. It can be taken from the ConfigMap in the powermax namespace that the CSI driver uses.
  • POWERMAX_PORT_GROUP_NAME (string): The port group to use for masking view creation.
kubectl create secret generic powermax-offload-creds \
  --from-literal=STORAGE_HOSTNAME="powermax.company.com" \
  --from-literal=STORAGE_USERNAME="powermax-admin" \
  --from-literal=STORAGE_PASSWORD="PowerMaxPassword123" \
  --from-literal=POWERMAX_SYMMETRIX_ID="000197800123" \
  --from-literal=POWERMAX_PORT_GROUP_NAME="PG-OpenShift" \
  --from-literal=STORAGE_SKIP_SSL_VERIFICATION="false" \
  -n openshift-mtv

Dell PowerFlex

  • POWERFLEX_SYSTEM_ID (string): The system ID of the storage array. It can be taken from the vxflexos-config ConfigMap in the vxflexos or openshift-operators namespace.
kubectl create secret generic powerflex-offload-creds \
  --from-literal=STORAGE_HOSTNAME="powerflex.company.com" \
  --from-literal=STORAGE_USERNAME="powerflex-admin" \
  --from-literal=STORAGE_PASSWORD="PowerFlexPassword123" \
  --from-literal=POWERFLEX_SYSTEM_ID="1234567890abcdef" \
  --from-literal=STORAGE_SKIP_SSL_VERIFICATION="false" \
  -n openshift-mtv

Detailed Vendor Configurations

IBM FlashSystem Configuration

IBM FlashSystem arrays provide excellent offloading performance with their Spectrum Virtualize technology:

kubectl mtv create mapping storage ibm-flashsystem \
  --source vsphere-prod \
  --target openshift-target \
  --storage-pairs "flashsystem-gold:flashsystem-tier1;offloadPlugin=vsphere;offloadVendor=flashsystem;volumeMode=Block;accessMode=ReadWriteOnce" \
  --offload-vsphere-username "svc-flashsystem@vsphere.local" \
  --offload-vsphere-password "FlashSystemVCPassword" \
  --offload-vsphere-url "https://vcenter.prod.company.com" \
  --offload-storage-username "flashsystem-admin" \
  --offload-storage-password "FlashSystemPassword" \
  --offload-storage-endpoint "https://flashsystem.company.com:7443"

FlashSystem Optimization Tips:

  • Use Block volume mode for maximum performance
  • Ensure FlashSystem and vSphere share the same SAN fabric
  • Configure multiple paths for high availability
  • Enable compression and deduplication on FlashSystem for space efficiency

NetApp ONTAP Configuration

NetApp ONTAP provides robust offloading through FlexClone and SnapMirror technologies:

kubectl mtv create mapping storage netapp-ontap \
  --source vsphere-prod \
  --target openshift-target \
  --storage-pairs "netapp-nfs:ontap-nas;offloadPlugin=vsphere;offloadVendor=ontap;volumeMode=Filesystem,netapp-san:ontap-san;offloadPlugin=vsphere;offloadVendor=ontap;volumeMode=Block" \
  --offload-vsphere-username "ontap-svc@vsphere.local" \
  --offload-vsphere-password "ONTAPVCPassword" \
  --offload-vsphere-url "https://vcenter.company.com" \
  --offload-storage-username "ontap-admin" \
  --offload-storage-password "ONTAPPassword" \
  --offload-storage-endpoint "https://ontap-cluster.company.com"

ONTAP Optimization Tips:

  • Use NFS datastores for Filesystem volume mode
  • Use SAN (iSCSI/FC) datastores for Block volume mode
  • Enable NetApp deduplication and compression
  • Consider ONTAP FlexGroup volumes for very large datasets

Pure Storage FlashArray Configuration

Pure Storage FlashArray provides native integration with advanced data reduction:

kubectl mtv create mapping storage pure-flasharray \
  --source vsphere-prod \
  --target openshift-target \
  --storage-pairs "pure-vvol:pure-block-premium;offloadPlugin=vsphere;offloadVendor=pureFlashArray;volumeMode=Block;accessMode=ReadWriteOnce" \
  --offload-vsphere-username "pure-svc@vsphere.local" \
  --offload-vsphere-password "PureVCPassword" \
  --offload-vsphere-url "https://vcenter.company.com" \
  --offload-storage-username "pureuser" \
  --offload-storage-password "PureStorageAPIKey" \
  --offload-storage-endpoint "https://pure-array.company.com" \
  --offload-cacert @/certs/pure-ca.pem

Pure Storage Optimization Tips:

  • Leverage vVols (Virtual Volumes) for maximum integration
  • Use Pure’s native data reduction (always-on deduplication and compression)
  • Configure Pure CloudSnap for backup integration
  • Use DirectFlash modules for consistent low latency

Dell PowerMax Configuration

Dell PowerMax provides enterprise-grade offloading for mission-critical workloads:

kubectl mtv create mapping storage dell-powermax \
  --source vsphere-enterprise \
  --target openshift-target \
  --storage-pairs "powermax-diamond:powermax-tier0;offloadPlugin=vsphere;offloadVendor=powermax;volumeMode=Block;accessMode=ReadWriteOnce" \
  --offload-vsphere-username "powermax-svc@vsphere.local" \
  --offload-vsphere-password "PowerMaxVCPassword" \
  --offload-vsphere-url "https://vcenter.enterprise.com" \
  --offload-storage-username "powermax-admin" \
  --offload-storage-password "PowerMaxPassword" \
  --offload-storage-endpoint "https://powermax.company.com:8443"

PowerMax Optimization Tips:

  • Use PowerMax Diamond service levels for highest performance
  • Enable FAST (Fully Automated Storage Tiering) for optimization
  • Configure SRDF for disaster recovery integration
  • Use TimeFinder for snapshot-based testing

Security and Authentication

Credential Management Best Practices

Storage offloading requires credentials for both vSphere and storage arrays. Follow these security best practices:

1. Kubernetes Secrets for Credential Storage

# Create separate secrets for each storage array
kubectl create secret generic flashsystem-offload-creds \
  --from-literal=vsphere-username=svc-migration@vsphere.local \
  --from-literal=vsphere-password=VCenterPassword \
  --from-literal=vsphere-url=https://vcenter.company.com \
  --from-literal=storage-username=flashsystem-admin \
  --from-literal=storage-password=FlashSystemPassword \
  --from-literal=storage-endpoint=https://flashsystem.company.com:7443 \
  -n konveyor-forklift

# Reference the secret in storage mapping
kubectl mtv create mapping storage secure-flashsystem \
  --source vsphere-prod \
  --target openshift-target \
  --storage-pairs "production-ssd:flashsystem-gold;offloadPlugin=vsphere;offloadVendor=flashsystem;offloadSecret=flashsystem-offload-creds"

2. Certificate Authority Configuration

For storage arrays with custom CA certificates:

# Store CA certificate in a ConfigMap
kubectl create configmap storage-ca-certs \
  --from-file=flashsystem-ca.pem=/path/to/flashsystem-ca.pem \
  --from-file=ontap-ca.pem=/path/to/ontap-ca.pem \
  -n konveyor-forklift

# Reference in mapping creation
kubectl mtv create mapping storage secure-multi-vendor \
  --source vsphere-prod \
  --target openshift-target \
  --storage-pairs "tier1:flashsystem-gold;offloadPlugin=vsphere;offloadVendor=flashsystem" \
  --offload-cacert @/certs/storage-ca-bundle.pem

3. Least Privilege Access

Configure minimal required permissions for offload operations:

vSphere Permissions Required:

  • Datastore.Browse
  • Datastore.LowLevelFileOperations
  • VirtualMachine.Config.DiskLease

Storage Array Permissions Required:

  • Read access to source volumes/LUNs
  • Write access to create target volumes/LUNs
  • XCopy operation permissions

Performance Optimization and Tuning

Optimal Configuration Strategies

1. Network Bandwidth Planning

While offloading reduces network usage, control plane communication still requires adequate bandwidth:

# Configure migration with bandwidth considerations
kubectl mtv create plan bandwidth-optimized \
  --source vsphere-datacenter \
  --target openshift-production \
  --storage-mapping offload-enabled \
  --migration-type warm \
  --convertor-affinity "REQUIRE nodes(network.topology.zone=storage-zone) on zone" \
  --vms "where disks.capacityInBytes > 53687091200"  # >50GB VMs benefit most

2. Concurrent Migration Tuning

Optimize for concurrent operations:

# Configure the Forklift controller for optimal throughput
kubectl patch forkliftcontroller forklift-controller \
  -n konveyor-forklift \
  --type merge \
  --patch '{
    "spec": {
      "controller_max_vm_inflight": 20,
      "controller_precopy_interval": 60
    }
  }'

3. Storage Array Specific Optimizations

IBM FlashSystem:

  • Enable Easy Tier for automatic data placement
  • Use Global Mirror for long-distance replication scenarios
  • Configure multiple host connections for redundancy

NetApp ONTAP:

  • Enable adaptive QoS policies for consistent performance
  • Use NFS v4.1 with session trunking for increased throughput
  • Configure ONTAP Cloud Backup for integrated backup strategies

Pure Storage:

  • Enable Pure1 analytics for performance monitoring
  • Use ActiveCluster for metro-distance clustering
  • Configure Pure CloudSnap for cloud integration

Monitoring and Troubleshooting

Monitoring Offload Operations

1. Checking Offload Status

# Verify offload plugin configuration in storage mapping
kubectl mtv describe mapping storage offload-enabled

# Check migration plan for offload utilization
kubectl mtv describe plan offload-migration

# Monitor migration progress with offload details
kubectl mtv get plan offload-migration --watch -o yaml
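
When offloading is active, each offloaded disk should have a populator resource and a PVC whose dataSourceRef points at it. The checks below are a sketch; the CRD plural name and namespace are assumptions based on the populator kind described earlier.

# List vSphere XCopy volume populator resources created for the plan (plural name assumed)
kubectl get vspherexcopyvolumepopulators.forklift.konveyor.io -n openshift-mtv

# Confirm migrated disks' PVCs are populated via dataSourceRef rather than a CDI DataVolume
kubectl get pvc -n openshift-mtv \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.dataSourceRef.kind}{"\n"}{end}'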

2. Storage Array Integration Status

# Check storage array connectivity from a test pod
kubectl run storage-test --image=curlimages/curl:latest --rm -it -- /bin/sh

# Inside the pod, test storage array API connectivity
curl -k -u admin:password https://flashsystem.company.com:7443/rest/auth
curl -k -u admin:password https://ontap-cluster.company.com/api/cluster

Common Issues and Resolutions

Common troubleshooting scenarios:

1. vSphere/ESXi Issues

Error: SOAP error with no apparent root cause message

Cause: vSphere invoking SOAP/REST endpoints on ESXi can fail due to transient network or API issues that typically resolve on retry.

Resolution:

  • Restart the migration to retry the populator
  • Check ESXi host connectivity and load
  • Verify vSphere API access and credentials

Error: VIB installation issues - The object or item referred to could not be found

Cause: The VIB is installed but /etc/init.d/hostd was not restarted, so the vmkfstools namespace in esxcli is not updated.

Resolution:

# SSH into the ESXi host and restart hostd
ssh root@esxi-host
/etc/init.d/hostd restart

# Wait for ESXi to renew connection with vCenter (may take a few minutes)

2. SSH Method Issues

Error: manual SSH key configuration required or failed to connect via SSH

Causes and Solutions:

  1. SSH service disabled: Enable SSH on the ESXi host:
    
    # On ESXi host
    vim-cmd hostsvc/enable_ssh
    vim-cmd hostsvc/start_ssh
    
  2. SSH keys not deployed: Follow the manual key installation instructions in the pod logs
  3. Network connectivity: Verify ESXi management network is accessible from migration pods
  4. Timeout issues: Increase SSH_TIMEOUT_SECONDS in the Provider secret (default: 30)

Verification:

# Check SSH service status on ESXi
vim-cmd hostsvc/get_ssh_status

# Test SSH connectivity
ssh -i /path/to/private_key root@esxi-host-ip

Error: SSH connection timeout or context deadline exceeded

Solutions:

  • Increase SSH_TIMEOUT_SECONDS in the Provider secret (a patch sketch follows this list)
  • Check network latency between migration pods and ESXi hosts
  • Verify ESXi host is not overloaded
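
To raise the SSH timeout mentioned above, the value can be patched into the vSphere Provider secret. This is a hedged sketch: the secret name comes from the Provider example earlier, the key name from the troubleshooting note, and the value is in seconds.

# Raise the populator's SSH timeout to 120 seconds (secret name assumed from the Provider example)
kubectl patch secret vsphere-secret -n openshift-mtv \
  --type merge \
  -p '{"stringData":{"SSH_TIMEOUT_SECONDS":"120"}}'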

3. NetApp ONTAP Issues

Error: cannot derive SVM to use; please specify SVM in config file

Cause: Configuration issue with ONTAP requiring explicit SVM specification.

Resolution:

# On the ONTAP system, check the SVM configuration
vserver show -vserver ${NAME_OF_SVM}

# Identify the SVM's management interface and put its hostname
# in the STORAGE_HOSTNAME field of the secret

4. Storage Array Authentication Failures

Symptoms:

  • Migration fails with authentication errors
  • Storage array logs show failed login attempts

Resolution:

# Test credentials manually
curl -k -u username:password https://storage-array.company.com/api/auth

# Update credentials in secret
kubectl patch secret storage-offload-creds \
  --patch '{"data":{"storage-password":"'"$(echo -n 'NewPassword' | base64)"'"}}'

# Restart affected pods to pick up new credentials
kubectl delete pods -n konveyor-forklift -l app=forklift-controller

5. Mixed VDDK and Offload Configuration

Error: Plan fails when mixing VDDK and copy-offload mappings

Cause: Critical Limitation - A migration plan cannot mix VDDK mappings with copy-offload mappings.

Resolution:

  • Ensure ALL storage pairs in the plan either include copy-offload details OR none of them do
  • Create separate plans for VDDK-based and offload-based migrations
  • Review storage mapping configuration to ensure consistency
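
A quick consistency check is to inspect the generated StorageMap and confirm that either every pair carries offload details or none does. The sketch below assumes the mapping created earlier and the StorageMap plural resource name; adjust both for your environment.

# Show the offload plugin configuration for each storage pair in the mapping
kubectl get storagemaps.forklift.konveyor.io multi-vendor-offload -n openshift-mtv -o yaml | grep -B2 -A4 offloadPlugin

# Or review the mapping with kubectl-mtv
kubectl mtv describe mapping storage multi-vendor-offload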

Performance Benefits

Storage array offloading provides significant performance improvements over traditional network-based migration methods. The actual gains vary based on environment-specific factors.

Key Performance Factors

  1. Storage Array Performance: Higher-performance arrays (NVMe-based) provide better improvement ratios
  2. Network Topology: Reduced network congestion benefits other cluster operations
  3. Concurrent Operations: Better scaling for concurrent migrations up to storage array limits
  4. Data Characteristics: Highly compressible data shows additional benefits with compression-capable arrays

Best Practices and Recommendations

Planning and Design

  1. Assessment Phase:
    • Inventory all storage arrays in your vSphere environment
    • Verify XCopy support and API availability
    • Plan network connectivity for storage management interfaces
  2. Staged Rollout:
    • Start with non-production migrations to validate configuration
    • Test with various VM sizes and storage types
    • Monitor performance improvements and adjust configurations
  3. Security Integration:
    • Use Kubernetes secrets for all credentials
    • Implement certificate validation for production environments
    • Follow principle of least privilege for storage array access

Operational Excellence

  1. Monitoring Strategy:
    • Integrate storage array monitoring with Kubernetes observability
    • Set up alerts for offload operation failures
    • Monitor storage array performance during migration windows
  2. Backup and Recovery:
    • Leverage storage array snapshot capabilities for migration rollback
    • Coordinate with backup systems to avoid conflicts during migration windows
    • Test recovery procedures with offload-migrated VMs
  3. Capacity Planning:
    • Account for storage array capacity requirements
    • Plan for peak concurrent migration loads
    • Consider storage array replication requirements for disaster recovery

Cost Optimization

  1. Resource Efficiency:
    • Use storage array data reduction features (compression, deduplication)
    • Right-size Kubernetes storage classes based on workload requirements
    • Leverage tiering capabilities for cost-effective storage placement
  2. Network Cost Reduction:
    • Reduced bandwidth requirements lower network infrastructure costs
    • Minimize inter-site data transfer costs in multi-site deployments
    • Enable concurrent migrations without network capacity upgrades

Integration with Migration Strategies

Cold Migration with Storage Offload

For cold migrations, storage offloading provides the most dramatic performance improvements:

# The --vms query selects powered-off VMs with more than 10 GB of total disk capacity
kubectl mtv create plan cold-offload-optimized \
  --source vsphere-datacenter \
  --target openshift-production \
  --storage-mapping enterprise-offload \
  --migration-type cold \
  --vms "where powerState = 'poweredOff' and sum(disks.capacityInBytes) > 10737418240" \
  --convertor-affinity "REQUIRE nodes(node-role.kubernetes.io/storage=true) on node"

Warm Migration with Storage Offload

Warm migrations benefit from reduced precopy times and faster cutover operations:

# The --vms query selects powered-on VMs with more than 4 GB of RAM
kubectl mtv create plan warm-offload-optimized \
  --source vsphere-datacenter \
  --target openshift-production \
  --storage-mapping enterprise-offload \
  --migration-type warm \
  --vms "where powerState = 'poweredOn' and memory.sizeInBytes > 4294967296" \
  --convertor-affinity "PREFER nodes(storage.topology.zone=primary-storage) on zone"

Future Considerations and Roadmap

Emerging Technologies

  1. NVMe-oF Integration: Future support for NVMe over Fabrics for even higher performance
  2. Container Storage Interface (CSI): Enhanced integration with Kubernetes-native storage
  3. AI/ML Optimization: Intelligent workload placement and migration scheduling
  4. Multi-Cloud Storage: Extended support for cloud-native storage arrays

Vendor Ecosystem Expansion

The storage offload ecosystem continues to expand with new vendor integrations and enhanced capabilities.

Conclusion

Storage array offloading represents a significant advancement in virtual machine migration technology, offering substantial performance improvements while reducing infrastructure load. By leveraging the native capabilities of enterprise storage arrays, organizations can achieve migration speeds previously impossible with traditional network-based approaches.

The key to successful implementation lies in thorough planning, proper configuration, and understanding the specific capabilities and requirements of your storage infrastructure. When properly implemented, storage array offloading transforms migration from a time-consuming, network-intensive operation into an efficient, storage-native process that enables rapid datacenter transformations.

Next Steps

After implementing storage array offloading:

  1. Advanced Planning: Explore detailed migration planning in Chapter 10: Migration Plan Creation
  2. VM Customization: Learn about individual VM customization in Chapter 11: Customizing Individual VMs
  3. Performance Optimization: Configure convertor pod optimization in Chapter 13: Migration Process Optimization
  4. Monitoring: Implement comprehensive monitoring in Chapter 17: Debugging and Troubleshooting

Previous: Chapter 9: Mapping Management
Next: Chapter 10: Migration Plan Creation