Migration hooks let you run custom code at specific points in the migration process. This chapter covers hook development, deployment, and integration with migration workflows.
Overview: Enabling Custom Automation
What are Migration Hooks?
Migration hooks are Kubernetes resources that define custom automation to be executed during VM migration:
Pre-migration Hooks: Execute before VM conversion begins
Post-migration Hooks: Execute after VM migration completes
Custom Logic: Handle migration-specific requirements unique to your environment
Ansible-Based: Leverage Ansible playbooks for automation logic
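Each hook is defined as a Hook custom resource that pairs a runner image with a base64-encoded playbook. A minimal sketch, assuming the Forklift Hook CRD (`forklift.konveyor.io/v1beta1`) and the default hook-runner image; verify both against your installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: database-backup-pre
  namespace: konveyor-forklift
spec:
  # Container image that executes the playbook
  image: quay.io/konveyor/hook-runner
  # Base64-encoded Ansible playbook, e.g. produced with:
  #   base64 -w0 my-playbook.yml
  playbook: <base64-encoded-playbook>
```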
Hook Execution Model
Hooks run as Kubernetes Jobs in the konveyor-forklift namespace:
Job Creation: Hook container is scheduled as a Kubernetes Job
Context Injection: Migration context is made available via mounted files
Playbook Execution: Ansible playbook runs with access to migration data
Result Handling: Job completion status determines hook success/failure
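The exact contents of the mounted context files depend on the source provider, but the fields referenced throughout this chapter suggest a shape roughly like the following (all values are illustrative, not an exhaustive schema):

```yaml
# plan.yml -- illustrative sketch
metadata:
  name: hooked-migration
  labels:
    environment: production
spec:
  targetNamespace: production

# workload.yml -- illustrative sketch
vm:
  name: database-01
  ipaddress: 10.0.0.15
```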
```yaml
# Save as health-check-hook.yml
- name: Post-Migration Health Check
  hosts: localhost
  tasks:
    - name: Load migration contexts
      include_vars:
        file: "{{ item }}"
        name: "{{ item | basename | regex_replace('\\.yml$', '') }}"
      loop:
        - plan.yml
        - workload.yml

    - name: Wait for VM to be ready
      kubernetes.core.k8s_info:
        api_version: kubevirt.io/v1
        kind: VirtualMachine
        name: "{{ workload.vm.name }}"
        namespace: "{{ plan.spec.targetNamespace | default('default') }}"
        wait: true
        wait_condition:
          type: Ready
          status: 'True'
        wait_timeout: 600

    - name: Check application endpoints
      uri:
        url: "http://{{ workload.vm.ipaddress }}:{{ item.port }}{{ item.path }}"
        method: GET
        status_code: 200
        timeout: 30
      register: health_checks
      loop:
        - { port: 8080, path: "/health" }
        - { port: 8080, path: "/ready" }
        - { port: 9090, path: "/metrics" }
      ignore_errors: true

    - name: Validate service responses
      assert:
        that:
          - item.status == 200
        fail_msg: "Health check failed for {{ item.url }}"
        success_msg: "Health check passed for {{ item.url }}"
      loop: "{{ health_checks.results }}"
      when: item.status is defined

    - name: Update monitoring configuration
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        name: monitoring-targets
        namespace: monitoring
        definition:
          data:
            "{{ workload.vm.name }}": |
              - targets: ['{{ workload.vm.ipaddress }}:9090']
                labels:
                  instance: '{{ workload.vm.name }}'
                  environment: '{{ plan.metadata.labels.environment | default("production") }}'

    - name: Send notification
      uri:
        url: "{{ notification_webhook }}"
        method: POST
        body_format: json
        body:
          text: "Migration completed for {{ workload.vm.name }} - Health checks passed"
          vm: "{{ workload.vm.name }}"
          plan: "{{ plan.metadata.name }}"
          status: "healthy"
      vars:
        notification_webhook: "{{ lookup('kubernetes.core.k8s', api_version='v1', kind='Secret', namespace='migration', resource_name='notification-config')['data']['webhook_url'] | b64decode }}"
```
```shell
# Add pre-hook to all VMs in the plan
kubectl mtv create plan --name hooked-migration \
  --source vsphere-prod \
  --pre-hook database-backup-pre \
  --vms "database-01,database-02,app-server-01"

# Add both pre and post hooks
kubectl mtv create plan --name comprehensive-hooks \
  --source vsphere-prod \
  --pre-hook preparation-hook \
  --post-hook validation-hook \
  --vms "where name ~= '.*prod.*'"

# Combined with other migration settings
kubectl mtv create plan --name production-with-hooks \
  --source vsphere-prod \
  --target-namespace production \
  --migration-type warm \
  --network-mapping prod-network-map \
  --storage-mapping prod-storage-map \
  --pre-hook backup-and-quiesce \
  --post-hook health-and-notify \
  --vms @production-vms.yaml
```
Hook Execution Order
When multiple hooks are configured:
Pre-hooks execute: Before VM conversion begins
Migration proceeds: VM conversion and data transfer
Post-hooks execute: After VM migration completes
Managing Hooks via PlanVM Configuration
Per-VM Hook Configuration
Individual VMs can have specific hooks using the PlanVMS format:
```shell
# Create plan with VM-specific hooks
kubectl mtv create plan --name vm-specific-hooks \
  --source vsphere-prod \
  --vms @vm-specific-hooks.yaml \
  --network-mapping prod-network-map \
  --storage-mapping prod-storage-map
```
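The referenced file lists VMs with their hook assignments. A sketch of what `vm-specific-hooks.yaml` might contain, modeled on the Plan CR's `vms` list (the `PreHook`/`PostHook` step names come from the Forklift Plan spec; VM and hook names are hypothetical, and the exact file shape accepted by `kubectl mtv` may differ):

```yaml
# vm-specific-hooks.yaml -- illustrative sketch
- name: database-01
  hooks:
    - hook:
        namespace: konveyor-forklift
        name: database-backup-pre
      step: PreHook
    - hook:
        namespace: konveyor-forklift
        name: health-check-post
      step: PostHook
- name: app-server-01
  hooks:
    - hook:
        namespace: konveyor-forklift
        name: health-check-post
      step: PostHook
```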
Hook Management via Plan Patching
Hooks can be added or modified after plan creation using the patch planvm command:
```shell
# Add a pre-migration hook to a specific VM
kubectl mtv patch planvm --plan-name existing-plan \
  --vm-name additional-vm \
  --add-pre-hook new-preparation-hook

# Add a post-migration hook to a specific VM
kubectl mtv patch planvm --plan-name existing-plan \
  --vm-name additional-vm \
  --add-post-hook health-check-post

# Remove a specific hook from a VM
kubectl mtv patch planvm --plan-name existing-plan \
  --vm-name additional-vm \
  --remove-hook new-preparation-hook

# Clear all hooks from a VM
kubectl mtv patch planvm --plan-name existing-plan \
  --vm-name additional-vm \
  --clear-hooks
```
Error Handling in Hooks

```yaml
# Error handling in hook playbooks
- name: Robust Hook with Error Handling
  hosts: localhost
  tasks:
    - name: Set failure flag
      set_fact:
        hook_failed: false

    - name: Critical operation with error handling
      block:
        - name: Perform critical task
          # Task that might fail
          shell: risky_command_here
          register: result
      rescue:
        - name: Handle failure
          set_fact:
            hook_failed: true

        - name: Log failure details
          debug:
            msg: "Hook failed: {{ ansible_failed_result.msg }}"

        - name: Send failure notification
          # Notification logic here (debug used as a placeholder)
          debug:
            msg: "Notify operators about the failure"
      always:
        - name: Cleanup operations
          # Cleanup logic here (debug used as a placeholder)
          debug:
            msg: "Running cleanup"

    - name: Fail hook if critical operations failed
      fail:
        msg: "Hook execution failed"
      when: hook_failed
```
Secret and ConfigMap Access
```yaml
# Secure credential access in hooks
- name: Secure Credential Management
  hosts: localhost
  tasks:
    - name: Load database credentials
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Secret
        name: database-credentials
        namespace: migration-secrets
      register: db_creds

    - name: Use credentials securely
      # Use credentials from db_creds.resources[0].data
      # (Secret data is base64-encoded; the 'password' key is an example)
      set_fact:
        db_password: "{{ db_creds.resources[0].data.password | b64decode }}"
      no_log: true  # Don't log sensitive operations
```
Timeout and Deadline Management
```yaml
# Timeout management in hook operations
- name: Hook with Timeout Management
  hosts: localhost
  tasks:
    - name: Operation with timeout
      # Long running operation here (placeholder command)
      shell: long_running_command_here
      async: 600     # 10 minute timeout
      poll: 0        # Run in the background; status is polled below
      register: operation_result

    - name: Wait for background task
      async_status:
        jid: "{{ operation_result.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 60    # Check every 10 seconds, up to 10 minutes
      delay: 10
```