
Building AMD64 QEMU Images remotely using Libvirt and Packer

May 24, 2024

I need to build images based on the AMD64 architecture while working from an ARM64 machine. While this is possible directly with the qemu-system-x86_64 binary, it tends to be extremely slow due to the overhead of emulating AMD64 on ARM.

Workbench

  • Ubuntu 22.04 LTS with libvirt installed (the remote build host)
  • MacBook Pro M2 with the Packer build files (the local ARM64 workstation)

Configuring the Libvirt Plugin

Connecting to the libvirt host

When using the libvirt plugin, I need to provide a Libvirt URI.

source "libvirt" "image" {
    libvirt_uri = "qemu+ssh://${var.user}@${var.host}/session?keyfile=${var.keyfile}&no_verify=1"
}
  • qemu+ssh:// denotes that I’ll be using the QEMU / KVM backend and connecting over SSH. The connection method determines the remaining arguments of the URI.
  • ${var.user}@${var.host} follows standard SSH syntax: the username and hostname of the machine that is running libvirt (the variable definitions themselves are sketched just after this list).
  • /session isolates the running builds from those at the system level. /system would work just as well.
  • keyfile=${var.keyfile} is used to authenticate to the remote machine automatically, without needing a password. This is useful later on, when I trigger the packer build automatically from a Git repository.
  • no_verify=1 is added so that I can point the build at any machine and have it “just work”. This is usually advised against, since skipping host-key verification opens the connection up to spoofing attacks.
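
The ${var.*} references above come from ordinary Packer input variables. A minimal sketch of what they could look like, assuming the names used in the URI (the defaults are placeholders, not real values):

variable "user" {
    type        = string
    description = "SSH user on the machine running libvirtd"
    default     = "ubuntu"
}

variable "host" {
    type        = string
    description = "Hostname or IP address of the machine running libvirtd"
    default     = "libvirt.example.internal"
}

variable "keyfile" {
    type        = string
    description = "Path to the SSH private key used to reach the libvirt host"
    default     = "~/.ssh/id_ed25519"
}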

Communicating with the libvirt guest

communicator {
    communicator                 = "ssh"
    ssh_username                 = var.username
    ssh_bastion_host             = var.host
    ssh_bastion_username         = var.user
    ssh_bastion_private_key_file = var.private_key
}
  • The difference between the ssh_* and ssh_bastion_* arguments is that the former refer to the target virtual machine being built, while the latter refer to the “middle-man” machine.
    • I require this because I don’t plan to expose the VM to any network outside of the machine hosting it.
    • Since I won’t have direct access from my local workstation, I need to communicate with the virtual machine via the machine that is hosting it.
    • By adding the ssh_bastion_* arguments, I’m telling Packer that in order to communicate with the VM, it needs to connect to the bastion machine first and then run all SSH traffic through it (effectively an SSH jump host, as sketched below).
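
Outside of Packer, this is the same idea as an SSH jump host. A rough manual equivalent, with a placeholder hostname and a placeholder guest address on the libvirt NAT network (both are assumptions, not values from the build):

# Jump through the libvirt host, then SSH to the guest on its internal network
ssh -J builder@libvirt-host ubuntu@192.168.122.50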

Configuring the libvirt daemon

My Observations

I came across a “Permission Denied” error when attempting to upload an existing image (in my case, the KVM Ubuntu Server image). This was due to AppArmor not being given a rule to trust the file when the domain was created. The error first surfaces in the following form, directly from Packer:

==> libvirt.example: DomainCreate.RPC: internal error: process exited while connecting to monitor: 2024-05-24T16:41:42.574660Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Could not open '/var/lib/libvirt/images/packer-cp8c6ap1ijp2kss08iv0-ua-artifact': Permission denied

At first, I assumed that there was an obvious permissions problem, and at first glance there did in fact appear to be one. Looking at the file upon creation, it was owned by root, with only the root user able to read and write it.

# ls -lah /var/lib/libvirt/images
-rw------- 1 root root  925M May 24 16:41 packer-cp8c6ap1ijp2kss08iv0-ua-artifact

This makes sense, since libvirtd is running under the root user, which is the default configuration from the Ubuntu repository. I didn’t see any configuration option to control what the permissions should be after an upload through libvirt either. I assumed this was the problem, since all QEMU instances run under a non-root user, libvirt-qemu.

# ps -aux | grep libvirtd
root      145945  0.4  0.1 1778340 28760 ?       Ssl  16:43   0:10 /usr/sbin/libvirtd

# ps -aux | grep qemu
libvirt+    3312  2.2 11.1 4473856 1817572 ?     Sl   May12 405:19 /usr/bin/qemu-system-x86_64

My second observation was that all images created directly within libvirt (e.g. with virt-manager) had what looked like “correct” permissions, i.e. ownership matching the user that QEMU would eventually run under:

# ls -lah /var/lib/libvirt/images
-rw-r--r-- 1 libvirt-qemu kvm   11G May 24 17:11 haos_ova-11.1.qcow2

Since no one else had reported this particular issue when using the libvirt plugin, I had gone down the route of assuming PEBKAC.

Allowing packer-uploaded images as backing store

Thanks to this discussion on Stack Overflow, I found that AppArmor had been blocking the request to the specific file in question.

# dmesg -w
[1081541.249157] audit: type=1400 audit(1716568577.970:119): apparmor="DENIED" operation="open" profile="libvirt-25106acc-cfd8-40f7-a7c6-f5c1c63bc16c" name="/var/lib/libvirt/images/packer-cp8c6ap1ijp2kss08iv0-ua-artifact" pid=43927 comm="qemu-system-x86" requested_mask="w" denied_mask="w" fsuid=64055 ouid=64055

Here, I can see that AppArmor is doing three things:

  • Denying an open request to the QEMU Image
    • apparmor="DENIED"
    • operation="open"
  • Denying writing to the QEMU Image
    • denied_mask="w"
  • Using a profile that is specific to the domain being launched
    • profile="libvirt-25106acc-cfd8-40f7-a7c6-f5c1c63bc16c"
    • This is achieved because libvirt automatically pushes AppArmor rules upon creation of a domain, which also means that libvirt must be using some form of template file or specification to create those rules.

This means that I need to find the template file that libvirt uses to generate the rules, and allow writes to Packer-uploaded QEMU images.

# /etc/apparmor.d/libvirt/TEMPLATE.qemu
# This profile is for the domain whose UUID matches this file.
# 

#include <tunables/global>

profile LIBVIRT_TEMPLATE flags=(attach_disconnected) {
  #include <abstractions/libvirt-qemu>
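  # Added rule: allow Packer-uploaded volumes to be read, written, and locked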
  /var/lib/libvirt/images/packer-** rwk,
}

As mentioned in the Stack Overflow post, simply adding /var/lib/libvirt/images/packer-** rwk, to the template file is enough to get past this issue.
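
The template is only read when libvirt generates a per-domain profile, so the change applies to the next domain that gets created. A quick way to confirm it (assuming the build is simply re-run from the directory holding the Packer files) is to watch the audit log while kicking the build off again:

# On the libvirt host: watch for new AppArmor denials
dmesg -w | grep DENIED

# On the workstation: re-run the build
packer build .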

End Result

By bringing everything together, I get a successful QCOW2 image visible in my default storage pool. I’m using the Ansible provisioner within the build block so that I can keep the execution steps separate from the Packer build script, and reusable across different cloud providers.
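
The build block itself isn’t shown in this post; a minimal sketch of what it could look like, assuming a playbook at ./playbook.yml and the source name used earlier:

build {
    sources = ["source.libvirt.image"]

    # Run the Ansible playbook against the freshly built guest over SSH
    provisioner "ansible" {
        playbook_file = "./playbook.yml"
    }
}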
