AMD SEV usage in YAOOK

Preface: flavor and image properties for SEV

The following properties related to AMD SEV features can be set on Nova flavors and/or Glance images:

Property name            | Flavor | Image | Comment
-------------------------|--------|-------|-------------------------------------------
hw_machine_type          |        |   x   | Must be q35.
hw_mem_encryption        |   x    |   x   | Must be true.
hw_mem_encryption_model  |   x    |   x   | Choices: amd-sev, amd-sev-es, amd-sev-snp.
hw_firmware_type         |        |   x   | Must be uefi.
hw_firmware_stateless    |        |   x   | For SEV-SNP, must be set to true.

Note: In case of flavors, the property names are prefixed with hw: instead of hw_, e.g., hw:mem_encryption.

When creating a virtual machine in OpenStack, these properties may appear on the flavor, on the image, or (for some properties) on both. If the same property is specified on both flavor and image, the values must not conflict.

When selecting or preparing the image and flavor for a virtual machine, make sure that all necessary properties are set.

Creating a virtual machine with AMD SEV-SNP

To create a virtual machine with AMD SEV-SNP enabled, the following steps are necessary:

  1. Select a suitable flavor with the necessary attributes.

  2. Select or upload a suitable image with the necessary attributes.

  3. Create a Nova instance using the selected flavor and image.

Choosing a SEV-SNP-compatible flavor

Select a flavor that fulfills the following requirements:

  • in the flavor’s properties:

    • hw:mem_encryption is either not specified or set to true

    • hw:mem_encryption_model is either not specified or set to amd-sev-snp

Use the following process to select such a flavor:

  1. Use openstack flavor list to enumerate the flavors available to your project.

  2. Use openstack flavor show $NAME_OR_ID to inspect an individual flavor.

  3. In the output of the openstack flavor show command, inspect the properties row and look for the property names mentioned above.
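The flavor check can also be scripted. The following is a minimal sketch, not part of any official tooling: check_flavor_props is a hypothetical helper that inspects the properties string as printed by openstack flavor show -f value -c properties (the exact quoting of that output may vary between OpenStack client versions; adapt the patterns if necessary):

```shell
# Hypothetical helper (sketch): inspect a flavor's "properties" string
# for SEV-SNP compatibility. Both properties may be absent; if present,
# they must carry the expected values.
check_flavor_props() {
    props="$1"
    ok=true
    # if hw:mem_encryption is set, it must be 'true'
    if echo "$props" | grep -q "hw:mem_encryption=" \
       && ! echo "$props" | grep -q "hw:mem_encryption='true'"; then
        ok=false
    fi
    # if hw:mem_encryption_model is set, it must be 'amd-sev-snp'
    if echo "$props" | grep -q "hw:mem_encryption_model=" \
       && ! echo "$props" | grep -q "hw:mem_encryption_model='amd-sev-snp'"; then
        ok=false
    fi
    if $ok; then echo "flavor ok"; else echo "flavor not SEV-SNP compatible"; fi
}

# example usage:
# check_flavor_props "$(openstack flavor show $NAME_OR_ID -f value -c properties)"
```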

Choosing a SEV-SNP-compatible image

The image must fulfill the following requirements:

  • in the image’s properties:

    • hw_machine_type must be set to q35

    • hw_mem_encryption must* be set to true

    • hw_mem_encryption_model must* be set to amd-sev-snp

    • hw_firmware_type must be set to uefi

    • hw_firmware_stateless must be set to true

  • the operating system contained in the image:

    • must support UEFI boot

    • use Linux kernel version 5.19 or later

* If hw:mem_encryption and/or hw:mem_encryption_model are specified on the chosen flavor, their image property counterparts may be omitted. However, it is strongly recommended to always set those properties on the image to expose conflicting settings in the flavor.

Operating systems fulfilling these requirements (UEFI boot, kernel version) include, but are not limited to:

  • Ubuntu 24.04 or later

  • Debian 13 or later

  • RHEL 9.3 or later

  • SLES 15 SP4 or later

An image adhering to these requirements can either be selected from the pool of existing images (if any) or uploaded.

a) Selecting an existing image

Use the following process to select an image:

  1. Use openstack image list to enumerate the images available to your project.

  2. Use openstack image show $NAME_OR_ID to inspect an individual image.

  3. In the output of the openstack image show command, inspect the properties row and look for the property names mentioned above.

  4. Make sure that the operating system contained in the image fulfills the requirements, either by deducing the version from the image name (if applicable), by booting and inspecting the image once, or by asking the owner of the image.
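The image property check can likewise be scripted. A minimal sketch, assuming the key='value' properties output format of the OpenStack client (check_image_props is a hypothetical helper, not part of any official tooling):

```shell
# Hypothetical helper (sketch): verify that an image's "properties"
# string, as printed by `openstack image show -f value -c properties`,
# contains all SEV-SNP-required settings.
check_image_props() {
    props="$1"
    for want in \
        "hw_machine_type='q35'" \
        "hw_mem_encryption='true'" \
        "hw_mem_encryption_model='amd-sev-snp'" \
        "hw_firmware_type='uefi'" \
        "hw_firmware_stateless='true'"
    do
        case "$props" in
            *"$want"*) ;;
            *) echo "missing or conflicting: $want"; return 1 ;;
        esac
    done
    echo "image ok"
}

# example usage:
# check_image_props "$(openstack image show $NAME_OR_ID -f value -c properties)"
```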

b) Uploading a compatible image

A SEV-SNP-capable image can be prepared in one of two ways:

  1. As a single unified image containing the kernel and init ramdisk.

  2. As three separate images, where the kernel and the init ramdisk are dedicated, immutable images used for direct kernel boot.

The second option is especially useful for specific security and attestation purposes, as the booted kernel and ramdisk are immutable and deterministic, which improves the attestation measurement scope. Furthermore, the kernel cannot be replaced at runtime.

The following examples will address both cases based on a generic Debian 13 cloud image in QCOW2 format.

Uploading a single image is straightforward:

openstack image create $IMAGE_NAME \
    --disk-format qcow2 \
    --container-format bare \
    --property hw_firmware_type=uefi \
    --property hw_machine_type=q35 \
    --property hw_mem_encryption=true \
    --property hw_mem_encryption_model=amd-sev-snp \
    --property hw_firmware_stateless=true \
    --file debian-13-genericcloud-amd64-20250814-2204.qcow2

(the values for --file and --disk-format will vary depending on the source image file; replace the $IMAGE_NAME variable as desired)

To upload a split image for direct kernel boot, first prepare the dedicated kernel and init ramdisk image files. Refer to the appendix section Extracting kernel and init ramdisk from an image for instructions on how to extract these from an existing image.

Next, upload the kernel and init ramdisk images, determine their resulting image IDs, and specify them as the kernel_id and ramdisk_id properties on the main image. Furthermore, an os_command_line property must be added to the main image specifying the kernel boot arguments, to ensure that the virtual machine boots properly from the correct device.

Example:

# kernel image
openstack image create $IMAGE_NAME-kernel \
  --disk-format raw \
  --container-format bare \
  --file vmlinuz-6.12.41+deb13-cloud-amd64

# ramdisk image
openstack image create $IMAGE_NAME-initrd \
    --disk-format raw \
    --container-format bare \
    --file initrd.img-6.12.41+deb13-cloud-amd64

# retrieve resulting image IDs
export KERNEL_IMG_UUID=$(
    openstack image list -f value -c ID -c Name \
    | grep "$IMAGE_NAME-kernel" | cut -d' ' -f1
)
export INITRD_IMG_UUID=$(
    openstack image list -f value -c ID -c Name \
    | grep "$IMAGE_NAME-initrd" | cut -d' ' -f1
)

# specify appropriate kernel arguments
export KERNEL_ARGUMENTS="root=/dev/vda1 console=ttyS0"

openstack image create $IMAGE_NAME \
    --disk-format qcow2 \
    --container-format bare \
    --property hw_firmware_type=uefi \
    --property hw_machine_type=q35 \
    --property hw_mem_encryption=true \
    --property hw_mem_encryption_model=amd-sev-snp \
    --property hw_firmware_stateless=true \
    --property kernel_id=$KERNEL_IMG_UUID \
    --property ramdisk_id=$INITRD_IMG_UUID \
    --property os_command_line="$KERNEL_ARGUMENTS" \
    --file debian-13-genericcloud-amd64-20250814-2204.qcow2

(the values for --file and --disk-format will vary depending on the source image file; replace the $IMAGE_NAME variable as desired)

Starting the virtual machine

To start a SEV-SNP-enabled virtual machine, create a server instance in Nova and specify the flavor and image selected or prepared above. As long as the flavor and image carry the correct properties, the virtual machine will automatically boot with SEV-SNP enabled. No further actions are necessary.
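As a minimal sketch, the server creation could look like the following. The variables $SEV_FLAVOR, $SEV_IMAGE and $NETWORK are placeholders for the flavor, image and network names in your environment; adjust them as needed:

```shell
# Sketch only: boot a server from the SEV-SNP-compatible flavor and
# image chosen above. All names below are placeholders.
create_sev_snp_server() {
    openstack server create "$1" \
        --flavor "$SEV_FLAVOR" \
        --image "$SEV_IMAGE" \
        --network "$NETWORK" \
        --wait
}

# example usage:
# SEV_FLAVOR=... SEV_IMAGE=... NETWORK=... create_sev_snp_server my-sev-vm
```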

Verifying the AMD SEV-SNP state of a virtual machine

SEV-SNP capability can be confirmed via the kernel log, which should contain output like the following:

user@vm:~$ sudo dmesg | grep SEV
[    1.151097] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
[    1.153063] SEV: Status: SEV SEV-ES SEV-SNP
[    1.286999] SEV: APIC: wakeup_secondary_cpu() replaced with wakeup_cpu_via_vmgexit()
[    1.808122] SEV: Using SNP CPUID table, 33 entries present.
[    1.808678] SEV: SNP running at VMPL0.
[    2.766601] SEV: SNP guest platform device initialized.
[    7.209154] sev-guest sev-guest: Initialized SEV guest driver (using VMPCK0 communication key)
[    7.555961] kvm_amd: KVM is unsupported when running as an SEV guest

In case SSH access is not available, openstack console log show can be used instead:

openstack console log show $VM_ID_OR_NAME | grep SEV

If connecting to the virtual machine is possible, the state can also be confirmed in a more elaborate and robust fashion using the snpguest tool (see Getting the snpguest tool):

user@vm:~$ sudo snpguest ok

[ PASS ] - SEV: ENABLED
[ PASS ] - SEV-ES: ENABLED
[ PASS ] - SNP: ENABLED
[ PASS ] - Optional Features statuses:
[...]

Attestation procedures for SEV-SNP

Attestation of a virtual machine booted with SEV-SNP can help prove that the SEV feature is genuine and that confidentiality is ensured. This includes measurement of the guest system and the SEV interface, which can be matched against precalculated or previously recorded values.

This usually consists of two parts:

  1. Executing measurement and generating an attestation report from within the virtual machine.

  2. Calculating reference measurement values outside of the virtual machine for comparison.

The latter is optional and provides an additional layer of security for ensuring guest system integrity.

Generating an attestation report from within a virtual machine

Requirements:

  • msr kernel module

  • sev-guest kernel module, often part of linux-modules-extra-*-generic

  • snpguest tool, see Getting the snpguest tool

Make sure the kernel modules are loaded:

# check if loaded
lsmod | grep "sev\|msr"

# load any missing modules
sudo modprobe msr
sudo modprobe sev-guest

Then proceed with generating the attestation report.

First, check the status of the guest:

snpguest ok

Then, generate the report:

snpguest report report.bin request-file.txt -r

The report can be displayed with:

snpguest display report report.bin

Verifying an attestation report from within a virtual machine

The attestation report depends on a chain of certificates. To verify the certificate trust chain, some certificates must be retrieved first:

  1. The AMD certificate chain consisting of ARK (AMD Root Key) and ASK (AMD SEV Key).

  2. The VCEK (Versioned Chip Endorsement Key).

The AMD certificate chain can be retrieved from the KDS (Key Distribution System) interface:

snpguest fetch ca -r report.bin pem ./

This reads report.bin to identify the correct CPU model and retrieves the corresponding certificates. Alternatively, snpguest fetch ca pem ./ milan may be used without a report by specifying a CPU family (milan in this case).

The VCEK (Versioned Chip Endorsement Key) is embedded into the processor and can also be retrieved from the KDS (Key Distribution System) interface:

snpguest fetch vcek pem ./ ./report.bin

Using both the CA chain and the VCEK, the full chain can now be verified:

snpguest verify certs ./

The AMD ARK was self-signed!
The AMD ASK was signed by the AMD ARK!
The VCEK was signed by the AMD ASK!

The above example shows the expected output. The ARK (AMD Root Key) is self-signed as it is the root key. It signs the ASK (AMD SEV Key), which in turn signs the VCEK. Lastly, the attestation report is signed by the VCEK.

Finally, the attestation report itself can be verified:

snpguest verify attestation ./ ./report.bin

Reported TCB Boot Loader from certificate matches the attestation report.
Reported TCB TEE from certificate matches the attestation report.
Reported TCB SNP from certificate matches the attestation report.
Reported TCB Microcode from certificate matches the attestation report.
VEK signed the Attestation Report!
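The fetch and verify steps above can be combined into a single helper. A sketch (to be run inside the virtual machine, in a directory containing report.bin, with network access to the AMD KDS):

```shell
# Sketch: run the full certificate fetch and verification chain.
# Combines the snpguest commands shown above; stops at the first failure.
verify_snp_report() {
    snpguest fetch ca -r report.bin pem ./ &&
    snpguest fetch vcek pem ./ ./report.bin &&
    snpguest verify certs ./ &&
    snpguest verify attestation ./ ./report.bin
}
```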

Notes:

  • The measurement depends on the CPU type (EPYC, EPYC-Milan, etc.), but subtypes don’t matter (e.g. EPYC-Milan-v1 vs. EPYC-Milan-v2).

  • Repeating the measurement in a different virtual machine under the same conditions (vCPU count, images used, physical host) yields the same values; the measurement is deterministic.

Calculating a reference measurement outside of a virtual machine

To verify that the guest system of a virtual machine has not been tampered with and that the expected kernel is running with the correct boot arguments, an offline measurement can be calculated outside of the virtual machine and compared against the values reported from within. The reference calculation should be done on a separate, trusted system.

To calculate the measurement, the following assets and values must be acquired beforehand:

  • CPU model family, example: EPYC-Milan

    • can be retrieved from within the virtual machine via lscpu or /proc/cpuinfo

    • if enabled by the infrastructure provider, it may be visible as the ATTESTATION:vcpu_model attribute [1] within the “properties” field of openstack server show (this requires a specific YAOOK configuration as per Enabling the visibility of attestation metadata properties in Nova)

  • number of vCPU cores

    • can be retrieved from the OpenStack flavor using openstack flavor show

  • OVMF firmware file

If direct kernel boot is used, the following is additionally required:

  • kernel image file

    • can be retrieved using openstack image save on the image with the ID referenced as kernel_id on the main image

  • initrd image file

    • can be retrieved using openstack image save on the image with the ID referenced as ramdisk_id on the main image

  • kernel boot parameters (cmdline)

    • for direct kernel boot images, can be retrieved from the os_command_line property of the main image

Note: This kind of attestation is primarily useful for direct kernel boot scenarios, where the kernel and initrd images are immutable and separate from the main guest image. This ensures that the kernel cannot be changed from within the guest and that the attestation measurement always matches the precalculated one.

Using the tool sev-snp-measure, a measurement value can be calculated offline, independently of the SEV virtual machine or host, by supplying the correct data:

./sev-snp-measure.py --mode snp \
    --vcpus=2 \
    --vcpu-type=EPYC-Milan \
    --ovmf OVMF.amdsev.fd \
    --kernel vmlinuz-6.12.41+deb13-cloud-amd64.vmlinuz \
    --initrd initrd.img-6.12.41+deb13-cloud-amd64.initrd \
    --append "root=/dev/vda1 console=ttyS0"

(omit the parameters --kernel, --initrd and --append if direct kernel boot is not used)

The resulting measurement value is a long string in hexadecimal encoding. The representation shown by snpguest display report inside the virtual machine uses a different encoding. To compare the values properly, the following can be used from within the virtual machine:

snpguest display report report.bin \
    | grep -A3 Measurement | tail -n3 | \
    tr -d '\n ' | tr '[:upper:]' '[:lower:]'; \
    echo

This will output the same format as the sev-snp-measure command and the resulting strings should be identical.

This way, a measurement value can be calculated in a secure environment and compared against the measurement results of an actual virtual machine.
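A minimal sketch of such a comparison, with placeholder strings standing in for the real measurement values:

```shell
# EXPECTED: output of sev-snp-measure, calculated offline.
# ACTUAL: normalized output of the snpguest display pipeline above.
# Both values here are placeholders for illustration.
EXPECTED="placeholder_measurement_value"
ACTUAL="placeholder_measurement_value"

if [ "$EXPECTED" = "$ACTUAL" ]; then
    echo "measurement matches"
else
    echo "MEASUREMENT MISMATCH" >&2
fi
```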

Appendix

Extracting kernel and init ramdisk from an image

To extract the kernel and initrd images contained in a regular Linux image, the guestmount tool may be used, which is often part of the “libguestfs-tools” package or a dedicated package called “guestmount”.

For example:

guestmount --ro -a debian-13-genericcloud-amd64-20250814-2204.qcow2 -i /mnt

# discover the filenames
ls /mnt/boot/

# copy the files
cp /mnt/boot/vmlinuz-6.12.41+deb13-cloud-amd64 .
cp /mnt/boot/initrd.img-6.12.41+deb13-cloud-amd64 .

guestunmount /mnt

(guestmount will likely require root privileges)

The example above uses a specific Debian 13 cloud image. The exact names and paths of the kernel and ramdisk files will differ depending on the Linux image used.

Getting the snpguest tool

Download the binary from https://github.com/virtee/snpguest. For example:

wget https://github.com/virtee/snpguest/releases/download/v0.9.2/snpguest
sudo mv snpguest /usr/local/bin/snpguest
sudo chmod +x /usr/local/bin/snpguest

Or build the binary from source.

Note: Version 0.9.1 has a bug that breaks signature verification; make sure to use version 0.9.2 or later.
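To guard against the broken version in scripts, the installed version can be compared against the minimum using sort -V. A sketch, where the version string is a placeholder that would normally be parsed from snpguest --version:

```shell
# Placeholder version string; in practice, parse it from `snpguest --version`.
ver="0.9.2"
min="0.9.2"

# sort -V sorts version numbers; if the minimum sorts first (or is equal),
# the installed version is new enough.
if [ "$(printf '%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
    echo "snpguest version ok"
else
    echo "snpguest too old (minimum: $min)" >&2
fi
```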

Extracting the OVMF firmware file from YAOOK

The AMD SEV OVMF file is built directly into the nova-compute image of YAOOK. Since YAOOK is open source, the file can be extracted from the downloaded container image:

docker pull registry.yaook.cloud/yaook/nova-compute-2024.2-ubuntu:$VERSION
docker create --name temp-ovmf-extract registry.yaook.cloud/yaook/nova-compute-2024.2-ubuntu:$VERSION
docker cp temp-ovmf-extract:/usr/share/OVMF/OVMF_AMDSEV_4M.fd ./OVMF_AMDSEV_4M.fd
docker rm temp-ovmf-extract

For determining $VERSION, please refer to pinned_version.yml of the appropriate YAOOK release.