Getting Started Guide for ACRN Logical Partition Mode
The ACRN hypervisor supports a logical partition scenario in which the User OS, running in a pre-launched VM, can bypass the ACRN hypervisor and directly access isolated PCI devices. The following guidelines provide step-by-step instructions on how to set up the ACRN hypervisor logical partition scenario on an Intel NUC running two pre-launched VMs.
Validated Versions
Ubuntu version: 18.04
ACRN hypervisor tag: v2.4
ACRN kernel tag: v2.4
Prerequisites
NVMe disk
SATA disk
Storage device with USB interface (such as USB Flash or SATA disk connected with a USB 3.0 SATA converter).
Disable Intel Hyper Threading Technology in the BIOS to avoid interference from logical cores for the logical partition scenario.
In the logical partition scenario, two VMs (running Ubuntu OS) are started by the ACRN hypervisor. Each VM has its own root filesystem. Set up each VM by following the Ubuntu desktop installation instructions first on a SATA disk and then again on a storage device with a USB interface. The two pre-launched VMs will mount the root file systems via the SATA controller and the USB controller respectively.
Update Kernel Image and Modules of Pre-Launched VM
On your development workstation, clone the ACRN kernel source tree, and build the Linux kernel image that will be used to boot the pre-launched VMs:
$ git clone https://github.com/projectacrn/acrn-kernel.git
Cloning into 'acrn-kernel'...
...
$ cd acrn-kernel
$ cp kernel_config_uos .config
$ make olddefconfig
scripts/kconfig/conf --olddefconfig Kconfig
#
# configuration written to .config
#
$ make
$ make modules_install INSTALL_MOD_PATH=out/
The last two commands build the bootable kernel image as arch/x86/boot/bzImage and install the loadable kernel modules under the ./out/ folder. Copy these files to a removable disk for installing on the Intel NUC later.

The current ACRN logical partition scenario implementation requires a multiboot-capable bootloader to boot both the ACRN hypervisor and the bootable kernel image built in the previous step. Install the Ubuntu OS on the onboard NVMe SSD by following the Ubuntu desktop installation instructions. The Ubuntu installer creates three disk partitions on the onboard NVMe SSD. By default, the GRUB bootloader is installed on the EFI System Partition (ESP), which is used to bootstrap the ACRN hypervisor.
After installing the Ubuntu OS, power off the Intel NUC. Attach the SATA disk and the storage device with the USB interface to the Intel NUC. Power on the Intel NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug the removable disk with the kernel image into the Intel NUC and then copy the loadable kernel modules built in Step 1 to the /lib/modules/ folder on both the mounted SATA disk and the storage device with the USB interface. For example, assuming the SATA disk and the storage device with the USB interface are assigned to /dev/sda and /dev/sdb respectively, the following commands set up the partition-mode loadable kernel modules onto the root filesystems to be loaded by the pre-launched VMs:

# Mount the Ubuntu OS root filesystem on the SATA disk
$ sudo mount /dev/sda3 /mnt
$ sudo cp -r <kernel-modules-folder-built-in-step1>/lib/modules/* /mnt/lib/modules
$ sudo umount /mnt
# Mount the Ubuntu OS root filesystem on the USB flash disk
$ sudo mount /dev/sdb3 /mnt
$ sudo cp -r <kernel-modules-folder-built-in-step1>/lib/modules/* /mnt/lib/modules
$ sudo umount /mnt
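The copy step above simply mirrors the /lib/modules directory tree from the build output into each VM's root filesystem. A throwaway sketch of the expected layout, using temporary directories in place of the real build output and mounted root (the kernel release name here is made up for illustration):

```shell
# Illustrative only: mimic the module-copy step with temporary directories.
# $src stands in for the kernel build output from Step 1, $dst for the
# mounted root filesystem (/mnt). "5.4.0-acrn-sos" is a placeholder name.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/lib/modules/5.4.0-acrn-sos" "$dst/lib/modules"
touch "$src/lib/modules/5.4.0-acrn-sos/modules.dep"

# Same copy as in the real step, minus sudo and the real mount point:
cp -r "$src"/lib/modules/* "$dst/lib/modules"

# The VM root filesystem now holds one directory per kernel release:
ls "$dst/lib/modules"
```

After the real copy, the directory name under /mnt/lib/modules must match the release string of the bzImage you built, or the pre-launched VM's kernel will not find its modules.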
Copy the bootable kernel image to the /boot directory:
$ sudo cp <path-to-kernel-image-built-in-step1>/bzImage /boot/
Update ACRN Hypervisor Image
Before building the ACRN hypervisor, find the I/O address of the serial port and the PCI BDF addresses of the SATA controller and the USB controllers on the Intel NUC. Enter the following command to get the I/O addresses of the serial port. The Intel NUC supports one serial port, ttyS0. Connect the serial port to the development workstation in order to access the ACRN serial console to switch between pre-launched VMs:
$ dmesg | grep ttyS0
[    0.000000] console [ttyS0] enabled
[    1.562546] 00:01: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
The following command prints detailed information about all PCI buses and devices in the system:
$ sudo lspci -vv
00:14.0 USB controller: Intel Corporation Device 9ded (rev 30) (prog-if 30 [XHCI])
        Subsystem: Intel Corporation Device 7270
00:17.0 SATA controller: Intel Corporation Device 9dd3 (rev 30) (prog-if 01 [AHCI 1.0])
        Subsystem: Intel Corporation Device 7270
02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a8 (rev 03) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation Device 390d
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
        Subsystem: Intel Corporation I210 Gigabit Network Connection
04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
        Subsystem: Intel Corporation I210 Gigabit Network Connection
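The BDF (bus:device.function) address needed later is the first field of each lspci line. If you want to script this lookup, a minimal sketch, run here against sample text mirroring the output above rather than live hardware:

```shell
# Illustrative only: extract the BDF (first field) for the SATA and USB
# controllers from lspci-style text. On real hardware, pipe `lspci` in
# instead of using this sample.
lspci_sample='00:14.0 USB controller: Intel Corporation Device 9ded (rev 30)
00:17.0 SATA controller: Intel Corporation Device 9dd3 (rev 30)'

sata_bdf=$(printf '%s\n' "$lspci_sample" | awk '/SATA controller/ {print $1}')
usb_bdf=$(printf '%s\n' "$lspci_sample" | awk '/USB controller/ {print $1}')

echo "SATA controller BDF: $sata_bdf"   # SATA controller BDF: 00:17.0
echo "USB controller BDF:  $usb_bdf"    # USB controller BDF:  00:14.0
```

Note these BDF values against the pci_devs entries in the scenario XML in the next step.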
Clone the ACRN source code and configure the build options.
Refer to Build ACRN From Source to set up the ACRN build environment on your development workstation.
Clone the ACRN source code and check out the v2.4 tag:
$ git clone https://github.com/projectacrn/acrn-hypervisor.git
$ cd acrn-hypervisor
$ git checkout v2.4
Check the pci_devs sections in misc/config_tools/data/whl-ipc-i7/logical_partition.xml for each pre-launched VM to ensure you are using the right PCI device BDF information (as reported by lspci -vv). If you need to make changes to this file, create a copy of it and use the copy when building ACRN (SCENARIO=/path/to/newfile.xml).

Build the ACRN hypervisor and the ACPI binaries for the pre-launched VMs with the default XML files:
$ make hypervisor BOARD=whl-ipc-i7 SCENARIO=logical_partition RELEASE=0
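For orientation, a pci_devs section in the scenario XML lists each passthrough device by its BDF followed by its lspci description line. The fragment below is an illustrative sketch only; the device names vary by board, so use the exact lines reported by lspci on your hardware:

```xml
<pci_devs>
    <!-- Example entries; replace with the lspci lines from your own board -->
    <pci_dev>00:17.0 SATA controller: Intel Corporation Device 9dd3</pci_dev>
    <pci_dev>00:14.0 USB controller: Intel Corporation Device 9ded</pci_dev>
</pci_devs>
```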
Note
The acrn.bin binary is generated as ./build/hypervisor/acrn.bin. The ACPI_VM0.bin and ACPI_VM1.bin binaries are generated under ./build/hypervisor/acpi/.

Check the Ubuntu bootloader name.
In the current design, the logical partition scenario depends on the GRUB bootloader; with any other bootloader, the hypervisor will fail to boot. Verify that the default bootloader is GRUB:
$ sudo update-grub -V
The output of the above command should contain the GRUB keyword.

Copy the artifacts acrn.bin, ACPI_VM0.bin, and ACPI_VM1.bin to the /boot directory on the NVMe SSD:

Copy acrn.bin, ACPI_VM0.bin, and ACPI_VM1.bin to a removable disk.

Plug the removable disk into the Intel NUC's USB port.

Copy acrn.bin, ACPI_VM0.bin, and ACPI_VM1.bin from the removable disk to the /boot directory.
Update Ubuntu GRUB to Boot Hypervisor and Load Kernel Image
Append the following configuration to the /etc/grub.d/40_custom file:

menuentry 'ACRN hypervisor Logical Partition Scenario' --id ACRN_Logical_Partition --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
    recordfail
    load_video
    gfxmode $linux_gfx_mode
    insmod gzio
    insmod part_gpt
    insmod ext2
    search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
    echo 'Loading hypervisor logical partition scenario ...'
    multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
    module2 /boot/bzImage XXXXXX
    module2 /boot/ACPI_VM0.bin ACPI_VM0
    module2 /boot/ACPI_VM1.bin ACPI_VM1
}
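The UUID and PARTUUID values in this menu entry come from one particular machine; `sudo blkid` reports the values for yours. A small sketch of pulling both fields out of a blkid line (the sample line below is made up for illustration):

```shell
# Illustrative only: parse the UUID and PARTUUID fields from one line of
# blkid output. On a real system, capture the line for your root partition
# with `sudo blkid /dev/nvme0n1p2` (device name is an example).
line='/dev/nvme0n1p2: UUID="9bd58889-add7-410c-bdb7-1fbc2af9b0e1" TYPE="ext4" PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"'

uuid=$(printf '%s\n' "$line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
partuuid=$(printf '%s\n' "$line" | sed -n 's/.*PARTUUID="\([^"]*\)".*/\1/p')

# The UUID feeds the GRUB `search --fs-uuid --set` line; the PARTUUID
# feeds the `root=PARTUUID=` argument on the multiboot2 line.
echo "search --no-floppy --fs-uuid --set $uuid"
echo "root=PARTUUID=\"$partuuid\""
```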
Note
Update the UUID (--set) and PARTUUID (root= parameter) of the root partition (e.g., /dev/nvme0n1p2), or use the device node directly. Hint: use sudo blkid. The kernel command-line arguments used to boot the pre-launched VMs are defined by bootargs in the misc/config_tools/data/whl-ipc-i7/logical_partition.xml file. The XXXXXX parameter of module2 /boot/bzImage is the bzImage tag and must exactly match the kern_mod setting in the misc/config_tools/data/whl-ipc-i7/logical_partition.xml file. The module /boot/ACPI_VM0.bin is the binary of the ACPI tables for pre-launched VM0; the parameter ACPI_VM0 is VM0's ACPI tag and should not be modified. The module /boot/ACPI_VM1.bin is the binary of the ACPI tables for pre-launched VM1; the parameter ACPI_VM1 is VM1's ACPI tag and should not be modified.

Modify the /etc/default/grub file as follows to make the GRUB menu visible when booting:

GRUB_DEFAULT=ACRN_Logical_Partition
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
Update GRUB:
$ sudo update-grub
Reboot the Intel NUC. On the Intel NUC's display, select the ACRN hypervisor Logical Partition Scenario entry from the GRUB menu. The GRUB loader boots the hypervisor, and the hypervisor automatically starts the two pre-launched VMs.
Logical Partition Scenario Startup Check
Use these steps to verify that the hypervisor is properly running:
Log in to the ACRN hypervisor shell from the serial console.
Use the vm_list command to check the pre-launched VMs.
Use these steps to verify that the two pre-launched VMs are running properly:
Use the vm_console 0 command to switch to VM0's console. VM0's OS should boot to a login prompt.

Press Ctrl + Space to return to the ACRN hypervisor shell.

Use the vm_console 1 command to switch to VM1's console. VM1's OS should boot to a login prompt.
Refer to the ACRN hypervisor shell user guide for more information about available commands.