Getting Started Guide for ACRN Industry Scenario with Ubuntu Service VM

Verified version

  • Ubuntu version: 18.04
  • GCC version: 9.0
  • ACRN-hypervisor branch: release_2.0 (acrn-2020w23.6-180000p)
  • ACRN-Kernel (Service VM kernel): release_2.0 (5.4.43-PKT-200203T060100Z)
  • RT kernel for Ubuntu User OS: 4.19/preempt-rt (4.19.72-rt25)
  • HW: Maxtang Intel WHL-U i7-8665U (AX8665U-A2)

Prerequisites

Install Ubuntu for the Service and User VMs

Hardware Connection

Connect the WHL Maxtang with the appropriate external devices.

  1. Connect the WHL Maxtang board to a monitor via an HDMI cable.

  2. Connect the mouse, keyboard, ethernet cable, and power supply cable to the WHL Maxtang board.

  3. Insert the Ubuntu 18.04 USB boot disk into the USB port.

    ../_images/rt-ind-ubun-hw-1.png
    ../_images/rt-ind-ubun-hw-2.png

Install the Ubuntu User VM (RTVM) on the SATA disk

Install Ubuntu on the SATA disk

Note

The WHL Maxtang machine contains both an NVMe and SATA disk. Before you install the Ubuntu User VM on the SATA disk, either remove the NVMe disk or delete its blocks.

  1. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.

  2. Power on the machine, then press F11 to select the USB disk as the boot device. Select UEFI: SanDisk to boot using UEFI. Note that the label depends on the brand/make of the USB stick.

  3. Install the Ubuntu OS.

  4. Select Something else to create the partition.

    ../_images/native-ubuntu-on-SATA-1.png
  5. Configure the /dev/sda partition. Refer to the diagram below:

    ../_images/native-ubuntu-on-SATA-2.png
    1. Select the /dev/sda partition, not /dev/nvme0p1.
    2. Select /dev/sda ATA KINGSTON RBUSNS4 as the device for the bootloader installation. Note that the label depends on the SATA disk used.
  6. Complete the Ubuntu installation on /dev/sda.

This Ubuntu installation will be modified later (see Build and Install the RT kernel for the Ubuntu User VM) to turn it into a Real-Time User VM (RTVM).

Install the Ubuntu Service VM on the NVMe disk

Install Ubuntu on the NVMe disk

Note

Before you install the Ubuntu Service VM on the NVMe disk, either remove the SATA disk or disable it in the BIOS. Disable it by going to: Chipset -> PCH-IO Configuration -> SATA and RST Configuration -> SATA Controller [Disabled]

  1. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.

  2. Power on the machine, then press F11 to select the USB disk as the boot device. Select UEFI: SanDisk to boot using UEFI. Note that the label depends on the brand/make of the USB stick.

  3. Install the Ubuntu OS.

  4. Select Something else to create the partition.

    ../_images/native-ubuntu-on-NVME-1.png
  5. Configure the /dev/nvme0n1 partition. Refer to the diagram below:

    ../_images/native-ubuntu-on-NVME-2.png
    1. Select the /dev/nvme0n1 partition, not /dev/sda.
    2. Select /dev/nvme0n1 FORESEE 256GB SSD as the device for the bootloader installation. Note that the label depends on the NVMe disk used.
  6. Complete the Ubuntu installation and reboot the system.

    Note

    Set acrn as the username for the Ubuntu Service VM.

Build and Install ACRN on Ubuntu

Pre-Steps

  1. Set the network configuration, proxy, etc. (if a proxy is needed, see the example after this list).

  2. Update Ubuntu:

    $ sudo -E apt update
    
  3. Create a work folder:

    $ mkdir /home/acrn/work
    
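If your environment requires a proxy for step 1, a minimal sketch is shown below; proxy.example.com:8080 is only a placeholder for your own proxy address. The sudo -E used throughout this guide preserves these environment variables for apt.

$ export http_proxy=http://proxy.example.com:8080     # placeholder; use your proxy
$ export https_proxy=http://proxy.example.com:8080
$ export no_proxy=localhost,127.0.0.1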

Build the ACRN Hypervisor on Ubuntu

  1. Install the necessary libraries:

    $ sudo -E apt install gcc \
      git \
      make \
      gnu-efi \
      libssl-dev \
      libpciaccess-dev \
      uuid-dev \
      libsystemd-dev \
      libevent-dev \
      libxml2-dev \
      libusb-1.0-0-dev \
      python3 \
      python3-pip \
      libblkid-dev \
      e2fslibs-dev \
      pkg-config \
      libnuma-dev \
      liblz4-tool \
      flex \
      bison
    
    $ sudo pip3 install kconfiglib
    
  2. Get the ACRN source code:

    $ cd /home/acrn/work
    $ git clone https://github.com/projectacrn/acrn-hypervisor
    $ cd acrn-hypervisor
    
  3. Switch to the v2.0 version:

    $ git checkout -b v2.0 remotes/origin/release_2.0
    
  4. Build ACRN:

    $ make all BOARD_FILE=misc/acrn-config/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0
    $ sudo make install
    $ sudo mkdir -p /boot/acrn/
    $ sudo cp build/hypervisor/acrn.bin /boot/acrn/
    
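Optionally, confirm that the hypervisor binary and the acrn-dm device model ended up where the rest of this guide expects them:

$ ls /boot/acrn/acrn.bin
$ which acrn-dm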

Enable network sharing for the User VM

In the Ubuntu Service VM, enable network sharing for the User VM:

$ sudo systemctl enable systemd-networkd
$ sudo systemctl start systemd-networkd
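
To confirm that network sharing is active, check that systemd-networkd is running and that the ACRN bridge set up by the installed network configuration files is present (acrn-br0 is the name assumed here; adjust it if your installation uses a different one):

$ systemctl status systemd-networkd
$ ip addr show acrn-br0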

Build and install the ACRN kernel

  1. Build the Service VM kernel from the ACRN repo:

    $ cd /home/acrn/work/
    $ git clone https://github.com/projectacrn/acrn-kernel
    $ cd acrn-kernel
    
  2. Switch to the 5.4 kernel:

    $ git checkout -b v2.0 remotes/origin/release_2.0
    $ cp kernel_config_uefi_sos .config
    $ make olddefconfig
    $ make all
    

Install the Service VM kernel and modules

$ sudo make modules_install
$ sudo cp arch/x86/boot/bzImage /boot/bzImage

Update Grub for the Ubuntu Service VM

  1. Update the /etc/grub.d/40_custom file as shown below.

    Note

    Enter the command line for the kernel in /etc/grub.d/40_custom as a single line and not as multiple lines. Otherwise, the kernel will fail to boot.

    menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm {
      load_video
      insmod gzio
      insmod part_gpt
      insmod ext2
    
      search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
      echo 'loading ACRN...'
      multiboot2 /boot/acrn/acrn.bin  root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
      module2 /boot/bzImage Linux_bzImage
    }
    

    Note

    Update the UUID (--set) and PARTUUID (root= parameter) (or use the device node directly) to match the root partition of your Service VM installation (e.g. /dev/nvme0n1p2). Hint: use sudo blkid /dev/nvme0n1* to find both values; see the example after this list.

    Update the kernel file name if you installed your Service VM kernel under a different name.

  2. Modify the /etc/default/grub file to make the Grub menu visible when booting and make it load the Service VM kernel by default. Modify the lines shown below:

    GRUB_DEFAULT=ubuntu-service-vm
    #GRUB_TIMEOUT_STYLE=hidden
    GRUB_TIMEOUT=5
    
  3. Update Grub on your system:

    $ sudo update-grub
    
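As mentioned in the note under step 1, blkid reports both values needed in the menuentry. A hypothetical example for a Service VM root partition on /dev/nvme0n1p2 (your UUID and PARTUUID will differ):

$ sudo blkid /dev/nvme0n1p2
/dev/nvme0n1p2: UUID="9bd58889-add7-410c-bdb7-1fbc2af9b0e1" TYPE="ext4" PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"

Use the UUID value on the search --no-floppy --fs-uuid --set line and the PARTUUID value in the root=PARTUUID= argument.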

Reboot the system

Reboot the system. You should see the Grub menu with the new ACRN ubuntu-service-vm entry. Select it and proceed to booting the platform. The system will start Ubuntu and you can now log in (as before).

To verify that the hypervisor is effectively running, check dmesg. The typical output of a successful installation resembles the following:

$ dmesg | grep ACRN
[    0.000000] Hypervisor detected: ACRN
[    0.862942] ACRN HVLog: acrn_hvlog_init

Additional settings in the Service VM

BIOS settings of GVT-d for WaaG

Note

Skip this step if you are using a Kaby Lake (KBL) NUC.

Go to Chipset -> System Agent (SA) Configuration -> Graphics Configuration and make the following settings:

Set DVMT Pre-Allocated to 64MB:

../_images/DVMT-reallocated-64mb.png

Set PM Support to Enabled:

../_images/PM-support-enabled.png

Use OVMF to launch the User VM

The User VM is booted with OVMF as its virtual UEFI firmware, so copy it to the location the launch scripts expect:

$ sudo mkdir -p /usr/share/acrn/bios
$ sudo cp /home/acrn/work/acrn-hypervisor/devicemodel/bios/OVMF.fd  /usr/share/acrn/bios

Install IASL in Ubuntu for User VM launch

ACRN uses iasl to parse User VM ACPI information. The iasl shipped with Ubuntu 18.04 is too old for acrn-dm, so build and install a newer version with the following steps:

$ sudo -E apt-get install iasl
$ cd /home/acrn/work
$ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
$ tar zxvf acpica-unix-20191018.tar.gz
$ cd acpica-unix-20191018
$ make clean && make iasl
$ sudo cp ./generate/unix/bin/iasl /usr/sbin/
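
To double-check that the newly built compiler is the one in place, ask it for its version (the ACPICA iasl reports it with -v and should show 20191018):

$ /usr/sbin/iasl -v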

Build and Install the RT kernel for the Ubuntu User VM

Follow these instructions to build the RT kernel.

  1. Clone the RT kernel source code:

    Note

    This guide assumes you are doing this within the Service VM. The acrn-kernel repository was already cloned under /home/acrn/work earlier, so you can simply cd into it and perform the git checkout directly.

    $ git clone https://github.com/projectacrn/acrn-kernel
    $ cd acrn-kernel
    $ git checkout 4.19/preempt-rt
    $ make mrproper
    

    Note

    The make mrproper command makes sure no .config file is left over from a previous build (e.g. the one for the Service VM kernel).

  2. Build the kernel:

    $ cp x86-64_defconfig .config
    $ make olddefconfig
    $ make targz-pkg
    
  3. Copy the kernel and modules:

    $ sudo mount /dev/sda2 /mnt
    $ sudo cp arch/x86/boot/bzImage /mnt/boot/
    $ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/lib/modules/
    $ sudo cp -r /mnt/lib/modules/lib/modules/4.19.72-rt25 /mnt/lib/modules/
    $ cd ~ && sudo umount /mnt && sync
    

Launch the RTVM

Grub in the Ubuntu User VM (RTVM) needs to be configured to use the new RT kernel that was just built and installed on the rootfs. Follow these steps to perform this operation.

Update the Grub file

  1. Reboot into the Ubuntu User VM located on the SATA drive and log on.

  2. Update the /etc/grub.d/40_custom file as shown below.

    Note

    Enter the command line for the kernel in /etc/grub.d/40_custom as a single line and not as multiple lines. Otherwise, the kernel will fail to boot.

    menuentry "ACRN Ubuntu User VM" --id ubuntu-user-vm {
      load_video
      insmod gzio
      insmod part_gpt
      insmod ext2
    
      search --no-floppy --fs-uuid --set b2ae4879-c0b6-4144-9d28-d916b578f2eb
      echo 'loading ACRN...'
    
      linux  /boot/bzImage root=PARTUUID=<UUID of rootfs partition> rw rootwait nohpet console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll irqaffinity=0
    }
    

    Note

    Update the UUID (--set) and PARTUUID (root= parameter) (or use the device node directly) to match the root partition of the RTVM (e.g. /dev/sda2). Hint: use sudo blkid /dev/sda*.

    Update the kernel file name if you built your RT kernel image under a different name.

  3. Modify the /etc/default/grub file to make the grub menu visible when booting and make it load the RT kernel by default. Modify the lines shown below:

    GRUB_DEFAULT=ubuntu-user-vm
    #GRUB_TIMEOUT_STYLE=hidden
    GRUB_TIMEOUT=5
    
  4. Update Grub on your system:

    $ sudo update-grub
    
  5. Reboot into the Ubuntu Service VM

Launch the RTVM

$ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

Configure RDT

In addition to setting the CAT configuration via HV commands, developers can add CAT configurations to the VM configuration and have them applied automatically when the RTVM is created. Refer to Enable RDT Configuration for details on RDT configuration and to RDT Allocation Feature Supported by Hypervisor for details on the RDT high-level design.

Set up the core allocation for the RTVM

In our recommended configuration, two cores are allocated to the RTVM: core 0 for housekeeping and core 1 for RT tasks. To achieve this, follow the steps below to move all housekeeping tasks to core 0:

  1. Prepare the RTVM launch script

    Follow the Passthrough a hard disk to RTVM section to make adjustments to the /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh launch script.

  2. Launch the RTVM:

    $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
    
  3. Log in to the RTVM as root and run the script as below:

    #!/bin/bash
    # Copyright (C) 2019 Intel Corporation.
    # SPDX-License-Identifier: BSD-3-Clause
    # Move all IRQs to core 0.
    for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk '{print $1}' | sed 's/:$//'`;
    do
        echo setting $i to affine for core zero
        echo 1 > /proc/irq/$i/smp_affinity
    done
    
    # Move all rcu tasks to core 0.
    for i in `pgrep rcu`; do taskset -pc 0 $i; done
    
    # Change realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
    for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done
    
    # Change realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
    for i in `pgrep /1`; do chrt -v -o -p 0 $i; done
    
    # Change realtime attribute of all tasks to SCHED_OTHER and priority 0
    for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done
    
    echo disabling timer migration
    echo 0 > /proc/sys/kernel/timer_migration
    

    Note

    Ignore the error messages that might appear while the script is running.
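
    To confirm the affinity changes took effect, you can read back the IRQ affinity masks; entries the script was able to move should show 1 (a mask selecting only core 0), while a few per-CPU or timer interrupts may keep their original value, which matches the note above:

    # grep . /proc/irq/*/smp_affinity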

Run cyclictest

  1. Refer to the Troubleshooting section below, which discusses how to enable the network connection for the RTVM.

  2. Launch the RTVM and log in as root.

  3. Install the rt-tests tool:

    # apt install rt-tests
    
  4. Use the following command to start cyclictest:

    # cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log
    

    Parameter descriptions:

    -a 1: bind the RT task to core 1
    -p 80: set the priority of the highest-priority thread to 80
    -m: lock current and future memory allocations
    -N: print results in ns instead of us (default us)
    -D 1h: run for 1 hour; change this to another duration as needed
    -q: quiet mode; print only a summary on exit
    -H 30000 --histfile=test.log: dump the latency histogram to a local file
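
    When the run completes, cyclictest prints a summary on exit (because of -q). The histogram file typically also ends with summary comment lines; assuming that usual footer format, the worst-case latencies can be pulled out of the log with:

    # grep "Max Latencies" test.log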

Launch the Windows VM

  1. Follow this guide to prepare the Windows image file and then reboot with a new acrngt.conf.

  2. Modify the launch_uos_id1.sh script as follows and then launch the Windows VM as one of the post-launched standard VMs:

    acrn-dm -A -m $mem_size -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
       -s 2,passthru,0/2/0,gpu \
       -s 3,virtio-blk,./win10-ltsc.img \
       -s 4,virtio-net,tap0 \
       --ovmf /usr/share/acrn/bios/OVMF.fd \
       --windows \
       $vm_name
    

Troubleshooting

Enabling the network on the RTVM

If the RTVM needs internet access, add a virtio-net device (the -s 8,virtio-net,tap0 line below) to the acrn-dm command line in the launch_hard_rt_vm.sh script before launching it:

acrn-dm -A -m $mem_size -s 0:0,hostbridge \
   --lapic_pt \
   --rtvm \
   --virtio_poll 1000000 \
   -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
   -s 2,passthru,02/0/0 \
   -s 3,virtio-console,@stdio:stdio_port \
   -s 8,virtio-net,tap0 \
   $pm_channel $pm_by_vuart \
   --ovmf /usr/share/acrn/bios/OVMF.fd \
   hard_rtvm
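
Inside the RTVM, the virtio-net device appears as a regular Ethernet interface whose name depends on the guest's interface naming; list the interfaces first, then request an address on the new one (enp0s8 below is only an illustrative name):

# ip link show
# dhclient enp0s8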

Passthrough a hard disk to RTVM

  1. Use the lspci command to ensure that the correct SATA device IDs will be used for the passthrough before launching the script:

    # lspci -nn | grep -i sata
    00:17.0 SATA controller [0106]: Intel Corporation Cannon Point-LP SATA Controller [AHCI Mode] [8086:9dd3] (rev 30)
    
  2. Modify the script to use the correct SATA device IDs and bus number:

    # vim /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
    
    passthru_vpid=(
    ["eth"]="8086 156f"
    ["sata"]="8086 9dd3"
    ["nvme"]="8086 f1a6"
    )
    passthru_bdf=(
    ["eth"]="0000:00:1f.6"
    ["sata"]="0000:00:17.0"
    ["nvme"]="0000:02:00.0"
    )
    
    # SATA pass-through
    echo ${passthru_vpid["sata"]} > /sys/bus/pci/drivers/pci-stub/new_id
    echo ${passthru_bdf["sata"]} > /sys/bus/pci/devices/${passthru_bdf["sata"]}/driver/unbind
    echo ${passthru_bdf["sata"]} > /sys/bus/pci/drivers/pci-stub/bind
    
    # NVME pass-through
    #echo ${passthru_vpid["nvme"]} > /sys/bus/pci/drivers/pci-stub/new_id
    #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/devices/${passthru_bdf["nvme"]}/driver/unbind
    #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/drivers/pci-stub/bind
    
       --lapic_pt \
       --rtvm \
       --virtio_poll 1000000 \
       -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
       -s 2,passthru,00/17/0 \
       -s 3,virtio-console,@stdio:stdio_port \
       -s 8,virtio-net,tap0 \
       $pm_channel $pm_by_vuart \
       --ovmf /usr/share/acrn/bios/OVMF.fd \
       hard_rtvm
    
  3. Once the script modifications are complete, launch the RTVM on your WHL board:

    $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
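
    Once the RTVM is up, one way to confirm that the SATA controller was passed through is to run lspci inside the RTVM as root; the same Cannon Point-LP SATA controller ([8086:9dd3] in this example) should now be listed in the guest:

    # lspci -nn | grep -i sata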