Configure ACRN using OpenStack and libvirt

Introduction

This document provides instructions for setting up libvirt to configure ACRN. We use OpenStack, which drives libvirt, and we’ll install OpenStack in a container to avoid crashing your system and to take advantage of easy snapshots/restores so that you can quickly roll back your system in the event of a setup failure. (Install OpenStack directly on Ubuntu only if you have a dedicated testing machine.) This setup uses LXC/LXD on Ubuntu 16.04 or 18.04.

Install ACRN

  1. Install ACRN using Ubuntu 16.04 or 18.04 as its Service VM.

    Important

    Need instructions from deleted document (using Ubuntu as SOS)

  2. Make the acrn-kernel using the kernel_config_uefi_sos configuration file (from the acrn-kernel repo).

  3. Add the following kernel boot arg to give the Service VM more loop devices. Refer to Kernel Boot Parameters documentation:

    max_loop=16
    
  4. Boot the Service VM with this new acrn-kernel using the ACRN hypervisor.

  5. Run losetup -a to verify that Ubuntu’s snap service is not using all available loop devices. OpenStack typically needs at least 4 available loop devices. Follow the snaps guide to clean up old snap revisions if you’re running out of loop devices.

  6. Make sure the networking bridge acrn-br0 is created. If not, create it using the instructions in XXX.

    Important

    Need instructions from deleted document (using Ubuntu as SOS)
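
The loop-device check in step 5 can be scripted. This is a rough sketch that assumes the max_loop=16 boot argument from step 3 is in effect:

```shell
# Report loop-device headroom before installing OpenStack.
# Assumes max_loop=16 (set via the kernel boot argument above).
total=16
in_use=$(losetup -a | wc -l)    # losetup -a prints one line per loop device in use
free=$((total - in_use))
echo "loop devices: $in_use in use, $free free"
if [ "$free" -lt 4 ]; then
    echo "warning: OpenStack typically needs at least 4 free loop devices" >&2
fi
```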

Set up and launch LXC/LXD

  1. Set up the LXC/LXD Linux container engine using these instructions provided by Ubuntu (for release 16.04).

    Refer to the following additional information for the setup procedure:

    • Disregard ZFS utils (we’re not going to use the ZFS storage backend).
    • Answer dir (and not zfs) when prompted for the name of the storage backend to use.
    • Set up lxdbr0 as instructed.
    • Before launching a container, make sure lxc-checkconfig | grep missing does not show any missing kernel features.
  2. Create an Ubuntu 18.04 container named openstack:

    $ lxc init ubuntu:18.04 openstack
    
  3. Export the kernel interfaces necessary to launch a Service VM in the openstack container:

    1. Edit the openstack config file using the command:

      $ lxc config edit openstack
      

      In the editor, add the following lines under config:

      linux.kernel_modules: iptable_nat, ip6table_nat, ebtables, openvswitch
      raw.lxc: |-
        lxc.cgroup.devices.allow = c 10:237 rwm
        lxc.cgroup.devices.allow = b 7:* rwm
        lxc.cgroup.devices.allow = c 243:0 rwm
        lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
        lxc.mount.auto=proc:rw sys:rw cgroup:rw
      security.nesting: "true"
      security.privileged: "true"
      

      Save and exit the editor.

    2. Run the following commands to configure openstack:

      $ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
      $ lxc config device add openstack acrn_vhm unix-char path=/dev/acrn_vhm
      $ lxc config device add openstack loop-control unix-char path=/dev/loop-control
      $ for n in {0..15}; do lxc config device add openstack loop$n unix-block path=/dev/loop$n; done;
      
  4. Launch the openstack container:

    $ lxc start openstack
    
  5. Log in to the openstack container:

    $ lxc exec openstack -- su -l
    
  6. Let systemd manage eth1 in the container, with eth0 as the default route:

    Edit /etc/netplan/50-cloud-init.yaml

    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: true
            eth1:
                dhcp4: true
                dhcp4-overrides:
                    route-metric: 200
    
  7. Log off and restart the openstack container:

    $ lxc restart openstack
    
  8. Log in to the openstack container again:

    $ lxc exec openstack -- su -l
    
  9. If needed, set up the proxy inside the openstack container via /etc/environment and make sure no_proxy is properly set up. Both IP addresses assigned to eth0 and eth1 and their subnets must be included. For example:

    no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16
    
  10. Add a new user named stack and set permissions:

    $ sudo useradd -s /bin/bash -d /opt/stack -m stack
    $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
    
  11. Log off and restart the openstack container:

    $ lxc restart openstack
    

The openstack container is now properly configured for OpenStack. Use the lxc list command to verify that both eth0 and eth1 appear in the container.
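
As a further sanity check (a sketch, run from the host), verify that the pass-through devices configured above actually appear inside the container:

```shell
# Confirm the devices added with `lxc config device add` exist in the container.
lxc exec openstack -- ls -l /dev/acrn_vhm /dev/loop-control /dev/net/tun
# Spot-check the first and last of the 16 loop devices.
lxc exec openstack -- ls -l /dev/loop0 /dev/loop15
```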

Set up ACRN prerequisites inside the container

  1. Log in to the openstack container as the stack user:

    $ lxc exec openstack -- su -l stack
    
  2. Download and compile ACRN’s source code. Refer to Build ACRN from Source.

    Note

    All tools and build dependencies must be installed before you run the first make command.

    $ git clone https://github.com/projectacrn/acrn-hypervisor
    $ cd acrn-hypervisor
    $ git checkout v1.6.1
    $ make
    $ cd misc/acrn-manager/; make
    

    Install only the user-space components: acrn-dm, acrnctl, and acrnd.

  3. Download, compile, and install iasl. Refer to XXX.

    Important

    Need instructions from deleted document (using Ubuntu as SOS)
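
A minimal sketch of the "install only the user-space components" step in item 2; the build-output paths below are assumptions for the v1.6.1 tree, so adjust them to wherever your build placed the binaries:

```shell
# Copy only the user-space tools into /usr/bin inside the container.
# Paths are assumptions; locate the binaries with `find . -name acrn-dm` if needed.
cd acrn-hypervisor
sudo install -m 0755 build/devicemodel/acrn-dm /usr/bin/acrn-dm
sudo install -m 0755 misc/acrn-manager/build/acrnctl /usr/bin/acrnctl
sudo install -m 0755 misc/acrn-manager/build/acrnd /usr/bin/acrnd
```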

Set up libvirt

  1. Install the required packages:

    $ sudo apt install libdevmapper-dev libnl-route-3-dev libnl-3-dev python \
      automake autoconf autopoint libtool xsltproc libxml2-utils gettext \
      libxml2-dev libpciaccess-dev
    
  2. Download libvirt/ACRN:

    $ git clone https://github.com/projectacrn/acrn-libvirt.git
    
  3. Build and install libvirt:

    $ cd acrn-libvirt
    $ ./autogen.sh --prefix=/usr --disable-werror --with-test-suite=no \
      --with-qemu=no --with-openvz=no --with-vmware=no --with-phyp=no \
      --with-vbox=no --with-lxc=no --with-uml=no --with-esx=no
    
    $ make
    $ sudo make install
    
  4. Edit and enable these options in /etc/libvirt/libvirtd.conf:

    unix_sock_ro_perms = "0777"
    unix_sock_rw_perms = "0777"
    unix_sock_admin_perms = "0777"
    
  5. Restart the libvirt daemon:

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart libvirtd
    

Set up OpenStack

Use DevStack to install OpenStack. Refer to the DevStack instructions.

  1. Use the latest maintenance branch stable/train to ensure OpenStack stability:

    $ git clone https://opendev.org/openstack/devstack.git -b stable/train
    
  2. Go into the devstack directory, download the ACRN patch doc/tutorials/0001-devstack-installation-for-acrn.patch, and apply it:

    $ cd devstack
    $ git apply 0001-devstack-installation-for-acrn.patch
    
  3. Edit lib/nova_plugins/hypervisor-libvirt:

    Change xen_hvmloader_path to the location of your OVMF image file. A stock image is included in the ACRN source tree (devicemodel/bios/OVMF.fd).

  4. Create a devstack/local.conf file as shown below (setting the passwords as appropriate):

    [[local|localrc]]
    PUBLIC_INTERFACE=eth1
    
    ADMIN_PASSWORD=<password>
    DATABASE_PASSWORD=<password>
    RABBIT_PASSWORD=<password>
    SERVICE_PASSWORD=<password>
    
    ENABLE_KSM=False
    VIRT_DRIVER=libvirt
    LIBVIRT_TYPE=acrn
    DEBUG_LIBVIRT=True
    DEBUG_LIBVIRT_COREDUMPS=True
    USE_PYTHON3=True
    

    Note

    Now is a great time to take a snapshot of the container using lxc snapshot. If the OpenStack installation fails, manually rolling back to the previous state can be difficult. Currently, no step exists to reliably restart OpenStack after restarting the container.

  5. Install OpenStack by running stack.sh in the devstack/ directory:

    $ ./stack.sh
    

    The installation takes about 20-30 minutes. Upon successful installation, the installer reports the URL of OpenStack’s management interface. This URL is accessible from the native Ubuntu system.

    ...
    
    Horizon is now available at http://<IP_address>/dashboard
    
    ...
    
    2020-04-09 01:21:37.504 | stack.sh completed in 1755 seconds.
    
  6. Run systemctl status libvirtd.service to verify that libvirtd is active and running.

  7. Set up SNAT for OpenStack instances to connect to the external network.

    1. Inside the container, use the command ip a to identify the br-ex bridge interface. br-ex should have two IP addresses. One is visible to the native Ubuntu’s acrn-br0 interface (e.g. inet 192.168.1.104/24). The other is internal to OpenStack (e.g. inet 172.24.4.1/24) and corresponds to the public network in OpenStack.

    2. Set up SNAT to establish a link between acrn-br0 and OpenStack. For example:

      $ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104
      
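Note that iptables rules do not persist across restarts of the container, so re-add the SNAT rule (or save and restore it) after a restart. A quick way to check that the rule is in place (addresses are the examples from above):

```shell
# List the POSTROUTING rules and look for the SNAT rule added above.
sudo iptables -t nat -S POSTROUTING | grep -- '--to-source 192.168.1.104'
```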

Configure and create OpenStack Instance

We’ll be using the Clear Linux Cloud Guest as the OS image (qcow2 format). Download the Cloud Guest image from https://clearlinux.org/downloads and uncompress it, for example:

$ wget https://cdn.download.clearlinux.org/releases/33110/clear/clear-33110-cloudguest.img.xz
$ unxz clear-33110-cloudguest.img.xz

This will leave you with the uncompressed OS image clear-33110-cloudguest.img we’ll use later.

Use the OpenStack management interface URL reported in a previous step to finish setting up the network and configure and create an OpenStack instance.

  1. Begin by using your browser to log in as admin to the OpenStack management dashboard (using the URL reported previously). Use the admin password you set in the devstack/local.conf file:

    ../_images/OpenStack-01-login.png

    Click on the Project / Network Topology and then the Topology tab to view the existing public (external) and shared (internal) networks:

    ../_images/OpenStack-02-topology.png
  2. A router acts as a bridge between the internal and external networks. Create a router using Project / Network / Routers / +Create Router:

    ../_images/OpenStack-03-create-router.png

    Give it a name (acrn_router), select public for the external network, and click Create Router:

    ../_images/OpenStack-03a-create-router.png

    That added the external network to the router. Now add the internal network too. Click on the acrn_router name:

    ../_images/OpenStack-03b-created-router.png

    Go to the interfaces tab, and click on +Add interface:

    ../_images/OpenStack-04a-add-interface.png

    Select the subnet of the shared (private) network and click Submit:

    ../_images/OpenStack-04b-add-interface.png

    The router now has interfaces between the external and internal networks:

    ../_images/OpenStack-04c-add-interface.png

    View the router graphically by clicking on the “Network Topology” tab:

    ../_images/OpenStack-05-topology.png

    With the router set up, we’ve completed configuring the OpenStack networking.

  3. Next, we’ll prepare for launching an OpenStack instance. Click on the Admin / Compute / Images tab and then the +Create Image button:

    ../_images/OpenStack-06-create-image.png

    Browse for and select the Clear Linux Cloud Guest image file we downloaded earlier:

    ../_images/OpenStack-06a-create-image-browse.png
    ../_images/OpenStack-06b-create-image-select.png

    Give the image a name (acrnImage), select the QCOW2 - QEMU Emulator format, and click on Create Image:

    ../_images/OpenStack-06e-create-image.png

    This will take a few minutes to complete.

  4. Next, click on the Admin / Compute / Flavors tab and then the +Create Flavor button. This is where you’ll define a machine flavor name (acrn4vcpu) and specify its resource requirements: the number of vCPUs (4), RAM size (256MB), and root disk size (2GB):

    ../_images/OpenStack-07a-create-flavor.png

    Click on Create Flavor and you’ll return to see a list of available flavors plus the new one you created (acrn4vcpu):

    ../_images/OpenStack-07b-flavor-created.png
  5. OpenStack security groups act as a virtual firewall controlling connections between instances, allowing connections such as SSH and HTTPS. These next steps create a security group allowing SSH and ICMP connections.

    Go to Project / Network / Security Groups and click on the +Create Security Group button:

    ../_images/OpenStack-08-security-group.png

    Name this security group (acrnSecuGroup) and click Create Security Group:

    ../_images/OpenStack-08a-create-security-group.png

    You’ll return to a rule management screen for this new group. Click on the +Add Rule button:

    ../_images/OpenStack-08b-add-rule.png

    Select SSH from the Rule list and click Add:

    ../_images/OpenStack-08c-add-SSH-rule.png

    Similarly, add an All ICMP rule:

    ../_images/OpenStack-08d-add-All-ICMP-rule.png
  6. Create a public/private key pair used to access the created instance. Go to Project / Compute / Key Pairs, click on +Create Key Pair, give the key pair a name (acrnKeyPair), select SSH Key as the Key Type, and click on Create Key Pair:

    ../_images/OpenStack-09a-create-key-pair.png

    Save the private key file somewhere safe; you’ll need it later to access the instance:

    ../_images/OpenStack-09c-key-pair-private-key.png
  7. Now we’re ready to launch an instance. Go to Project / Compute / Instance, click on the Launch Instance button, give it a name (acrn4vcpuVM) and click Next:

    ../_images/OpenStack-10a-launch-instance-name.png

    Select No for “Create New Volume”, and click the up-arrow button next to the uploaded image (acrnImage) to allocate it as the source for this instance:

    ../_images/OpenStack-10b-no-new-vol-select-allocated.png

    Click Next, and select the machine flavor you created earlier (acrn4vcpu):

    ../_images/OpenStack-10c-select-flavor.png

    Click on > next to the allocated acrn4vcpu flavor to see details about your choice:

    ../_images/OpenStack-10d-flavor-selected.png

    Click on the Networks tab, and select the internal shared network from the “Available” list:

    ../_images/OpenStack-10e-select-network.png

    Click on the Security Groups tab and select the acrnSecuGroup security group you created earlier. Remove the default security group if it’s in the “Allocated” list:

    ../_images/OpenStack-10d-only-acrn-security-group.png

    Click on the Key Pair tab and verify the acrnKeyPair you created earlier is in the “Allocated” list, and click on Launch Instance:

    ../_images/OpenStack-10g-show-keypair-launch.png

    It will take a few minutes to complete launching the instance.

  8. Click on the Project / Compute / Instances tab to monitor progress. When the instance status is “Active” and power state is “Running”, associate a floating IP to the instance so you can access it:

    ../_images/OpenStack-11-wait-for-running-create-snapshot.png

    On the Manage Floating IP Associations screen, click on the + to add an association:

    ../_images/OpenStack-11a-manage-floating-ip.png

    Select public pool, and click on Allocate IP:

    ../_images/OpenStack-11b-allocate-floating-ip.png

    Finally, click Associate after the IP address is assigned:

    ../_images/OpenStack-11c-allocate-floating-ip-success-associate.png

Final Steps

With that, the OpenStack instance is running and connected to the network. You can graphically see this by returning to the Project / Network / Network Topology view:

../_images/OpenStack-12b-running-topology-instance.png

You can also see a hypervisor summary by clicking on Admin / Compute / Hypervisors:

../_images/OpenStack-12d-compute-hypervisor.png

Note

OpenStack logs to the systemd journal and libvirt logs to /var/log/libvirt/libvirtd.log.

Here are some other tasks you can try when the instance is created and running:

  • Use the hypervisor console to verify the instance is running by using the vm_list command.

  • Ping the instance inside the container using the instance’s floating IP address.

  • Clear Linux prohibits root SSH login by default. Use libvirt’s virsh console to configure the instance. Inside the container, run:

    $ sudo virsh -c acrn:///system
    list   # you should see the instance listed as running
    console <instance_name>
    

    Log in to the Clear Linux instance and set up the root SSH. Refer to the Clear Linux instructions on enabling root login.

    • If needed, set up the proxy inside the instance.
    • Configure systemd-resolved to use the correct DNS server.
    • Install ping: swupd bundle-add clr-network-troubleshooter.

    The ACRN instance should now be able to ping acrn-br0 and another ACRN instance. It should also be accessible inside the container via SSH and its floating IP address.

The ACRN instance can be deleted via the OpenStack management interface.

For more advanced CLI usage, refer to this OpenStack cheat sheet.
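
For reference, here is a sketch of a few CLI equivalents of the dashboard operations above, run inside the container (resource names are the ones created earlier in this guide):

```shell
# Load the credentials DevStack created, then inspect the instance.
source devstack/openrc admin admin
openstack server list                 # acrn4vcpuVM should appear here
openstack server show acrn4vcpuVM
openstack server delete acrn4vcpuVM   # equivalent to deleting via the dashboard
```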