Configure ACRN using OpenStack and libvirt
Introduction
This document provides instructions for setting up libvirt and OpenStack to manage ACRN virtual machines. OpenStack drives libvirt, and we install OpenStack inside a container to avoid destabilizing your host system and to take advantage of easy snapshots/restores, so you can quickly roll back if setup fails. (Install OpenStack directly on Ubuntu only if you have a dedicated testing machine.) This setup uses LXC/LXD on Ubuntu 16.04 or 18.04.
Install ACRN
Install ACRN using Ubuntu 16.04 or 18.04 as its Service VM. Refer to Run Ubuntu as the Service VM.
Make the acrn-kernel using the kernel_config_uefi_sos configuration file (from the acrn-kernel repo).

Add the following kernel boot parameter to give the Service VM more loop devices. Refer to the Kernel Boot Parameters documentation:

max_loop=16
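If your Service VM boots through GRUB, one common way to add this parameter is via /etc/default/grub. This is only a sketch with placeholder options, not the exact ACRN procedure; keep whatever options your system already has:

```shell
# /etc/default/grub (illustrative fragment; preserve your existing options)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash max_loop=16"
```

Then run sudo update-grub and reboot for the parameter to take effect.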
Boot the Service VM with this new acrn-kernel using the ACRN hypervisor.

Use the command losetup -a to verify that Ubuntu's snap service is not using all available loop devices. Typically, OpenStack needs at least 4 available loop devices. Follow the snaps guide to clean up old snap revisions if you're running out of loop devices.

Make sure the networking bridge acrn-br0 is created. If not, create it using the instructions in Enable network sharing.
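If you do run out of loop devices, the snap cleanup can be sketched as below. This is a hypothetical helper (not part of the ACRN instructions) that removes disabled snap revisions, freeing the loop devices their squashfs images occupy:

```shell
#!/bin/sh
# Sketch: free loop devices by removing old, disabled snap revisions.
# Assumes snapd is installed; run as root on the Service VM.
cleanup_snaps() {
    # Disabled revisions are marked "disabled" in `snap list --all`;
    # column 1 is the snap name, column 3 its revision.
    snap list --all | awk '/disabled/ {print $1, $3}' |
    while read -r name rev; do
        snap remove "$name" --revision="$rev"
    done
}
if command -v snap >/dev/null 2>&1; then
    cleanup_snaps
fi
```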
Set up and launch LXC/LXD
Set up the LXC/LXD Linux container engine using these instructions provided by Ubuntu (for release 16.04).
Refer to the following additional information for the setup procedure:
- Disregard ZFS utils (we’re not going to use the ZFS storage backend).
- Answer dir (and not zfs) when prompted for the name of the storage backend to use.
- Set up lxdbr0 as instructed.
- Before launching a container, make sure lxc-checkconfig | grep missing does not show any missing kernel features.
Create an Ubuntu 18.04 container named openstack:
$ lxc init ubuntu:18.04 openstack
Export the kernel interfaces necessary to launch a Service VM in the openstack container:
Edit the openstack config file using the command:
$ lxc config edit openstack
In the editor, add the following lines under config:
linux.kernel_modules: iptable_nat, ip6table_nat, ebtables, openvswitch
raw.lxc: |-
  lxc.cgroup.devices.allow = c 10:237 rwm
  lxc.cgroup.devices.allow = b 7:* rwm
  lxc.cgroup.devices.allow = c 243:0 rwm
  lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
  lxc.mount.auto=proc:rw sys:rw cgroup:rw
security.nesting: "true"
security.privileged: "true"
Save and exit the editor.
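For reference, the cgroup rules above whitelist /dev/loop-control (char 10:237), the loop block devices (block major 7), and /dev/acrn_vhm (char 243:0). The acrn_vhm major number is assigned dynamically, so it is worth confirming on your Service VM; here is a sketch using stat, which prints major:minor in hex:

```shell
#!/bin/sh
# Print a device node's major:minor pair (in hex), e.g. to confirm the
# numbers used in the lxc cgroup device rules. Hypothetical helper.
devnums() {
    stat -c '%t:%T' "$1"
}
# On the Service VM: devnums /dev/acrn_vhm ; devnums /dev/loop-control
# Demonstrated on /dev/null, which is char 1:3 on Linux:
devnums /dev/null   # -> 1:3
```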
Run the following commands to configure OpenStack:
$ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
$ lxc config device add openstack acrn_vhm unix-char path=/dev/acrn_vhm
$ lxc config device add openstack loop-control unix-char path=/dev/loop-control
$ for n in {0..15}; do lxc config device add openstack loop$n unix-block path=/dev/loop$n; done;
Launch the openstack container:
$ lxc start openstack
Log in to the openstack container:
$ lxc exec openstack -- su -l
Let systemd manage eth1 in the container, with eth0 as the default route. Edit /etc/netplan/50-cloud-init.yaml:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
        eth1:
            dhcp4: true
            dhcp4-overrides:
                route-metric: 200
Log out and restart the openstack container:
$ lxc restart openstack
Log in to the openstack container again:
$ lxc exec openstack -- su -l
If needed, set up the proxy inside the openstack container via /etc/environment and make sure no_proxy is properly set up. Both IP addresses assigned to eth0 and eth1 and their subnets must be included. For example:

no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16
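A minimal /etc/environment sketch, assuming a proxy at proxy.example.com:8080; every value here is a placeholder, so substitute your own proxy and the actual eth0/eth1 addresses and subnets:

```shell
# /etc/environment inside the openstack container (placeholder values)
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080
no_proxy=localhost,.local,127.0.0.0/8,10.0.0.0/8,192.168.0.0/16
```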
Add a new user named stack and set permissions:
$ sudo useradd -s /bin/bash -d /opt/stack -m stack
$ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
Log out and restart the openstack container:
$ lxc restart openstack
The openstack container is now properly configured for OpenStack.
Use the lxc list command to verify that both eth0 and eth1 appear in the container.
Set up ACRN prerequisites inside the container
Log in to the openstack container as the stack user:
$ lxc exec openstack -- su -l stack
Download and compile ACRN’s source code. Refer to Build ACRN from Source.
Note
All tools and build dependencies must be installed before you run the first make command.

$ git clone https://github.com/projectacrn/acrn-hypervisor
$ cd acrn-hypervisor
$ git checkout v1.6.1
$ make
$ cd misc/acrn-manager/; make
Install only the user-space components: acrn-dm, acrnctl, and acrnd.
Download, compile, and install
iasl
. Refer to Prepare the User VM.
Set up libvirt
Install the required packages:
$ sudo apt install libdevmapper-dev libnl-route-3-dev libnl-3-dev python \ automake autoconf autopoint libtool xsltproc libxml2-utils gettext
Download libvirt/ACRN:
$ git clone https://github.com/projectacrn/acrn-libvirt.git
Build and install libvirt:
$ cd acrn-libvirt
$ ./autogen.sh --prefix=/usr --disable-werror --with-test-suite=no \
    --with-qemu=no --with-openvz=no --with-vmware=no --with-phyp=no \
    --with-vbox=no --with-lxc=no --with-uml=no --with-esx=no
$ make
$ sudo make install
Edit and enable these options in /etc/libvirt/libvirtd.conf:

unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0777"
unix_sock_admin_perms = "0777"
Restart the libvirt daemon so the new configuration takes effect:

$ sudo systemctl daemon-reload
$ sudo systemctl restart libvirtd
Set up OpenStack
Use DevStack to install OpenStack. Refer to the DevStack instructions.
Use the latest maintenance branch stable/train to ensure OpenStack stability:
$ git clone https://opendev.org/openstack/devstack.git -b stable/train
Go to the devstack directory (cd devstack) and apply the following patch:

0001-devstack-installation-for-acrn.patch
Edit lib/nova_plugins/hypervisor-libvirt:

Change xen_hvmloader_path to the location of your OVMF image file. A stock image is included in the ACRN source tree (devicemodel/bios/OVMF.fd).
Create a devstack/local.conf file as shown below (setting the password as appropriate):

[[local|localrc]]
PUBLIC_INTERFACE=eth1

ADMIN_PASSWORD=<password>
DATABASE_PASSWORD=<password>
RABBIT_PASSWORD=<password>
SERVICE_PASSWORD=<password>

ENABLE_KSM=False
VIRT_DRIVER=libvirt
LIBVIRT_TYPE=acrn
DEBUG_LIBVIRT=True
DEBUG_LIBVIRT_COREDUMPS=True
USE_PYTHON3=True
Note
Now is a great time to take a snapshot of the container using lxc snapshot. If the OpenStack installation fails, manually rolling back to the previous state can be difficult. Currently, no step exists to reliably restart OpenStack after restarting the container.

Install OpenStack (run from the devstack directory):

$ ./stack.sh
The installation should take about 20-30 minutes. Upon successful installation, the installer reports the URL of OpenStack's management interface. This URL is accessible from the native Ubuntu host.

...
Horizon is now available at http://<IP_address>/dashboard
...
2020-04-09 01:21:37.504 | stack.sh completed in 1755 seconds.
Verify that libvirtd is active and running:

$ systemctl status libvirtd.service

Set up SNAT for OpenStack instances to connect to the external network.
Inside the container, use the command ip a to identify the br-ex bridge interface. br-ex should have two IPs. One should be visible to the native Ubuntu's acrn-br0 interface (e.g. inet 192.168.1.104/24). The other one is internal to OpenStack (e.g. inet 172.24.4.1/24). The latter corresponds to the public network in OpenStack.

Set up SNAT to establish a link between acrn-br0 and OpenStack. For example:

$ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104
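The SNAT step can be sketched as a small script that pulls the externally visible address out of br-ex before adding the rule. The awk pattern and the 192.168/172.24 subnets are assumptions taken from the example addresses above; adjust them to your own networks:

```shell
#!/bin/sh
# Sketch: select the externally visible br-ex address (the one on the
# acrn-br0 subnet) from `ip a`-style output.
get_ext_ip() {
    # Keep the first "inet 192.168.x.y/nn" entry and strip the prefix length.
    awk '/inet 192\.168\./ {sub(/\/.*/, "", $2); print $2; exit}'
}
# Inside the container you would then run (requires root):
#   ext_ip=$(ip -4 addr show br-ex | get_ext_ip)
#   sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o br-ex \
#        -j SNAT --to-source "$ext_ip"
# Demonstrated on sample output:
printf 'inet 172.24.4.1/24 scope global br-ex\ninet 192.168.1.104/24 scope global br-ex\n' \
    | get_ext_ip   # -> 192.168.1.104
```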
Final Steps
Create OpenStack instances. For troubleshooting, note that:
- OpenStack logs to the systemd journal
- libvirt logs to /var/log/libvirt/libvirtd.log
You can now use the URL reported by the installer to manage OpenStack from your native Ubuntu host. Log in as admin with the password you set in the local.conf file earlier.
Create a router between public (external network) and shared (internal network) using OpenStack’s network instructions.
Launch an ACRN instance using OpenStack’s launch instructions.
- Use Clear Linux Cloud Guest as the image (qcow2 format): https://clearlinux.org/downloads
- Skip Create Key Pair as it’s not supported by Clear Linux.
- Select No for Create New Volume when selecting the instance boot source image.
- Use shared as the instance’s network.
After the instance is created, use the hypervisor console to verify that it is running (vm_list).

Ping the instance inside the container using the instance's floating IP address.
Clear Linux prohibits root SSH login by default. Use libvirt's virsh console to configure the instance. Inside the container, run:

$ sudo virsh -c acrn:///system list       # the instance should be listed as running
$ sudo virsh -c acrn:///system console <instance_name>
Log in to the Clear Linux instance and set up the root SSH. Refer to the Clear Linux instructions on enabling root login.
- If needed, set up the proxy inside the instance.
- Configure systemd-resolved to use the correct DNS server.
- Install ping: swupd bundle-add clr-network-troubleshooter.
The ACRN instance should now be able to ping acrn-br0 and another ACRN instance. It should also be accessible inside the container via SSH and its floating IP address.

The ACRN instance can be deleted via the OpenStack management interface.
For more advanced CLI usage, refer to this OpenStack cheat sheet.