ACRN Shared Memory Based Inter-VM Communication
ACRN supports inter-virtual machine communication based on a shared memory mechanism. The ACRN device model or hypervisor emulates a virtual PCI device (called an ivshmem device) to expose the base address and size of this shared memory.
Inter-VM Communication Overview
The ivshmem device is emulated in the ACRN Device Model (dm-land), and its shared memory region is allocated from the Service VM's memory space. This solution only supports communication between post-launched VMs.
In a future implementation, the ivshmem device could instead be emulated in the hypervisor (hypervisor-land), with the shared memory regions reserved in the hypervisor's memory space. That solution would work for both pre-launched and post-launched VMs.
The solution involves three components:
- ivshmem hv: implements register virtualization and shared memory mapping in the ACRN hypervisor. It will support a notification/interrupt mechanism in the future.
- ivshmem dm: implements register virtualization and shared memory mapping in the ACRN Device Model (acrn-dm). It will support a notification/interrupt mechanism in the future.
- ivshmem server: a daemon providing inter-VM notification capability, designed to work with ivshmem dm. It is currently not implemented, so inter-VM communication does not yet support a notification mechanism.
Ivshmem Device Introduction
The ivshmem device is a standard virtual PCI device with two Base Address Registers (BARs): BAR0 is used for emulating interrupt-related registers, and BAR2 is used for exposing the shared memory region. The ivshmem device doesn't support any extra capabilities.
Configuration Space Definition

| Field | Value |
|---|---|
| Vendor ID | 0x1AF4 |
| Device ID | 0x1110 |
| Revision ID | 0x1 |
| Class Code | 0x5 (RAM memory) |
MMIO Registers Definition

| Register | Offset | R/W | Description |
|---|---|---|---|
| IVSHMEM_IRQ_MASK_REG | 0x0 | R/W | Interrupt Mask register, used for legacy interrupts. ivshmem doesn't support interrupts, so this register is reserved. |
| IVSHMEM_IRQ_STA_REG | 0x4 | R/W | Interrupt Status register, used for legacy interrupts. ivshmem doesn't support interrupts, so this register is reserved. |
| IVSHMEM_IV_POS_REG | 0x8 | RO | Inter-VM Position register, used to identify the VM ID. Currently its value is zero. |
| IVSHMEM_DOORBELL_REG | 0xC | WO | Doorbell register, used to trigger an interrupt to the peer VM. ivshmem doesn't support interrupts. |
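For reference, the register layout above can be captured in a small C header. This is an illustrative sketch derived only from the table, not code from the ACRN sources:

```c
#include <stdint.h>

/* BAR0 register offsets, mirroring the MMIO table above. */
enum ivshmem_bar0_reg {
    IVSHMEM_IRQ_MASK_REG = 0x0, /* R/W, reserved (interrupts unsupported) */
    IVSHMEM_IRQ_STA_REG  = 0x4, /* R/W, reserved (interrupts unsupported) */
    IVSHMEM_IV_POS_REG   = 0x8, /* RO, VM ID; currently reads as zero */
    IVSHMEM_DOORBELL_REG = 0xC, /* WO, no effect (interrupts unsupported) */
};

/* Read the Inter-VM Position register from a memory-mapped BAR0. */
static inline uint32_t ivshmem_read_iv_pos(volatile uint32_t *bar0)
{
    return bar0[IVSHMEM_IV_POS_REG / 4];
}
```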
To support two post-launched VMs communicating via an ivshmem device, add this line as an acrn-dm boot parameter:

-s slot,ivshmem,shm_name,shm_size

where:

- slot: the virtual PCI slot number.
- ivshmem: the virtual PCI device name.
- shm_name: the shared memory name. Post-launched VMs with the same shm_name share the same shared memory region.
- shm_size: the shared memory size. The two communicating VMs must specify the same size.
This device can be used with Real-Time VM (RTVM) as well.
Inter-VM Communication Example
The following example sets up inter-VM communication between two Linux-based post-launched VMs (VM1 and VM2).
Note: an ivshmem Windows driver also exists.
Add a new virtual PCI device to both VMs: the device type is ivshmem, the shared memory name is test, and the shared memory size is 4096 bytes. Both VMs must use the same shared memory name and size:
VM1 Launch Script Sample

acrn-dm -A -m $mem_size -s 0:0,hostbridge \
  -s 2,pci-gvt -G "$2" \
  -s 5,virtio-console,@stdio:stdio_port \
  -s 8,virtio-hyper_dmabuf \
  -s 3,virtio-blk,/home/clear/uos/uos1.img \
  -s 4,virtio-net,tap0 \
  -s 6,ivshmem,test,4096 \
  -s 7,virtio-rnd \
  --ovmf /usr/share/acrn/bios/OVMF.fd \
  $vm_name
VM2 Launch Script Sample

acrn-dm -A -m $mem_size -s 0:0,hostbridge \
  -s 2,pci-gvt -G "$2" \
  -s 3,virtio-blk,/home/clear/uos/uos2.img \
  -s 4,virtio-net,tap0 \
  -s 5,ivshmem,test,4096 \
  --ovmf /usr/share/acrn/bios/OVMF.fd \
  $vm_name
Boot the two VMs and use lspci | grep "shared memory" to verify that the virtual device is ready in each VM.
- For VM1, it shows: 00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
- For VM2, it shows: 00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
Use these commands to probe the device:

$ sudo modprobe uio
$ sudo modprobe uio_pci_generic
$ echo "1af4 1110" | sudo tee /sys/bus/pci/drivers/uio_pci_generic/new_id
Finally, a user application can get the shared memory base address from the ivshmem device BAR resource (/sys/class/uio/uioX/device/resource2) and the shared memory size from the ivshmem device config resource (/sys/class/uio/uioX/device/config). The X in uioX is a number that can be retrieved using the ls command:
- For VM1, use ls -lh /sys/bus/pci/devices/0000:00:06.0/uio
- For VM2, use ls -lh /sys/bus/pci/devices/0000:00:05.0/uio
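As an illustration, the following minimal C program maps the shared memory through the sysfs resource2 file described above and reads or writes a string. It is a sketch, not part of ACRN: the uio0 path is hypothetical (substitute the uioX number found above), it must run as root, and it uses fstat() to learn the region size rather than parsing the config resource:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* BAR2 of the ivshmem device, exposed by sysfs (hypothetical uio0). */
    const char *path = (argc > 1) ? argv[1]
                                  : "/sys/class/uio/uio0/device/resource2";
    int fd = open(path, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole region; its size matches the shm_size passed to acrn-dm. */
    char *shm = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    if (argc > 2)
        strncpy(shm, argv[2], st.st_size - 1);  /* writer: place a message */
    else
        printf("peer wrote: %.64s\n", shm);     /* reader: print peer's message */

    munmap(shm, st.st_size);
    close(fd);
    return 0;
}
```

Running it as a writer in VM1 (with a message argument) and as a reader in VM2 (without one) confirms that both VMs see the same memory.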
Inter-VM Communication Security Hardening (BKMs)
As previously highlighted, ACRN 2.0 provides the capability to create shared memory regions between post-launched User VMs, known as "Inter-VM Communication". This mechanism is based on ivshmem v1.0, which exposes virtual PCI devices for the shared regions (allocated from the Service VM's memory in this release). The feature adopts a community-approved design for shared memory between VMs, following the same specification used by KVM/QEMU.
Following the ACRN threat model, the policy definition for allocation and assignment of these regions is controlled by the Service VM, which is part of ACRN's Trusted Computing Base (TCB). However, to secure inter-VM communication between userspace applications that use this channel, the applications must meet additional requirements for the confidentiality, integrity, and authenticity of shared or transferred data. It is the application development team's responsibility to define a threat model and security architecture for the application and to use custom or public libraries accordingly. This document provides an overview of potential hardening techniques from a userspace application's perspective. Consider these techniques when defining the security architecture and threat model for your application.
This is not a definitive guide to all security technologies or to implementing security. We provide general pointers that are not bound to a specific OS or use case.
Secure Feature Configurability
- ACRN ensures a minimal control plane for configuring the memory region's boundaries and name handles. This is managed only by the Service VM during creation of the guest VM through the Device Model (DM).
- The Service VM admin should refer to the usage guide for the secure configuration flow.
- Create separate permissions or groups for the admin role to isolate it from other entities that might have access to the Service VM. For example, only admin permissions should allow R/W/X access to the DM binary.
- Reference: ACRN Shared Memory Based Inter-VM Communication
Apply Access Control
Add restrictions based on behavior, or on subject and object rules, governing information flows and accesses.
In the Service VM, consider the /dev/shm device node a critical interface with special access requirements. Those requirements can be fulfilled using any of the existing open source MAC technologies, or even ACLs, depending on OS compatibility (Ubuntu, Windows, etc.) and integration complexity.
In the User VM, the shared memory region can be accessed by calling mmap() on the UIO device node. Other complementary information can be found under:
- /sys/class/uio/uioX/device/resource2 -> shared memory base address
- /sys/class/uio/uioX/device/config -> shared memory size
For Linux-based User VMs, we recommend using the standard UIO and UIO_PCI_GENERIC drivers through the device node (for example, /dev/uioX).
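As one illustration of such a restriction, a provisioning step could tighten the UIO node so that only a dedicated service account may map the region. This is a hedged sketch: the /dev/uio0 path and the shmapp account are hypothetical, and a MAC policy (e.g., SELinux or AppArmor) would normally complement it:

```c
#include <pwd.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *node = "/dev/uio0";          /* hypothetical UIO device node */
    struct passwd *pw = getpwnam("shmapp");  /* hypothetical service account */

    if (pw == NULL) { fprintf(stderr, "no such user\n"); return 1; }
    /* Hand the node to the service account and drop all other access. */
    if (chown(node, pw->pw_uid, pw->pw_gid) < 0) { perror("chown"); return 1; }
    if (chmod(node, 0600) < 0) { perror("chmod"); return 1; }
    return 0;
}
```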
Crypto Support and Secure Applied Crypto
- According to the application's threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms. Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and determine whether a security engine is needed (for example, CSME services). A minimal authenticated-encryption sketch follows this list.
- Don't implement your own crypto functions. Use available compliant crypto libraries as applicable, such as Intel IPP or TinyCrypt.
- Utilize the platform/kernel infrastructure and services (e.g., the Security high-level design, kernel crypto backend/APIs, the keyring subsystem, etc.).
- Implement the necessary flows for key lifecycle management, including wrapping, revocation, and migration, depending on the crypto key type used and on any requirements for key persistence across system and power management events.
- Follow open source secure crypto coding guidelines for secure wrappers and marshalling data structures: Secure Applied Crypto
- References: NIST Crypto Standards and Guidelines, OpenSSL
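To make the authenticated-encryption point concrete, here is a hedged C sketch using OpenSSL's EVP API (referenced above) to seal and verify a message with AES-256-GCM before it would be placed in the shared region. Key distribution is deliberately out of scope: the randomly generated key stands in for whatever secure key-exchange flow the application's threat model requires. Build with -lcrypto:

```c
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[32], iv[12], tag[16];
    unsigned char msg[] = "hello peer VM";
    unsigned char ct[sizeof msg], pt[sizeof msg];
    int len = 0, ctlen = 0;

    /* Demo only: both VMs must obtain this key via a secure exchange. */
    if (RAND_bytes(key, sizeof key) != 1 || RAND_bytes(iv, sizeof iv) != 1)
        return 1;

    /* Sender: encrypt and produce the authentication tag. */
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ct, &len, msg, sizeof msg);
    ctlen = len;
    EVP_EncryptFinal_ex(ctx, ct + ctlen, &len);
    ctlen += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof tag, tag);
    EVP_CIPHER_CTX_free(ctx);

    /* Receiver: decrypt, then verify the tag before trusting the data. */
    ctx = EVP_CIPHER_CTX_new();
    EVP_DecryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_DecryptUpdate(ctx, pt, &len, ct, ctlen);
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, sizeof tag, tag);
    int ok = EVP_DecryptFinal_ex(ctx, pt + len, &len);
    EVP_CIPHER_CTX_free(ctx);

    if (ok == 1)
        printf("verified: %s\n", pt);
    else
        printf("tag mismatch, discard message\n");
    return (ok == 1) ? 0 : 1;
}
```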
Application Whitelisting
- For use cases implemented in static environments (for example, industrial and automotive usages), follow application whitelisting techniques and disable any third-party or native app stores.
- This mechanism can be chained with the access control policies to protect access to whitelisting rules and configuration files (use an open source solution or implement a custom one).
- References: NIST SP800-167, fapolicyd
Secure Boot and File System Integrity Verification
- The previously highlighted technologies rely on the kernel, as a secure component, to enforce such policies. Because of this, we strongly recommend enabling secure boot for the Service VM and extending the secure boot chain to any post-launched VM kernels.
- To ensure that no malicious software is introduced or persists, utilize filesystem (FS) verification methods on every boot to extend the secure boot chain for post-launched VMs (kernel/FS).
- Reference: ACRN secure boot extension guide (ClearLinux, Windows)
- Reference Stack: dm-verity
All the mentioned hardening techniques might require minor extra development effort.