ACRN GVT-g APIs¶
GVT-g is Intel's open source GPU virtualization solution and is upstreamed to the Linux kernel. Its implementation over KVM is named KVMGT, over Xen it is named XenGT, and over ACRN it is named AcrnGT. GVT-g can export multiple virtual GPU (vGPU) instances for virtual machines (VMs). A VM can be assigned one vGPU instance. The guest OS graphics driver needs only minor modifications to drive the vGPU adapter in a VM. Every vGPU instance adopts the full hardware GPU's acceleration capability for 3D rendering and display.
In this document, AcrnGT refers to the glue layer between the ACRN hypervisor and the GVT-g core device model. It acts as the agent for hypervisor-related services and is the only layer that needs to be rewritten when porting GVT-g to another hypervisor. For simplicity, in the rest of this document, GVT refers to the core device model component of GVT-g, specifically corresponding to gvt.ko when built as a module.
Core Driver Infrastructure¶
This section covers the core driver infrastructure API used by both the display and the Graphics Execution Manager (GEM) parts of the i915 driver.
Intel GVT-g Guest Support (vGPU)¶
Intel GVT-g is a graphics virtualization technology that shares the GPU among multiple virtual machines on a time-sharing basis. Each virtual machine is presented with a virtual GPU (vGPU) whose features are equivalent to the underlying physical GPU (pGPU), so the i915 driver can run seamlessly in a virtual machine. This file provides vGPU-specific optimizations when running in a virtual machine, to reduce the complexity of vGPU emulation and to improve overall performance.
A primary function introduced here is the so-called "address space ballooning" technique. Intel GVT-g partitions global graphics memory among multiple VMs, so each VM can directly access a portion of the memory without the hypervisor's intervention, e.g. filling textures or queuing commands. However, with this partitioning, an unmodified i915 driver would assume a smaller graphics memory starting at address ZERO, which would require the vGPU emulation module to translate graphics addresses between the "guest view" and the "host view" for all registers and command opcodes that contain a graphics memory address. To reduce this complexity, Intel GVT-g introduces "address space ballooning": it tells each guest i915 driver the exact partitioning layout, and the driver then reserves the non-allocated portions and prevents them from being allocated. Thus the vGPU emulation module only needs to scan and validate graphics addresses, without the complexity of address translation.
void i915_detect_vgpu(struct drm_i915_private *dev_priv) - detect virtual GPU
Parameters
struct drm_i915_private * dev_priv
i915 device private
Description
This function is called at the initialization stage to detect whether the driver is running on a vGPU.
void intel_vgt_deballoon(struct i915_ggtt *ggtt) - deballoon reserved graphics address trunks
Parameters
struct i915_ggtt * ggtt
the global GGTT from which we reserved earlier
Description
This function is called to deallocate the ballooned-out graphic memory, when driver is unloaded or when ballooning fails.
int intel_vgt_balloon(struct i915_ggtt *ggtt) - balloon out reserved graphics address trunks
Parameters
struct i915_ggtt * ggtt
the global GGTT from which to reserve
Description
This function is called at the initialization stage to balloon out the graphics address space allocated to other vGPUs, by marking these spaces as reserved. The ballooning-related knowledge (starting address and size of the mappable/unmappable graphics memory) is described in the vgt_if structure in a reserved MMIO range.
To give an example, the drawing below depicts one typical scenario after ballooning. Here vGPU1 has two pieces of graphics address space ballooned out, one for the mappable part and one for the non-mappable part. From vGPU1's point of view, the total size is the same as the physical one, with the start address of its graphics space being zero. Yet some portions are ballooned out (the shaded parts, which are marked as reserved by the DRM allocator). From the host's point of view, the graphics address space is partitioned among multiple vGPUs in different VMs.
vGPU1 view Host view
0 ------> +-----------+ +-----------+
^ |###########| | vGPU3 |
| |###########| +-----------+
| |###########| | vGPU2 |
| +-----------+ +-----------+
mappable GM | available | ==> | vGPU1 |
| +-----------+ +-----------+
| |###########| | |
v |###########| | Host |
+=======+===========+ +===========+
^ |###########| | vGPU3 |
| |###########| +-----------+
| |###########| | vGPU2 |
| +-----------+ +-----------+
unmappable GM | available | ==> | vGPU1 |
| +-----------+ +-----------+
| |###########| | |
| |###########| | Host |
v |###########| | |
total GM size ------> +-----------+ +-----------+
Return
zero on success, non-zero if configuration invalid or ballooning failed
Intel GVT-g Host Support (vGPU Device Model)¶
Intel GVT-g is a graphics virtualization technology that shares the GPU among multiple virtual machines on a time-sharing basis. Each virtual machine is presented with a virtual GPU (vGPU) whose features are equivalent to the underlying physical GPU (pGPU), so the i915 driver can run seamlessly in a virtual machine.
To virtualize GPU resources, the GVT-g driver depends on hypervisor technology (e.g., KVM/VFIO/mdev or Xen) to provide resource-access trapping; the trapped accesses are then emulated within the GVT-g device module. More architecture design documentation is available at https://01.org/group/2230/documentation-list.
void intel_gvt_sanitize_options(struct drm_i915_private *dev_priv) - sanitize GVT-related options
Parameters
struct drm_i915_private * dev_priv
drm i915 private data
Description
This function is called at the i915 options sanitize stage.
int intel_gvt_init(struct drm_i915_private *dev_priv) - initialize GVT components
Parameters
struct drm_i915_private * dev_priv
drm i915 private data
Description
This function is called at the initialization stage to create a GVT device.
Return
Zero on success, negative error code if failed.
void intel_gvt_driver_remove(struct drm_i915_private *dev_priv) - clean up GVT components when the i915 driver is unbinding
Parameters
struct drm_i915_private * dev_priv
drm i915 private data
Description
This function is called at the i915 driver unloading stage to shut down GVT components and release the related resources.
VHM APIs Called From AcrnGT¶
The Virtio and Hypervisor Service Module (VHM) is a kernel module in the Service OS acting as a middle layer to support the device model. (See the ACRN I/O Mediator introduction for details.)
VHM requires an interrupt (vIRQ) number and exposes some APIs to external kernel modules such as GVT-g and the Virtio back-end (BE) services running in kernel space. VHM exposes a char device node in user space and interacts only with the DM. The DM routes I/O requests and responses to and from other modules via this char device. The DM may use VHM for hypervisor services (including the remote memory map). VHM may service such a request directly, as for the remote memory map, or invoke a hypercall. VHM also sends I/O responses to user-space modules, notified by vIRQ injections.
void put_vm(struct vhm_vm *vm) - release the vhm_vm of a guest according to its vmid; if the reference count drops to zero, free the vhm_vm as well
Parameters
struct vhm_vm * vm
pointer to the vhm_vm that identifies a specific guest
int vhm_get_vm_info(unsigned long vmid, struct vm_info *info) - get the vm_info of a specific guest
Parameters
unsigned long vmid
guest vmid
struct vm_info * info
pointer to vm_info for returned vm_info
Return
0 on success, <0 on error
int vhm_inject_msi(unsigned long vmid, unsigned long msi_addr, unsigned long msi_data) - inject an MSI interrupt into a guest
Parameters
unsigned long vmid
guest vmid
unsigned long msi_addr
MSI addr matches MSI spec
unsigned long msi_data
MSI data matches MSI spec
Return
0 on success, <0 on error
unsigned long vhm_vm_gpa2hpa(unsigned long vmid, unsigned long gpa) - convert a guest physical address to a host physical address
Parameters
unsigned long vmid
guest vmid
unsigned long gpa
guest physical address
Return
host physical address, <0 on error
int acrn_ioreq_create_client(unsigned long vmid, ioreq_handler_t handler, char *name) - create an ioreq client
Parameters
unsigned long vmid
ID to identify guest
ioreq_handler_t handler
ioreq handler of the ioreq client. If the client wants requests handled in its own thread context, set this parameter to NULL; otherwise, set it to the client's own handler function, and VHM will create a kernel thread that calls the handler to process requests.
char * name
the name of ioreq client
Return
client id on success, <0 on error
void acrn_ioreq_destroy_client(int client_id) - destroy an ioreq client
Parameters
int client_id
client id to identify ioreq client
int acrn_ioreq_add_iorange(int client_id, uint32_t type, long start, long end) - add an iorange monitored by an ioreq client
Parameters
int client_id
client id to identify ioreq client
uint32_t type
iorange type
long start
iorange start address
long end
iorange end address
Return
0 on success, <0 on error
int acrn_ioreq_del_iorange(int client_id, uint32_t type, long start, long end) - delete an iorange monitored by an ioreq client
Parameters
int client_id
client id to identify ioreq client
uint32_t type
iorange type
long start
iorange start address
long end
iorange end address
Return
0 on success, <0 on error
struct vhm_request *acrn_ioreq_get_reqbuf(int client_id) - get the request buffer; the request buffer is shared by all clients in one guest
Parameters
int client_id
client id to identify ioreq client
Return
pointer to request buffer, NULL on error
int acrn_ioreq_attach_client(int client_id, bool check_kthread_stop) - start handling requests for an ioreq client. If requests are handled out of the client thread context, this function is called only once to become ready to handle new requests.
Parameters
int client_id
client id to identify ioreq client
bool check_kthread_stop
whether to check if the current kthread should be stopped
Description
If requests are handled in the client thread context, this function must be called each time the previous request handling completes, to become ready to handle a new request.
Return
0 on success, <0 on error, 1 if ioreq client is destroying
int acrn_ioreq_distribute_request(struct vhm_vm *vm) - deliver a request to the corresponding client
Parameters
struct vhm_vm * vm
pointer to guest
Return
0 always
int acrn_ioreq_complete_request(int client_id, uint64_t vcpu, struct vhm_request *vhm_req) - notify the guest that request handling is completed
Parameters
int client_id
client id to identify ioreq client
uint64_t vcpu
identify request submitter
struct vhm_request * vhm_req
the request for fast grab
Return
0 on success, <0 on error
void acrn_ioreq_clear_request(struct vhm_vm *vm) - clear all guest requests
Parameters
struct vhm_vm * vm
pointer to guest VM
void acrn_ioreq_intercept_bdf(int client_id, int bus, int dev, int func) - set the intercepted BDF info of an ioreq client
Parameters
int client_id
client id to identify ioreq client
int bus
bus number
int dev
device number
int func
function number
void acrn_ioreq_unintercept_bdf(int client_id) - clear the intercepted BDF info of an ioreq client
Parameters
int client_id
client id to identify ioreq client
unsigned long acrn_hpa2gpa(unsigned long hpa) - physical address conversion
Parameters
unsigned long hpa
host physical address
Description
Convert a host physical address (HPA) to a guest physical address (GPA). GPA and HPA are mapped 1:1 for the Service OS.
Return
guest physical address
void *map_guest_phys(unsigned long vmid, u64 uos_phys, size_t size) - map a guest physical address to an SOS kernel virtual address
Parameters
unsigned long vmid
guest vmid
u64 uos_phys
physical address in guest
size_t size
the memory size mapped
Return
SOS kernel virtual address, NULL on error
int unmap_guest_phys(unsigned long vmid, u64 uos_phys) - unmap a guest physical address
Parameters
unsigned long vmid
guest vmid
u64 uos_phys
physical address in guest
Return
0 on success, <0 for error.
int add_memory_region(unsigned long vmid, unsigned long gpa, unsigned long host_gpa, unsigned long size, unsigned int mem_type, unsigned int mem_access_right) - add a guest memory region
Parameters
unsigned long vmid
guest vmid
unsigned long gpa
gpa of UOS
unsigned long host_gpa
gpa of SOS
unsigned long size
memory region size
unsigned int mem_type
memory mapping type. Possible values: MEM_TYPE_WB, MEM_TYPE_WT, MEM_TYPE_UC, MEM_TYPE_WC, MEM_TYPE_WP
unsigned int mem_access_right
memory mapping access. Possible values: MEM_ACCESS_READ, MEM_ACCESS_WRITE, MEM_ACCESS_EXEC, MEM_ACCESS_RWX
Return
0 on success, <0 for error.
int del_memory_region(unsigned long vmid, unsigned long gpa, unsigned long size) - delete a guest memory region
Parameters
unsigned long vmid
guest vmid
unsigned long gpa
gpa of UOS
unsigned long size
memory region size
Return
0 on success, <0 for error.
int write_protect_page(unsigned long vmid, unsigned long gpa, unsigned char set) - change the write protection of one page
Parameters
unsigned long vmid
guest vmid
unsigned long gpa
gpa in guest vmid
unsigned char set
set or clear page write protection
Return
0 on success, <0 for error.
AcrnGT Mediated Passthrough (MPT) Interface¶
AcrnGT receives requests from the GVT module through the MPT interface. Refer to the Mediated Passthrough page.
A collection of function callbacks in the MPT module is attached to the GVT host at the driver loading stage. The AcrnGT MPT function callbacks are described below:
struct intel_gvt_mpt acrn_gvt_mpt = {
.host_init = acrngt_host_init,
.host_exit = acrngt_host_exit,
.attach_vgpu = acrngt_attach_vgpu,
.detach_vgpu = acrngt_detach_vgpu,
.inject_msi = acrngt_inject_msi,
.from_virt_to_mfn = acrngt_virt_to_mfn,
.enable_page_track = acrngt_page_track_add,
.disable_page_track = acrngt_page_track_remove,
.read_gpa = acrngt_read_gpa,
.write_gpa = acrngt_write_gpa,
.gfn_to_mfn = acrngt_gfn_to_pfn,
.map_gfn_to_mfn = acrngt_map_gfn_to_mfn,
.dma_map_guest_page = acrngt_dma_map_guest_page,
.dma_unmap_guest_page = acrngt_dma_unmap_guest_page,
.set_trap_area = acrngt_set_trap_area,
.set_pvmmio = acrngt_set_pvmmio,
.dom0_ready = acrngt_dom0_ready,
};
EXPORT_SYMBOL_GPL(acrn_gvt_mpt);
GVT-g core logic calls these APIs through wrapper functions with the prefix intel_gvt_hypervisor_ to request specific services from the hypervisor through VHM.
This section describes the wrapper functions:
int intel_gvt_hypervisor_host_init(struct device *dev, void *gvt, const void *ops) - initialize the GVT-g host side
Parameters
struct device * dev
i915 device
void * gvt
GVT device
const void * ops
intel_gvt_ops interface
Return
Zero on success, negative error code if failed
void intel_gvt_hypervisor_host_exit(struct device *dev) - exit the GVT-g host side
Parameters
struct device * dev
i915 device
int intel_gvt_hypervisor_attach_vgpu(struct intel_vgpu *vgpu) - call the hypervisor to initialize vGPU-related state inside the hypervisor
Parameters
struct intel_vgpu * vgpu
a vGPU
Return
Zero on success, negative error code if failed.
void intel_gvt_hypervisor_detach_vgpu(struct intel_vgpu *vgpu) - call the hypervisor to release vGPU-related state inside the hypervisor
Parameters
struct intel_vgpu * vgpu
a vGPU
int intel_gvt_hypervisor_inject_msi(struct intel_vgpu *vgpu) - inject an MSI interrupt into a vGPU
Parameters
struct intel_vgpu * vgpu
a vGPU
Return
Zero on success, negative error code if failed.
unsigned long intel_gvt_hypervisor_virt_to_mfn(void *p) - translate a host VA into an MFN
Parameters
void * p
host kernel virtual address
Return
MFN on success, INTEL_GVT_INVALID_ADDR if failed.
int intel_gvt_hypervisor_enable_page_track(struct intel_vgpu *vgpu, unsigned long gfn) - track a guest page
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gfn
the gfn of guest
Return
Zero on success, negative error code if failed.
int intel_gvt_hypervisor_disable_page_track(struct intel_vgpu *vgpu, unsigned long gfn) - untrack a guest page
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gfn
the gfn of guest
Return
Zero on success, negative error code if failed.
int intel_gvt_hypervisor_read_gpa(struct intel_vgpu *vgpu, unsigned long gpa, void *buf, unsigned long len) - copy data from a GPA to a host data buffer
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gpa
guest physical address
void * buf
host data buffer
unsigned long len
data length
Return
Zero on success, negative error code if failed.
int intel_gvt_hypervisor_write_gpa(struct intel_vgpu *vgpu, unsigned long gpa, void *buf, unsigned long len) - copy data from a host data buffer to a GPA
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gpa
guest physical address
void * buf
host data buffer
unsigned long len
data length
Return
Zero on success, negative error code if failed.
unsigned long intel_gvt_hypervisor_gfn_to_mfn(struct intel_vgpu *vgpu, unsigned long gfn) - translate a GFN to an MFN
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gfn
guest pfn
Return
MFN on success, INTEL_GVT_INVALID_ADDR if failed.
int intel_gvt_hypervisor_dma_map_guest_page(struct intel_vgpu *vgpu, unsigned long gfn, unsigned long size, dma_addr_t *dma_addr) - set up a DMA map for a guest page
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gfn
guest pfn
unsigned long size
page size
dma_addr_t * dma_addr
retrieve allocated dma addr
Return
0 on success, negative error code if failed.
void intel_gvt_hypervisor_dma_unmap_guest_page(struct intel_vgpu *vgpu, dma_addr_t dma_addr) - cancel the DMA map for a guest page
Parameters
struct intel_vgpu * vgpu
a vGPU
dma_addr_t dma_addr
the mapped dma addr
int intel_gvt_hypervisor_map_gfn_to_mfn(struct intel_vgpu *vgpu, unsigned long gfn, unsigned long mfn, unsigned int nr, bool map) - map a GFN region to an MFN
Parameters
struct intel_vgpu * vgpu
a vGPU
unsigned long gfn
guest PFN
unsigned long mfn
host PFN
unsigned int nr
number of PFNs
bool map
map or unmap
Return
Zero on success, negative error code if failed.
int intel_gvt_hypervisor_set_trap_area(struct intel_vgpu *vgpu, u64 start, u64 end, bool map) - trap a guest PA region
Parameters
struct intel_vgpu * vgpu
a vGPU
u64 start
the beginning of the guest physical address region
u64 end
the end of the guest physical address region
bool map
map or unmap
Return
Zero on success, negative error code if failed.
GVT-g intel_gvt_ops Interface¶
This section contains APIs for the GVT-g intel_gvt_ops interface. Sources are in the ACRN kernel GitHub repo.
static const struct intel_gvt_ops intel_gvt_ops = {
.emulate_cfg_read = intel_vgpu_emulate_cfg_read,
.emulate_cfg_write = intel_vgpu_emulate_cfg_write,
.emulate_mmio_read = intel_vgpu_emulate_mmio_read,
.emulate_mmio_write = intel_vgpu_emulate_mmio_write,
.vgpu_create = intel_gvt_create_vgpu,
.vgpu_destroy = intel_gvt_destroy_vgpu,
.vgpu_reset = intel_gvt_reset_vgpu,
.vgpu_activate = intel_gvt_activate_vgpu,
.vgpu_deactivate = intel_gvt_deactivate_vgpu,
};
int intel_vgpu_emulate_cfg_read(struct intel_vgpu *vgpu, unsigned int offset, void *p_data, unsigned int bytes) - emulate a vGPU configuration space read
Parameters
struct intel_vgpu * vgpu
target vgpu
unsigned int offset
offset
void * p_data
return data ptr
unsigned int bytes
number of bytes to read
Return
Zero on success, negative error code if failed.
int intel_vgpu_emulate_cfg_write(struct intel_vgpu *vgpu, unsigned int offset, void *p_data, unsigned int bytes) - emulate a vGPU configuration space write
Parameters
struct intel_vgpu * vgpu
target vgpu
unsigned int offset
offset
void * p_data
write data ptr
unsigned int bytes
number of bytes to write
Return
Zero on success, negative error code if failed.
int intel_vgpu_emulate_mmio_read(struct intel_vgpu *vgpu, u64 pa, void *p_data, unsigned int bytes) - emulate an MMIO read
Parameters
struct intel_vgpu * vgpu
a vGPU
u64 pa
guest physical address
void * p_data
data return buffer
unsigned int bytes
access data length
Return
Zero on success, negative error code if failed
int intel_vgpu_emulate_mmio_write(struct intel_vgpu *vgpu, u64 pa, void *p_data, unsigned int bytes) - emulate an MMIO write
Parameters
struct intel_vgpu * vgpu
a vGPU
u64 pa
guest physical address
void * p_data
write data buffer
unsigned int bytes
access data length
Return
Zero on success, negative error code if failed
void intel_gvt_activate_vgpu(struct intel_vgpu *vgpu) - activate a virtual GPU
Parameters
struct intel_vgpu * vgpu
virtual GPU
Description
This function is called when a user wants to activate a virtual GPU.
void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu) - deactivate a virtual GPU
Parameters
struct intel_vgpu * vgpu
virtual GPU
Description
This function is called when a user wants to deactivate a virtual GPU. The virtual GPU will be stopped.
void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu) - destroy a virtual GPU
Parameters
struct intel_vgpu * vgpu
virtual GPU
Description
This function is called when a user wants to destroy a virtual GPU.
void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu) - reset a virtual GPU (function level)
Parameters
struct intel_vgpu * vgpu
virtual GPU
Description
This function is called when a user wants to reset a virtual GPU.
AcrnGT sysfs Interface¶
This section contains APIs for the AcrnGT sysfs interface. Sources are in the ACRN kernel GitHub repo.
sysfs Nodes¶
In the following examples, all accesses to these interfaces are via the bash commands echo or cat. This is a quick and easy way to get or control things, but when these operations fail, there is no way to retrieve the respective error code. When accessing sysfs entries programmatically, use library functions such as read() or write() instead.
On success, the return value of read() or write() indicates how many bytes were transferred. On error, the return value is -1 and the global errno is set appropriately. This is the only way to figure out what kind of error occurred.
The /sys/kernel/gvt/ class sub-directory belongs to AcrnGT and provides a centralized sysfs interface for configuring vGPU properties.
The /sys/kernel/gvt/control/ sub-directory contains all the necessary switches for different purposes.
The /sys/kernel/gvt/control/create_gvt_instance node is used by ACRN-DM to create or destroy a vGPU instance.
After a VM is created, a new sub-directory /sys/kernel/gvt/vmN ("N" is the VM id) will be created.
The /sys/kernel/gvt/vmN/vgpu_id node is used to get the vGPU id of the VM whose id is N.