Xen HVM guest support
=====================

Description
-----------

KVM has support for hosting Xen guests, intercepting Xen hypercalls and event
channel (Xen PV interrupt) delivery. This allows guests which expect to be
run under Xen to be hosted in QEMU under Linux/KVM instead.

Using the split irqchip is mandatory for Xen support.

Setup
-----

Xen mode is enabled by setting the ``xen-version`` property of the KVM
accelerator, for example for Xen 4.17:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split

Additionally, virtual APIC support can be advertised to the guest through the
``xen-vapic`` CPU flag:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split --cpu host,+xen-vapic

When Xen support is enabled, QEMU changes hypervisor identification (CPUID
0x40000000..0x4000000A) to Xen. The KVM identification and features are not
advertised to a Xen guest. If Hyper-V is also enabled, the Xen identification
moves to leaves 0x40000100..0x4000010A.

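For example, a Xen guest can be offered Hyper-V enlightenments at the same
time; the particular ``hv-relaxed`` and ``hv-time`` flags below are purely
illustrative, not a recommended set:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split --cpu host,+xen-vapic,hv-relaxed,hv-time
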
Properties
----------

The following properties exist on the KVM accelerator object:

``xen-version``
  This property contains the Xen version in ``XENVER_version`` form, with the
  major version in the top 16 bits and the minor version in the low 16 bits.
  Setting this property enables the Xen guest support. If Xen version 4.5 or
  greater is specified, the HVM leaf in Xen CPUID is populated. Xen version
  4.6 enables the vCPU ID in CPUID, and version 4.17 advertises vCPU upcall
  vector support to the guest.

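  As a worked example of this encoding, the ``xen-version=0x40011`` value used
  throughout this document decodes to Xen 4.17::

    0x40011 = (4 << 16) | 17    (major 4, minor 17)
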
``xen-evtchn-max-pirq``
  Xen PIRQs represent an emulated physical interrupt, either GSI or MSI, which
  can be routed to an event channel instead of to the emulated I/O or local
  APIC. By default, QEMU permits only 256 PIRQs because this allows maximum
  compatibility with 32-bit MSI where the higher bits of the PIRQ# would need
  to be in the upper 64 bits of the MSI message. For guests with large numbers
  of PCI devices (and none which are limited to 32-bit addressing) it may be
  desirable to increase this value.

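  For example, to permit 1024 PIRQs (an illustrative value, not a
  recommendation), set the property alongside the other accelerator options:

  .. parsed-literal::

    |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split,xen-evtchn-max-pirq=1024
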
``xen-gnttab-max-frames``
  Xen grant tables are the means by which a Xen guest grants access to its
  memory for PV back ends (disk, network, etc.). Since QEMU only supports v1
  grant tables which are 8 bytes in size, each page (each frame) of the grant
  table can reference 512 pages of guest memory. The default number of frames
  is 64, allowing for 32768 pages of guest memory to be accessed by PV backends
  through simultaneous grants. For guests with large numbers of PV devices and
  high throughput, it may be desirable to increase this value.

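  For example, doubling the default to 128 frames allows 65536 pages (512
  pages per frame) to be granted simultaneously; again, the value is only
  illustrative:

  .. parsed-literal::

    |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split,xen-gnttab-max-frames=128
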
Xen paravirtual devices
-----------------------

The Xen PCI platform device is enabled automatically for a Xen guest. This
allows a guest to unplug all emulated devices, in order to use paravirtual
block and network drivers instead.

Those paravirtual Xen block, network (and console) devices can be created
through the command line, and/or hot-plugged.

To provide a Xen console device, define a character device and then a device
of type ``xen-console`` to connect to it. For the Xen console equivalent of
the handy ``-serial mon:stdio`` option, for example:

.. parsed-literal::

  -chardev stdio,mux=on,id=char0,signal=off -mon char0 \\
  -device xen-console,chardev=char0

The Xen network device is ``xen-net-device``, which becomes the default NIC
model for emulated Xen guests, meaning that just the default NIC provided
by QEMU should automatically work and present a Xen network device to the
guest.

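If the NIC is configured explicitly rather than left to the default, a sketch
along these lines should be equivalent (it assumes ``xen-net-device`` takes
the usual ``netdev`` property; the user-mode backend and its id are just
placeholders):

.. parsed-literal::

  -netdev user,id=net0 -device xen-net-device,netdev=net0
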
Disks can be configured with '``-drive file=${GUEST_IMAGE},if=xen``' and will
appear to the guest as ``xvda`` onwards.

Under Xen, the boot disk is typically available both via IDE emulation, and
as a PV block device. Guest bootloaders typically use IDE to load the guest
kernel, which then unplugs the IDE and continues with the Xen PV block device.

This configuration can be achieved as follows:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split \\
       -drive file=${GUEST_IMAGE},if=xen \\
       -drive file=${GUEST_IMAGE},file.locking=off,if=ide

VirtIO devices can also be used; Linux guests may need to be dissuaded from
unplugging them by adding '``xen_emul_unplug=never``' on their command line.

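For example, a disk can be presented to the guest as a VirtIO block device
instead of (or in addition to) the Xen PV device, with the kernel parameter
above keeping it plugged in:

.. parsed-literal::

  -drive file=${GUEST_IMAGE},if=virtio
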
Booting Xen PV guests
---------------------

Booting PV guest kernels is possible by using the Xen PV shim (a version of Xen
itself, designed to run inside a Xen HVM guest and provide memory management
services for one guest alone).

The Xen binary is provided as the ``-kernel`` and the guest kernel itself (or
PV Grub image) as the ``-initrd`` image, which actually just means the first
multiboot "module". For example:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split \\
       -chardev stdio,id=char0 -device xen-console,chardev=char0 \\
       -display none -m 1G -kernel xen -initrd bzImage \\
       -append "pv-shim console=xen,pv -- console=hvc0 root=/dev/xvda1" \\
       -drive file=${GUEST_IMAGE},if=xen

The Xen image must be built with the ``CONFIG_XEN_GUEST`` and ``CONFIG_PV_SHIM``
options, and as of Xen 4.17, Xen's PV shim mode does not support using a serial
port; it must have a Xen console or it will panic.

The example above provides the guest kernel command line after a separator
(" ``--`` ") on the Xen command line, and does not provide the guest kernel
with an actual initramfs, which would need to be listed as a second multiboot
module. For more complicated alternatives, see the command line
:ref:`documentation <system/invocation-qemu-options-initrd>` for the
``-initrd`` option.

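As a sketch of that more complete form, an initramfs could be appended as a
second multiboot module using the comma-separated module syntax accepted by
``-initrd`` (the file names here are placeholders; see the ``-initrd``
documentation for the exact syntax):

.. parsed-literal::

  -initrd "bzImage,initramfs.cpio.gz"
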
Host OS requirements
--------------------

The minimal Xen support in the KVM accelerator requires the host to be running
Linux v5.12 or newer. Later versions add optimisations: Linux v5.17 added
acceleration of interrupt delivery via the Xen PIRQ mechanism, and Linux v5.19
accelerated Xen PV timers and inter-processor interrupts (IPIs).