- 22 Jan, 2019 12 commits
-
-
Jean-Philippe Brucker authored
Virtio allows individual virtqueues to be reset. For legacy devices, this is done by writing an address of 0 into the PFN register. Modern devices have an "enable" register. Add an exit_vq() callback to all devices. A lot more work is required by each device to clean up its virtqueue state, and by the core to reset things like MSI routes and ioeventfds. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
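As a rough illustration of the legacy reset path described above, here is a minimal sketch (editor's addition, not the patch itself); the structure layout and the callback and field names (exit_vq, init_vq, queue_sel) are assumptions for illustration only:

  #include <stdint.h>

  /* Illustrative types only; kvmtool's real structures differ. */
  struct virtio_ops {
      void (*exit_vq)(void *dev, uint32_t vq);
      void (*init_vq)(void *dev, uint32_t vq, uint32_t pfn);
  };

  struct virtio_device {
      struct virtio_ops *ops;
      void *dev;
      uint32_t queue_sel;    /* queue currently selected by the guest */
  };

  /* Legacy virtio-pci: a write of 0 to the PFN register resets the selected queue. */
  static void queue_pfn_write(struct virtio_device *vdev, uint32_t pfn)
  {
      if (pfn == 0) {
          if (vdev->ops->exit_vq)
              vdev->ops->exit_vq(vdev->dev, vdev->queue_sel);
          return;
      }
      vdev->ops->init_vq(vdev->dev, vdev->queue_sel, pfn);
  }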
-
Jean-Philippe Brucker authored
To ease future changes to the core, replace get_pfn_vq() with get_vq(). This way, adding new generic operations on virtqueues won't require modifying every virtio device. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Modern virtio requires devices to report how many queues they support. Add an operation to query all devices about their capacities. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Modern virtio requires proper status handling and reset. A "notify_status" callback is already present in the virtio ops, but isn't implemented by any device. Instead, devices currently use "set_guest_feature" to reset the device and deal with endianness. This isn't sufficient for proper device reset, so add the notify_status callback to all devices that need it. To add useful hints like "start" and "stop", extend the status variable to 32 bits. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> [Julien T: Remove VIRTIO_CONFIG_S_NEEDS_RESET from the config mask; as it is a virtio v1+ macro and kvmtool only implements v0.9, this macro should not be referenced for now] Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
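A minimal sketch of how a 32-bit status field can carry internal "start"/"stop" hints above the guest-visible status byte; the flag names and values here are assumptions, not necessarily those used by the patch:

  #include <stdint.h>

  /* Low 8 bits: the virtio status byte written by the guest.
   * Upper bits: internal hints for the backend (values are illustrative). */
  #define VIRTIO_STATUS_START    (1u << 8)
  #define VIRTIO_STATUS_STOP     (1u << 9)

  static void notify_status(void *dev, uint32_t status)
  {
      if (status & VIRTIO_STATUS_START) {
          /* e.g. spawn I/O threads, register ioeventfds */
      }
      if (status & VIRTIO_STATUS_STOP) {
          /* e.g. quiesce the backend and reset virtqueue state */
      }
  }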
-
Jean-Philippe Brucker authored
Fix three bugs that prevent removal of ioeventfds in KVM. Store the flags in the right structure, check the datamatch parameter, and pass the fd to KVM. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
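For context, a hedged sketch of what removing an ioeventfd via the KVM_IOEVENTFD ioctl involves: the deassign only succeeds if the address, length, eventfd and (when used) datamatch all match the original registration. The helper name and parameters are illustrative:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int ioeventfd_deassign(int vm_fd, uint64_t addr, uint32_t len,
                                int event_fd, uint64_t datamatch, int has_datamatch)
  {
      struct kvm_ioeventfd kvm_ioevent = {
          .addr      = addr,
          .len       = len,
          .fd        = event_fd,     /* must also be passed on removal */
          .datamatch = datamatch,
          .flags     = KVM_IOEVENTFD_FLAG_DEASSIGN,
      };

      if (has_datamatch)
          kvm_ioevent.flags |= KVM_IOEVENTFD_FLAG_DATAMATCH;

      return ioctl(vm_fd, KVM_IOEVENTFD, &kvm_ioevent);
  }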
-
Julien Thierry authored
Implement firmware image loading for arm and set the boot start address to the firmware address. Add an option for the user to specify where to map the firmware. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
When a firmware file is provided, kvmtool is not responsible for loading a kernel image. There is no reason to look for a default kernel image when loading firmware. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Reviewed-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
The firmware loading/setup functions live in the fdt file, even though they are not really related to it. Move them to the file that does the main memory init/setup. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Reviewed-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Sami Mujawar authored
Some software drivers check the VRT bit (BIT7) of Register D before using the MC146818 RTC. Initialize the VRT bit in rtc__init() to indicate that the RAM and time contents are valid. Signed-off-by:
Sami Mujawar <sami.mujawar@arm.com> Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Reviewed-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
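A small sketch of the change described above; the structure and field names are illustrative, not kvmtool's exact ones:

  #include <stdint.h>

  #define RTC_REG_D        0x0d
  #define RTC_REG_D_VRT    (1 << 7)    /* Valid RAM and Time */

  struct rtc_device {
      uint8_t cmos[128];    /* illustrative CMOS register file */
  };

  static void rtc__init(struct rtc_device *rtc)
  {
      /* Tell guest drivers that the RAM and time contents are valid */
      rtc->cmos[RTC_REG_D] |= RTC_REG_D_VRT;
  }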
-
Dave Martin authored
ARM64_CORE_REG() is currently only used to generate the KVM register IDs for registers that happen to be 64 bits in size, so KVM_REG_SIZE_U64 is hard-coded in the definition. To enable this macro to generate correct encodings for the FPSIMD registers too (which are a mix of 128-bit and 32-bit registers), this patch extends the macro to encode the correct size for each class of register in KVM_REG_ARM_CORE. The approach is crude, but because the KVM_REG_ARM_CORE ID arrangement is ABI, it's not expected to evolve. Signed-off-by:
Dave Martin <Dave.Martin@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
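A hedged sketch of the idea: derive the KVM_REG_SIZE_* encoding from the size of the corresponding struct kvm_regs field, so 32-bit, 64-bit and 128-bit core registers all get the right ID. This assumes the arm64 UAPI headers and may differ from the exact macro in the patch:

  #include <linux/kvm.h>    /* pulls in asm/kvm.h on arm64 */

  #define ARM64_CORE_SIZE(x)                                             \
      (sizeof(((struct kvm_regs *)0)->x) == 4 ? KVM_REG_SIZE_U32 :       \
       sizeof(((struct kvm_regs *)0)->x) == 8 ? KVM_REG_SIZE_U64 :       \
       KVM_REG_SIZE_U128)

  #define ARM64_CORE_REG(x)                                              \
      (KVM_REG_ARM64 | KVM_REG_ARM_CORE | ARM64_CORE_SIZE(x) |           \
       KVM_REG_ARM_CORE_REG(x))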
-
Dave Martin authored
The local copies of the kvm user API headers are getting stale. In preparation for some arch-specific updates, this patch reflects a re-run of util/update_headers.sh to pull in upstream updates from Linux v5.0-rc2. Signed-off-by:
Dave Martin <Dave.Martin@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Dave Martin authored
guest/guest_init.c is a generated file, but git doesn't currently ignore it. This can be annoying when running git status etc. This patch adds a suitable .gitignore entry for this file. Signed-off-by:
Dave Martin <Dave.Martin@arm.com> [will: Do the same for guest/guest_pre_init.c] Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 02 Nov, 2018 3 commits
-
-
Julien Thierry authored
Currently, handling a pause signal only sets a state that is checked at the beginning of the CPU run loop. At that checking point the vCPU sends the notification that it is actually paused, allowing the pause requester to confirm all vCPUs are paused. Receiving the pause signal during a KVM_RUN ioctl will make KVM exit to userspace. However, there is a small window between the check on cpu->paused and the execution of KVM_RUN where the signal has been received but the vCPU does not go back through the notification and starts KVM_RUN. Since there is no guarantee the vCPU will come back to userspace, the pause requester might deadlock. Perform the pause directly from the signal handler. This relies on a vCPU thread never receiving a pause signal while it is already paused, but such a scenario would have caused a deadlock for the pause requester anyway. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
With the following sequence: kvm__pause(); kvm__continue(); kvm__pause(); there is a chance that not all paused threads have been resumed, and the second kvm__pause() will attempt to pause them again. Since the paused thread is waiting to own the pause_lock, it won't write its second pause notification. kvm__pause() will be waiting for that notification while owning pause_lock, so... deadlock. The simple solution is not to try to pause threads that have not yet had a chance to resume. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
After adding buffers to the virtio queue, the guest increments the avail index. It then reads the event index to check if it needs to notify the host. If the event index corresponds to the previous avail value, then the guest notifies the host. Otherwise it means that the host is still processing the queue and hasn't had a chance to increment the event index yet. Once it gets there, the host will see the new avail index and process the descriptors, so there is no need for a notification. This is only guaranteed to work if both threads write and read the indices in the right order. Currently a barrier is missing from virt_queue__available(), and the host may not see an up-to-date avail index after writing the event index.

       HOST          |       GUEST
                     |
                     | write avail = 1
                     | mb()
                     | read event -> 0
  write event = 0    |     == prev_avail -> notify
  read avail -> 1    |
                     |
  write event = 1    |
  read avail -> 1    |
  wait()             | write avail = 2
                     | mb()
                     | read event -> 0
                     |     != prev_avail -> no notification

By adding a memory barrier on the host side, we ensure that it doesn't miss any notification. Reviewed-By:
Steven Price <steven.price@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
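A hedged sketch of the host-side check with the added barrier; the field names approximate kvmtool's and __sync_synchronize() stands in for its mb() wrapper. The barrier pairs with the guest's mb() so that at least one side observes the other's update:

  #include <stdint.h>
  #include <linux/virtio_ring.h>

  struct virt_queue_sketch {
      struct vring vring;
      uint16_t last_avail_idx;
  };

  static int virt_queue_available(struct virt_queue_sketch *vq)
  {
      /* Publish how far we have processed, for the guest's suppression check */
      vring_avail_event(&vq->vring) = vq->last_avail_idx;

      /* The missing barrier: order the event-index write before the
       * avail-index read, pairing with the guest's write avail / mb() /
       * read event sequence. */
      __sync_synchronize();

      return vq->vring.avail->idx != vq->last_avail_idx;
  }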
-
- 16 Aug, 2018 1 commit
-
-
Julien Thierry authored
Ioport registers bus devices when ioports are registered. These devices are not unregistered when the ioport entries containing their headers are unregistered. This results in dangling pointers in the device rb_tree. Unregister ioport bus devices when the ioport is unregistered. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 13 Jul, 2018 2 commits
-
-
Julien Thierry authored
On Debian Stretch/Ubuntu 14.04, the libbfd provided by libbfd-dev or binutils-dev packages does not like being linked statically. Add a dynamic linkage test when detecting libbfd. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
For some optional dependencies, both static and dynamic linking are tested. But if the first one tested fails, the dependency is added to the NOTFOUND list and reported as being skipped, even though it might still build with the other linkage. Add optional dependencies to NOTFOUND only if both linkages fail. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 06 Jul, 2018 1 commit
-
-
Jean-Philippe Brucker authored
When building an object "foo.o", kvmtool also creates a ".foo.o.d" file, using the dependency generation feature of CPP. This file describes in Makefile format all headers included by foo.c. When one header is modified, make rebuilds all objects that include it. Dependency files in subfolders are currently ignored by make, because the target doesn't contain the right prefix. For example virtio/.blk.o.d has target "blk.o" instead of "virtio/blk.o". As a result, rebuilding kvmtool without first issuing a make clean can introduce sneaky bugs, where different objects use mismatched headers. To write the right targets in dependency files, add a -MT argument to CPP. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 19 Jun, 2018 13 commits
-
-
Jean-Philippe Brucker authored
Use the new reserved_regions API to ensure that RAM doesn't overlap any reserved region. This prevents, for instance, an MSI doorbell from being mapped into the guest IPA space. For the moment we reject any overlap. In the future, we might carve reserved regions out of the guest physical space. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
When passing devices to the guest, there might be address ranges unavailable to the device. For instance, if address 0x10000000 corresponds to an MSI doorbell, any transaction from a device to that address will be directed to the MSI controller and might not even reach the IOMMU. In that case 0x10000000 is reserved by the physical IOMMU in the guest's physical space. This patch introduces a simple API to register reserved ranges of addresses that should not or cannot be provided to the guest. For the moment it only checks that a reserved range does not overlap any user memory (we don't consider MMIO) and aborts otherwise. It should be possible instead to poke holes in the guest-physical memory map and report them via the architecture's preferred route:
* ARM and PowerPC can add reserved-memory nodes to the DT they provide to the guest.
* x86 could poke holes in the memory map reported with e820. This requires postponing creation of the memory map until at least VFIO is initialized.
* MIPS could describe the reserved ranges with the "memmap=mm$ss" kernel parameter. This would also require calling KVM_SET_USER_MEMORY_REGION for all memory regions at the end of kvmtool initialisation.
Extra care should be taken to ensure we don't break any architecture, since they currently rely on having a linear address space with at most two memory blocks. This patch doesn't implement any address space carving. If an abort is encountered, the user can try rebuilding kvmtool with different addresses or changing its IOMMU resv regions if possible. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
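The core of such a registration check is an interval-overlap test against every RAM bank; a trivial sketch (names illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  /* Half-open intervals [a, a + a_size) and [b, b + b_size) overlap iff
   * each one starts before the other ends. */
  static bool ranges_overlap(uint64_t a, uint64_t a_size,
                             uint64_t b, uint64_t b_size)
  {
      return a < b + b_size && b < a + a_size;
  }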
-
Jean-Philippe Brucker authored
In some cases device regions don't support mmap. They can still be made available to the guest by trapping all accesses and forwarding reads or writes to VFIO. Such regions may be:
* PCI I/O port BARs.
* Sub-page regions, for example a 4kB region on a host with 64k pages.
* Similarly, sparse mmap regions, for example when VFIO allows mmapping fragments of a PCI BAR but forbids accessing things like MSI-X tables. We don't support the sparse capability at the moment, so trap these regions instead (if VFIO rejects the mmap).
Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Allow guests to use the MSI capability in devices that support it. Emulate the MSI capability, which is simpler than MSI-X as it doesn't rely on external tables. Reuse most of the MSI-X code. Guests may choose between MSI and MSI-X at runtime since we present both capabilities, but they cannot enable MSI and MSI-X at the same time (forbidden by PCI). Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Add virtual MSI-X tables for PCI devices, and create IRQFD routes to let the kernel inject MSIs from a physical device directly into the guest. It would be tempting to create the MSI routes at init time before starting vCPUs, when we can afford to exit gracefully. But some of it must be initialized when the guest requests it.
* On the KVM side, MSIs must be enabled after devices allocate their IRQ lines and irqchips are operational, which can happen as late as late_init.
* On the VFIO side, hardware state of devices may be updated when setting up MSIs. For example, when passing a virtio-pci-legacy device to the guest:
  (1) The device-specific configuration layout (in BAR0) depends on whether MSIs are enabled or not in the device. If they are enabled, the device-specific configuration starts at offset 24, otherwise it starts at offset 20.
  (2) The Linux guest assumes that MSIs are initially disabled (it doesn't actually check the capability). So it reads the device config at offset 20.
  (3) Had we enabled MSIs early, the host would have enabled the MSI-X capability and the device would return the config at offset 24.
  (4) The guest would read junk and explode.
Therefore we have to create MSI-X routes when the guest requests MSIs, and enable/disable them in VFIO when the guest pokes the MSI-X capability. We have to follow both the physical and virtual state of the capability, which makes the state machine a bit complex, but I think it works. An important missing feature is pending MSI handling. When a vector or the function is masked, we should rewire the IRQFD to a special thread that keeps note of pending interrupts (or just poll the IRQFD before recreating the route?). And when the vector is unmasked, one MSI should be injected if it was pending. At the moment no MSI is injected, we simply disconnect the IRQFD and all messages are lost. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Assigning devices using VFIO allows the guest to have direct access to the device, whilst filtering accesses to sensitive areas by trapping config space accesses and mapping DMA with an IOMMU. This patch adds a new option to lkvm run: --vfio-pci=<BDF>. Before assigning a device to a VM, some preparation is required. As described in Linux Documentation/vfio.txt, the device driver needs to be changed to vfio-pci:

  $ dev=0000:00:00.0
  $ echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
  $ echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
  $ echo $dev > /sys/bus/pci/drivers_probe

Adding --vfio-pci=$dev to lkvm-run will pass the device to the guest. Multiple devices can be passed to the guest by adding more --vfio-pci parameters. This patch only implements PCI with INTx. MSI-X routing will be added in a subsequent patch, and at some point we might add support for passing platform devices to guests. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Robin Murphy <robin.murphy@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
It's always nice to have a log2 handy, and the vfio-pci code will need to perform power of two allocation from an arbitrary size. Add fls_long and roundup_pow_of_two, based on the GCC builtin. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
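A sketch of the two helpers built on the GCC/Clang count-leading-zeros builtin mentioned above; kvmtool's actual definitions may differ slightly:

  /* fls_long(): index (1-based) of the most significant set bit, 0 if none. */
  static inline unsigned long fls_long(unsigned long x)
  {
      return x ? sizeof(x) * 8 - __builtin_clzl(x) : 0;
  }

  /* Round up to the next power of two, e.g. 5 -> 8, 8 -> 8. */
  static inline unsigned long roundup_pow_of_two(unsigned long x)
  {
      return x ? 1UL << fls_long(x - 1) : 0;
  }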
-
Jean-Philippe Brucker authored
To ensure consistency between kvmtool and the kernel, import the UAPI headers of the VFIO version we implement. This is from Linux v4.12. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Add a way to iterate over all capabilities in a config space. Add a search function for getting a specific capability. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
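A minimal sketch of walking the standard PCI capability list in a cached config-space image; the helper name is hypothetical:

  #include <stdint.h>

  #define PCI_CAPABILITY_LIST    0x34    /* offset of the capabilities pointer */

  /* Each capability starts with an ID byte followed by the offset of the
   * next capability; an offset of 0 terminates the list. */
  static uint8_t pci_find_cap(const uint8_t *cfg, uint8_t cap_id)
  {
      uint8_t pos = cfg[PCI_CAPABILITY_LIST] & ~3u;

      while (pos) {
          if (cfg[pos] == cap_id)
              return pos;
          pos = cfg[pos + 1] & ~3u;
      }
      return 0;
  }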
-
Jean-Philippe Brucker authored
Introduce memory types RAM and DEVICE, along with a way for subsystems to query the global memory banks. This is required by VFIO, which will need to pin and map guest RAM so that assigned devices can safely do DMA to it. Depending on the architecture, the physical map is made of either one or two RAM regions. In addition, this new memory types API paves the way to reserved memory regions introduced in a subsequent patch. For the moment we put vesa and ivshmem memory into the DEVICE category, so they don't have to be pinned. This means that physical devices assigned with VFIO won't be able to DMA to the vesa frame buffer or ivshmem. In order to do that, simply changing the type to "RAM" would work. But to keep the types consistent, it would be better to introduce flags such as KVM_MEM_TYPE_DMA that would complement both RAM and DEVICE type. We could then reuse the API for generating firmware information (that is, for x86 bios; DT supports reserved-memory description). Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
Add helpers to add and remove IRQFD routing for both irqchips and MSIs. We have to make a special case of IRQ lines on ARM where the initialisation order goes like this: (1) Devices reserve their IRQ lines (2) VGIC is setup with VGIC_CTRL_INIT (in a late_init call) (3) MSIs are reserved lazily, when the guest needs them Since we cannot setup IRQFD before (2), store the IRQFD routing for IRQ lines temporarily until we're ready to submit them. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
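For reference, a hedged sketch of the two kernel interfaces involved in MSI routing: KVM_SET_GSI_ROUTING to install the route and KVM_IRQFD to attach the eventfd. Note that KVM_SET_GSI_ROUTING replaces the whole table, so real code keeps and resubmits every entry; error handling is also simplified here:

  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int add_msi_irqfd(int vm_fd, int event_fd, uint32_t gsi,
                           uint32_t addr_lo, uint32_t addr_hi, uint32_t data)
  {
      struct kvm_irqfd irqfd = { .fd = event_fd, .gsi = gsi };
      struct kvm_irq_routing *routing;
      int ret;

      routing = calloc(1, sizeof(*routing) + sizeof(struct kvm_irq_routing_entry));
      if (!routing)
          return -1;

      routing->nr = 1;    /* real code resubmits the full routing table */
      routing->entries[0] = (struct kvm_irq_routing_entry) {
          .gsi   = gsi,
          .type  = KVM_IRQ_ROUTING_MSI,
          .u.msi = {
              .address_lo = addr_lo,
              .address_hi = addr_hi,
              .data       = data,
          },
      };

      ret = ioctl(vm_fd, KVM_SET_GSI_ROUTING, routing);
      free(routing);
      if (ret < 0)
          return ret;

      return ioctl(vm_fd, KVM_IRQFD, &irqfd);
  }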
-
Jean-Philippe Brucker authored
Currently all our virtual device interrupts are edge-triggered. But we're going to need level-triggered interrupts when passing physical devices. Let the device configure its interrupt kind. Keep edge as default, to avoid changing existing users. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
When implementing PCI device passthrough, we will need to forward config accesses from a guest to the VFIO driver. Add a private cfg_ops structure to the PCI header, and use it in the PCI config access functions. A read from the guest first calls into the device's cfg_ops.read, to let the backend update the local header before filling the guest register. Same happens for a write, we let the backend perform the write and replace the guest-provided register with whatever sticks, before updating the local header. Try to untangle the PCI config access logic while we're at it. Reviewed-by:
Punit Agrawal <punit.agrawal@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com> [JPB: moved to a separate patch] Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
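A hedged sketch of the shape of such a callback table and of a config read that lets the backend refresh the cached header before the value is returned to the guest; names and signatures are illustrative:

  #include <stdint.h>
  #include <string.h>

  struct pci_device_header;    /* device's cached config space, kept by kvmtool */

  struct pci_config_operations {
      /* Backend hooks; either may be NULL. Signatures are illustrative. */
      void (*read)(struct pci_device_header *hdr, uint16_t offset, int sz);
      void (*write)(struct pci_device_header *hdr, uint16_t offset,
                    void *data, int sz);
  };

  static void pci_config_read(struct pci_device_header *hdr,
                              const struct pci_config_operations *ops,
                              const uint8_t *cached_cfg,
                              uint16_t offset, void *data, int sz)
  {
      if (ops->read)
          ops->read(hdr, offset, sz);           /* backend refreshes the header */
      memcpy(data, cached_cfg + offset, sz);    /* then serve from the local copy */
  }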
-
- 23 May, 2018 3 commits
-
-
Andre Przywara authored
The header files in arm/aarch*/include/asm/ are directly copied from Linux, so we can't just put our own definitions in there. Move the GICv2M MMIO frame size into a more private header, to avoid breaking the build once the header files are synced from Linux. Signed-off-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Andre Przywara authored
Currently we accidentally overlap the GICv2m MMIO frame with the CPU interface region. Fix this by moving the v2m frame below the CPUI region. Signed-off-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Andre Przywara authored
The KVM_VGIC_V3_ITS_SIZE macro from the Linux API header file already covers the doorbell page, so we don't need to add that extra page size in our code. Signed-off-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 06 Apr, 2018 3 commits
-
-
Jean-Philippe Brucker authored
Vhost supports a single eventfd as the kick mechanism. Registering a second one will override the first. To ensure vhost works with our virtio-pci, only register the kick eventfd that is used by the guest. Fixes: a508ea95 ("virtio/pci: Use port I/O for configuration registers by default") Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
virtio/pci.c registers a notification ioeventfd on both PIO and MMIO buses. But architectures other than x86 cannot differentiate MMIO from PIO traps, and the kernel always calls kvm_io_bus_read/write with KVM_MMIO_BUS as argument. As a result kvmtool's ioeventfd isn't used with virtio PCI, because the kernel can't find it and all accesses to the doorbell return to userspace. To fix it, don't set the PIO flag if the architecture doesn't support it. Fixes: a508ea95 ("virtio/pci: Use port I/O for configuration registers by default") Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
With vhost, the USER_POLL flag isn't passed to ioeventfd__add_event, so the function returns early and doesn't add the new event to the used_ioevents list. As a result ioeventfd__del_event doesn't remove the KVM event or free the structure. Always add the event to the list. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 19 Mar, 2018 2 commits
-
-
Jean-Philippe Brucker authored
The wmb() in next_desc() seems out of place and the comments are inaccurate. Remove the unnecessary barrier and clean up next_desc(). next_desc() is called by virt_queue__get_head_iov() when filling the iov with descriptor addresses. It reads the descriptor's flags and next index. virt_queue__get_head_iov() only reads the direct and indirect descriptors, and doesn't write any shared memory except the iov and cursors that will be read by the caller. As far as I can see, vhost (the kernel implementation of virtio devices) does fine without any barrier here, so I think it might be safe to remove. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jean-Philippe Brucker authored
One barrier seems to be missing from kvmtool's virtio implementation, between virt_queue__available() and virt_queue__pop(). In the following scenario "avail" represents the shared "available" structure in the virtio queue:

  Guest                          | Host
                                 |
  avail.ring[shadow] = desc_idx  | while (avail.idx != shadow)
  smp_wmb()                      |     /* missing smp_rmb() */
  avail.idx = ++shadow           |     desc_idx = avail.ring[shadow++]

If the host observes the avail.idx write before the avail.ring update, then it will fetch the wrong desc_idx. Add the missing barrier. This seems to fix the horrible bug I'm often seeing when running netperf in a guest (virtio-net + tap) on AMD Seattle. The TX thread reads the wrong descriptor index and either faults when accessing the TX buffer, or pushes the wrong index to the used ring. In that case the guest complains that "id %u is not a head!" and stops the queue. Signed-off-by:
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
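A hedged sketch of the fixed host-side read ordering; field names approximate kvmtool's and __sync_synchronize() stands in for its rmb()/mb() wrappers:

  #include <stdint.h>
  #include <linux/virtio_ring.h>

  struct vq_sketch {
      struct vring vring;
      uint16_t last_avail_idx;    /* "shadow" in the diagram above */
  };

  static uint16_t virt_queue_pop(struct vq_sketch *vq)
  {
      /* Pairs with the guest's smp_wmb() between writing avail.ring and
       * avail.idx: never read a ring entry older than the index we saw. */
      __sync_synchronize();

      return vq->vring.avail->ring[vq->last_avail_idx++ % vq->vring.num];
  }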
-