
Create a DirectX staging texture backed by a custom RAM allocation

I am the author of Looking Glass (https://looking-glass.io) and I am looking for a way to improve our DXGI Desktop Duplication capture performance. This question is specifically about how to avoid an extra CPU memory copy as part of our pipeline.

If you are not familiar with Looking Glass: we use a virtual hardware device (IVSHMEM) to map a block of shared memory into a Windows virtual machine, then use this shared RAM to pass the captured desktop back to the host so that it can be rendered on screen. We do this so that we can take the video output of a GPU that has been passed through to the guest by means of VFIO and integrate it into the Linux desktop.

Currently, our pipeline works as follows (a rough code sketch of these steps follows the list):

  1. AcquireNextFrame
  2. CopyResource to staging texture (system RAM)
  3. Map the staging texture (ID3D11DeviceContext::Map)
  4. memcpy to IVSHMEM memory
  5. Unmap the staging texture
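
To make the cost concrete, here is a minimal sketch of that per-frame path. It assumes an existing IDXGIOutputDuplication instance, a pre-created staging texture (D3D11_USAGE_STAGING, D3D11_CPU_ACCESS_READ) and an already-mapped pointer to the IVSHMEM region; the function name, parameters, pitch handling and 32-bit pixel format are illustrative assumptions, not our actual code:

    // Rough sketch of the current per-frame path. The names, the destination
    // pitch and the 32-bit pixel format are assumptions for illustration only.
    #include <d3d11.h>
    #include <dxgi1_2.h>
    #include <cstdint>
    #include <cstring>

    HRESULT CaptureFrame(ID3D11DeviceContext    *context,
                         IDXGIOutputDuplication *dup,
                         ID3D11Texture2D        *staging,  // D3D11_USAGE_STAGING, CPU read
                         void                   *ivshmem,  // mapped IVSHMEM shared memory
                         size_t                  dstPitch) // row pitch used in shared memory
    {
      DXGI_OUTDUPL_FRAME_INFO info;
      IDXGIResource *res = nullptr;

      // 1. AcquireNextFrame
      HRESULT hr = dup->AcquireNextFrame(1000, &info, &res);
      if (FAILED(hr))
        return hr;

      ID3D11Texture2D *frame = nullptr;
      res->QueryInterface(__uuidof(ID3D11Texture2D), (void **)&frame);
      res->Release();

      // 2. CopyResource: GPU frame -> staging texture in system RAM
      context->CopyResource(staging, frame);
      frame->Release();

      // 3. Map the staging texture for CPU read access
      D3D11_MAPPED_SUBRESOURCE map;
      hr = context->Map(staging, 0, D3D11_MAP_READ, 0, &map);
      if (SUCCEEDED(hr))
      {
        D3D11_TEXTURE2D_DESC desc;
        staging->GetDesc(&desc);

        // 4. The extra CPU copy this question is about: staging RAM -> IVSHMEM
        //    RAM, row by row because the two pitches may differ
        uint8_t       *dst = (uint8_t *)ivshmem;
        const uint8_t *src = (const uint8_t *)map.pData;
        for (UINT y = 0; y < desc.Height; ++y)
          memcpy(dst + y * dstPitch, src + y * map.RowPitch,
                 desc.Width * 4); // assuming 4 bytes per pixel (e.g. BGRA8)

        // 5. Unmap the staging texture
        context->Unmap(staging, 0);
      }

      dup->ReleaseFrame();
      return hr;
    }

Step 4 is the copy I would like to eliminate.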

Is it possible to avoid this extra copy by creating a DX11 staging texture backed by the IVSHMEM device directly, removing the need to copy the texture again?

I.e.:

  1. AcquireNextFrame
  2. CopyResource to a texture backed by IVSHMEM RAM (sketched hypothetically below)
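
To be clear about what I am hoping for, here is a purely hypothetical sketch. CreateStagingTextureOverMemory is not, as far as I know, part of the public D3D11 API; it stands in for whatever userspace or driver-level mechanism (if any) could create a staging texture whose storage is the mapped IVSHMEM BAR:

    // Hypothetical only: CreateStagingTextureOverMemory is NOT a real D3D11
    // call, it represents whatever mechanism (userspace or the IVSHMEM kernel
    // driver) could back a staging texture with the mapped IVSHMEM BAR.
    #include <d3d11.h>
    #include <dxgi1_2.h>

    HRESULT CreateStagingTextureOverMemory(ID3D11Device *device,
                                           const D3D11_TEXTURE2D_DESC *desc,
                                           void *ivshmem, size_t pitch,
                                           ID3D11Texture2D **texture); // imagined

    HRESULT CaptureFrameDirect(ID3D11DeviceContext    *context,
                               IDXGIOutputDuplication *dup,
                               ID3D11Texture2D        *ivshmemStaging) // IVSHMEM-backed
    {
      DXGI_OUTDUPL_FRAME_INFO info;
      IDXGIResource *res = nullptr;

      // 1. AcquireNextFrame
      HRESULT hr = dup->AcquireNextFrame(1000, &info, &res);
      if (FAILED(hr))
        return hr;

      ID3D11Texture2D *frame = nullptr;
      res->QueryInterface(__uuidof(ID3D11Texture2D), (void **)&frame);
      res->Release();

      // 2. CopyResource lands directly in IVSHMEM; no Map/memcpy/Unmap needed
      context->CopyResource(ivshmemStaging, frame);

      frame->Release();
      dup->ReleaseFrame();
      return hr;
    }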

If this needs to be done in kernel space (in the IVSHMEM driver), that is possible, but a userspace method of doing this would be preferable.

Note, the IVSHMEM device is simply a dumb virtual device that provides the shared memory as one of its base address registers (BARs).