XR is an umbrella term encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) applications; devices supporting these forms of interactive applications can be referred to as XR devices. The XR SDK Display subsystem provides an interface for texture allocation, frame lifecycle, and blocking for cadence.
Several device SDKs require that a texture is allocated through the SDK itself rather than through the usual graphics APIs. If you use the XR SDK Display subsystem, you no longer need to rely on external plug-ins to blit or copy into the SDK texture.
The Display subsystem enables a plug-in provider to allocate the texture. Where possible, Unity renders directly to the texture to avoid unnecessary copies. Unity can also allocate the texture for you if needed.
In the following cases, Unity can't render directly to the texture, and instead renders to intermediate textures and then blits or copies to your texture:

- Unity renders with multisampling and the device doesn't support the EXT_multisampled_render_to_texture extension.
- The kUnityXRRenderTextureFlagsLockedWidthHeight flag is set and renderScale is not 1.0.
- The kUnityXRRenderTextureFlagsWriteOnly flag is set and Unity needs to read back from the texture.

On both PC and mobile, the engine always resolves to the provider's texture, performing either an implicit resolve (on mobile devices with the multisampled render to texture extension) or an explicit resolve.

On mobile, providers should enable the kUnityXRRenderTextureFlagsAutoResolve flag and create their textures with 1 sample.
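The mobile setup above can be sketched as follows. This is illustrative only: the flag names come from this page, but the struct layout and flag bit values are stand-ins, not the real declarations from the SDK headers.

```cpp
#include <cstdint>

// Stand-ins for the real SDK declarations; bit values are placeholders.
enum UnityXRRenderTextureFlags : uint32_t {
    kUnityXRRenderTextureFlagsLockedWidthHeight = 1u << 0,
    kUnityXRRenderTextureFlagsWriteOnly         = 1u << 1,
    kUnityXRRenderTextureFlagsAutoResolve       = 1u << 2,
};

struct RenderTextureDesc {       // stand-in for UnityXRRenderTextureDesc
    uint32_t width = 0, height = 0;
    uint32_t sampleCount = 1;
    uint32_t flags = 0;
};

// On mobile, request auto-resolve and create the texture with 1 sample,
// so MSAA is resolved implicitly via EXT_multisampled_render_to_texture.
RenderTextureDesc MakeMobileEyeTextureDesc(uint32_t w, uint32_t h) {
    RenderTextureDesc desc;
    desc.width = w;
    desc.height = h;
    desc.sampleCount = 1; // 1 sample: the driver handles the MSAA resolve
    desc.flags |= kUnityXRRenderTextureFlagsAutoResolve;
    return desc;
}
```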
Use UnityXRFrameSetupHints.appSetup.sRGB to check whether Unity expects to render to sRGB texture formats. The provider ultimately selects the output texture format via the colorFormat field of UnityXRRenderTextureDesc. If the format is an sRGB type, Unity turns sRGB writes on or off depending on the color space that the active project selects. You should always sample from sRGB textures with sRGB-to-linear conversion in your compositor.
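A minimal sketch of that format decision, with stand-in types (the real UnityXRFrameSetupHints and format enums come from the SDK headers; the format names here are hypothetical):

```cpp
// Hypothetical color formats; real providers would use their graphics
// API's format enums (e.g. Vulkan or GL formats).
enum ColorFormat { kFormatRGBA8_UNorm, kFormatRGBA8_sRGB };

// Stand-ins mirroring UnityXRFrameSetupHints.appSetup.sRGB from this page.
struct AppSetup { bool sRGB = true; };
struct FrameSetupHints { AppSetup appSetup; };

// The provider owns the final choice of output format; Unity then
// toggles sRGB writes to match the project's color space.
ColorFormat SelectColorFormat(const FrameSetupHints& hints) {
    return hints.appSetup.sRGB ? kFormatRGBA8_sRGB : kFormatRGBA8_UNorm;
}
```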
If your SDK needs depth information, you can obtain the depth buffer (a memory store that holds the z-value depth of each rendered pixel) in the same way as the color buffer above. The nativeDepthTex value on the UnityXRRenderTextureDesc specifies the native resource. By default, Unity tries to share the depth buffer between textures with a similar desc if nativeDepthTex is set to kUnityXRRenderTextureIdDontCare.
If your SDK doesn't need depth information, set UnityXRRenderTextureDesc::depthFormat to kUnityXRDepthTextureFormatNone to avoid unnecessary resolves.
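The two depth cases above can be sketched like this. The constant and field names follow this page, but the struct is a stand-in and the non-None depth format value is an assumption, not the exact SDK enum:

```cpp
#include <cstdint>

// Stand-ins modeled on the names used in this page.
enum DepthFormat {
    kUnityXRDepthTextureFormatNone,
    kUnityXRDepthTextureFormat24bitOrGreater  // assumed name for "has depth"
};
constexpr uint32_t kUnityXRRenderTextureIdDontCare = 0; // placeholder value

struct RenderTextureDesc {       // stand-in for UnityXRRenderTextureDesc
    DepthFormat depthFormat = kUnityXRDepthTextureFormatNone;
    uint32_t nativeDepthTex = kUnityXRRenderTextureIdDontCare;
};

RenderTextureDesc MakeDesc(bool sdkNeedsDepth) {
    RenderTextureDesc desc;
    if (sdkNeedsDepth) {
        desc.depthFormat = kUnityXRDepthTextureFormat24bitOrGreater;
        // DontCare lets Unity share a depth buffer between similar descs.
        desc.nativeDepthTex = kUnityXRRenderTextureIdDontCare;
    } else {
        // No depth needed: request none to avoid unnecessary resolves.
        desc.depthFormat = kUnityXRDepthTextureFormatNone;
    }
    return desc;
}
```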
During submission (see the Submitting frames in-flight section below), you can specify a different texture ID each frame to handle the case where the SDK needs to double- or triple-buffer the images that Unity renders to. The provider plug-in is responsible for managing the collection of UnityXRRenderTextureIds.
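One common way a provider manages that collection is a simple ring of pre-created texture IDs, handing Unity a different one each frame. This is a hypothetical sketch, not an SDK API; only the UnityXRRenderTextureId name comes from this page:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

using UnityXRRenderTextureId = uint32_t; // stand-in for the SDK typedef

// Hypothetical triple-buffer ring: the provider registers three texture
// IDs up front and cycles through them, so the compositor can still read
// last frame's image while Unity renders into the next one.
class EyeTextureRing {
public:
    explicit EyeTextureRing(std::array<UnityXRRenderTextureId, 3> ids)
        : m_Ids(ids) {}

    // Called once per frame (e.g. from PopulateNextFrameDesc) to pick
    // the texture ID Unity should render to next.
    UnityXRRenderTextureId AcquireNext() {
        UnityXRRenderTextureId id = m_Ids[m_Cursor];
        m_Cursor = (m_Cursor + 1) % m_Ids.size();
        return id;
    }

private:
    std::array<UnityXRRenderTextureId, 3> m_Ids;
    std::size_t m_Cursor = 0;
};
```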
Two methods are responsible for the lifecycle of a frame: PopulateNextFrameDesc, which runs right before rendering begins, and SubmitCurrentFrame, which runs immediately after rendering completes. Both methods are called on the graphics thread.
During PopulateNextFrameDesc, the display provider is expected to do the following:

- Wait for frame cadence, if the provider didn't already block in SubmitCurrentFrame.
- Specify the textures to render to and the layout of the render passes for the next frame via the nextFrame parameter.

During the SubmitCurrentFrame method, the display provider is expected to do the following:

- Submit the current frame's rendered textures to the device's compositor.
- Wait for frame cadence, if the provider doesn't block in PopulateNextFrameDesc.

To maintain the lowest possible latency and maximal throughput when rendering to an HMD display, you need to ensure precise timing when you obtain poses and submit textures. Each HMD has a native refresh rate that its compositor runs at. Rendering any faster than that rate results in a sub-optimal experience because of mismatched timing or redundant work.
Unity expects the display provider to block, or wait for frame cadence, during the frame lifecycle. Unity starts submitting rendering commands shortly after ‘waking up’ from the blocking call. You should synchronize the wake-up time to your compositor within a particular window. Some SDKs provide a floating wake-up time window based on heuristics. Oculus calls this the “queue ahead” (see Oculus developer documentation for more information). Valve calls it “running start” (see slides 18 and 19 of this presentation).
Unity waits on the frame lifecycle to complete before it starts submitting pose-dependent graphics commands.
Providers can wait for cadence either in PopulateNextFrameDesc or in SubmitCurrentFrame.
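A schematic of the two graphics-thread callbacks and the single cadence wait might look like this. Everything here is a sketch: the method names come from this page, but the class shape, the waitInSubmit policy flag, and WaitForCompositorCadence are placeholders for your SDK's own calls.

```cpp
// Schematic display provider: blocks for cadence in exactly one of the
// two frame-lifecycle callbacks, per the rules described above.
struct DisplayProvider {
    bool waitInSubmit = true; // provider policy: where to block for cadence

    void SubmitCurrentFrame() {
        // Hand the textures rendered this frame to the compositor here
        // (SDK-specific call, omitted).
        if (waitInSubmit)
            WaitForCompositorCadence();
        ++framesSubmitted;
    }

    void PopulateNextFrameDesc(/* UnityXRNextFrameDesc* nextFrame */) {
        if (!waitInSubmit)
            WaitForCompositorCadence();
        // Obtain poses and fill nextFrame's textures and render passes here.
        ++framesPopulated;
    }

    // Placeholder for the SDK's blocking wait; here it only counts calls.
    void WaitForCompositorCadence() { ++cadenceWaits; }

    int framesSubmitted = 0, framesPopulated = 0, cadenceWaits = 0;
};
```

The point of the structure is that the wait happens exactly once per frame, regardless of which callback the provider chooses.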
While Unity submits graphics commands for a frame on the graphics thread, the next frame's simulation loop is running on the main thread. The simulation loop contains physics, script logic, and so on. PopulateNextFrameDesc is called on the graphics thread after all rendering commands have been submitted, and only after the simulation of the next frame and all graphics jobs scheduled on it are complete. One of the graphics jobs that PopulateNextFrameDesc waits for is SubmitCurrentFrame for the current frame, which is why it's valid to wait for cadence in SubmitCurrentFrame. Furthermore, Unity doesn't start rendering until PopulateNextFrameDesc is complete.
With these details in mind, there are trade-offs to waiting for cadence in SubmitCurrentFrame as opposed to PopulateNextFrameDesc. For example, waiting for cadence in SubmitCurrentFrame can cause performance issues if the application schedules expensive graphics jobs during simulation. Because SubmitCurrentFrame is scheduled to run after rendering, the graphics jobs that the application scheduled run after SubmitCurrentFrame but before PopulateNextFrameDesc. In this case, the provider waits in SubmitCurrentFrame, then wakes up expecting Unity to begin rendering. However, Unity first processes the graphics jobs the application scheduled, and only then calls PopulateNextFrameDesc, which in turn allows Unity to start rendering. This delay between waking up for rendering and processing the graphics jobs scheduled in the update method can introduce latency. Developers can mitigate this by scheduling their graphics jobs after rendering, which ensures the graphics jobs run before SubmitCurrentFrame.
While waiting for cadence in SubmitCurrentFrame allows graphics jobs to run in parallel with the main thread, waiting for cadence in PopulateNextFrameDesc blocks the Unity main thread entirely. This is acceptable because simulation and other graphics jobs have already completed. Problems might occur when the simulation or the graphics thread takes too much time and exceeds the device's target frame rate. This can cause frame rates to be cut in half while PopulateNextFrameDesc waits for the next cycle in the cadence.
When Unity calls SubmitCurrentFrame, the textures that you set up last frame have been rendered to, or Unity has submitted the render commands that will render them to the graphics driver. Unity is now done with those textures, and you can pass them on to your compositor.
After blocking, or after acquiring the next frame to render to, you must tell Unity which textures to render to in the next frame and the layout of the render passes (see Render Passes below).
A UnityXRRenderPass can involve a culling pass and a scene graph traversal. This is a resource-intensive operation, and you should try to limit the number of times Unity performs it with techniques such as single-pass rendering.
Each UnityXRRenderPass contains an output texture (which can be a texture array) and output UnityXRRenderParams, such as the view and projection matrices, the rect to render to, or the texture array slice.
For each frame, the display provider sets up a UnityXRRenderPass and fills out the UnityXRRenderTextureIds that Unity will render to in the next frame.
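As an illustration, a single-pass stereo frame could be described as one render pass targeting a 2-slice texture array, with one set of render params per eye. The types below are stand-ins modeled on the names in this page (UnityXRNextFrameDesc, UnityXRRenderPass, UnityXRRenderParams), not the actual header definitions:

```cpp
#include <cstdint>
#include <vector>

using TextureId = uint32_t; // stand-in for UnityXRRenderTextureId

// Illustrative stand-ins; field names follow this page's descriptions.
struct RenderParams {
    int textureArraySlice = 0; // plus view/projection matrices, rect, etc.
};
struct RenderPass {
    TextureId textureId = 0;
    int cullingPassIndex = 0;
    std::vector<RenderParams> renderParams;
};
struct NextFrameDesc {
    std::vector<RenderPass> renderPasses;
};

// Single-pass stereo sketch: one pass, one 2-slice texture array,
// both eyes sharing a single culling pass.
NextFrameDesc MakeSinglePassFrame(TextureId eyeArrayTex) {
    NextFrameDesc frame;
    RenderPass pass;
    pass.textureId = eyeArrayTex;
    pass.cullingPassIndex = 0;          // one culling pass for both eyes
    pass.renderParams = { {0}, {1} };   // slice 0 = left eye, slice 1 = right
    frame.renderPasses.push_back(pass);
    return frame;
}
```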
Use cases for UnityXRRenderPass include the following:
The API supports these additional cases, but Unity might not currently handle them correctly:
It’s safe to make these assumptions:
Note: The Unity project and the XR SDK must use the same setting (enabled or disabled) for single-pass rendering, because this setting affects user shaders. To check whether single-pass rendering is enabled, use UnityXRFrameSetupHints.appSetup.singlePassRendering.
Two render passes can share a culling pass if their cullingPassIndex values are set to the same value. The cullingPassIndex selects which UnityXRCullingPass to use. Culling passes must be filled out in UnityXRNextFrameDesc.
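The sharing rule above can be sketched with two per-eye passes pointing at the same culling pass index. As before, the types are stand-ins for the SDK's UnityXRRenderPass, UnityXRCullingPass, and UnityXRNextFrameDesc:

```cpp
#include <vector>

// Stand-in types modeled on the names in this page.
struct RenderPass  { int cullingPassIndex = 0; };
struct CullingPass { /* culling frustum parameters, etc. */ };
struct NextFrameDesc {
    std::vector<RenderPass>  renderPasses;
    std::vector<CullingPass> cullingPasses;
};

// Multi-pass stereo sketch: two render passes (one per eye) that share
// one culling pass by using the same cullingPassIndex.
NextFrameDesc MakeTwoEyeFrameSharedCulling() {
    NextFrameDesc frame;
    frame.cullingPasses.resize(1);      // one shared culling pass, index 0
    frame.renderPasses = { {0}, {0} };  // both passes reference index 0
    return frame;
}
```

Sharing the culling pass avoids running the resource-intensive culling and scene graph traversal twice for views that are close enough to use the same culling results.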