To optimize a render graph, merge or reduce the number of render passes. The more render passes you have, the more data the CPU and GPU need to store and retrieve from memory. This slows down rendering, especially on devices that use tile-based deferred rendering (TBDR).
If you need a copy of the color or depth buffers, avoid copying them yourself if you can. Use the copies URP creates by default instead, to avoid creating unnecessary render passes.
Use the ConfigureInput API to make sure URP generates the texture you need in the frame data.
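For example, a minimal sketch of a custom render pass that requests the color and depth copies in its constructor (the class name is illustrative):

```csharp
using UnityEngine.Rendering.Universal;

// Sketch: a pass that tells URP which frame data textures it needs.
class RequestCopiesPass : ScriptableRenderPass
{
    public RequestCopiesPass()
    {
        renderPassEvent = RenderPassEvent.AfterRenderingOpaques;

        // Ask URP to generate the color and depth copies, so this pass
        // can read cameraOpaqueTexture and cameraDepthTexture from the
        // frame data instead of copying the buffers itself.
        ConfigureInput(ScriptableRenderPassInput.Color | ScriptableRenderPassInput.Depth);
    }
}
```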
To check if URP creates copies during the frame that you can use, check for the following passes in the Render Graph Viewer:

- A pass that copies _CameraTargetAttachment to cameraOpaqueTexture in the frame data.
- A pass that copies _CameraDepthAttachment to cameraDepthTexture in the frame data.

Use the Render Graph Viewer to check the reason why URP can't merge render passes, and fix the issue if you can. On devices that use tile-based deferred rendering (TBDR), merging passes helps the device use less energy and run for longer.
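If URP creates these copies, a pass can read them from the frame data instead of blitting its own. A minimal sketch, assuming the copies were requested earlier in the frame:

```csharp
public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    var resourceData = frameData.Get<UniversalResourceData>();

    // Reuse the copies URP already created this frame.
    TextureHandle opaqueCopy = resourceData.cameraOpaqueTexture;
    TextureHandle depthCopy = resourceData.cameraDepthTexture;

    // The handles are only valid if something requested the copies,
    // for example with the ConfigureInput API.
    if (!opaqueCopy.IsValid() || !depthCopy.IsValid())
        return;

    // ... record a pass that samples opaqueCopy and depthCopy ...
}
```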
You can do the following to make sure URP merges render passes:

- Use the AddRasterRenderPass API instead of other types of render pass as much as possible.
- To read the current framebuffer in a render pass, use the SetInputAttachment API and the LOAD_FRAMEBUFFER_X_INPUT macro. For more information, refer to Get the current framebuffer from GPU memory.
- Don't create unnecessary render passes to organize your code into smaller, more manageable chunks. Each render pass you create requires more processing time on the CPU.
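For example, a raster render pass that URP can merge with neighboring passes might look like the following sketch. The pass data class, material field, and pass name are illustrative:

```csharp
class PassData
{
    public Material material;
}

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    var resourceData = frameData.Get<UniversalResourceData>();

    // Prefer AddRasterRenderPass so URP can merge this pass with
    // adjacent raster passes that render to the same target.
    using (var builder = renderGraph.AddRasterRenderPass<PassData>("MyRasterPass", out var passData))
    {
        passData.material = m_Material; // assumed to be assigned elsewhere

        // Render to the camera's active color texture.
        builder.SetRenderAttachment(resourceData.activeColorTexture, 0);

        builder.SetRenderFunc((PassData data, RasterGraphContext context) =>
        {
            // Draw a full-screen triangle with the material.
            context.cmd.DrawProcedural(Matrix4x4.identity, data.material, 0, MeshTopology.Triangles, 3);
        });
    }
}
```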
To write combined render passes, you can use the AddUnsafePass API and Compatibility Mode APIs such as SetRenderTarget, but rendering might be slower because URP can't optimize the render pass. For more information, refer to Use Compatibility Mode APIs in render graph passes.
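As a sketch, an unsafe pass that falls back to the Compatibility Mode SetRenderTarget API might look like the following. The names are illustrative, and URP can't merge this pass with others:

```csharp
class UnsafePassData
{
    public TextureHandle target;
}

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    var resourceData = frameData.Get<UniversalResourceData>();

    using (var builder = renderGraph.AddUnsafePass<UnsafePassData>("MyUnsafePass", out var passData))
    {
        passData.target = resourceData.activeColorTexture;
        builder.UseTexture(passData.target, AccessFlags.Write);

        builder.SetRenderFunc((UnsafePassData data, UnsafeGraphContext context) =>
        {
            // Compatibility Mode commands need a regular CommandBuffer.
            CommandBuffer cmd = CommandBufferHelpers.GetNativeCommandBuffer(context.cmd);
            cmd.SetRenderTarget(data.target);
            cmd.ClearRenderTarget(false, true, Color.black);
        });
    }
}
```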
To avoid creating two render passes that blit from and to the camera color texture, use the ContextContainer object to read and write to the color buffer directly.
For example:
public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    // Fetch the frame data textures
    var resourceData = frameData.Get<UniversalResourceData>();

    // Set the source as the color texture the camera currently targets
    var source = resourceData.activeColorTexture;

    // Create a destination texture with the same dimensions as the source
    var destinationDescriptor = renderGraph.GetTextureDesc(source);
    destinationDescriptor.name = "DestinationTexture";
    destinationDescriptor.clearBuffer = false;
    TextureHandle destination = renderGraph.CreateTexture(destinationDescriptor);

    // Use the AddBlitPass API to create a simple blit from the source to the destination
    RenderGraphUtils.BlitMaterialParameters parameters = new(source, destination, BlitMaterial, 0);
    renderGraph.AddBlitPass(parameters, passName: "MyRenderPass");

    // Set the main color texture for the camera as the destination texture
    resourceData.cameraColor = destination;
}