Good performance is critical to the success of many games. Below are some simple guidelines for maximizing the speed of your game’s rendering.
The graphical parts of your game primarily place load on two systems of the computer: the GPU and the CPU. The first rule of any optimization is to find where the performance problem is, because strategies for optimizing for the GPU and the CPU are quite different (and can even be opposite - for example, it’s quite common to make the GPU do more work while optimizing for the CPU, and vice versa).
Common bottlenecks are the GPU being limited by fill rate or memory bandwidth, and the CPU being limited by the number of batches that need to be rendered. Less common bottlenecks include the GPU or CPU having too many vertices to process.
To render objects on the screen, the CPU has a lot of processing work to do: working out which lights affect that object, setting up the shader and shader parameters, and sending drawing commands to the graphics driver, which then prepares the commands to be sent off to the graphics card.
All this “per object” CPU usage is resource-intensive, so if you have lots of visible objects, it can add up. For example, if you have a thousand triangles, it is much easier on the CPU if they are all in one mesh, rather than in one mesh per triangle (adding up to 1000 meshes). The cost of both scenarios on the GPU is very similar, but the work done by the CPU to render a thousand objects (instead of one) is significantly higher.
Reduce the visible object count. To reduce the amount of work the CPU needs to do:
Combine objects together so that each mesh has at least several hundred triangles and uses only one Material for the entire mesh (see the mesh-combining sketch below). Note that combining two objects which don’t share a material does not give you any performance increase at all. The most common reason for requiring multiple materials is that two meshes don’t share the same textures; to optimize CPU performance, ensure that any objects you combine share the same textures.
When using many pixel lights in the Forward rendering path, there are situations where combining objects may not make sense. See the Lighting performance section below to learn how to manage this.
Use OnDemandRendering to improve CPU performance by controlling your application’s rendering speed.
You might want to lower the frame rate in scenarios such as menus or pause screens, turn-based games that are waiting for player input, and applications whose content is mostly static.
Adjusting the rendering speed helps you manage power usage and device thermals to maximize battery life and prevent CPU throttling. It works particularly well with the Adaptive Performance package. Even though frames don’t render as often, the application still sends events to scripts at a normal pace (for example, it might receive input during a frame that isn’t rendered). To prevent input lag, you can set OnDemandRendering.renderFrameInterval = 1 for the duration of the input so that movements, button presses, and so on still appear responsive.
Situations that are heavy in areas such as scripting, physics, or animation, but not rendering, don’t benefit from using this API. Your application’s visuals might stutter with minimal impact on power usage.
Note: VR applications don’t support On Demand Rendering. Not rendering every frame causes the visuals to be out of sync with head movement and might increase the risk of motion sickness.
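As a rough illustration of the OnDemandRendering technique above (a minimal sketch, not Unity’s own sample), the script below lowers the rendering rate while an idle menu is shown and restores it whenever input arrives; the MenuRenderRate class name and the interval of 3 are arbitrary choices for this example.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: render every 3rd frame while an idle menu is shown,
// but drop back to every frame while the player is providing input.
public class MenuRenderRate : MonoBehaviour
{
    const int MenuFrameInterval = 3;   // arbitrary example value

    void OnEnable()
    {
        OnDemandRendering.renderFrameInterval = MenuFrameInterval;
    }

    void Update()
    {
        // Events still arrive every frame even when the frame isn't rendered,
        // so we can react to input immediately and avoid visible lag.
        bool inputActive = Input.anyKey || Input.touchCount > 0;
        OnDemandRendering.renderFrameInterval = inputActive ? 1 : MenuFrameInterval;
    }

    void OnDisable()
    {
        OnDemandRendering.renderFrameInterval = 1;   // back to full-rate rendering
    }
}
```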
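Returning to the first point above about combining objects: the following is an assumed, simplified sketch of merging child meshes that share a single material into one mesh. A production version would need to handle multiple materials, lightmap UVs, and vertex limits more carefully.

```csharp
using UnityEngine;

// Sketch: merge all child meshes (assumed to share one material) into a
// single mesh so the CPU issues far fewer per-object draw commands.
// Assumes this component sits on an empty parent GameObject at the origin.
// Note: the default 16-bit mesh index format limits this to ~65k vertices.
public class MeshCombiner : MonoBehaviour
{
    void Start()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        if (filters.Length == 0) return;

        var combine = new CombineInstance[filters.Length];
        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            combine[i].transform = filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false);   // hide the originals
        }

        var combined = new Mesh();
        combined.CombineMeshes(combine);   // merges everything into one mesh

        gameObject.AddComponent<MeshFilter>().sharedMesh = combined;
        gameObject.AddComponent<MeshRenderer>().sharedMaterial =
            filters[0].GetComponent<MeshRenderer>().sharedMaterial;
    }
}
```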
There are two basic rules for optimizing the geometry of a Model: don’t use any more triangles than necessary, and keep the number of UV mapping seams and hard edges (doubled-up vertices) as low as possible.
Note that the actual number of vertices that graphics hardware has to process is usually not the same as the number reported by a 3D application. Modeling applications usually display the number of distinct corner points that make up a model (known as the geometric vertex count). For a graphics card, however, some geometric vertices need to be split into two or more logical vertices for rendering purposes. A vertex must be split if it has multiple normals, UV coordinates or vertex colors. Consequently, the vertex count in Unity is usually higher than the count given by the 3D application.
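To see the figure Unity actually uses, a quick check (an assumed helper, not part of the manual) is to log the mesh’s runtime vertex count:

```csharp
using UnityEngine;

// Logs the GPU-side vertex and triangle counts of this object's mesh, which
// may be higher than what your modeling package reports because of split
// normals, UV seams, and vertex colors.
public class VertexCountReport : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        Debug.Log($"{mesh.name}: {mesh.vertexCount} vertices, {mesh.triangles.Length / 3} triangles");
    }
}
```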
While the amount of geometry in the Models is mostly relevant for the GPU, some features in Unity also process Models on the CPU (for example, Mesh skinning).
For more tips on improving performance while creating Assets in 3D applications outside of Unity, see Modeling characters for optimal performance.
The fastest option is always to create lighting that doesn’t need to be computed at all. To do this, use Lightmapping to “bake” static lighting just once, instead of computing it each frame. The process of generating a lightmapped environment takes only a little longer than just placing a light in the scene in Unity, but a lightmapped scene runs significantly faster and usually looks better, because the lightmapper can bake global illumination and smooth the results.
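As a small editor-side sketch (using standard UnityEditor APIs rather than a snippet from this page, and assuming a recent Unity version where the flag is called ContributeGI), you can mark selected objects as contributing to global illumination and start an asynchronous bake from a menu item:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch (place in an "Editor" folder): flag the selected
// GameObjects as contributing to global illumination, then start a bake.
public static class BakeStaticLighting
{
    [MenuItem("Tools/Bake Static Lighting")]
    static void Bake()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.ContributeGI);
        }
        Lightmapping.BakeAsync();   // runs the lightmap bake in the background
    }
}
```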
In many cases you can apply simple tricks instead of adding multiple extra lights. For example, instead of adding a light that shines straight into the camera to give a Rim Lighting effect, add a dedicated Rim Lighting computation directly into your shaders (see Surface Shader Examples to learn how to do this).
Also see: Forward rendering.
Per-pixel dynamic lighting adds significant rendering work to every affected pixel, and can lead to objects being rendered in multiple passes. Avoid having more than one Pixel Light illuminating any single object on less powerful devices, like mobile or low-end PC GPUs, and use lightmaps to light static objects instead of calculating their lighting every frame. Per-vertex dynamic lighting can add significant work to vertex transformations, so try to avoid situations where multiple lights illuminate a single object.
Avoid combining meshes that are far enough apart to be affected by different sets of pixel lights. When you use pixel lighting, each mesh has to be rendered as many times as there are pixel lights illuminating it. If you combine two meshes that are very far apart, it increases the effective size of the combined object. All pixel lights that illuminate any part of this combined object are taken into account during rendering, so the number of rendering passes that need to be made could be increased. Generally, the number of passes that must be made to render the combined object is the sum of the number of passes for each of the separate objects, so nothing is gained by combining them.
During rendering, Unity finds all lights surrounding a mesh and calculates which of those lights affect it most. The settings on the Quality window are used to modify how many of the lights end up as pixel lights, and how many as vertex lights. Each light calculates its importance based on how far away it is from the mesh and how intense its illumination is - and some lights are more important than others purely from the game context. For this reason, every light has a Render Mode setting which can be set to Important or Not Important; lights marked as Not Important have a lower rendering overhead.
Example: Consider a driving game in which the player’s car is driving in the dark with headlights switched on. The headlights are probably the most visually significant light source in the game, so their Render Mode should be set to Important. There may be other lights in the game that are less important, like other cars’ rear lights or distant lampposts, and which don’t improve the visual effect much by being pixel lights. The Render Mode for such lights can safely be set to Not Important to avoid wasting rendering capacity in places where it has little benefit.
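A hypothetical helper that applies this from script might look like the sketch below; in the scripting API, LightRenderMode.ForcePixel corresponds to Important and ForceVertex to Not Important.

```csharp
using UnityEngine;

// Sketch: prioritize the visually important headlights as per-pixel lights
// and demote background lights so they never cost a per-pixel pass.
public class LightPriorities : MonoBehaviour
{
    public Light[] headlights;       // e.g. the player's car headlights
    public Light[] backgroundLights; // rear lights, distant lampposts, etc.

    void Start()
    {
        foreach (Light l in headlights)
            l.renderMode = LightRenderMode.ForcePixel;   // "Important"

        foreach (Light l in backgroundLights)
            l.renderMode = LightRenderMode.ForceVertex;  // "Not Important"
    }
}
```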
Optimizing per-pixel lighting saves both the CPU and GPU work: the CPU has fewer draw calls to do, and the GPU has fewer vertices to process and pixels to rasterize for all the additional object renders.
Use Compressed textures to decrease the size of your textures. This can result in faster load times, a smaller memory footprint, and dramatically increased rendering performance. Compressed textures only use a fraction of the memory bandwidth needed for uncompressed 32-bit RGBA textures.
Always enable Generate mipmaps for textures used in a 3D scene. A mipmapped texture enables the GPU to use a lower resolution texture for smaller triangles. This is similar to how texture compression can help limit the amount of texture data transferred when the GPU is rendering.
The only exception to this rule is when a texel (texture pixel) is known to map 1:1 to the rendered screen pixel, as with UI elements or in a 2D game.
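One way to enforce both settings automatically is an AssetPostprocessor placed in an Editor folder. This is a sketch using the standard TextureImporter API, and the sprite check is simply an illustrative way to honor the 1:1 UI exception mentioned above.

```csharp
using UnityEditor;

// Editor-only sketch: enable mipmaps and compression for every imported
// texture, skipping sprites that are typically drawn 1:1 on screen.
public class TextureImportDefaults : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        var importer = (TextureImporter)assetImporter;

        if (importer.textureType == TextureImporterType.Sprite)
            return;   // UI / 2D sprites usually map 1:1 to screen pixels

        importer.mipmapEnabled = true;   // "Generate Mip Maps"
        importer.textureCompression = TextureImporterCompression.Compressed;
    }
}
```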
Culling objects involves making objects invisible. This is an effective way to reduce both the CPU and GPU load.
In many games, a quick and effective way to do this without compromising the player experience is to cull small objects more aggressively than large ones. For example, small rocks and debris could be made invisible at long distances, while large buildings would still be visible.
There are a number of ways you can achieve this:
Use the Level Of Detail (LOD) system
Manually set per-layer culling distances on the camera
Put small objects into a separate layer and set up per-layer cull distances using the Camera.layerCullDistances script function
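For the last option, a minimal sketch might look like this; the “SmallProps” layer name and 50-metre distance are assumptions for the example, and an entry of 0 means that layer uses the camera’s far clip plane.

```csharp
using UnityEngine;

// Sketch: cull everything on a dedicated small-object layer at 50 m,
// while all other layers keep the camera's far clip plane.
[RequireComponent(typeof(Camera))]
public class SmallObjectCulling : MonoBehaviour
{
    void Start()
    {
        float[] distances = new float[32];   // one entry per layer; 0 = far clip plane
        int layer = LayerMask.NameToLayer("SmallProps");
        if (layer >= 0)
            distances[layer] = 50f;

        GetComponent<Camera>().layerCullDistances = distances;
    }
}
```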
Realtime shadows are nice, but they can have a high impact on performance, both in terms of extra draw calls for the CPU and extra processing on the GPU. For further details, see the Light Performance page.
Different platforms have vastly different performance capabilities; a high-end PC GPU can handle much more in terms of graphics and shaders than a low-end mobile GPU. The same is true even on a single platform; a fast GPU is dozens of times faster than a slow integrated GPU.
GPU performance on mobile platforms and low-end PCs is likely to be much lower than on your development machine. It’s recommended that you manually optimize your shaders to reduce calculations and texture reads, in order to get good performance across low-end GPU machines. For example, some built-in Unity shaders have “mobile” equivalents that are much faster, but have some limitations or approximations.
Below are some guidelines for mobile and low-end PC graphics cards:
Transcendental mathematical functions (such as pow, exp, log, cos, sin, tan) are quite resource-intensive, so avoid using them where possible. Consider using lookup textures as an alternative to complex math calculations if applicable (see the sketch after these guidelines).
Avoid writing your own operations (such as normalize, dot, inversesqrt). Unity’s built-in options ensure that the driver can generate much better code. Remember that the Alpha Test (discard) operation often makes your fragment shader slower.
While the precision (float vs half vs fixed) of floating point variables is largely ignored on desktop GPUs, it is quite important for good performance on mobile GPUs. See the Shader Data Types and Precision page for details.
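As an illustration of the lookup-texture suggestion in the guidelines above (a sketch under assumptions, not Unity’s recommended implementation), the script below bakes pow(x, 4) into a small 1D texture that a shader can sample instead of calling pow; the “_PowLUT” property name is hypothetical.

```csharp
using UnityEngine;

// Sketch: bake pow(x, 4) into a 256x1 lookup texture and assign it to a
// material so the fragment shader can replace pow() with a texture read.
public class PowLookupTexture : MonoBehaviour
{
    public Material targetMaterial;   // material whose shader samples _PowLUT

    void Start()
    {
        const int width = 256;
        var lut = new Texture2D(width, 1, TextureFormat.RGBA32, false)
        {
            wrapMode = TextureWrapMode.Clamp,
            filterMode = FilterMode.Bilinear
        };

        for (int x = 0; x < width; x++)
        {
            float value = Mathf.Pow(x / (float)(width - 1), 4f);
            lut.SetPixel(x, 0, new Color(value, value, value, 1f));
        }
        lut.Apply(false);   // upload to the GPU, no mipmaps

        targetMaterial.SetTexture("_PowLUT", lut);   // hypothetical property name
    }
}
```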
For further details about shader performance, see the Shader Performance page.
Use the Static property on non-moving objects to allow internal optimizations like static batching.
Have only a single (preferably directional) pixel light affecting your geometry, rather than multiples.
Use half precision variables where possible.
Minimize the use of pow, sin and cos in pixel shaders.