The standard Shader language in Unity is HLSL, and general HLSL data types are supported. However, Unity handles some data types differently from HLSL, particularly to provide better support on mobile platforms.
Shaders carry out the majority of calculations using floating point numbers (which are float in regular programming languages like C#). In Unity’s implementation of HLSL, the scalar floating point data types are float, half, and fixed. These data types differ in precision and, consequently, in performance or power usage. There are also several related data types for vectors and matrices, such as half3 and float4x4.
float
This is the highest precision floating point data type. On most platforms, float values are 32 bits, like in regular programming languages.
Full float precision is generally useful for world space positions, texture coordinates, or scalar calculations that involve complex functions such as trigonometry or power/exponentiation. If you use lower precision floating point data types for these purposes, it can cause precision-related artifacts. For example, with texture coordinates, a half doesn’t have enough precision to accurately represent 1-texel offsets of larger textures.
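As a rough sketch, full precision math for those cases might look like the following; the i.uv, worldPos and _Shininess names are placeholders, not part of a specific Unity shader:
float2 uv = i.uv; // texture coordinates kept in full precision
float dist = distance(_WorldSpaceCameraPos, worldPos); // world space math kept in full precision
float falloff = pow(saturate(1.0 - dist * 0.01), _Shininess); // power/exponent functions benefit from float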
half
This is a medium precision floating point data type. On platforms that support half values, they are generally 16 bits. On other platforms, this becomes float.
half values have a smaller range and precision than float values.
Half precision is useful for getting better shader performance on values that don’t require high precision, such as short vectors, directions, object space positions, and high dynamic range colors.
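For example, a sketch that keeps simple lighting math in medium precision; worldNormal and lightDir are placeholder names assumed to exist in the surrounding shader:
half3 normalDir = normalize((half3)worldNormal); // short direction vector
half3 lightColor = _LightColor0.rgb; // high dynamic range light color
half ndotl = saturate(dot(normalDir, (half3)lightDir));
half3 diffuse = lightColor * ndotl;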
fixed
This is only supported by the OpenGL ES 2.0 Graphics API. On other APIs it becomes the lowest supported precision (half or float).
This is the lowest precision fixed point value and is generally 11 bits. fixed values range from –2.0 to +2.0 and have a precision of 1/256.
Fixed precision is useful for regular colors (as typically stored in regular textures) and performing simple operations on them.
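For example, a minimal sketch of tinting a regular texture, where fixed precision is enough; it assumes the usual _MainTex and _Color properties are declared:
fixed4 texColor = tex2D(_MainTex, i.uv); // regular low dynamic range color
fixed4 tinted = texColor * _Color; // a simple multiply stays within the fixed range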
Unity’s shader compiler ignores floating point number suffixes from HLSL. Floating point numbers with a suffix therefore all become float.
This code shows a possible negative impact of numbers with the h suffix in Unity:
half3 packedNormal = ...;
half3 normal = packedNormal * 2.0h - 1.0h;
Since the h suffix is ignored, the shader compiler generates code that executes these steps:
1. Calculate an intermediary normal value in high precision (float3).
2. Convert the intermediary value to half3.
This reduces your shader’s performance.
This code is more efficient because it only uses half values in its calculations:
half3 packedNormal = ...;
half3 normal = packedNormal * half(2.0) - half(1.0);
Integers (the int data type) are often used as loop counters or array indices. For this purpose, they generally work fine across various platforms.
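For example, a sketch of a small averaging loop driven by an int counter; the _Offsets array is a hypothetical shader property:
half3 sum = half3(0, 0, 0);
for (int k = 0; k < 4; k++) // integer loop counter works fine across platforms
{
    sum += tex2D(_MainTex, i.uv + _Offsets[k].xy).rgb;
}
half3 average = sum * 0.25;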
Depending on the platform, integer types might not be supported by the GPU. For example, Direct3D 9 and OpenGL ES 2.0 GPUs only operate on floating point data, and simple-looking integer expressions (involving bit or logical operations) might be emulated using fairly complicated floating point math instructions.
Direct3D 11, OpenGL ES 3, Metal and other modern platforms have proper support for integer data types, so using bit shifts and bit masking works as expected.
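For example, a sketch of integer packing that works as expected on those platforms; materialIndex and flags are illustrative values:
int packed = (materialIndex << 4) | (flags & 0xF); // pack two small values into one int
int index = packed >> 4; // recover the material index with a bit shift
int lowBits = packed & 0xF; // recover the flags with a bit mask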
HLSL has built-in vector and matrix types that are created from the basic types. For example, float3 is a 3D vector with .x, .y, .z components, and half4 is a medium precision 4D vector with .x, .y, .z, .w components. Alternatively, vectors can be indexed using .r, .g, .b, .a components, which is useful when working on colors.
Matrix types are built in a similar way; for example, float4x4 is a 4x4 transformation matrix. Note that some platforms only support square matrices, most notably OpenGL ES 2.0.
Typically you declare textures in your HLSL code as follows:
sampler2D _MainTex;
samplerCUBE _Cubemap;
For mobile platforms, these translate into “low precision samplers”, i.e. the textures are expected to have low precision data in them.
You can change the default sampler precision for the whole Unity project in the Player Settings using the Shader precision model dropdown.
If you know your texture contains HDR (high dynamic range) colors, you might want to use a half precision sampler:
sampler2D_half _MainTex;
samplerCUBE_half _Cubemap;
Or if your texture contains full float precision data (e.g. depth texture), use a full precision sampler:
sampler2D_float _MainTex;
samplerCUBE_float _Cubemap;
One complication of float/half/fixed data type usage is that PC GPUs are always high precision. That is, for all the PC (Windows/Mac/Linux) GPUs, it does not matter whether you write float, half or fixed data types in your shaders. They always compute everything in full 32-bit floating point precision.
The half and fixed types only become relevant when targeting mobile GPUs, where these types primarily exist for power (and sometimes performance) constraints. Keep in mind that you need to test your shaders on mobile to see whether or not you are running into precision/numerical issues.
Even on mobile GPUs, precision support varies between GPU families. Here’s an overview of how each mobile GPU family treats each floating point type (indicated by the number of bits used for it):
GPU Family | float | half | fixed |
---|---|---|---|
PowerVR Series 6/7 | 32 | 16 | |
PowerVR SGX 5xx | 32 | 16 | 11 |
Qualcomm Adreno 4xx/3xx | 32 | 16 | |
Qualcomm Adreno 2xx | 32 (vertex), 24 (fragment) | | |
ARM Mali T6xx/7xx | 32 | 16 | |
ARM Mali 400/450 | 32 (vertex), 16 (fragment) | | |
NVIDIA X1 | 32 | 16 | |
NVIDIA K1 | 32 | | |
NVIDIA Tegra 3/4 | 32 | 16 | |
Most modern mobile GPUs actually only support either 32-bit numbers (used for the float type) or 16-bit numbers (used for both half and fixed types). Some older GPUs have different precisions for vertex shader and fragment shader computations.
Using lower precision can often be faster, either due to improved GPU register allocation, or due to special “fast path” execution units for certain lower-precision math operations. Even when there’s no raw performance advantage, using lower precision often uses less power on the GPU, leading to better battery life.
A general rule of thumb is to start with half precision for everything except positions and texture coordinates. Only increase precision if half precision is not enough for some parts of the computation.
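A minimal sketch of that rule of thumb, assuming a typical v2f input struct with uv and worldNormal fields and a single directional light:
half4 frag (v2f i) : SV_Target
{
    float2 uv = i.uv; // texture coordinates: full precision
    half4 albedo = tex2D(_MainTex, uv); // color data: half is enough
    half3 normal = normalize((half3)i.worldNormal);
    half ndotl = saturate(dot(normal, (half3)_WorldSpaceLightPos0.xyz));
    return half4(albedo.rgb * ndotl, albedo.a);
}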
Support for special floating point values can differ depending on which (mostly mobile) GPU family you’re running on.
All PC GPUs that support Direct3D 10 follow the well-specified IEEE 754 floating point standard. This means that float numbers behave exactly like they do in regular programming languages on the CPU.
Mobile GPUs can have slightly different levels of support. On some, dividing zero by zero might result in a NaN (“not a number”); on others it might result in infinity, zero or any other unspecified value. Make sure to test your shaders on the target device to check that the special values you rely on behave as expected.
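If you cannot rule out a zero denominator, it is usually safer to clamp it yourself rather than rely on a particular GPU’s divide-by-zero behavior; in this sketch, normal, viewDir and numerator are placeholder names:
half denom = max(dot(normal, viewDir), half(1e-4)); // keep the denominator away from zero
half specularTerm = numerator / denom; // well defined on all GPUs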
GPU vendors have in-depth guides about the performance and capabilities of their GPUs. See these for details: