Unity SRP Overview: Scriptable Render Pipeline

April 19, 2018

I decided to do some research and write about an upcoming experimental Unity feature: Scriptable Render Pipelines. Why? Because it concerns you, and it concerns me. But do not panic. Or at least, not yet. Maybe not even next year, but it will eventually change the way you have to work. The readier you are, the better off you will be.

Figure: A better architecture for Unity projects (overview)

What is SRP?

Scriptable Render Pipeline (SRP) is a new Unity system, and way of thinking, that allows any graphics programmer to develop a customized render loop. This means you will be able to tweak, reduce, or extend how your game creates frames. This adds potential for you to optimize your game, create custom visual effects, make your system more maintainable and fix till-now-unfixable bugs, but its main strength is that it will enable you to learn how graphics work in more detail. This idea is basically the opposite of the (legacy) black-box built-in renderer, where Unity held a monopoly on the rendering algorithms applied.

This technology started shipping with Unity 2018.1 Beta. Be careful, though: it is still experimental and might remain in that state for some time. Its main pillar is a C# API that is tightly bound to the C++ engine. The API is very likely to change during the development of the feature. The main bet behind it is that you will have much more fine-grained control over the rendering process your game executes.
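To make this concrete, here is a minimal sketch of the two classes a custom pipeline needs, roughly matching the 2018.1 beta API. Since the feature is experimental, names such as RenderPipelineAsset, IRenderPipeline and ScriptableRenderContext may well change between versions:

    using UnityEngine;
    using UnityEngine.Experimental.Rendering;

    // The asset you create in the project and assign under Graphics Settings;
    // Unity asks it for the pipeline instance used to render every frame.
    [CreateAssetMenu(menuName = "Rendering/Basic Pipeline")]
    public class BasicPipelineAsset : RenderPipelineAsset
    {
        protected override IRenderPipeline InternalCreatePipeline()
        {
            return new BasicPipeline();
        }
    }

    // The custom render loop: Unity hands us a context and the active cameras.
    public class BasicPipeline : RenderPipeline
    {
        public override void Render(ScriptableRenderContext context, Camera[] cameras)
        {
            base.Render(context, cameras); // validates the pipeline state

            // The actual culling/drawing/post-processing work goes here.
        }
    }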

Unity offers two render pipelines built on top of SRP, with their source code available:

  • LWRP: Lightweight Render Pipeline

  • HDRP: High Definition Render Pipeline

In order to understand SRP, it pays off to have an overall picture of the typical per-camera rendering process:

  1. Culling

  2. Drawing

  3. Post-processing

If you know these aspects, feel free to skip the next sections.
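In SRP terms, the Render method from the sketch above would walk every camera through those three phases. Cull, Draw and ApplyPostProcessing are hypothetical helper names standing in for the sections that follow:

    // Per-camera skeleton inside BasicPipeline.Render (hypothetical helpers).
    foreach (var camera in cameras)
    {
        context.SetupCameraProperties(camera); // view/projection state

        var cull = Cull(camera, context);      // 1. Culling
        Draw(cull, camera, context);           // 2. Drawing
        ApplyPostProcessing(camera, context);  // 3. Post-processing

        context.Submit();                      // dispatch the queued work
    }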

1. Culling

The rendering of a frame typically starts with culling. Let us start with an informal but simple definition that will help us understand it for now.

Culling

A CPU process consisting of taking renderables and filtering them according to a camera's visibility criteria, so as to produce a list of objects to be rendered

Renderables are basically game objects with Renderer components, such as MeshRenderer, and filtering just means deciding whether each of them will be included in the list or not. Note, however, that the real culling process also adds lights into the equation, but we will skip those.

Culling is important, as it greatly reduces the amount of data and instructions the GPU will have to deal with. It might make no sense to render a flying airplane if we are in a cave, since we do not see it (but it might, if e.g. it projects a shadow inside the cave). Culling itself takes some processor time, a fact that engines have to be aware of when balancing the CPU/GPU load.

Incoming geometry/lights + camera settings + culling algorithm = list of renderers to draw

Figure: Culling step #nofilter #nomakeup
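In the experimental API, the whole culling step boils down to two calls. A sketch of the hypothetical Cull helper from the skeleton above, assuming the 2018.1 beta signatures of CullResults:

    // Body of the hypothetical Cull helper
    // (needs using UnityEngine.Experimental.Rendering).
    static CullResults Cull(Camera camera, ScriptableRenderContext context)
    {
        // Extract the camera's visibility criteria (frustum, layer mask...).
        ScriptableCullingParameters cullingParams;
        if (!CullResults.GetCullingParameters(camera, out cullingParams))
            return default(CullResults); // degenerate camera: nothing visible

        // The C++ side filters renderables and lights against those criteria;
        // the result exposes visibleRenderers and visibleLights for drawing.
        return CullResults.Cull(ref cullingParams, context);
    }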

2. Drawing

After we have determined which data we should display, we just go for it. A commonly found process can be summed up in the following steps:

  1. Clear (back) buffer contents (CPU)

    We discard the buffers previously generated. These are usually the color and depth buffers, but they might include custom buffers for other techniques, such as deferred shading.

  2. Sorting (CPU)

    The objects are sorted depending on the render queue (e.g. opaque front to back, and transparent back to front for proper blending).

  3. Dynamic batching (CPU)

    We try to group the renderers together as a single object so we can save draw calls. This optimization is optional.

  4. Command (draw call) preparation and dispatching (CPU)

    For each renderer, we prepare a draw command with its geometry data: vertices, UV coordinates, vertex colors, shader parameters such as transform matrices (MVP), texture ids, etc. This instruction, along with its data, is submitted to the graphics API, which will work together with the driver to pack and properly format this raw information into GPU-suited data structures.

  5. Render pipeline (GPU)

    Very roughly described: the GPU receives and processes the commands; the GPU frontend then assembles the geometry, vertex shaders are executed, the rasterizer kicks in, fragment shaders do their job, and the GPU backend manages blending and render targets, writing everything into the different buffers.

  6. Wait for GPU to finish and swap buffers

    Depending on the VSync settings, it might even wait longer to do the back-front buffer swap.

That is an overly simplified but typical rendering process; a sketch of how steps 1 to 4 look through the experimental SRP API follows below.
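Here is the hypothetical Draw helper from the earlier skeleton, again assuming the 2018.1 beta types ("SRPDefaultUnlit" is the default pass name that built-in unlit shaders expose to custom pipelines; a real pipeline would use its own passes):

    // Body of the hypothetical Draw helper
    // (needs using UnityEngine.Rendering for CommandBuffer).
    static void Draw(CullResults cull, Camera camera, ScriptableRenderContext context)
    {
        // 1. Clear the (back) buffer contents through a command buffer.
        var cmd = new CommandBuffer { name = "Clear" };
        cmd.ClearRenderTarget(true, true, Color.black); // depth + color
        context.ExecuteCommandBuffer(cmd);
        cmd.Release();

        // 2. Sorting: opaques front to back.
        var drawSettings = new DrawRendererSettings(
            camera, new ShaderPassName("SRPDefaultUnlit"));
        drawSettings.sorting.flags = SortFlags.CommonOpaque;

        // 3. Opt into dynamic batching of small meshes.
        drawSettings.flags = DrawRendererFlags.EnableDynamicBatching;

        // 4. Prepare and dispatch the draw commands for visible opaques.
        var filterSettings = new FilterRenderersSettings(true)
        {
            renderQueueRange = RenderQueueRange.opaque
        };
        context.DrawRenderers(cull.visibleRenderers, ref drawSettings, filterSettings);

        context.DrawSkybox(camera);

        // Transparents go last, back to front, for correct blending.
        drawSettings.sorting.flags = SortFlags.CommonTransparent;
        filterSettings.renderQueueRange = RenderQueueRange.transparent;
        context.DrawRenderers(cull.visibleRenderers, ref drawSettings, filterSettings);

        // Steps 5-6 happen GPU-side after context.Submit(); Unity swaps buffers.
    }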

Figure: Plain rendering

3. Post-processing

After the GPU has filled the buffers (color, depth and possibly others), the developer may opt to apply further image enhancements. These consist of applying shaders to input textures (the buffers just created) to overwrite them with the corrected image. Some common effects are listed below:

Effect                         | Description                                                                              | Performance cost
-------------------------------|------------------------------------------------------------------------------------------|-------------------
Bloom                          | Highlights bright, emissive areas, creating an aura-like effect around the source         | Medium
Depth of Field (DoF)           | Blurs certain parts of the screen depending on the set parameters                         | Expensive
Screen-Space Anti-Aliasing     | Softens the abrupt transitions between pixel colours produced by the limited resolution   | Light to expensive
Color correction               | Changes the behaviour of colours according to the defined rules                           | Light
Screen-Space Ambient Occlusion | Adds contact shadows (it darkens areas where objects meet)                                | Medium
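In SRP, a post-effect boils down to a full-screen blit through a material whose shader rewrites the image. A sketch of the hypothetical ApplyPostProcessing helper, where postFxMaterial stands for a made-up material holding, say, a color correction shader:

    // Body of the hypothetical ApplyPostProcessing helper; postFxMaterial is
    // an assumed material, passed in here as an extra argument.
    static void ApplyPostProcessing(
        Camera camera, ScriptableRenderContext context, Material postFxMaterial)
    {
        var cmd = new CommandBuffer { name = "Post-processing" };
        int tempId = Shader.PropertyToID("_TempPostTarget");
        cmd.GetTemporaryRT(tempId, camera.pixelWidth, camera.pixelHeight);

        // Copy the frame out, run it through the effect, and write it back.
        cmd.Blit(BuiltinRenderTextureType.CameraTarget, tempId);
        cmd.Blit(tempId, BuiltinRenderTextureType.CameraTarget, postFxMaterial);
        cmd.ReleaseTemporaryRT(tempId);

        context.ExecuteCommandBuffer(cmd);
        cmd.Release();
    }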
