How to Improve Game Performance by Using the Camera

What exactly does a camera do?

A Camera, at its most basic, specifies a field of view, a position, and an orientation in the scene. These settings define the content (Renderers) that a Camera can see. A Camera's rendered image is output either to a display or to a RenderTexture. In both cases, the output area it covers is defined by the Camera's viewport rectangle.
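These settings can also be configured from script. Here is a minimal sketch, assuming a component attached to a GameObject with a Camera; the RenderTexture field and the specific values are illustrative, not from the article:

```csharp
using UnityEngine;

// Sketch: configuring a Camera's field of view, pose, and output target.
public class CameraSetup : MonoBehaviour
{
    public RenderTexture offscreenTarget; // optional off-screen target (illustrative)

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.fieldOfView = 60f;                              // vertical FOV in degrees
        cam.transform.position = new Vector3(0f, 5f, -10f); // location in the scene
        cam.transform.rotation = Quaternion.Euler(20f, 0f, 0f);

        // The viewport rect defines the covered output area (normalised 0..1).
        cam.rect = new Rect(0f, 0f, 1f, 1f);

        // Render to a texture instead of the display, if one is assigned.
        if (offscreenTarget != null)
            cam.targetTexture = offscreenTarget;
    }
}
```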

At a high level, each active Camera in the Unity engine's code must:

1.  Determine the set of Renderers that are visible. This is known as culling. Culling ensures that only Renderers that can contribute to the Camera's rendered image are drawn on the GPU. In other words, the goal is to skip drawing as many Renderers as possible in order to improve performance. This is accomplished through three processes:

·      Renderers on layers that do not match the culling mask of the Camera are omitted.

·      Renderers that are outside the Camera's frustum (its viewing volume) are excluded via frustum culling.

·      Renderers that are entirely hidden behind other opaque Renderers are excluded via occlusion culling. This phase is optional and is often conservative: some fully occluded Renderers may still be drawn.

2.  Determine the order in which the visible Renderers are drawn by the GPU. The Camera generally sorts transparent objects from back to front and opaque ones from front to back. Other factors that can influence rendering order include the material or shader's Render Queue, sorting layers, and sorting order.

3.  Generate draw commands for each visible Renderer to submit the work to the GPU. These commands bind the Material and Mesh for rendering.
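The layer and frustum culling steps above can be sketched with Unity's scripting API; the layer name "Minimap" is an assumption for illustration:

```csharp
using UnityEngine;

// Sketch of the first two culling steps, using Unity's scripting API.
public class CullingExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // 1. Layer culling: only Renderers on layers in the mask are drawn.
        cam.cullingMask = LayerMask.GetMask("Default", "Minimap");
    }

    // 2. Frustum culling: test whether a Renderer's bounds intersect
    //    the Camera's viewing volume.
    bool IsInFrustum(Camera cam, Renderer r)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(cam);
        return GeometryUtility.TestPlanesAABB(planes, r.bounds);
    }
}
```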


Scene for camera testing

We used everyone's favourite test case to measure the cost of additional Cameras: spinning cubes! Every test scene contains, from left to right:

·      A three-dimensional grid of rotating cubes. Each slice is made up of a 10x10 grid of cubes. In this article series, the number of slices is customizable and referred to as the "load factor."

·      A single directional light source casting soft shadows.

·      Two "game" UI canvases, each with a panel, to replicate two popups in a mobile game.

·      A second "overlay" UI canvas is used to control the test.
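The cube grid described above can be sketched as follows; the spacing, rotation speed, and the Rotator helper are assumptions, not taken from the test project:

```csharp
using UnityEngine;

// Sketch of the test scene's cube grid: "loadFactor" slices of
// 10x10 rotating cubes.
public class CubeGrid : MonoBehaviour
{
    public int loadFactor = 5;    // number of 10x10 slices (configurable)
    public float spacing = 1.5f;  // assumed distance between cubes

    void Start()
    {
        for (int z = 0; z < loadFactor; z++)
            for (int y = 0; y < 10; y++)
                for (int x = 0; x < 10; x++)
                {
                    var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                    cube.transform.position = new Vector3(x, y, z) * spacing;
                    cube.AddComponent<Rotator>();
                }
    }
}

// Hypothetical helper: rotates each cube a little every frame.
public class Rotator : MonoBehaviour
{
    void Update() => transform.Rotate(0f, 45f * Time.deltaTime, 0f);
}
```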

 

Rendering Pipelines

A render pipeline is the method by which a Camera inside a scene generates an image. Unity features a Built-in Render Pipeline, which was formerly the only way to render scenes. The Scriptable Render Pipeline (SRP) was added in Unity 2018.1 and provides many additional options for controlling rendering via C# scripts. Unity ships two SRP implementations: the Universal Render Pipeline (URP) and the High Definition Render Pipeline (HDRP).

In this article series, we investigate the performance overhead of Cameras in the Built-in Render Pipeline and URP, since both support mobile devices. HDRP was left out of this test because it does not run on mobile devices. Another reason for excluding HDRP is that its range of render feature configurations is so vast that it would be difficult to construct fair and realistic comparison scenarios of inefficient Camera usage.

URP introduced the Camera Stack concept, which consists of a Base Camera and one or more Overlay Cameras. This is how we configured the Cameras in the URP experiments. Details on how to set up and modify a Camera Stack programmatically at runtime can be found in Manager.cs.
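A minimal sketch of what such runtime setup might look like, using URP's camera stacking API (the field names and component layout here are assumptions, not the actual contents of Manager.cs):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Sketch: stacking an Overlay Camera on a Base Camera at runtime in URP.
public class StackSetup : MonoBehaviour
{
    public Camera baseCamera;    // renders first
    public Camera overlayCamera; // rendered on top of the base result

    void Start()
    {
        // The stacked camera must use the Overlay render type.
        var overlayData = overlayCamera.GetUniversalAdditionalCameraData();
        overlayData.renderType = CameraRenderType.Overlay;

        // Append it to the Base Camera's stack.
        var baseData = baseCamera.GetUniversalAdditionalCameraData();
        baseData.cameraStack.Add(overlayCamera);
    }
}
```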

Visit Us: corp.infogen-labs.com 

