One of the most effective ways to make your scenes look stunning is to add a volumetric lighting effect. Here's how you can easily achieve this effect by harnessing the power of Unity's Universal Render Pipeline (URP).
Note: This tutorial assumes you're already familiar with Unity, C# programming and shaders. An understanding of vector algebra helps, but you can learn a lot without it.
If you're new to Unity development, check out our Getting Started with Unity and Introduction to Shaders in Unity tutorials.
Getting Started
Download the materials for the project using the Download Materials button at the top or bottom of this tutorial. Then, unzip the package and open the starter folder in Unity.
Next, take a look at the contents of RW in the Project window:
Here's a quick breakdown of what each folder contains:
- Animations: Animations for the player character.
- Materials: Materials for the player and the environment.
- Models: Models for the environment and the player.
- Scenes: The sample scene you'll work on.
- Scripts: Several scripts for the sample project.
- Settings: Assets for the Universal Render Pipeline settings.
- Shaders: An empty folder where your shaders will go.
- Textures: Textures for the player and for debugging.
Don't worry too much about the contents of all those folders; you'll mainly work with Scripts and Shaders.
Now, open the sunset god rays scene inside RW/Scenes. Look at the Scene view and click Play to try the game:
You'll see an overview of the game level. Use the A and D keys to move the player left and right. Jump by pressing Space.
Your goal in this tutorial is to learn how to enhance the game's graphics with some cool volumetric effects to make it more interesting and immersive. Your first step is to understand what volumetric lighting is and how to implement it.
Volumetric Light Scattering
In the real world, light doesn't propagate through a vacuum; there's almost always something between you and the object you're looking at. Unless, of course, you're in space. :]
In real-time rendering, this is known as light transport in participating media. The most common phenomenon is fog.
When the density of particles in the air is high enough, objects partially obscuring a light source cast shadows onto those particles, which appear as visible rays or beams of light.
In game development, these are known as god rays or light shafts. With this effect, you can improve the realism and polish of your scenes and make everything look beautiful. :]
Next, you'll learn how to achieve this effect in your Unity games.
Using the screen space method
While not physically accurate, the screen space method is fairly simple. First, you render the color of the light source and paint all the other objects in your scene black. You do this in an off-screen texture called the occluders map. It looks like this:
After that, you apply a radial blur to this image in a post-processing pass. Starting from the center of the light source, you take multiple samples along a vector from the light source to the pixel you're currently evaluating. The final color of that pixel is the weighted sum of those samples.
Finally, overlay this image over the original color image.
Now that you understand the process, you'll see how you can apply this concept to your own game.
Creating a custom renderer feature
The Universal Render Pipeline provides a script template for creating renderer features. You'll now create your own custom renderer feature.
Inside RW/Scripts, select Create ▸ Rendering ▸ Universal Render Pipeline ▸ Renderer Feature and name the new asset VolumetricLightScattering.
Then, double-click VolumetricLightScattering.cs to open it in your code editor. You'll see the following:
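Here's a rough sketch of what the generated template looks like. The exact comments and the default renderPassEvent vary between URP versions, so treat this as an approximation rather than the exact file:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class VolumetricLightScattering : ScriptableRendererFeature
{
    class CustomRenderPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            // Rendering logic goes here; covered in detail below.
        }
    }

    CustomRenderPass m_ScriptablePass;

    // Called when the feature first loads.
    public override void Create()
    {
        m_ScriptablePass = new CustomRenderPass();
        m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    }

    // Called once per camera to enqueue the pass into the renderer.
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_ScriptablePass);
    }
}
```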
This is your renderer feature class. It derives from the abstract base class ScriptableRendererFeature, which lets you inject render passes into the renderer and execute them at various events.
ScriptableRendererFeatures consist of one or more ScriptableRenderPasses. By default, Unity gives you an empty pass named CustomRenderPass. You'll learn more about the details of that class when you write your custom pass. For now, focus on VolumetricLightScattering.
Unity calls some methods in a predetermined order while the script is running:
- Create(): Called when the feature first loads. You'll use it to create and configure all your ScriptableRenderPass instances.
- AddRenderPasses(): Called once per camera, every frame. You'll use it to inject your ScriptableRenderPass instances into the ScriptableRenderer.
First, you'll define some settings to configure this feature. Start by adding the following class above VolumetricLightScattering:
```csharp
[System.Serializable]
public class VolumetricLightScatteringSettings
{
    [Header("Properties")]
    [Range(0.1f, 1f)]
    public float resolutionScale = 0.5f;

    [Range(0.0f, 1.0f)]
    public float intensity = 1.0f;

    [Range(0.0f, 1.0f)]
    public float blurWidth = 0.85f;
}
```
VolumetricLightScatteringSettings is a data container for the feature's configuration:
- resolutionScale: Sets the size of your off-screen texture.
- intensity: Controls the brightness of the light rays you produce.
- blurWidth: The blur radius used when combining pixel colors.
Note: You add System.Serializable at the top of the class so its properties are editable in the Inspector.
Then, declare an instance of the settings by adding the following line above Create():
public VolumetricLightScatteringSettings settings = new VolumetricLightScatteringSettings();
That's all you need to configure your custom renderer feature. Save your script and switch back to Unity.
Adding the renderer feature to the Forward Renderer
Now that you have a fancy new renderer feature, you need to add it to the Forward Renderer.
Go to RW/Settings in the Project window and select ForwardRenderer.
In the Inspector window, select Add Renderer Feature ▸ Volumetric Light Scattering.
The renderer now uses your renderer feature. Click Settings to view the properties you just defined.
Now click Play and... you'll see that nothing has changed. That's because the feature's render pass doesn't do anything yet.
Note: If you don't see the settings in the Inspector, try reimporting VolumetricLightScattering.cs. Right-click the script and select Reimport. Go back to the Forward Renderer and the settings should now be visible.
Implement a custom render pass
If you recall, the code template contained two classes, including one for a custom render pass. That's what you'll work with now.
Start by going back to VolumetricLightScattering.cs. Look at CustomRenderPass and you'll see this:
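Stripped of comments, the pass portion of the template looks roughly like this sketch (the exact contents depend on your URP version):

```csharp
class CustomRenderPass : ScriptableRenderPass
{
    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        // Configure render targets here, before the camera renders.
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // Issue your rendering commands here, once per frame.
    }

    public override void OnCameraCleanup(CommandBuffer cmd)
    {
        // Release anything you allocated in OnCameraSetup here.
    }
}
```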
CustomRenderPass derives from the abstract base class ScriptableRenderPass, which provides the methods you need to implement a logical rendering pass.
As with ScriptableRendererFeature, Unity calls certain methods while the script runs. Here are the ones you need to know for this tutorial:
- OnCameraSetup(): This is called before rendering a camera to configure render targets.
- Execute(): Called every frame to run the rendering logic.
- OnCameraCleanup(): Called after the render pass executes, to clean up any allocated resources, typically render targets.
There are other methods you can override but won't use in this tutorial, including:
- Configure(): Called to set up render targets before the render pass executes; it runs right after OnCameraSetup().
- OnFinishCameraStackRendering(): Called once after the last camera in the camera stack renders. You can use this to clean up resources once all cameras in the stack have finished rendering.
Configure the light scattering pass
Next, set up the light scattering pass.
Rename CustomRenderPass to LightScatteringPass. Use your code editor's rename feature, since the term appears in several places. Then, declare the following variables above OnCameraSetup():
```csharp
private readonly RenderTargetHandle occluders = RenderTargetHandle.CameraTarget;
private readonly float resolutionScale;
private readonly float intensity;
private readonly float blurWidth;
```
Here's what you do above:
- occluders: You need a RenderTargetHandle to create the off-screen texture.
- resolutionScale: The resolution scale.
- intensity: The intensity of the effect.
- blurWidth: The width of the radial blur.
You set resolutionScale, intensity and blurWidth from the settings.
Your next step is to declare a constructor to initialize these variables. To do this, add the following code under the variables you just added:
```csharp
public LightScatteringPass(VolumetricLightScatteringSettings settings)
{
    occluders.Init("_OccludersMap");
    resolutionScale = settings.resolutionScale;
    intensity = settings.intensity;
    blurWidth = settings.blurWidth;
}
```
Here, LightScatteringPass is the render pass constructor. You inject the settings instance you created in the feature class.
First, you initialize occluders by calling Init() with a texture name. Then, you set resolutionScale, intensity and blurWidth from the settings.
Next, replace Create() in VolumetricLightScattering with the following:
```csharp
public override void Create()
{
    m_ScriptablePass = new LightScatteringPass(settings);
    m_ScriptablePass.renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing;
}
```
Here, you call the pass constructor, passing the settings as an argument. You also configure where to inject the render pass; in this case, you insert it before the renderer performs post-processing.
Note: You can also inject the render pass at a more specific point by adding an offset to a RenderPassEvent.
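For example, a hypothetical tweak (not needed in this tutorial) could nudge the pass one step later like so:

```csharp
// Hypothetical: schedule the pass one step after BeforeRenderingPostProcessing.
m_ScriptablePass.renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing + 1;
```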
Configure the occluders map
Now, you'll create an off-screen texture to store the silhouettes of any objects blocking the light source. You'll do this in OnCameraSetup(). Replace that method with the following code:
```csharp
public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
{
    // 1
    RenderTextureDescriptor cameraTextureDescriptor = renderingData.cameraData.cameraTargetDescriptor;
    // 2
    cameraTextureDescriptor.depthBufferBits = 0;
    // 3
    cameraTextureDescriptor.width = Mathf.RoundToInt(cameraTextureDescriptor.width * resolutionScale);
    cameraTextureDescriptor.height = Mathf.RoundToInt(cameraTextureDescriptor.height * resolutionScale);
    // 4
    cmd.GetTemporaryRT(occluders.id, cameraTextureDescriptor, FilterMode.Bilinear);
    // 5
    ConfigureTarget(occluders.Identifier());
}
```
Here are some important things happening:
- First, you get a copy of the current camera's RenderTextureDescriptor. This descriptor contains all the information you need to create a new texture.
- Then, you disable the depth buffer because you won't use it.
- You scale the texture's dimensions by resolutionScale.
- To create a new texture, you issue a GetTemporaryRT() graphics command. The first parameter is the ID of occluders, the second is the texture configuration you took from the descriptor and the third is the texture filter mode.
- Finally, you call ConfigureTarget() with the texture's RenderTargetIdentifier to finish the setup.
Note: It's important to understand that you issue all rendering commands through a CommandBuffer. You set up the commands you want to execute and then hand them over to the scriptable render pipeline, which actually runs them. You must never call CommandBuffer.SetRenderTarget(); instead, call ConfigureTarget() and ConfigureClear().
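For instance, if you also wanted the occluders texture cleared to a known color before drawing into it, you could follow ConfigureTarget() with ConfigureClear(). This is a hypothetical addition; it isn't required here because the skybox draw covers the whole target anyway:

```csharp
ConfigureTarget(occluders.Identifier());
// Hypothetical: clear color and depth to solid black before the pass draws.
ConfigureClear(ClearFlag.All, Color.black);
```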
Save the script and go back to the editor.
Implementing the occluder shader
Next, you need to create your own unlit shader. Why write your own instead of using the default unlit shader?
The default unlit shader respects the fog settings and uses them to affect the color of distant objects during rendering. That's fine for the final image, but not for this texture map. For that reason, you'll create a custom unlit shader and declare Fog {Mode Off} in its SubShader.
Inside RW/Shaders, select Create ▸ Shader ▸ Unlit Shader and name it UnlitColor. Double-click UnlitColor.shader to open it in your editor and replace its contents with:
Shader "Hidden/RW/UnlitColor"{ Properties { _Color("Main Color", Color) = (0.0, 0.0, 0.0, 0.0) } SubShader { Tags { "RenderType" = "Opaque" } Fog {Mode Off} Color[ _Cor] Aprovado {} }}
Here, you create a shader that declares a color property named _Color and passes it to the Color shader command. That's all you need to draw the objects in black.
Save the shader code and switch to the editor to compile it.
Running the render pass
First, you need to create a material that uses the shader. Return to VolumetricLightScattering.cs and, in LightScatteringPass, add the following line above the constructor:
private readonly Material occludersMaterial;
This field stores the material instance.
Now add this line in the constructor:
occludersMaterial = new Material(Shader.Find("Hidden/RW/UnlitColor"));
This creates a new material instance using the UnlitColor shader. You use Shader.Find() to get a reference to the shader by its name.
To run the rendering logic, find Execute() and replace it with the following:
```csharp
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    // 1
    if (!occludersMaterial)
    {
        return;
    }
    // 2
    CommandBuffer cmd = CommandBufferPool.Get();
    // 3
    using (new ProfilingScope(cmd, new ProfilingSampler("VolumetricLightScattering")))
    {
        // TODO: 1
        // TODO: 2
    }
    // 4
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}
```
Here's what happens in the above code:
- You stop executing the pass if the material is missing.
- As you already know, you issue graphics commands through command buffers. CommandBufferPool is just a collection of ready-made command buffers you can use right away. You request one with Get().
- You wrap the graphics commands in a ProfilingScope, which ensures the Frame Debugger can profile the code.
- After adding all the commands to the CommandBuffer, you schedule it for execution and release it.
Draw the light source
With the material created, you'll now draw the actual light source.
Replace // TODO: 1 with the following lines:
```csharp
context.ExecuteCommandBuffer(cmd);
cmd.Clear();
```
This prepares the command buffer so you can start adding commands.
The first graphics command you issue renders the main light source. For simplicity, you'll draw the skybox, which contains the shape of the sun. This gives surprisingly good results!
Add these lines below the previous code:
```csharp
Camera camera = renderingData.cameraData.camera;
context.DrawSkybox(camera);
```
To call DrawSkybox() on the ScriptableRenderContext, you need a reference to the camera. You get that reference from renderingData, a struct that contains information about the scene.
Referencing Unity's default shaders
Next, you'll draw the occluders: any objects in the scene that can block the light source. Instead of keeping track of these objects, you use their shaders to reference them during rendering.
In this project, all the objects use Unity's standard shaders. To support them, you need to get the shader tag IDs of all the standard shader passes. You do this once and store the results in a List.
To use C# lists you need to add this line at the beginning of the file:
using System.Collections.Generic;
Then, declare the following field at the top of LightScatteringPass:
private readonly List<ShaderTagId> shaderTagIdList = new List<ShaderTagId>();
Then add the following code in the constructor to populate the list:
shaderTagIdList.Add(new ShaderTagId("UniversalForward"));shaderTagIdList.Add(new ShaderTagId("UniversalForwardOnly"));shaderTagIdList.Add(new ShaderTagId("LightweightForward"));shaderTagIdList.Add(new ShaderTagId("SRPDefaultUnlit") );
Drawing the occluders
Now that you have the standard shader tag IDs, you can draw the objects that use those shaders.
Go back to the DrawSkybox() call and add the following lines below it:
```csharp
// 1
DrawingSettings drawSettings = CreateDrawingSettings(shaderTagIdList,
    ref renderingData, SortingCriteria.CommonOpaque);
// 2
drawSettings.overrideMaterial = occludersMaterial;
```
Here's what you do with this code:
- Before you draw anything, you need to set a few things up. DrawingSettings describes how to sort the objects and which shader passes are allowed. You create it by calling CreateDrawingSettings(), passing the shader passes, a reference to renderingData and the sorting criteria for visible objects.
- By setting overrideMaterial, you replace the objects' materials with occludersMaterial.
Then add the following after the previous code:
context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref filteringSettings);
DrawRenderers() performs the actual draw call. It needs to know which objects are currently visible, which is what the culling results are for. You also have to supply drawing settings and filtering settings; you pass both structs by reference.
You've already defined the drawing settings, but not the filtering settings. Since you only need to declare them once, add this line at the top of the class:
private FilteringSettings filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
FilteringSettings specifies which render queue range is allowed: opaque, transparent or all. With this line, you set the range so it filters out any objects that aren't in the opaque render queue.
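As a hypothetical variation, and not something this effect needs, passing RenderQueueRange.all instead would let transparent objects act as occluders too:

```csharp
// Hypothetical: include every render queue, not just opaque objects.
private FilteringSettings filteringSettings = new FilteringSettings(RenderQueueRange.all);
```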
The last thing to do is release the allocated resources after the render pass executes. To do so, replace OnCameraCleanup() with:
```csharp
public override void OnCameraCleanup(CommandBuffer cmd)
{
    cmd.ReleaseTemporaryRT(occluders.id);
}
```
Congratulations, you've come a long way! Save the script, click Play and, guess what... everything still looks the same. Don't worry, you'll see why in the next section.
Inspecting with the Frame Debugger
While you may not see anything new in the scene, there's something happening under the hood. You'll now use the Frame Debugger to inspect the renderer and check whether the texture draws correctly.
Make sure you're still in Play mode with the Game view selected. Select Window ▸ Analysis ▸ Frame Debugger, which opens a new window. Dock the window next to the Scene tab, press Enable and you'll see this:
The main list shows the sequence of graphics commands as a hierarchy that identifies where each one came from.
Select VolumetricLightScattering and expand it. You'll notice the Game view changes. When the selected draw call renders to a RenderTexture, the contents of that RenderTexture display in the Game view. This is the occluders map!
If you kept the default resolution scale setting, you'll see the texture is half the screen size. You can select individual draw calls to see what each one does. You can even step through every draw call:
OK, the occluders map is working. Click Disable to stop debugging.
Refining the image with post-processing
Now, you'll refine the image by blurring it in a post-processing pass.
Implementing the radial blur shader
You'll achieve the radial blur with a post-processing fragment shader.
Go to RW/Shaders, select Create ▸ Shader ▸ Image Effect Shader and name it RadialBlur.
Then, open RadialBlur.shader and replace the name declaration with:
Shader "Oculto/RW/RadialBlur"
This sets the name of your new radial blur shader.
Next, you need to feed the shader the settings you defined in the renderer feature. In the Properties block, add the following below _MainTex:
_BlurWidth("Blur Width", Range(0,1)) = 0.85_Intensity("Intensity", Range(0,1)) = 1_Center("Center", Vector) = (0.5,0.5,0,0)
_BlurWidth and _Intensity control the appearance of your light rays. _Center is a Vector holding the screen space coordinates of the sun, the origin point of the radial blur.
Combining the images
Next, you'll run this shader on the occluders map and overlay the resulting color on top of the main camera's color texture. You use blend modes to determine how the two images combine.
Start by going to the SubShader and removing this code:
```
// No culling or depth
Cull Off ZWrite Off ZTest Always
```
Replace with:
Blend One One
This line sets the blend mode to additive. It adds the color channel values of both images together and clamps the result to a maximum of 1.
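For example, adding a source color of (0.8, 0.3, 0.1) to a destination color of (0.5, 0.5, 0.5) gives (1.3, 0.8, 0.6), which clamps to (1.0, 0.8, 0.6).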
Then, declare the following variables above frag():
```
#define NUM_SAMPLES 100

float _BlurWidth;
float _Intensity;
float4 _Center;
```
The first line defines the number of samples to take when blurring the image. A higher number gives better results but costs more performance. The other lines are the same variables you declared in the Properties block.
The real magic happens in the fragment shader. Replace frag() with this code:
```
fixed4 frag(v2f i) : SV_Target
{
    // 1
    fixed4 color = fixed4(0.0f, 0.0f, 0.0f, 1.0f);
    // 2
    float2 ray = i.uv - _Center.xy;
    // 3
    for (int i = 0; i < NUM_SAMPLES; i++)
    {
        float scale = 1.0f - _BlurWidth * (float(i) / float(NUM_SAMPLES - 1));
        color.xyz += tex2D(_MainTex, (ray * scale) + _Center.xy).xyz / float(NUM_SAMPLES);
    }
    // 4
    return color * _Intensity;
}
```
Here's what this code does:
- Declare color with a default value of black.
- Calculate the ray from the center to the UV coordinates of the current pixel.
- Sample the texture along the ray and accumulate the fragment color.
- Multiply color by _Intensity and return the result.
Save the shader and return to Unity. You can test the new shader by creating a new material and dragging the shader file onto it.
Select the material and assign occludersMapExample.png to its texture slot. You'll find this texture in RW/Textures.
Now, you can see the light ray effect in the preview window. Change the preview shape to a Plane and play around with the values to better understand the shader properties.
Awesome, the effect is almost done.
Adding a radial blur material instance
The following steps are similar to what you did for the occluders map. Return to VolumetricLightScattering.cs and, in LightScatteringPass, declare a material above the constructor:
private readonly Material radialBlurMaterial;
Then add this line in the constructor:
radialBlurMaterial = new Material(Shader.Find("Hidden/RW/RadialBlur"));
Nothing new so far; you just create the material instance. Now, make sure the material isn't missing by replacing this if statement:
```csharp
if (!occludersMaterial)
{
    return;
}
```
With this:
```csharp
if (!occludersMaterial || !radialBlurMaterial)
{
    return;
}
```
You can now configure the blur material.
Setting up the radial blur material
The blur shader needs to know the sun's position to use as the center point. Inside Execute(), replace // TODO: 2 with the following lines:
```csharp
// 1
Vector3 sunDirectionWorldSpace = RenderSettings.sun.transform.forward;
// 2
Vector3 cameraPositionWorldSpace = camera.transform.position;
// 3
Vector3 sunPositionWorldSpace = cameraPositionWorldSpace + sunDirectionWorldSpace;
// 4
Vector3 sunPositionViewportSpace = camera.WorldToViewportPoint(sunPositionWorldSpace);
```
This may sound like crazy voodoo magic, but it's actually quite simple:
- You get a reference to the sun from RenderSettings. You need the sun's forward vector because directional lights don't have a position in space.
- You get the camera's position.
- Adding the two gives you a point one unit away from the camera along the sun's direction. You use this as the sun's position.
- The shader expects a position in viewport space, but you did your calculations in world space. To fix this, use WorldToViewportPoint() to transform the point into the camera's viewport space.
Good job, you finished the hardest part.
Now pass the data to the shader with the following code:
```csharp
radialBlurMaterial.SetVector("_Center", new Vector4(
    sunPositionViewportSpace.x, sunPositionViewportSpace.y, 0, 0));
radialBlurMaterial.SetFloat("_Intensity", intensity);
radialBlurMaterial.SetFloat("_BlurWidth", blurWidth);
```
Remember, you only need the x and y components of sunPositionViewportSpace, since it represents a position on the screen.
Blurring the occluders map
Finally, you need to blur the occluders map.
Add the following line of code to run the shader:
Blit(cmd, occluders.Identifier(), cameraColorTargetIdent, radialBlurMaterial);
The render pass provides Blit(), a function that copies a source texture into a destination texture using a shader. Here, it runs the radial blur shader with occluders as the source texture and stores the output in the camera's color target. Because of the blend mode you set in the shader, it blends the output with the destination color.
You still need a reference to the camera's color target, so add the following line at the top of LightScatteringPass:
private RenderTargetIdentifier cameraColorTargetIdent;
Add a new method below the constructor to set this variable:
```csharp
public void SetCameraColorTarget(RenderTargetIdentifier cameraColorTargetIdent)
{
    this.cameraColorTargetIdent = cameraColorTargetIdent;
}
```
In VolumetricLightScattering, add this line at the end of AddRenderPasses():
m_ScriptablePass.SetCameraColorTarget(renderer.cameraColorTarget);
This passes the camera's color target to the render pass, which Execute() requires.
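Assuming you kept the template's EnqueuePass() call, AddRenderPasses() should now look roughly like this:

```csharp
public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
{
    renderer.EnqueuePass(m_ScriptablePass);
    // Hand the camera's color target to the pass so Blit() can write into it.
    m_ScriptablePass.SetCameraColorTarget(renderer.cameraColorTarget);
}
```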
Save the script and go back to the editor. Congratulations, you've completed the effect!
Go to the Scene view and you'll see your lighting effect at work. Click Play and move through the level to see it in action:
Where to Go From Here?
Download the final project using the Download Materials button at the top or bottom of this tutorial.
In this tutorial, you learned how to write your own renderer features to extend the Universal Render Pipeline. You also learned an interesting post-processing technique to visually enhance your games.
Feel free to explore the project files and experiment with the effect. To see how everything fits together, inspect the render pass with the Frame Debugger.
There's a little surprise hidden in the scene. Check out the day and night controller attached to the directional light. Enable Auto Increment and enjoy the sunrise. :]
If you want to learn more about the Universal Render Pipeline, looking at its source code helps a lot. Yes! It's freely available to everyone and is a great source of learning material. You'll find it under Packages/Universal RP.
Also check out the Boat Attack demo by Andre McGrail, which uses custom renderer features for water effects and other cool stuff.
We hope you enjoyed this tutorial! If you have any questions or comments, please feel free to join the discussion in the forum below.