Classic shadow maps in a nutshell

Light has three main characteristics: brightness (Multiplier), color (Color), and the shadows cast by the objects it illuminates (Shadows).

When arranging light sources in a scene, be sure to pay attention to their color. Daylight sources have a bluish tint, while an artificial light source should be given a yellowish color.

It should also be taken into account that the color of a source simulating street light depends on the time of day: if the scene is set in the evening, the lighting may take on the reddish hues of a summer sunset.

Various renderers offer their own shadow generation algorithms. The shadow cast by an object can tell a lot: how high the object is above the ground, what the structure of the surface the shadow falls on is, what kind of source illuminates the object, and so on.

In addition, a shadow can emphasize the contrast between the foreground and the background, as well as reveal an object that is outside the field of view of the virtual camera.

Depending on the shape of the shadow cast by the object, the scene may look realistic (Fig. 6.6) or not quite believable (Fig. 6.7).

As we said above, a real ray of light undergoes a large number of reflections and refractions, so real shadows always have blurry edges. In three-dimensional graphics, a special term is used to denote such shadows - soft shadows.

Achieving soft shadows is quite difficult. Many renderers solve the problem by offering non-point light sources with a rectangular or other shape (as in the 3ds max 7 interface). Such a source emits light not from a single point but from every point of its surface; the larger the area of the light source, the softer the rendered shadows.

There are different approaches to rendering shadows: using a shadow map (Shadow Map), ray tracing (Raytraced), and global illumination (Global Illumination). Let's consider them in order.

Fig. 6.6. Object with soft shadows

Fig. 6.7. Object with hard shadows

Fig. 6.8. The Shadow Map Params rollout of a light source

Using a shadow map (Shadow Map) produces blurry shadows with soft edges. Its main setting is the size of the shadow map (the Size parameter) in the Shadow Map Params rollout (Fig. 6.8). Reducing the map size also reduces the sharpness of the resulting shadows.

The tracing method allows you to get perfectly shaped shadows, which, however, look unnatural due to their sharp outline. Tracing is the tracking of the paths of individual light rays from the light source to the camera lens, taking into account their reflection from objects in the scene and refraction in transparent media. The tracing method is often used to render scenes that have specular reflections.

Starting with 3ds max 5, the Area Shadows method, based on a slightly modified tracing algorithm, is used to obtain soft shadows. Area Shadows calculates the shadows from an object as if the scene contained not one light source but a group of point sources evenly distributed over a certain area.

Although ray tracing accurately reproduces the fine details of generated shadows, it is not ideal for rendering due to the resulting shadows being hard-edged.

The global illumination method (Radiosity) allows you to achieve soft shadows in the final image. This method is an alternative to ray tracing. While ray tracing renders only those parts of the scene that are hit by light rays, global illumination calculates the scattering of light into unlit or shadowed parts of the scene based on an analysis of every image pixel, taking into account all the reflections of light rays in the scene.

Global illumination produces a realistic image, but rendering places a heavy load on the workstation and takes a long time. Therefore, in some cases it makes sense to use a lighting rig that simulates the effect of scattered light: light sources placed so that their positions coincide with the spots of direct illumination. Such sources should cast no shadows and have low brightness. This method certainly does not produce as realistic an image as a true global illumination algorithm, but in scenes with simple geometry it can come in handy.

There are several algorithms for calculating global illumination; one method of calculating reflected light is photon tracing (Photon Mapping). It computes global illumination by building a so-called photon map: information about the illumination of the scene collected by tracing.

The advantage of the photon tracing method is that once saved as a photon map, the photon tracing results can later be used to create a global illumination effect in 3D animation scenes. The quality of global illumination calculated using photon tracing depends on the number of photons, as well as the depth of tracing. With the help of photon tracing, you can also calculate the caustic effect (for more information about the caustic effect, see the section “General information about visualization in 3D graphics” in Chapter 7).

Among the standard light sources, three (namely Spot, Direct, and Omni) allow us to choose the type of shadows to render. With the standard Default Scanline Renderer (DSR), the options of interest are: Advanced ray-traced shadows, Area shadows, Ray-traced shadows, and Shadow maps.

When a shadow type is selected, a shadow parameters rollout, whose name begins with the name of that type, appears among the light source's parameter rollouts.

Shadow Map

The simplest shadow type and the least demanding of computing resources.

  1. Size: the size of the map on which the shadow is based. The larger the map, the better the calculated shadow. It is best to use powers of two (2^n).
  2. Sample Range: blurring of the shadow edge. Increasing this parameter helps get rid of jagged edges at low map resolutions.
  3. The parameter controlling the Bias value. It is disabled by default, which gives the best result in most cases; enabling it can help in animation.
  4. If disabled, light passes through a surface when it hits polygons whose normals face away from it. Enabling the option produces correct shadows.

In Fig.1, the top row of images clearly shows how shadow quality changes as the Size parameter increases. Even a significant increase in map size does not solve the problem of jagged shadow edges, although the shadow certainly becomes more detailed.

In the second row, in all three cases, the size of the map remains the same, but the Sample Range parameter changes. Gradually increasing it, we got rid of the jaggedness by blurring the edge of the shadow.

Fig.1 Changing the quality of a shadow of the Shadow Map type with different parameters

Ray Traced Shadows

Shadows of this type are calculated based on a tracing algorithm. They have sharp edges and are almost impossible to adjust.

Ray-traced shadows are more accurate than shadow maps. In addition, they can take the transparency of objects into account, but they are "dry" and crisp, which in most cases does not look natural. They are also more demanding of computer resources than shadow maps.

  1. The offset of the cast shadow from the object (bias).
  2. The tracing depth, which controls the detail of the shadow. Increasing this value can significantly increase render time.

Ray-traced shadows from an Omni light source take longer to render than ray-traced shadows from a Spot or Directional one.

Fig.2 Ray-Traced Shadows from opaque and transparent objects

Advanced Ray Traced Shadows

Shadows of this type are very similar to ray-traced shadows but, as the name implies, have more advanced settings that allow more natural and correct results.

  1. Shadow generation method:
    Simple - a single ray leaves the light source. The shadow supports no anti-aliasing or quality settings.
    1-Pass Antialias - a bundle of rays is emitted; the same number of rays (set by Shadow Quality) is reflected from each illuminated surface.
    2-Pass Antialias - the same, but two bundles of rays are emitted.
  2. If disabled, light passes through a surface when it hits polygons whose normals face away from it. Enabling the option produces correct shadows.
  3. The number of rays emitted by the illuminated surface.
  4. The number of secondary rays emitted by the illuminated surface.
  5. The radius (in pixels) used to blur the shadow edge. Increasing it improves the quality of the blur; if fine details are lost when the edges are blurred, compensate by increasing Shadow Integrity.
  6. The offset of the cast shadow from the object.
  7. The parameter controlling the randomness of the rays. By default the rays follow a strict grid, which can cause unpleasant artifacts; adding jitter makes the shadow look more natural.
    Recommended values are 0.5-1.0, but softer shadows require a higher Jitter Amount.

Area Shadows

This type of shadow takes the dimensions of the light source into account, so you can get natural extended shadows that "split" and blur as they move away from the object. 3dsMax obtains such shadows by blending a number of shadow "samples". The more samples and the better the blending, the better the calculated shadow.

  1. The shape of the imaginary light source, which determines the character of the shadow:
    Simple - a single ray leaves the light source. The shadow supports no anti-aliasing or quality settings.
    Rectangle Light - simulates light emitted from a rectangular area.
    Disc Light - the light source behaves as if it had the shape of a disc.
    Box Light - simulates a box-shaped light source.
    Sphere Light - simulates a spherical light source.
  2. If disabled, light passes through a surface when it hits polygons whose normals face away from it. Enabling the option produces correct shadows.
  3. Controls the number of emitted rays (non-linearly). The higher the value, the more rays and the higher the quality of the shadow.
  4. The parameter responsible for the quality of the shadow. For a rational calculation, always set it higher than Shadow Integrity.
  5. The radius (in pixels) used to blur the shadow edge. Increasing it improves the quality of the blur; if fine details are lost when the edges are blurred, compensate by increasing Shadow Integrity.
  6. The offset of the cast shadow from the object.
  7. The parameter controlling the randomness of the rays. By default the rays follow a strict grid, which can cause unpleasant artifacts; adding jitter makes the shadow look more natural.
  8. The dimensions of the imaginary source: Length, Width, and Height (the latter only active for Box Light and Sphere Light).

Let's take a look at Fig.3. In the first fragment, several shadow "samples" are superimposed on top of each other without any blending. In the second, they are blended (Jitter Amount raised from 0.0 to 6.0); blended samples read as a more natural shadow, but its quality still leaves much to be desired. The third fragment shows a shadow of excellent quality (Shadow Integrity and Shadow Quality raised from 1 to 8 and 10 respectively).

The second row in Fig.3 illustrates how the character of the shadow changes as the dimensions of the imaginary source grow. Here the imaginary source is of type Rectangle Light (a flat rectangle). As the source area increases, the shadow becomes more blurred.

Fig.3 Changing the quality of the shadow of the Area Shadow type with different parameters

Some of the parameter values above are only recommendations; everything is limited only by your imagination, and the best way to learn is to experiment. Don't be afraid to experiment with light: catch the mood of the future picture and tune the settings to it.

Fig.4 shows a chess knight with a material based on a simple Wood procedural texture, lit by three light sources tinted in different colors. A simple setup, yet the piece looks good.

Fig. 4 Chess piece "Knight". Object visualization

Summary

Lighting is one of the most important steps in working on a 3D scene. At first glance, it may seem that the dry information of the lesson cannot be applied to creative work. However, with due ingenuity and diligence, incredible results can be achieved. After all, all digital images are just sets of zeros and ones, and 3dsMax is just your next tool, just like a pencil or a brush.

The original shadow mapping algorithm was invented a long time ago. Its working principle is as follows:
  1. We draw the scene into a texture (shadow map) from the position of the light source. It is important to note here that for different types of light sources, everything happens a little differently.
    Directional light sources (in a certain approximation, sunlight can be referred to as such) do not have a position in space, however, to form a shadow map, this position has to be chosen. Usually it is tied to the position of the observer, so that objects that are directly in the observer's field of view fall into the shadow map. When rendering, an orthographic projection is used.
    Projection light sources (lamps with an opaque shade, spotlights) have a certain position in space and limit the spread of light in certain directions. When rendering the shadow map in this case, the usual perspective projection matrix is used.
    Omnidirectional light sources (an incandescent lamp, for example), although they have a certain position in space, spread light in all directions. To correctly build shadows from such a light source, you need to use cube textures (cube maps), which, as a rule, means drawing the scene into a shadow map 6 times. Not every game can afford dynamic shadows from this kind of light, and not every game needs it. If you are interested in how this approach works, there is a thread on this topic.
    In addition, there is a subclass of shadow mapping algorithms (LiSPSM , TSM , PSM , etc.), which use non-standard projection view matrices to improve the quality of shadows and eliminate the shortcomings of the original approach.
    No matter how the shadow map is formed, it invariably contains the distance from the light source to the nearest visible (from the position of the light source) point or a function of this distance in more complex versions of the algorithm.
  2. We draw the scene from the main camera. To understand whether a point of an object is in shadow, it is enough to transform the coordinates of that point into the space of the shadow map and make a comparison. The space of the shadow map is defined by the view-projection matrix that was used when the map was rendered. Transforming the point into this space and remapping its coordinates from the range [-1;1] into [0;1], we get texture coordinates. If the resulting coordinates fall outside [0;1], the point is not covered by the shadow map and can be considered unshadowed. Sampling the shadow map at the obtained texture coordinates gives the distance between the light source and the object point closest to it. If we compare this with the distance between the current point and the light source, the point is in shadow when the value in the shadow map is smaller. Logically this is simple: if the value from the shadow map is smaller, then at that location there is an object closer to the light source, and we are in its shadow.
Shadow mapping is by far the most common algorithm for rendering dynamic shadows. The implementation of one or another modification of the algorithm can be found in almost any graphics engine. The main advantage of this algorithm is that it provides fast formation of shadows from arbitrarily complex geometric objects. At the same time, the existence of a wide range of variations of the algorithm is largely due to its shortcomings, which can lead to very unpleasant graphical artifacts. Problems specific to PSSM and ways to overcome them will be discussed below.

Parallel Split Shadow Mapping

Consider the following problem: it is necessary to draw dynamic shadows from objects that are at a considerable distance from the player without affecting the shadows from nearby objects. We restrict ourselves to directional sunlight.
This kind of task can be especially relevant in outdoor games, where in some situations the player can see the landscape hundreds of meters in front of him. At the same time, the further we want to see the shadow, the more space should fall into the shadow map. In order to keep the proper resolution of objects in the shadow map, we are forced to increase the resolution of the map itself, which first leads to a decrease in performance, then we run into a limit on the maximum size of the render target. As a result, balancing between performance and shadow quality, we will get shadows with a well-marked aliasing effect, which is poorly masked even by blurring. It is clear that such a solution cannot satisfy us.
To solve this problem, we can come up with a projection matrix such that objects close to the player receive more area in the shadow map than distant objects. This is the main idea of the Perspective Shadow Mapping (PSM) algorithm and a number of related algorithms. The main advantage of this approach is that the scene rendering process is practically unchanged; only the way the view-projection matrix is calculated changes, so it can easily be built into an existing game or engine without major modifications. The main disadvantage is the boundary conditions. Imagine that we are drawing shadows from the Sun at sunset: as the Sun nears the horizon, objects in the shadow map start to overlap heavily, and an atypical projection matrix can exacerbate the situation. In other words, PSM-class algorithms work well only in certain situations, for example when the game draws shadows from a "fixed Sun" close to the zenith.
A fundamentally different approach is proposed in the PSSM algorithm. Some may know it as Cascaded Shadow Mapping (CSM). Formally, these are different algorithms; I would even say that PSSM is a special case of CSM. This algorithm proposes dividing the visibility pyramid (frustum) of the main camera into segments: in PSSM with boundaries parallel to the near and far clipping planes, in CSM with no strictly regulated type of division. A separate shadow map is built for each segment (a split in the algorithm's terminology). An example of such a division is shown in the figure below.


In the figure you can see the division of the visibility pyramid into 3 segments, each marked with its bounding box (in 3D these are boxes). A separate shadow map will be built for each of these limited parts of space. The attentive reader will note that I used axis-aligned bounding boxes here; unaligned ones can also be used, at the cost of a more complex object clipping algorithm and a slightly different way of forming the view matrix from the light source's position. Since the visibility pyramid widens with distance, the area of the segments closer to the camera can be significantly smaller than the area of the more distant ones. With the same shadow map resolution, this means higher effective resolution for shadows from nearby objects. In the PSSM article in GPU Gems 3, the following scheme is proposed for calculating the split distances of the visibility pyramid:



Each split boundary interpolates between a logarithmic and a uniform partitioning:

C_i = λ · n · (f / n)^(i/m) + (1 − λ) · (n + (f − n) · (i/m))

where i is the partition index, m is the number of partitions, n is the distance to the near clipping plane, f is the distance to the far clipping plane, and λ is the coefficient that determines the interpolation between the logarithmic and uniform partitioning scales.

Common implementation details
The PSSM algorithm implemented in Direct3D 11 and OpenGL has a lot in common. To implement the algorithm, it is necessary to prepare the following:
  1. Several shadow maps (one per split). At first glance it seems that to fill several shadow maps you need to draw the objects several times. In fact, this does not need to be done explicitly: we will use the hardware instancing mechanism. For this we need a so-called texture array as the render target and a simple geometry shader.
  2. An object clipping mechanism. Objects of the game world can have different geometric shapes and positions in space. Extended objects can be visible in several shadow maps, small objects in only one, and an object right on the border of neighboring segments must be drawn into at least 2 shadow maps. Thus, a mechanism is needed to determine which subset of shadow maps an object falls into.
  3. A mechanism for determining the optimal number of partitions. Rendering shadow maps for each segment per frame can be a waste of computational resources. In many situations, the player sees only a small part of the game world in front of him (for example, he looks at his feet, or his eyes rest on the wall in front of him). It is clear that this greatly depends on the type of view in the game, but it would be nice to have such optimization.
As a result, we get the following algorithm for generating projection view matrices for rendering shadow maps:
  1. We calculate the distances for splitting the pyramid of visibility for the worst case. The worst case here is that we see shadows up to the far clipping plane of the camera.

    The code

    void calculateMaxSplitDistances()
    {
        float nearPlane = m_camera.getInternalCamera().GetNearPlane();
        float farPlane = m_camera.getInternalCamera().GetFarPlane();
        for (int i = 1; i < m_splitCount; i++)
        {
            float f = (float)i / (float)m_splitCount;
            float l = nearPlane * pow(farPlane / nearPlane, f);
            float u = nearPlane + (farPlane - nearPlane) * f;
            // array index restored; it was lost in formatting
            m_maxSplitDistances[i - 1] = l * m_splitLambda + u * (1.0f - m_splitLambda);
        }
        m_farPlane = farPlane + m_splitShift;
    }

  2. Determine the distance between the camera and the farthest visible point of an object that casts a shadow. It is important to note that objects may or may not cast shadows. For example, a flat hilly landscape can be set not to cast shadows, in which case the lighting algorithm can be responsible for its shading. Only shadow-casting objects are drawn into the shadow map.

    The code

    float calculateFurthestPointInCamera(const matrix44& cameraView)
    {
        bbox3 scenebox;
        scenebox.begin_extend();
        for (size_t i = 0; i < m_entitiesData.size(); i++)
        {
            if (m_entitiesData[i].isShadowCaster)
            {
                bbox3 b = m_entitiesData[i].geometry.lock()->getBoundingBox();
                b.transform(m_entitiesData[i].model);
                scenebox.extend(b);
            }
        }
        scenebox.end_extend();

        float maxZ = m_camera.getInternalCamera().GetNearPlane();
        for (int i = 0; i < 8; i++)
        {
            vector3 corner = scenebox.corner_point(i);
            float z = -cameraView.transform_coord(corner).z;
            if (z > maxZ) maxZ = z;
        }
        return std::min(maxZ, m_farPlane);
    }

  3. Based on the values obtained in steps 1 and 2, we determine the number of segments we really need and their splitting distances.

    The code

    void calculateSplitDistances()
    {
        // calculate how many shadow maps we really need
        m_currentSplitCount = 1;
        if (!m_maxSplitDistances.empty())
        {
            for (size_t i = 0; i < m_maxSplitDistances.size(); i++)
            {
                float d = m_maxSplitDistances[i] - m_splitShift;
                if (m_furthestPointInCamera >= d) m_currentSplitCount++;
            }
        }

        float nearPlane = m_camera.getInternalCamera().GetNearPlane();
        for (int i = 0; i < m_currentSplitCount; i++)
        {
            float f = (float)i / (float)m_currentSplitCount;
            float l = nearPlane * pow(m_furthestPointInCamera / nearPlane, f);
            float u = nearPlane + (m_furthestPointInCamera - nearPlane) * f;
            m_splitDistances[i] = l * m_splitLambda + u * (1.0f - m_splitLambda);
        }
        // boundary values; array indices restored (lost in formatting)
        m_splitDistances[0] = nearPlane;
        m_splitDistances[m_currentSplitCount] = m_furthestPointInCamera;
    }

  4. For each segment (the boundaries of the segment are determined by the near and far distances), we calculate the bounding box.

    The code

    bbox3 calculateFrustumBox(float nearPlane, float farPlane)
    {
        vector3 eye = m_camera.getPosition();
        vector3 vZ = m_camera.getOrientation().z_direction();
        vector3 vX = m_camera.getOrientation().x_direction();
        vector3 vY = m_camera.getOrientation().y_direction();
        float fov = n_deg2rad(m_camera.getInternalCamera().GetAngleOfView());
        float aspect = m_camera.getInternalCamera().GetAspectRatio();

        float nearPlaneHeight = n_tan(fov * 0.5f) * nearPlane;
        float nearPlaneWidth = nearPlaneHeight * aspect;
        float farPlaneHeight = n_tan(fov * 0.5f) * farPlane;
        float farPlaneWidth = farPlaneHeight * aspect;
        vector3 nearPlaneCenter = eye + vZ * nearPlane;
        vector3 farPlaneCenter = eye + vZ * farPlane;

        bbox3 box;
        box.begin_extend();
        box.extend(vector3(nearPlaneCenter - vX * nearPlaneWidth - vY * nearPlaneHeight));
        box.extend(vector3(nearPlaneCenter - vX * nearPlaneWidth + vY * nearPlaneHeight));
        box.extend(vector3(nearPlaneCenter + vX * nearPlaneWidth + vY * nearPlaneHeight));
        box.extend(vector3(nearPlaneCenter + vX * nearPlaneWidth - vY * nearPlaneHeight));
        box.extend(vector3(farPlaneCenter - vX * farPlaneWidth - vY * farPlaneHeight));
        box.extend(vector3(farPlaneCenter - vX * farPlaneWidth + vY * farPlaneHeight));
        box.extend(vector3(farPlaneCenter + vX * farPlaneWidth + vY * farPlaneHeight));
        box.extend(vector3(farPlaneCenter + vX * farPlaneWidth - vY * farPlaneHeight));
        box.end_extend();
        return box;
    }

  5. We calculate the shadow view-projection matrix for each segment.

    The code

    matrix44 calculateShadowViewProjection(const bbox3& frustumBox)
    {
        const float LIGHT_SOURCE_HEIGHT = 500.0f;

        vector3 viewDir = m_camera.getOrientation().z_direction();
        vector3 size = frustumBox.size();
        vector3 center = frustumBox.center() - viewDir * m_splitShift;
        center.y = 0;

        auto lightSource = m_lightManager.getLightSource(0);
        vector3 lightDir = lightSource.orientation.z_direction();

        matrix44 shadowView;
        shadowView.pos_component() = center - lightDir * LIGHT_SOURCE_HEIGHT;
        shadowView.lookatRh(shadowView.pos_component() + lightDir, lightSource.orientation.y_direction());
        shadowView.invert_simple();

        matrix44 shadowProj;
        float d = std::max(size.x, size.z);
        shadowProj.orthoRh(d, d, 0.1f, 2000.0f);

        return shadowView * shadowProj;
    }

Clipping of objects is implemented as a simple intersection test between two bounding boxes (an object and a frustum segment). There is one important subtlety here: we may not see an object but still see its shadow. It is easy to guess that with the approach described above, all objects not visible in the main camera would be clipped away, and their shadows would disappear. To prevent this, I used a fairly common trick: extending the object's bounding box along the direction of light propagation, which gives a rough approximation of the region of space where the object's shadow is visible. As a result, for each object, an array of shadow map indices is formed, into which this object must be drawn.

The code

    // note: the template argument of std::shared_ptr was lost in formatting;
    // an entity type is assumed here
    void updateShadowVisibilityMask(const bbox3& frustumBox, const std::shared_ptr<Entity>& entity,
                                    EntityData& entityData, int splitIndex)
    {
        bbox3 b = entity->getBoundingBox();
        b.transform(entityData.model);

        // shadow box computation: extend the object's bbox along the light direction
        auto lightSource = m_lightManager.getLightSource(0);
        vector3 lightDir = lightSource.orientation.z_direction();
        float shadowBoxL = fabs(lightDir.z) < 1e-5 ? 1000.0f : (b.size().y / -lightDir.z);

        bbox3 shadowBox;
        shadowBox.begin_extend();
        for (int i = 0; i < 8; i++)
        {
            shadowBox.extend(b.corner_point(i));
            shadowBox.extend(b.corner_point(i) + lightDir * shadowBoxL);
        }
        shadowBox.end_extend();

        if (frustumBox.clipstatus(shadowBox) != bbox3::Outside)
        {
            int i = entityData.shadowInstancesCount;
            entityData.shadowIndices[i] = splitIndex;
            entityData.shadowInstancesCount++;
        }
    }


Now let's take a look at the rendering process and the Direct3D 11 and OpenGL 4.3 specific parts.
Implementation on Direct3D 11
To implement the algorithm on Direct3D 11, we need:
  1. An array of textures for rendering shadow maps. To create this kind of object, the D3D11_TEXTURE2D_DESC structure has an ArraySize field; thus, in C++ code we will not have anything like ID3D11Texture2D* array[N]. From the point of view of the Direct3D API, an array of textures differs little from a single texture. An important feature of using such an array in a shader is that we can determine which texture in the array a given object will be drawn into (the SV_RenderTargetArrayIndex semantic in HLSL). This is the main difference between this approach and MRT (multiple render targets), where one object is drawn into all the bound textures at once. For objects that need to be drawn into several shadow maps at once, we will use hardware instancing, which clones objects at the GPU level; the object can then be drawn into one texture of the array and its clones into the others. Since we will store only the depth value in the shadow maps, we will use the DXGI_FORMAT_R32_FLOAT texture format.
  2. Special texture sampler. In the Direct3D API, you can set special parameters for texture fetching, which will allow you to compare the value in the texture with a given number. The result in this case will be 0 or 1, and the transition between these values ​​can be smoothed with a linear or anisotropic filter. To create a sampler in the D3D11_SAMPLER_DESC structure, set the following parameters:

    samplerDesc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
    samplerDesc.ComparisonFunc = D3D11_COMPARISON_LESS;
    samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
    samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
    // BorderColor indices restored; they were lost in formatting
    samplerDesc.BorderColor[0] = 1.0f;
    samplerDesc.BorderColor[1] = 1.0f;
    samplerDesc.BorderColor[2] = 1.0f;
    samplerDesc.BorderColor[3] = 1.0f;
    Thus, we get bilinear filtering, comparison with the "less than" function, and texture sampling at coordinates outside the range returns 1 (i.e. no shadow).

Rendering will be carried out according to the following scheme:

Implementation in OpenGL 4.3
To implement the algorithm on OpenGL 4.3, we need everything the same as for Direct3D 11, but there are subtleties. In OpenGL, combined comparison sampling is only available for textures that contain a depth value (for example, in the GL_DEPTH_COMPONENT32F format). Therefore, we will render only to the depth buffer and remove the write to color (more precisely, we will attach only an array of textures for the depth buffer to the framebuffer). On the one hand, this saves some video memory and lightens the graphics pipeline; on the other, it forces us to work with normalized depth values.
The sampling parameters in OpenGL can be bound directly to the texture. They will be identical to those discussed earlier for Direct3D 11.

    const float BORDER_COLOR[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glBindTexture(m_shadowMap->getTargetType(), m_shadowMap->getDepthBuffer());
    glTexParameteri(m_shadowMap->getTargetType(), GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(m_shadowMap->getTargetType(), GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(m_shadowMap->getTargetType(), GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(m_shadowMap->getTargetType(), GL_TEXTURE_COMPARE_FUNC, GL_LESS);
    glTexParameteri(m_shadowMap->getTargetType(), GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(m_shadowMap->getTargetType(), GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameterfv(m_shadowMap->getTargetType(), GL_TEXTURE_BORDER_COLOR, BORDER_COLOR);
    glBindTexture(m_shadowMap->getTargetType(), 0);
A curious detail is the creation of the texture array, which OpenGL represents internally as a three-dimensional texture. No dedicated function exists for creating one: both are created with glTexStorage3D. The GLSL analogue of SV_RenderTargetArrayIndex is the built-in variable gl_Layer.
The rendering scheme has also remained the same:

Problems

The shadow mapping algorithm and its modifications have many problems. Often the algorithm has to be carefully tuned for a specific game or even a specific scene. A list of the most common problems and ways to solve them can be found in the literature. While implementing PSSM, I encountered the following:

Performance

Performance measurements were carried out on a computer with the following configuration: AMD Phenom II X4 970 3.79 GHz, 16 GB RAM, AMD Radeon HD 7700 Series, running Windows 8.1.

Average frame time. Direct3D 11 / 1920x1080 / MSAA 8x / full screen / small scene (~12k polygons per frame, ~20 objects)

Average frame time. OpenGL 4.3 / 1920x1080 / MSAA 8x / full screen / small scene (~12k polygons per frame, ~20 objects)

Average frame time. 4 splits / 1920x1080 / MSAA 8x / full screen / large scene (~1000k polygons per frame, ~1000 objects, ~500 object instances)

The results showed that on both large and small scenes, the OpenGL 4.3 implementation is generally faster. As the load on the graphics pipeline grows (more objects and instances, larger shadow maps), the speed difference between the implementations shrinks. I attribute the advantage of the OpenGL implementation to its way of generating the shadow map, which differs from the Direct3D 11 one (we used only the depth buffer, without writing to color). Nothing prevents us from doing the same in Direct3D 11, if we accept working with normalized depth values. However, this approach only works as long as we do not want to store additional data in the shadow map, or a function of the depth value instead of the depth value itself. It would also make some algorithm improvements (for example, Variance Shadow Mapping) difficult to implement.

Conclusions

The PSSM algorithm is one of the most successful ways to create shadows in large open spaces. It is based on a simple and clear splitting principle, which can be easily scaled by increasing or decreasing the quality of the shadows. This algorithm can be combined with other shadow mapping algorithms to produce more beautiful soft shadows or more physically correct ones. At the same time, the algorithms of the shadow mapping class often lead to the appearance of unpleasant graphical artifacts, which must be eliminated by fine-tuning the algorithm for a specific game.

Tags:

  • Shadow mapping
  • PSSM
  • Direct3D 11
  • OpenGL 4

Developer Blog Four: New Graphics Features

In the first graphics engine update diary, we briefly talked about some of the new graphics improvements coming to the game soon. We also stated that most of these improvements will be on par with modern game engines such as Unity 5, Unreal 4, or CryEngine 3. But what does this really mean?

First of all, we want you to understand that the update process will happen gradually and not all maps, characters, weapon textures and objects will be updated immediately. Most of these will continue to work with the game's current assets, but over time we'll update those as well to give Combat Arms a truly updated look.

It's worth noting that getting many of these graphics features to run on such an old engine, while staying backwards compatible with the old graphics mode and assets, required a lot of interesting and sometimes crazy workarounds within the limits of DirectX 9. It was a lot of fun!

Dynamic lighting and shadows

The original Lithtech Jupiter engine had limited support for dynamic lighting and shadows, although what it offered was fairly advanced for the era in which it was developed. In the original engine, lighting and shading were essentially static: they were pre-computed and stored as a texture that is used when rendering the map geometry. Even so, static lighting did allow some objects, such as character models and weapons, to be lit dynamically in places. Because that dynamic lighting was computed per vertex, various fine details and bumps had to be pre-computed and baked directly into the diffuse textures. And because they are baked, they cannot dynamically interact with different light sources and often do not match the lighting conditions. This makes the picture look very flat and dated for 2017.

The biggest improvement we've been able to make is to give both the static map geometry and the objects in the scene fully dynamic lighting and shadows. Dynamic lighting allows the scene to be rendered with per-pixel lighting, reflections, highlights, dynamic shadows, and bump mapping.

While introducing a single dynamic light source was not that difficult initially, the difficulty arose from the need to support many dynamic lights at the same time in an efficient manner. Support for many dynamic light sources has become a core feature of almost every modern game engine. The only way the Lithtech engine supported rendering multiple dynamic lights was essentially to redraw the entire scene several times, once for each light on the map. This is a slight exaggeration, since Lithtech can cluster the map geometry into blocks that are culled against a light's bounding sphere, so only those blocks need to be redrawn. But this still means a high CPU load from rendering the scene multiple times, and places even more restrictions on how complex the map geometry can be and how many objects can be on screen at the same time.

At first, the main directional sunlight was the only dynamic light source. All other indoor omnidirectional and spot lights were left static. But many of the game locations in Combat Arms are partly or entirely inside buildings, so the difference between static indoor and dynamic outdoor lighting was too noticeable.

Thus, dynamic lighting with multiple light sources required big changes in how the engine renders the scene and processes light. For this we used a rendering technique known as Deferred Shading. The topic of deferred lighting and shading is quite broad, and we briefly mentioned it in the first dev blog post. In essence, it moves all dynamic lighting into a post-process pass that is completely independent of scene complexity and can be handled almost entirely by the GPU with very little CPU load.

With all these optimizations, we were able to implement a complex physically based lighting shader. We then tested this lighting model extensively, comparing it to how similar scenes would render in other modern game engines such as Unity 5 and Unreal 4, as well as to fully ray-traced scenes created using professional offline renderers. We limited ourselves to what we could implement in real time on modern graphics hardware using modern lighting shaders. The results are quite comparable to ray tracing, with the main differences being reflection accuracy and light attenuation.

New dynamic lighting shaders in Combat Arms:

Ray-tracing version of the scene:

Shadow map

When implementing dynamic shadows, we faced a number of challenges in achieving modern, fully dynamic shadowing in Combat Arms. In the original Lithtech engine, most shadows are baked into the static light textures used when rendering map geometry. This is why not all objects cast shadows, especially dynamic objects: the shadow is fixed and cannot change. Although Lithtech has a feature that allows certain objects to cast dynamic shadows, it was practically unused in Combat Arms. In addition, this feature is too limited in scope, and dynamic objects themselves cannot receive shadows.

In the end, we decided to use a fully dynamic cascaded shadow map (CSM) for the main directional light from the sun, and a partially dynamic solution for the omnidirectional and spot lights, using a pre-calculated omnidirectional shadow map that can be applied to both world geometry and dynamic objects. This is similar to how other modern game engines provide dynamic shadowing.

What is a shadow map anyway? It's basically a very clever trick that games and renderers use to achieve GPU-accelerated dynamic shadowing. https://en.wikipedia.org/wiki/Shadow_mapping.

The shadow mapping algorithm boils down to rendering the scene from the light source's point of view into a texture, feeding that texture into the lighting pixel shader, and letting the GPU use it to decide whether each pixel is in shadow or not.

In game, the shadow map texture looks like this:

Please note that this texture has not 1, but 4 shadow maps. Each of these shadow maps is called a shadow map cascade. As the distance to the observed surface increases, a larger cascade is used, but with a lower resolution. This allows the shadows to extend very far but still fit into one fixed size texture. In this case, the shadows next to the observer will have a higher resolution. Shadow map resolution is directly related to how sharp and precise shadows can be.

Because the shadow map is just another camera in the scene, it can be rendered dynamically in real time. The shadow quality settings will affect the resolution of the shadow map texture.

Here's what the shadow map solution looks like when filtered.

Shadow map filtering

Using the shadow map alone results in very hard shadows. And although hard shadows can sometimes look good, they are not realistic.

In the real world, shadows are neither always hard nor always soft; they change depending on the size of the light source and the distance from the object casting them. You can see examples in the photo:

Notice how in these photos the shadows get softer as you move away from the objects that cast them. Most modern games still use the simplest shadow filtering method, called Percentage Closer Filtering (PCF), which applies a fixed amount of hardness or softness to all shadows.

These screenshots from the fairly modern games Doom and Overwatch show how most games do simple shadow filtering using PCF.

However, in Combat Arms, we decided to implement a more advanced shadow filtering technology that comes closer to what we see in real life. In the game, we simulate this using a technique developed by NVIDIA called Percentage-Closer Soft Shadows (PCSS).

Although this technology was introduced back in 2005, only now has graphics hardware become powerful enough to use it, and modern games have only begun to use it heavily for shadow map filtering in recent years. At the time of writing this blog, this technology is still not available in Unreal 4 or Unity 5. Although technically Unreal 4 provides an alternative solution based on distance field ray tracing, it only allows static geometry to cast shadows.

PCSS disabled:

PCSS Enabled:

Soft filtering with PCSS is applied when the shadow quality setting is high. Also, PCSS is currently used only for the main directional sunlight, at all quality levels. Unfortunately, PCSS is not compatible with hardware-accelerated shadow filtering: Shader Model 3 under DirectX 9 does not support the Gather4 shader instruction commonly used for shadow map filtering in DirectX 10+/Shader Model 4+ pixel shaders. Because of the cost of running PCSS in a Shader Model 3 pixel shader, anti-aliased filtering is used to reduce the load on the GPU.

Omnidirectional shadow mapping

With omnidirectional light sources, things are a little more complicated. Since they cast shadows in all directions, making shadow maps for them is not so easy: the scene has to be rendered with a kind of 360° camera, which usually means rendering it in several directions from the light's position. This scales poorly when there are many such lights. So for these we applied a pre-computed shadow map, calculated only once when the map is loaded instead of every frame. The precomputed shadow map can cast shadows onto dynamic objects in the scene, but dynamic objects themselves will not cast shadows. To get around this limitation, Lithtech's own object shadow feature is used to give objects shadows alongside the pre-calculated omnidirectional shadow maps. We also combine this with screen-space shadow tracing to give objects detailed self-shadows and contact shadows.

An interesting example of how a shadow map is used to project an omnidirectional shadow map onto a single texture:

It uses a full 360° projection method known as octahedral spherical parametrization, sometimes called "octahedral mapping", where the sphere is mapped onto a single texture via its projection onto the faces of an octahedron.

Traditionally, omnidirectional shadow maps use a cube map, which requires at least 6 separate textures to represent a complete 360° shadow map, but with this projection only 1 texture is used. This method also produces less distortion than most other 2D spherical projection methods, and unlike an unfolded cube map, it fits perfectly into a single square texture. Many of the existing maps in the game use a large number of omnidirectional lights as their primary lights, and this allows many of them to also support real-time dynamic shadowing.

Summing up

We believe that this combination of lighting and shading technologies will give the game a very modern look for 2017, and also that our solutions are quite competitive with what other game engines offer today. While these changes alone are not enough to "modernize" the game, we still have a lot of work to do to improve textures and assets that will take full advantage of the new features.

Of course, this is not all the changes, it was just an in-depth look at the technical details about the implementation of dynamic lighting and shading in the upcoming Combat Arms update. Also, it's important to note that the screenshots shown were taken using a build with current assets to demonstrate graphics features. They may not accurately reflect the final level of graphics after the release of the update.

I should also remind you that the updated graphics options are optional and you will be able to switch between engine versions after the update is released. We develop these graphics enhancements based on the capabilities of today's mid-range and high-end graphics hardware as of 2016. We want the game engine to be able to take full advantage of the capabilities of modern GPUs to provide the best possible picture quality. We're still constantly benchmarking and optimizing new graphics features, but be aware that slightly outdated or cheaper hardware will only be able to run the new graphics options at lower settings or resolutions. We will continue to support the old graphics engine, so players with older or less powerful hardware, or those who simply prefer the classic Combat Arms style, will be able to continue to enjoy the game as before.

Sincerely, Combat Arms team!

The shadow map is probably the most difficult part of creating an object's visual representation. We use shadow maps to store baked light and shadow.
They must be uniquely unwrapped, so that each part of the model has its own place in UV space and ends up with correct light and shadow information.
It is important to remember that the resolution of the shadow map is tiny compared to the size of the UV space.
It is also important to understand that the more a level needs to be optimized, the lower the shadow map resolution the level designer will use, sometimes going as low as 8x8 or 16x16 for smaller objects.
This means we must leave a lot of extra space around each section of the object's unwrap, so that dark areas do not bleed into the lit ones and destroy the illusion of visually correct shadows in the game.

There are 3 main ways to create such an unwrap:

BOX UNWRAP

This is often the most reliable method of unwrapping an object, since most environment models are close in shape to blocks combined into some kind of structure.
A continuous mesh (a mesh with no detached parts) is often very helpful when building the unwrap,
as it allows a more efficient distribution of the geometry in UV space.
It also works well even at a low shadow map resolution, because it produces a single gradient from dark to light.
A fragmented unwrap, by contrast, gives a more ambiguous result, and you may need to increase the shadow map resolution to counteract the sharp transitions.
We should avoid this wherever we can. Unfortunately, sometimes it is not possible to use a lower resolution or a single continuous unwrap.

PLANAR UNWRAP

This method is especially useful for flat structures such as walls with multiple chamfers or bulges. It is also very useful for large parts of building facades, such as apartment buildings.
A planar unwrap works much better on non-fragmented geometry, because then the only remaining task is "relaxing" the unwrapped mesh.
It is also a good rule of thumb to leave more horizontal than vertical space in such an unwrap, since shadows tend to fall from the side at a slightly elevated angle rather than straight down. More horizontal space thus leaves more room for sharp shadows, because designers tend to choose angled lighting, which creates more interesting shadows than lighting from directly above.

CYLINDRICAL UNWRAP

Most other shapes can be thought of as variations of a cylinder, unless of course they are close to boxes or planes.
A cylindrical unwrap works well for many structures that have a front and sides but no back; otherwise we would use the BOX UNWRAP method.

Examples

This was a continuous mesh, so it was easy to unwrap with BOX UNWRAP and simply lay it out horizontally to use as much shadow map space as possible.
Bottom surfaces that might be visible in the middle image have been removed, as they will almost always be black,
and if they were connected to the rest of the unwrap, their shadows would bleed as dark spots onto the walls where they should not be. The same is true for the top edges, except that they would always be light.


This unwrapping method allows us to have an almost perfect shadow map in the game at 32x32 resolution. The geometry doesn't have any seams. Where there should be a shadow, we see thin black lines, and where there should not be, there is none.


Here we see that it is necessary to use as much space as possible, since the shadow map will cover the entire unwrap anyway.
That is why, between an object aspect ratio of 1:1 and 1:7, you will see a significant difference. You can also see that some parts of the unwrap are separated and moved away from the main mesh.
This is because those parts will always remain in shadow, and they should not affect the rest of the shadow map.


Even on large facades like this one, a planar unwrap gives a good result. This mesh is continuous, which helps,
but in this case everything would work the same even if the unwrap were divided into several vertical or horizontal strips, although small gaps would have to be left between them.


You can see that the tightly fitting geometry made it easy to lay out the UVs. You can also see that padding is left between the parts of the unwrap so that the dark areas do not affect the light ones.
The lower the resolution of the map, the more padding is required.


You can see some fairly aggressive distortion on the intersecting vertical pieces that hold the railing together.
Note that the central support is divided into two parts instead of three, as if we had cut it along the edges of the central section. This is done to reduce the number of seams and provide smooth lighting over a larger area.


Some projects fail to follow these simple rules, as in the screenshot below.


When there are this many individual elements, we have no choice but to increase the texture resolution; otherwise we would waste a lot of space on padding between the unwrap's elements, and it would look terrible in the game.
So the shadow map resolution was bumped up to 128x128. It still doesn't look perfect, but not badly enough to ruin the visual look of the object in the game.


Sometimes unwrapping an object is easy: it's enough to break it into several reasonable parts and then simply "relax" the unwrap. A great example is the object below.


This structure is essentially a cylinder with a flat base, so both basic unwrapping methods are used here.
A planar unwrap projects parts of the geometry down the Z axis; a "relax" modifier is then applied and the vertex positions are adjusted a bit to make sure nothing gets too little coverage.
In the middle is a case similar to the base: the central part is split and unwrapped planar instead of cylindrically in order to provide a larger coverage area.
As always, we care more about coverage than about a 1:1 aspect ratio. It is a big advantage to place seams where they actually occur on the model, as this makes the shadows look more natural.
If your object has deep cuts or extremely sharp joints in the geometry, those are great places to hide a seam.

Lightmap Coordinate Index

By default, the first set of UVs (index 0) of the static mesh will be used when creating the shadow map for the static lighting.
This means that the same set of coordinates that is used to apply materials to the mesh will also be used for static lighting.
This method is often not ideal. One of the reasons for this is that the UVs used to generate the shadow map must be unique,
which means that each mesh face must not overlap any other surface in UV space. The reason for this is fairly obvious: if the faces overlap each other on the UV map,
the part of the shadow map corresponding to this space will be applied to both faces. This will lead to incorrect lighting, the appearance of shadows where, in principle, they should not be.
Static meshes have the property LightmapCoordinateIndex, which allows you to use the specified UV for the shadow map. Set this property to point to a set of UVs that are properly set up for lighting.

UV charts and padding

Groups of isolated triangles with contiguous UVs are called UV charts.

You should split the unwrap into charts and place them separately if you want to keep the shadows of one chart from influencing another. When adding padding, remember a simple rule:
the padding must be at least 4 texels wide, since DXT compression works with 4x4 blocks.

  1. Wasted padding
  2. Required padding

This means that for a shadow map with a resolution of 32, the padding between parts of the UV map should be 12.5% of the entire UV space.
However, be aware that using too much padding between parts of the UVs will result in the shadow map memory being wasted at higher resolutions.
The closer you can place the UV charts, the better. This will reduce the amount of wasted memory.


This is far from an ideal sweep.

One example of an unwrapping problem is excessive fragmentation. You can see how shadows that should stay on the inner parts of the object bleed onto the outer edges.
Another potential pitfall is relying on automatic unwrapping, which can lead to the same problems.


The best way to unwrap a shadow map is to model the entire mesh as one continuous element, or to manually unwrap.


This will give a single unfold that has almost no seams and is much more efficient.

The end result is a mesh that lights up properly without any artifacts.


An additional benefit of this method is that it also typically reduces the number of vertices and triangles required for a given model.

