
DEFERRED SHADING, SHADOWING ALGORITHMS AND HDR RENDERING

TNCG14

- Advanced Computer Graphics Programming

Jonathan Melchert - jonme610

Joakim Kilby - joaki259

Tuesday 24th May, 2011 17:41

Abstract
In this report we will examine algorithms for deferred rendering, screen space ambient occlusion, variance shadow mapping, and HDR rendering with tone mapping and bloom. We will talk about the background of these algorithms, describe their implementations, present the results both in text and as figures, and finally discuss our results and possible improvements to the methods we've implemented.

1 Introduction

In this report we will examine the use of deferred rendering as an alternative to forward rendering, and implement two shadowing algorithms as well as an HDR tone mapping algorithm with bloom.

Deferred rendering is, as mentioned above, an alternative to the traditional forward rendering pipeline; it differs from traditional rendering in that it only needs to draw the scene geometry once. The general shadowing algorithm implemented is the variance shadow maps algorithm. The use of variance shadow maps allows us to perform filtering of the depth map associated with a light source and thus create softer shadows. As a complement to the general shadowing algorithm we have implemented screen space ambient occlusion. This algorithm is a real-time approximation of ambient occlusion in a scene, and allows us to compute local occlusion around pixels/fragments. In order to further improve the realism and visual quality of our scene we have implemented HDR rendering with tone mapping, in an attempt to mimic the behaviour of human eyes in an environment which contains both bright and dark areas. We conclude that the use of screen space ambient occlusion provides vital lighting cues in areas such as corners, but comes at a great computational cost and should thus be used with care.

2 Background

2.1 Deferred Shading

Deferred shading is an increasingly popular shading technique used as an alternative to the forward rendering pipeline. It is used in games such as Crysis, Starcraft II, Killzone and Battlefield 3. Deferred shading differs from traditional rendering algorithms in that it only has to draw the scene geometry once, storing geometry attributes in a set of textures commonly known as the G-Buffer. Lighting is then calculated in a separate pass as a 2D post effect, where the scene can be reconstructed from the information stored in the textures. One of the advantages of this approach is that lighting computations are only done for pixels that have been determined to be visible. This also allows for a larger number of dynamic lights in a scene without performance drops. The G-Buffer pass makes use of multiple render targets and outputs geometry information such as normals, color, position/depth and material properties. These values are stored in textures, e.g. the normals can be stored in a three-channel texture where the x, y and z-components are each stored in a separate channel. This is the only pass where the geometry needs to be sent to the graphics device for rendering. The lighting pass and other post-effect passes are thereafter performed on screen-aligned quads. Disadvantages of deferred shading are the excessive use of memory bandwidth when storing all this information in higher precision floating point textures, and the limited material model when surface material properties have to be stored in textures. Transparency is not directly supported either, because only information about the closest surface is stored in the G-Buffer.

2.2 Shadow Mapping

Shadow mapping is a technique used to add shadows to a rendered scene in real time; the concept was introduced by Lance Williams in 1978 [6]. The basic algorithm for determining whether a pixel lies in shadow works by rendering the scene from the light source's point of view and storing the pixel depths as a shadow map. In the lighting pass the positions in view space are transformed into light space, and the depth of the rendered pixel from the camera's point of view is compared to that of the depth map. Assuming a right-hand oriented coordinate system, the pixel lies in shadow if the z-value obtained from the texture is less than the z-value of the rendered pixel. Shadow mapping as proposed by Williams [6] is a popular and efficient method for creating shadows in real time, but unfortunately the technique suffers from several artifacts such as unnaturally sharp shadows, magnification and minification. Magnification occurs when a texel of the shadow map covers a large area in screen space, and conversely minification occurs when several texels of the shadow map are mapped to the same screen-space pixel. Unnaturally sharp shadows occur since a screen-space pixel will be classified as either in shadow or not; there is no soft transition in between where a pixel could be partially shadowed. A solution to these problems would be filtering of the shadow map. However, simply filtering the depth values stored in the texture does not help: a pixel would still be classified as either in shadow or not in the end. A solution to the problem of properly filtering a shadow map, called variance shadow mapping, was proposed by William Donnelly and Andrew Lauritzen in 2006 [2].
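For reference, a minimal GLSL sketch of this basic depth comparison is given below. It assumes the fragment position has already been transformed into light space with its depth mapped to the [0, 1] range; the small bias against self-shadowing is a common practical addition rather than part of the original description, and all names are illustrative.

```glsl
// Minimal sketch of the classic shadow map test (not the VSM variant of section 2.2.1).
// shadowCoord: fragment position in light space after perspective division,
// xy in [0,1] texture coordinates, z the fragment depth in [0,1].
uniform sampler2D shadowMap;

float shadowTest(vec3 shadowCoord)
{
    float storedDepth = texture2D(shadowMap, shadowCoord.xy).r; // closest depth seen from the light
    float bias = 0.005;                                         // reduces self-shadowing ("shadow acne")
    // Lit (1.0) if the fragment is not farther from the light than the stored surface.
    return (shadowCoord.z - bias <= storedDepth) ? 1.0 : 0.0;
}
```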

2.2.1 Variance Shadow Mapping

The main idea behind variance shadow maps is to represent the depth data as something which can be filtered linearly, thus making use of algorithms and hardware which work with colors and other linear data. The algorithm is similar to the standard shadow mapping algorithm, except that instead of writing only the depth to the shadow map, the depth d and the squared depth d^2 are written into a two-component variance shadow map. By filtering over a region of a variance shadow map, two so-called moments M1 and M2 of the depth distribution of the current region are obtained, where

$$M_1 = E(x) = \int x\, p(x)\, dx, \qquad M_2 = E(x^2) = \int x^2 p(x)\, dx$$

From these two moments the mean $\mu$ and variance $\sigma^2$ of the depth distribution can be obtained as

$$\mu = E(x) = M_1, \qquad \sigma^2 = E(x^2) - E(x)^2 = M_2 - M_1^2$$

With the help of the variance, Chebyshev's inequality [2] can be used to compute an upper bound on the probability that the current surface at a certain depth d is occluded:

$$P(x \geq d) \leq p_{max}(d) = \frac{\sigma^2}{\sigma^2 + (d - \mu)^2} \qquad (1)$$

This inequality is valid for $d > \mu$; if $d \leq \mu$ then $p_{max} = 1$ and the surface is fully lit. Donnelly and Lauritzen [2] have shown that this inequality becomes an equality for a single planar receiver and sender, which is a fair approximation of many scenes in a local neighborhood around a pixel. Thus $p_{max}$ gives the probability that an area is shadowed and may be used directly as the amount with which the area is shadowed, giving softer shadows as the probability diminishes.

In addition to providing a way of filtering the shadow map, variance shadow maps can also be used as an effective way of removing shadow biasing and the problem of polygons which span a large range of depth values. This is achieved through a modification of the M2 quantity. Instead of simply computing it from a constant depth d, the partial derivatives of the depth with respect to x and y may be included. This gives the following equations for the moments, where f is the depth as a function of x and y, e.g. a rendered texture; see [2] for a full derivation:

$$M_1 = E(x) = \mu \qquad (2)$$

$$M_2 = E(x^2) = \mu^2 + \frac{1}{4}\left[\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2\right] \qquad (3)$$

2.3 Screen Space Ambient Occlusion

Ambient occlusion refers to the shadowing of ambient light; it is a technique used in global illumination to approximate the amount of incoming ambient light which is received at a surface point. As ambient light does not have a specific direction, the ambient occlusion is independent of light direction. The intensity of the ambient light at a surface point depends on how much the surface point is occluded by other surfaces in the scene. This is expressed in equation 4, where E(p, n) is the resulting intensity at a surface point p with normal n, $L_a$ is the total incoming ambient light, $\theta$ is the angle between the surface normal and an outgoing ray direction l from the point p, and v(p, l) is a visibility function which equals zero if a ray is blocked by another object and one otherwise.

$$E(p, n) = L_a \int_\Omega v(p, l) \cos(\theta)\, d\omega \qquad (4)$$

Screen space ambient occlusion is a technique used to approximate the ambient occlusion effect in real time. It was first used in the PC game Crysis, released in 2007 [3], and has since become an extremely popular feature in many games. The algorithm is implemented as a pixel shader which, for each pixel on the screen, samples depth values around the current pixel from a depth buffer and approximates the local ambient occlusion. To avoid taking too many samples of the depth texture, and to achieve an algorithm which can be executed in real time, a common approach is to use a randomly rotated sample kernel of directions for the sampling. This random directional sampling produces high frequency noise, which is filtered out during a post-process blurring step. The algorithm is independent of scene complexity, works with dynamic scenes and can be executed entirely on the GPU. There are however some issues with the method; it is highly localized around the current pixel being considered, and it can produce a haloing effect around pixels which are not occluded at all.

2.4 HDR - High Dynamic Range Rendering

High dynamic range rendering is a technique used to display images with a greater dynamic range than normal. This allows light calculations to be performed with a higher precision. Normal display devices can only output values within a certain range, but with HDR the ability to see details both in heavily shadowed areas as well as in very bright ones becomes available. In normal rendering, color values lie within the range 0 to 1, but with HDR the values can exceed 1, which can be used to simulate the real intensity of a light. Methods used to approximate HDR are tone mapping and blooming.

2.4.1 Tone mapping

Tone mapping is used to map a range of high dynamic range values into more displayable low dynamic range values, i.e. an image with dark areas will be scaled so that details can be seen, and images with very bright areas will be scaled so that details appear instead of looking overexposed. This is essential for HDR rendering since the color values need to be in the range of the monitor for correct display. To scale the HDR values into an LDR range a tone mapping operator is used. A so-called global tone mapping operator works by e.g. calculating an average luminance value from the pixels in the image. Each pixel is thereafter scaled by this value to produce the correct tone map. Two methods [5] in scope for this project are:

Maximum to white. The luminance used to scale the pixels is the highest luminance found in the image.

Log average luminance. The logarithms of all the pixel luminance values are summed and averaged, and the result is then converted back:

$$\bar{L}_w = \exp\left(\frac{1}{N}\sum_{x,y}\log(\delta + L_w(x, y))\right) \qquad (5)$$

where $\bar{L}_w$ is the average luminance, $L_w(x, y)$ is the luminance at pixel (x, y) and $\delta$ is a small value so that the logarithm of zero is never attempted. When the average has been computed, a tone mapping operator can be applied to map the colors. A simple operator is

$$L(x, y) = \frac{a}{\bar{L}_w} L_w(x, y) \qquad (6)$$

where L(x, y) is the resulting luminance of the pixel and a is the key value of the scene. A high key value will give the image an overexposed look and a low value will make it underexposed. In some scenes the light can be very bright, and in order to cap the brightness at a certain value eq. 7 can be used:

$$L_d(x, y) = \frac{L(x, y)\left(1 + \frac{L(x, y)}{L^2_{white}}\right)}{1 + L(x, y)} \qquad (7)$$

where $L_{white}$ is the maximum luminance to cap at. To calculate the luminance of a pixel, equation 8 is used [4]:

$$L_w(x, y) = 0.27R + 0.67G + 0.06B \qquad (8)$$

This equation mirrors the human perception of colors. Green light contributes the most to the intensity perceived by humans and blue the least.

2.4.2 Blooming

Bloom is an effect simulating how the light from very bright areas bleeds over into adjacent areas, creating a soft aura. It is heavily used in the gaming industry and can be seen in many newer games. The effect can be observed in photography when a photo is taken of an object with a bright light behind it: the light will seem to bleed over the edges of the object. This effect can be simulated in computer graphics as a 2D post-process by first filtering out the bright areas of an image with a certain threshold, low-pass filtering and downsampling this image a number of times until a visually pleasing result is obtained, and then finally blending the resulting images together with the original image [5].

3 Implementation

The project is implemented in C++ using OpenGL and GLSL. GLUT is used to create the OpenGL context and GLEW is used to load the necessary OpenGL extensions. The GUI is created using the AntTweakBar library. To handle frame buffer objects efficiently a custom FBO class is written, and to optimize rendering a custom vertex buffer object class is used. To load models, classes are written to load Wavefront Obj models with their corresponding material files. The model used is the Sponza Atrium, originally modelled by Marko Dabrovic and modified by Crytek. Model and textures are downloaded from www.crytek.com.

3.1 Deferred Shading

As previously mentioned in section 2.1, deferred shading works by storing all the geometry information in textures, the G-Buffer, after which the lighting calculations are performed in a 2D pass using a full screen quad, with the scene reconstructed from the information stored in the G-Buffer.

The geometry pass is implemented in the project by rendering the scene to a frame buffer object with multiple render targets, MRTs, where each render target holds specific information needed in later passes. One target stores the view space normals of the scene, one stores the view space positions and the last one stores the colors of the scene. The texture format is chosen to be the floating point RGBA32F format, in order to store the information in the buffers as precisely as possible. All the objects in the scene are drawn normally with their corresponding textures bound and lighting disabled. To render to MRTs from a fragment shader, the output is written to gl_FragData[] instead of the usual gl_FragColor.

When calculating the lighting, a fullscreen quad is drawn. In the fragment shader, normal, position and color are retrieved from the G-Buffer for each corresponding fragment. Standard lighting can then be calculated from this information for the current fragment, and the final color is output to the bound frame buffer. Standard OpenGL light sources can not be used with deferred shading, so a custom light class is defined that keeps track of attributes such as position and direction. These values are sent down to the lighting shader as uniforms. The lighting pass is performed in two steps: first an ambient pass where the scene is rendered in ambient light. This pass is then blended with the other lighting pass, which calculates the rest of the lighting equation. Fragments that are determined not to be shaded are discarded and thus keep the value set in the ambient step. This is done because multiple lights require multiple passes; each light affecting the scene requires one additional quad to be rendered.
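As a rough sketch of what the geometry pass fragment shader described above can look like, the program below writes view space normal, view space position and diffuse color to three render targets via gl_FragData[]. The varying and uniform names are illustrative choices, not taken from the project code.

```glsl
// Sketch of a G-Buffer geometry pass fragment shader (GLSL 1.20 style, illustrative names).
uniform sampler2D diffuseTexture;

varying vec3 viewSpaceNormal;   // interpolated view space normal from the vertex shader
varying vec3 viewSpacePosition; // interpolated view space position
varying vec2 texCoord;

void main()
{
    // Render target 0: view space normal (stored directly in a floating point texture)
    gl_FragData[0] = vec4(normalize(viewSpaceNormal), 0.0);
    // Render target 1: view space position
    gl_FragData[1] = vec4(viewSpacePosition, 1.0);
    // Render target 2: diffuse color from the material texture
    gl_FragData[2] = texture2D(diffuseTexture, texCoord);
}
```

The lighting pass then binds these three textures and evaluates the lighting equation per fragment on the full screen quad.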

3.2 Variance Shadow Mapping

To create the variance shadow map, the camera is placed at the light source position and aligned with the light source direction. The scene is then rendered using a shader which outputs the two moments mentioned in section 2.2.1. The first moment M1, which equals the mean depth, is simply set to the depth of the current pixel, and the second moment M2 is calculated with the help of equation 3. The partial derivatives with respect to x and y are approximated with the built-in GLSL functions dFdx() and dFdy(). The variance shadow map is then filtered in two passes with 1D Gaussian kernels of length 7 to produce smoother transitions between shaded and unshaded areas. Finally the shadow map is used in the light pass to determine whether a pixel will be lit or in shadow, by comparing depths obtained from the shadow map to those of the rendered pixels. The transformation of the rendered pixels from view space into light space is achieved by multiplying the positions with the matrix M defined below:

M = lightProjectionMatrix · lightModelViewMatrix · modelViewMatrix⁻¹

The lightProjectionMatrix and lightModelViewMatrix can be accessed via the command glGetFloatv in OpenGL as the depth map from the light source is being created. The modelViewMatrix⁻¹ refers to the inverse of the modelViewMatrix at the time when the scene is being drawn from the camera. Thus the modelViewMatrix has to be saved at that stage in the rendering process, once again via the glGetFloatv command, and then inverted.
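To make the two shader stages concrete, a hedged sketch follows: the first fragment program is from the depth pass and writes the two moments using dFdx()/dFdy() as described; the second is a helper that could be used in the light pass to evaluate the Chebyshev bound of equation 1. All names are illustrative, the depth is assumed to grow with distance from the light, and the minimum-variance clamp is a common practical addition not mentioned in the text.

```glsl
// Sketch of the depth pass fragment shader writing the moments (illustrative names).
varying float lightSpaceDepth; // depth of the fragment as seen from the light

void main()
{
    float dx = dFdx(lightSpaceDepth);
    float dy = dFdy(lightSpaceDepth);
    float m1 = lightSpaceDepth;                                  // eq. 2
    float m2 = lightSpaceDepth * lightSpaceDepth
             + 0.25 * (dx * dx + dy * dy);                       // eq. 3
    gl_FragData[0] = vec4(m1, m2, 0.0, 1.0);                     // two-component variance shadow map
}
```

In the light pass, the filtered moments can then be turned into a shadowing factor:

```glsl
// Sketch of the variance shadow map lookup used in the light pass (illustrative names).
uniform sampler2D varianceShadowMap; // Gaussian-filtered moments: M1 in x, M2 in y

float vsmShadow(vec2 shadowTexCoord, float fragmentDepth)
{
    vec2 moments = texture2D(varianceShadowMap, shadowTexCoord).xy;
    float mu = moments.x;                        // mean depth
    float variance = moments.y - mu * mu;        // sigma^2 = M2 - M1^2
    variance = max(variance, 0.00002);           // clamp against numerical problems (our addition)
    float d = fragmentDepth - mu;
    float pMax = variance / (variance + d * d);  // Chebyshev upper bound, eq. 1
    return (fragmentDepth <= mu) ? 1.0 : pMax;   // fully lit when not behind the mean occluder
}
```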

Figure 1: Example of parameters stored in the G-Buffer: (a) view space normals, (b) diffuse color, (c) view space position.

3.3 Screen Space Ambient Occlusion

In order to compute an approximate solution to equation 4, a large amount of rays would need to be cast from each pixel and sampled to determine the occlusion factor of that pixel. This is not possible in real-time applications, and instead a smaller amount of rays are cast and sampled within a normal-centered hemisphere at the pixel. Creating a ray pointing to the surface of a unit hemisphere is fairly straightforward. If the orientation of the hemisphere is chosen along the z-axis, a sample ray is given by equation 9:

x = random(-1, 1)
y = random(-1, 1)
z = random(0, 1)     (9)

The sample rays generated with equation 9 will fall on the unit hemisphere surface, which is not desirable; the rays should be distributed within the hemisphere. To achieve this the rays need to be rescaled. A simple approach is to scale each component of the ray with a random number between zero and one. Improved quality can be achieved if the scaling is performed in such a way that most ray end points fall close to the center of the hemisphere, which causes most samples from the depth buffer to be taken close to the surface normal where the light contribution is largest. To achieve this effect an accelerating interpolation function is used to rescale the rays; pseudo code for the accelerating interpolation function for the i:th generated ray can be seen below.

factor = i / kernelSize
scale = (1 - factor^2) * 0.1 + factor^2 * 1.0

Since more than one ray needs to be cast from each pixel, several rays are created by repeated evaluations of equation 9; the collection of rays obtained from this process is called a sample kernel. This sample kernel will be used for texture lookups in the depth buffer. Since texture lookup is a very expensive operation the size of the kernel is limited, and in order to eliminate aliasing caused by the small kernel size, the ray directions in the kernel need to be rotated between pixels. The rotation of the ray directions is performed with the help of a noise texture containing vectors defined as in equation 10:

x = random(-1, 1)
y = random(-1, 1)
z = 0     (10)

The z-component of the noise texture vectors is set to zero since the hemisphere is oriented along the z-axis. The rotation of the sample kernel is performed in such a way that the resulting sample kernel falls within a hemisphere oriented along the normal of the pixel being considered. This is achieved by using the Gram-Schmidt orthogonalization process [1], with the first vector e1 = r - (r · n)n chosen as the part of the noise texture vector r that is orthogonal to the pixel normal n, the second vector e2 = e1 × n chosen as the cross product between e1 and the pixel normal, and the third vector e3 = n chosen as the pixel normal itself. The three vectors e1, e2 and e3 now compose an orthogonal basis, and a change of basis matrix A can be constructed [1] as in equation 11:

A = [ e1  e2  e3 ] = [ r - (r · n)n   (r - (r · n)n) × n   n ]     (11)

By multiplying a ray obtained from the sample kernel with the matrix A, the ray is oriented along the pixel normal and falls within a unit hemisphere located at the pixel. The random rotation of the kernel has been incorporated by including the vector r obtained from the noise texture in the matrix A. To determine the factor with which a pixel is occluded, a loop is performed over the size of the sample kernel, where each sample ray is oriented and rotated by multiplying it with the matrix defined in equation 11. The point which will be used both for a depth buffer lookup and a depth comparison is then calculated as

sample.position = pixel.position + sample * radius

where radius is an arbitrary variable which effectively determines how far away from the current pixel rays are cast. Once the position in view space at which the sample will be taken is determined, it is transformed into screen space via multiplication by the current projectionMatrix and a sample is drawn from the depth buffer. A range check is performed where the sample from the depth buffer is compared to the depth of the constructed ray; if the difference is larger than the variable radius, the ray will not contribute to the occlusion of the pixel. This removes problems where objects which are far apart in space lie next to each other in screen space. Finally, a comparison is made between the sample from the depth buffer and the depth of the constructed ray; if the depth buffer sample is larger than the constructed ray depth, it contributes to the occlusion of the pixel. Once the loop is completed, every ray which fell inside another object will have given a contribution to the occlusion factor of the current pixel, and the occlusion factor is normalized with respect to the size of the sample kernel. The final incoming light intensity at a pixel is then defined as Intensity = (1 - occlusionFactor). A condensed sketch of this shader loop is given below.
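The sketch below condenses the steps just described into a single fragment shader. All uniform and variable names are illustrative, the kernel size of 16 and the noise texture tiling factor are arbitrary choices, and the "depth buffer" lookup is taken from the z component of the view space position texture, which is an assumption consistent with the G-Buffer layout described in section 3.1.

```glsl
// Condensed sketch of the SSAO fragment shader (GLSL 1.20 style, illustrative names).
uniform sampler2D positionTexture;  // view space positions from the G-Buffer
uniform sampler2D normalTexture;    // view space normals from the G-Buffer
uniform sampler2D noiseTexture;     // random rotation vectors, eq. 10
uniform mat4 projectionMatrix;      // current camera projection
uniform vec3 sampleKernel[16];      // hemisphere sample rays, eq. 9, rescaled on the CPU
uniform float radius;               // sampling radius in view space units

varying vec2 texCoord;

void main()
{
    vec3 position = texture2D(positionTexture, texCoord).xyz;
    vec3 normal   = normalize(texture2D(normalTexture, texCoord).xyz);
    vec3 r        = normalize(texture2D(noiseTexture, texCoord * 4.0).xyz); // tiling factor is arbitrary

    // Gram-Schmidt: build an orthogonal basis around the pixel normal, eq. 11.
    vec3 e1 = normalize(r - normal * dot(r, normal));
    vec3 e2 = cross(e1, normal);
    mat3 A  = mat3(e1, e2, normal);

    float occlusion = 0.0;
    for (int i = 0; i < 16; ++i)
    {
        // Orient and rotate the kernel ray, then offset it from the pixel position.
        vec3 samplePos = position + (A * sampleKernel[i]) * radius;

        // Project the sample into screen space to find its texture coordinate.
        vec4 offset = projectionMatrix * vec4(samplePos, 1.0);
        offset.xy = offset.xy / offset.w * 0.5 + 0.5;

        // Depth of the geometry actually visible at that screen position.
        float sampleDepth = texture2D(positionTexture, offset.xy).z;

        // Range check: ignore surfaces far away from the pixel in view space.
        if (abs(position.z - sampleDepth) < radius)
        {
            // The ray contributes if the stored surface lies in front of the sample point.
            if (sampleDepth > samplePos.z)
                occlusion += 1.0;
        }
    }

    float intensity = 1.0 - occlusion / 16.0;  // normalize by the kernel size
    gl_FragColor = vec4(vec3(intensity), 1.0);
}
```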

3.4 Tone mapping

Tone mapping is done by first converting all the RGB values from the scene rendered in the light pass to luminance values, as described in eq. 8, in a fragment shader. These values are stored in a texture which is passed to a shader that calculates the average luminance of the scene. In this shader, an arbitrary number of samples are taken from the texture and compared to each other. The final average luminance is then selected according to the Maximum to white method mentioned in section 2.4.1. This value is stored in the texture and read back to the CPU with a call to glReadPixels(). Because a fragment program works on all the fragments in the image, a lot of computational power is wasted, since the average only needs to be calculated once. To bypass these unnecessary computations, GL_SCISSOR_TEST is enabled and a scissor box is specified with glScissor(x0, y0, x1, y1). Every pixel that is not within the defined box will not be modified, and thus the fragment program will not be executed for those pixels. By specifying a box of one pixel, the fragment program will only be executed for that pixel, and the value can be read back from this specific pixel to the CPU. When the average luminance value has been calculated it is sent down to the shader of the next pass, where the actual tone mapping is performed according to equations 6 and 7. To make the transition between a darker and a brighter scene view without flickering due to fast changes in average luminance, and also to simulate the fact that the human eye needs time to adjust itself when being exposed to e.g. a bright outdoor scene and then entering a dark room, an operator which smooths the change in luminance is implemented. This adaptive luminance is calculated as

$$Lum_n = Lum_{n-1} + (Lum_n - Lum_{n-1})(1 - e^{-\tau})$$

where $\tau$ is a user controlled constant which determines how fast the transition is made, $Lum_n$ is the average luminance for the current frame and $Lum_{n-1}$ is that of the previous frame.
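As a sketch of the final tone mapping pass, the fragment shader below applies equations 6-8 to the HDR color, with the adapted average luminance and the user parameters passed in as uniforms. The names, and the way the final color is rescaled by the luminance ratio, are our own illustrative choices rather than details taken from the project code.

```glsl
// Sketch of the tone mapping fragment shader (illustrative names; equations 6-8).
uniform sampler2D hdrTexture;   // HDR output of the light pass
uniform float averageLuminance; // adapted average luminance, computed as described above
uniform float key;              // key value a of the scene
uniform float whitePoint;       // L_white, luminance that is capped to pure white

varying vec2 texCoord;

void main()
{
    vec3 hdrColor = texture2D(hdrTexture, texCoord).rgb;

    // Eq. 8: luminance of the current pixel.
    float Lw = dot(hdrColor, vec3(0.27, 0.67, 0.06));

    // Eq. 6: scale by the key value and the average scene luminance.
    float L = (key / averageLuminance) * Lw;

    // Eq. 7: compress and cap the brightest values.
    float Ld = (L * (1.0 + L / (whitePoint * whitePoint))) / (1.0 + L);

    // Rescale the color by the ratio of display luminance to original luminance.
    vec3 ldrColor = hdrColor * (Ld / max(Lw, 0.0001));
    gl_FragColor = vec4(ldrColor, 1.0);
}
```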

3.5 Blooming

To achieve the blooming effect, the image is first filtered in a bright tone filter pass. This shader outputs the fragment color if it has a value greater than a user defined threshold; if it is lower, the fragment color is set to black. This creates an image consisting of only the bright areas. The next step is to low pass filter and downsample this image. To get a good result the image is downsampled and blurred three times, where each shader takes the previously blurred texture as input. To get a visually pleasing result the resulting images from each pass are combined into one. The final image is composited by blending the output from the tone mapping shader with the output from the bloom shaders.
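A minimal sketch of the bright tone filter pass is shown below. The threshold uniform corresponds to the user defined value mentioned above; the names, and the use of the maximum color channel as the brightness measure, are our own illustrative choices.

```glsl
// Sketch of the bright pass filter used before the blur/downsample steps (illustrative names).
uniform sampler2D sceneTexture; // output of the light pass
uniform float threshold;        // user defined brightness threshold

varying vec2 texCoord;

void main()
{
    vec4 color = texture2D(sceneTexture, texCoord);
    // Keep the fragment color only if it is bright enough, otherwise output black.
    float brightness = max(color.r, max(color.g, color.b));
    gl_FragColor = (brightness > threshold) ? color : vec4(0.0, 0.0, 0.0, 1.0);
}
```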

4 Results

All the results presented were produced on a system with an NVIDIA GTX 260 with 896 MB of graphics memory, an Intel Core 2 Duo E8400 at 3.0 GHz and 4096 MB of RAM. The application runs at a resolution of 1024x612. Figure 5 shows the resulting images from the different passes. Figure 5a shows the scene rendered with ambient light and a spotlight. In figure 5b shadows are added, in figure 5c the SSAO effect, ambient light and a spotlight are rendered, and in figure 5d all the effects are shown: ambient light, a spotlight, SSAO, shadow mapping, tone mapping and bloom. Table 1 shows how the different parts affect the overall frame rate.

Effect                 FPS
Ambient, spotlight     166
+ shadow mapping        95
+ SSAO                  72
+ HDR                   60

Table 1: FPS benchmarks for the parts of the renderer. Numbers are obtained when using an SSAO kernel size of 16, a half size SSAO texture and a full size shadow map.

Figure 2 shows the resulting images from the SSAO pass for three different kernel sizes, using an FBO size of 50% of the window height and 50% of the window width. Notice the high frequency noise particularly present in fig. 2a and b; the noise is caused by taking too few samples per pixel and is greatly amplified by using an FBO of 50% size in width and height. Figure 3 shows the images from the SSAO pass using the same kernel sizes as in figure 2 but with an FBO the same size as the window. The noise is still present in fig. 3a and b but is not as obvious. A comparison between fig. 2a and fig. 3a clearly shows the benefits of using a full sized FBO in terms of noise reduction. All the images seen in figures 2 and 3 have been filtered in two 1D passes using a mean filter of size three to reduce the noise present; filtering with larger kernel sizes produces images with less noise, but can produce halos around objects and cause the shadowing effect to become too blurred. While the use of a full sized FBO has great visual benefits it comes at a cost; table 2 shows benchmarks in terms of FPS for the images seen in figures 2 and 3. There is an average difference of 30-40 FPS when comparing a full sized FBO to one that is half the height and width of the window. Note that the FPS numbers shown in table 2 refer to when the SSAO pass alone is used; no other effects are added to the scene.

SSAO FBO size              Kernel size    FPS
50% height, 50% width       4             140
50% height, 50% width      16             106
50% height, 50% width      64              53
100% height, 100% width     4             100
100% height, 100% width    16              60
100% height, 100% width    64              23

Table 2: FPS benchmarks for various sizes of the SSAO FBO.

Figure 4 shows the results of shadow mapping for three different sizes of the shadow map FBO. As the images show, the larger the FBO used, the better it captures details and the sharper the shadows. The two horizontal shadows seen near the flower pots in fig. 4c are caused by flagpoles in the model placed near the ceiling. Being small objects, the shadows they cast get fainter as the size of the shadow map FBO gets smaller; in fig. 4b they are very diffuse and in fig. 4a they have completely disappeared. Note also that the shadow cast by the hanging flower pot to the right in the images becomes more and more diffuse as the FBO size gets smaller. Shadows cast by larger objects, such as the one cast by the roof in the far back of the images in figure 4, are however not noticeably affected by the size of the FBO used. The choice of FBO size of course affects the performance; table 3 lists the benchmarks in terms of FPS for the FBOs used when generating the images in figure 4. As table 3 shows, there is a significant drop in FPS as the size of the shadow map FBO gets larger.

Shadow map FBO size         FPS
50% height, 50% width       124
100% height, 100% width     100
200% height, 200% width      58

Table 3: FPS benchmarks for various sizes of the shadow map FBO.

5 Discussion

We have found that, while being an important part of producing realistic scenes, screen space ambient occlusion is the most subtle of the effects we have implemented, as well as the most computationally expensive. The SSAO effect is most noticeable when looking at intersections between objects. This is a case where the shadow mapping fails to produce shadows, since the two walls of a corner do not occlude each other with regard to the light source. Figure 6 shows the SSAO effect in a corner; in fig. 6a SSAO is enabled and in fig. 6b it is disabled. In the particular case of figure 6 the SSAO effect provides the necessary cues to visualize the intersections between the two walls and a pillar, and contributes greatly to the realism of the scene. Based on these observations we've concluded that, given the small effect the SSAO creates in comparison to the cost of adding it, it is wise to keep the size of the SSAO FBO as well as the size of the sample kernel quite small and blur the result.

The quality of the SSAO effect is also highly dependent on the random sample kernel vectors; thus some effort should go into finding optimal kernels for the specific scene the SSAO is applied to. As for the implementation of HDR rendering and tone mapping, the visual effects produced, especially the time dependent transition between light and dark areas, greatly add to the realism of the scene. While standing in a dark area the effect is comparable to simply adding a small amount of ambient light, but as soon as one moves between darker and lighter areas the effect is very noticeable and realistic. This is due to the fact that the time dependent tone mapping is designed to resemble the way a human eye behaves in a similar situation. We have noticed that the HDR rendering and tone mapping unfortunately are highly scene and parameter dependent, and the parameters are very tricky to get right. The addition of shadow mapping to a scene definitely improves the realism and visual impression of the scene; today shadow mapping has become somewhat of a standard in most graphics applications. That being said, the basic algorithm which we've implemented, with a fixed shadow map FBO size, isn't used much today. It would improve the visual quality greatly if there was an adaptive algorithm in place which varied the shadow map size depending on e.g. camera distance. This would enable better visual results without dropping the FPS too much.

References

[1] George Baravdish. Linear Algebra. LiU-Tryck, Kåkenhus, Norrköping, 2006.

[2] William Donnelly and Andrew Lauritzen. Variance shadow maps. In Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games, I3D '06. ACM, 2006.

[3] Martin Mittring. Finding next gen: CryEngine 2. In ACM SIGGRAPH 2007 Courses. ACM, 2007.

[4] Erik Reinhard, Michael Stark, Peter Shirley, and James Ferwerda. Photographic tone reproduction for digital images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '02. ACM, New York, NY, USA, 2002.

[5] Tomas Akenine-Möller, Eric Haines, and Naty Hoffman. Real-Time Rendering. A K Peters, Ltd., 3rd edition, 2008.

[6] Lance Williams. Casting curved shadows on curved surfaces. SIGGRAPH Computer Graphics, 12(3), August 1978.


Figure 2: Results of different kernel sizes when using an FBO with half size width and half size height: (a) 4 sample kernel, (b) 16 sample kernel, (c) 64 sample kernel.

Figure 3: Results of different kernel sizes when using an FBO with full size width and full size height: (a) 4 sample kernel, (b) 16 sample kernel, (c) 64 sample kernel.

Figure 4: Results of different resolutions of the shadow map: (a) half sized width and height, (b) full sized width and height, (c) double sized width and height.

Figure 5: Results of the different passes: (a) ambient and spotlight, (b) shadow mapping, (c) SSAO, (d) shadow mapping, HDR, SSAO.

Figure 6: SSAO in effect in a corner: (a) SSAO enabled, (b) SSAO disabled.
