Wednesday, July 24, 2013 Eric Richards

Lit Terrain Demo

Last time, we ran through a whirlwind tour of lighting and material theory, and built up our data structures and shader functions for our lighting operations.  This time around, we’ll get to actually implementing our first lighting demo.  We’re going to take the Waves Demo that we covered previously and light it with a directional light, a point light that will orbit the scene, and a spot light positioned at our camera, shining along its view direction.  On to the code!

[Figure: screenshot of the lit terrain demo]

We will start by creating a new project, and copying over our code from the Waves Demo.  We’ll rename our main application class LightingDemo, and add member fields for our lights, materials, and shader variable pointers.

// New member variables
private DirectionalLight _dirLight;
private PointLight _pointLight;
private SpotLight _spotLight;
private Material _landMaterial;
private Material _wavesMaterial;

private Effect _fx;
private EffectTechnique _tech;
private EffectMatrixVariable _fxWVP;
private EffectMatrixVariable _fxWorld;
private EffectMatrixVariable _fxWIT;
private EffectVectorVariable _fxEyePosW;
private EffectVariable _fxDirLight;
private EffectVariable _fxPointLight;
private EffectVariable _fxSpotLight;
private EffectVariable _fxMaterial;

private Vector3 _eyePosW;

In our constructor, LightingDemo(), we will initialize our lights and materials.  Note that we will not be setting the positions of our spot and point lights yet, as these will be updated each frame to follow our camera viewing angle and a circular path over our terrain, respectively.  We set up a green material for our terrain mesh, and a dark blue material for our water mesh.  Also be careful to specify the specular power as the alpha component of the Color4 for the Specular field of the Material instances; if you are following along with the original C++ code, note that the XMFLOAT4 structures used there are specified in RGBA order, whereas the Color4 constructor expects ARGB ordering.

_dirLight = new DirectionalLight {
    Ambient = new Color4(0.2f, 0.2f, 0.2f),
    Diffuse = new Color4(0.5f, 0.5f, 0.5f),
    Specular = new Color4(0.5f, 0.5f, 0.5f),
    Direction = new Vector3(0.57735f, -0.57735f, 0.57735f)
};

_pointLight = new PointLight {
    Ambient = new Color4(0.3f, 0.3f, 0.3f),
    Diffuse = new Color4(0.7f, 0.7f, 0.7f),
    Specular = new Color4(0.7f, 0.7f, 0.7f),
    Attenuation = new Vector3(0.0f, 0.1f, 0.0f),
    Range = 25.0f
};
_spotLight = new SpotLight {
    Ambient = new Color4(0,0,0),
    Diffuse = new Color4(1.0f, 1.0f, 0.0f),
    Specular = Color.White,
    Attenuation = new Vector3(1.0f, 0.0f, 0.0f),
    Spot = 96.0f,
    Range = 10000.0f
};

// NOTE: must put alpha (spec power) first, rather than last as in book code
_landMaterial = new Material {
    Ambient = new Color4(1.0f, 0.48f, 0.77f, 0.46f),
    Diffuse = new Color4(1.0f, 0.48f, 0.77f, 0.46f),
    Specular = new Color4(16.0f, 0.2f, 0.2f, 0.2f)
};
_wavesMaterial = new Material {
    Ambient =  new Color4(0.137f, 0.42f, 0.556f),
    Diffuse = new Color4(0.137f, 0.42f, 0.556f),
    Specular = new Color4(96.0f, 0.8f, 0.8f, 0.8f)
};
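
To get a feel for what these parameter choices mean, recall the falloff formulas from the LightHelper.fx functions we built last time: point light intensity is attenuated by 1 / (a0 + a1·d + a2·d²), and the spot cone factor is the cosine of the angle between the spotlight direction and the vector to the light, raised to the Spot exponent.  A quick standalone check (in Python, purely for illustration; not part of the project) shows that Attenuation = (0, 0.1, 0) reduces to a simple 10/d falloff, and Spot = 96 gives a very tight beam:

```python
import math

def attenuation(att, d):
    # 1 / (a0 + a1*d + a2*d^2) -- the distance falloff from LightHelper.fx
    a0, a1, a2 = att
    return 1.0 / (a0 + a1 * d + a2 * d * d)

def spot_factor(cos_angle, power):
    # max(cos_angle, 0)^Spot -- a higher Spot exponent gives a tighter cone
    return max(cos_angle, 0.0) ** power

# Attenuation = (0, 0.1, 0) is just 10/d:
print(attenuation((0.0, 0.1, 0.0), 5.0))    # 2.0
print(attenuation((0.0, 0.1, 0.0), 20.0))   # 0.5

# Spot = 96: even 10 degrees off-axis is already quite dim.
print(spot_factor(math.cos(math.radians(0.0)), 96.0))   # 1.0
print(spot_factor(math.cos(math.radians(10.0)), 96.0))  # ~0.23
```

So a fragment just 10 degrees off the spotlight’s axis receives less than a quarter of the on-axis intensity, which is why the light reads as a focused flashlight beam in the demo.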

In our Init() function, we will update our BuildFX() helper function to load the new lighting shader, FX/Lighting.fxo, and to grab the pointers to the new shader constants exposed.  There is nothing ground-breaking here, so I will omit the changes.  Additionally, we will modify our BuildVertexLayout() function to create a proper InputLayout for our new VertexPN structure, as described in the previous post.

When we are constructing our terrain geometry (BuildLandGeometryBuffers()), we will need to specify the normal for each vertex, rather than a color as previously.  Because our terrain is described by a trigonometric function, we can calculate these vertex normals analytically, rather than by averaging the face normals of the adjacent triangles.  We’ll split this calculation out into a helper function, GetHillNormal(x,z).

private static Vector3 GetHillNormal(float x, float z) {
    // n = (-dh/dx, 1, -dh/dz) for the hill height function
    // h(x,z) = 0.3*(z*sin(0.1x) + x*cos(0.1z))
    var n = new Vector3(
        -0.03f*z*MathF.Cos(0.1f*x) - 0.3f*MathF.Cos(0.1f*z), 
        1.0f,
        -0.3f*MathF.Sin(0.1f*x) + 0.03f*x*MathF.Sin(0.1f*z)
        );
    n.Normalize();

    return n;
}
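
This works because the unnormalized normal to a surface y = h(x, z) is (−∂h/∂x, 1, −∂h/∂z), and our hill height function, h(x,z) = 0.3·(z·sin(0.1x) + x·cos(0.1z)) from the Waves Demo, is easy to differentiate analytically.  A standalone finite-difference check (in Python, outside the project) confirms the partial derivatives:

```python
import math

def hill_height(x, z):
    # The terrain height function from the Waves Demo:
    # h(x,z) = 0.3*(z*sin(0.1x) + x*cos(0.1z))
    return 0.3 * (z * math.sin(0.1 * x) + x * math.cos(0.1 * z))

def hill_normal(x, z):
    # (-dh/dx, 1, -dh/dz), normalized -- mirrors GetHillNormal above
    n = (-0.03 * z * math.cos(0.1 * x) - 0.3 * math.cos(0.1 * z),
         1.0,
         -0.3 * math.sin(0.1 * x) + 0.03 * x * math.sin(0.1 * z))
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Compare against a numerical gradient at an arbitrary point.
x, z, eps = 12.5, -7.0, 1e-5
dhdx = (hill_height(x + eps, z) - hill_height(x - eps, z)) / (2 * eps)
dhdz = (hill_height(x, z + eps) - hill_height(x, z - eps)) / (2 * eps)
g = (-dhdx, 1.0, -dhdz)
glen = math.sqrt(sum(c * c for c in g))
numeric = tuple(c / glen for c in g)
print(max(abs(a - b) for a, b in zip(hill_normal(x, z), numeric)))  # ~0
```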

In our UpdateScene() function, in addition to our camera handling and wave function update code, we have added code to update the positions of our spot and point lights.  The point light will travel in a circular path around the origin of our scene in the XZ plane, with its y-position dependent on the height of the terrain beneath it.  Our spotlight will be positioned at the same position as our virtual camera, pointed along our look vector.

// animate lights: the point light circles the terrain, hovering above it
var lightX = 70.0f*MathF.Cos(0.2f*Timer.TotalTime);
var lightZ = 70.0f*MathF.Sin(0.2f*Timer.TotalTime);
_pointLight.Position = new Vector3(
    lightX, 
    Math.Max(GetHillHeight(lightX, lightZ), -3.0f) + 10.0f, 
    lightZ
);
_spotLight.Position = _eyePosW;
_spotLight.Direction = Vector3.Normalize(target - pos);

We will also need to update the Update(dt) function in Waves.cs to compute the normals for each vertex in the ocean mesh.  Honestly, the derivation of the formula used to determine these normals is over my head, so I must refer you to Chapter 15 of Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel for the details; converting Mr. Luna’s C++ code to C#, at least, was straightforward.
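
The core of it, though, is a central-difference approximation of the height gradient: at each interior grid vertex, with left/right/top/bottom neighbor heights l, r, t, b and grid spacing Δ, the unnormalized normal works out to (l − r, 2Δ, b − t).  Here is a standalone sketch of the idea (in Python, purely illustrative; not the actual Waves.cs code):

```python
import math

def grid_normals(height, n, dx):
    """Central-difference normals for an n x n height grid.
    height[i][j] is the y value at row i, column j; columns run along x."""
    normals = [[(0.0, 1.0, 0.0)] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            l, r = height[i][j - 1], height[i][j + 1]
            t, b = height[i - 1][j], height[i + 1][j]
            nx, ny, nz = l - r, 2.0 * dx, b - t
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[i][j] = (nx / length, ny / length, nz / length)
    return normals

# Sanity check on the tilted plane y = 0.25*x: every interior normal
# should be proportional to (-0.25, 1, 0).
n, dx = 8, 1.0
plane = [[0.25 * j * dx for j in range(n)] for i in range(n)]
print(grid_normals(plane, n, dx)[3][3])  # ~(-0.2425, 0.9701, 0.0)
```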

We will update our DrawScene() function to set the appropriate global and per-object shader variables.  The global shader variables that we need to set are our lights, and the position in world space of our camera.  It is somewhat cumbersome to set the light structures, as the DX11 SlimDX API does not seem to allow us to upload the structures directly; instead we must first marshal the structures to a byte array, and upload this byte array using a DataStream.  I seem to recall being able to set EffectVariables to structures directly in the DX9 SlimDX API, so I am not sure whether something has changed in the underlying DirectX APIs, or if this is simply something the SlimDX team decided to change when they did the DX11 wrapper.

// Set global shader variables
var array = Util.GetArray(_dirLight);
_fxDirLight.SetRawValue(new DataStream(array, false, false), array.Length);
array = Util.GetArray(_pointLight);
_fxPointLight.SetRawValue(new DataStream(array, false, false), array.Length);
array = Util.GetArray(_spotLight);
_fxSpotLight.SetRawValue(new DataStream(array, false, false), array.Length );

_fxEyePosW.Set(_eyePosW);

For each object, we need to set its world matrix, its world-inverse-transpose, its world-view-projection matrix, and its material.  In our shader, we will need the world matrix to transform vertex positions into world space, so that we can compute the vector from each pixel fragment to the camera.  We will use the inverse-transpose to transform the object’s vertex normals; see Section 7.2.2 of Mr. Luna’s book for the derivation.
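
To see why the plain world matrix is wrong for normals, consider a non-uniform scale: it stretches tangent vectors correctly, but skews normals so they are no longer perpendicular to the surface, whereas the inverse-transpose keeps them perpendicular.  A small standalone numerical check (in Python, 3x3 matrices only, outside the project):

```python
def mat_vec(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Non-uniform world transform: scale by 2 along y.
world = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
inv = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]]  # its inverse
inv_transpose = transpose(inv)

tangent = (1.0, 1.0, 0.0)   # lies in the surface
normal = (1.0, -1.0, 0.0)   # perpendicular to the tangent

t2 = mat_vec(world, tangent)           # tangents use the world matrix
bad = mat_vec(world, normal)           # wrong: no longer perpendicular
good = mat_vec(inv_transpose, normal)  # right: stays perpendicular

print(dot(t2, bad), dot(t2, good))  # -3.0 0.0
```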

// setting shader variables for our terrain mesh
_fxWVP.SetMatrix(_landWorld * viewProj);
var invTranspose = Matrix.Invert(Matrix.Transpose(_landWorld));
_fxWIT.SetMatrix(invTranspose);
_fxWorld.SetMatrix(_landWorld);
array = Util.GetArray(_landMaterial);
_fxMaterial.SetRawValue(new DataStream(array, false, false), array.Length);
// upload the values to the shader and draw the geometry
var pass = _tech.GetPassByIndex(i);
pass.Apply(ImmediateContext);
ImmediateContext.DrawIndexed(_landIndexCount, 0, 0);

As I mentioned, to set shader variables to C# structures, we need to marshal them into byte arrays.  We will use our GetArray() function, which I have added as a static function to our Util class in Util.cs.  Once we have received this byte array, we construct a DataStream and upload that to our shader variable.

public static byte[] GetArray(object o) {
    var len = Marshal.SizeOf(o);
    var arr = new byte[len];
    var ptr = Marshal.AllocHGlobal(len);
    Marshal.StructureToPtr(o, ptr, true);
    Marshal.Copy(ptr, arr, 0, len);
    Marshal.FreeHGlobal(ptr);
    return arr;
}

Our last step is to write the effect shader that we will use for the example, FX/Lighting.fx.  We will begin by including LightHelper.fx, which we discussed in the last post; it contains our structure definitions and the helper functions for computing the color components contributed by each light type.  We define two constant buffers: one for global, per-frame variables, like our lights, and one for per-object variables, like our materials and world matrices.  Splitting up the constant buffers in this way can improve performance, as DirectX uploads the entirety of a constant buffer whenever any variable it contains changes; if we used a single constant buffer, we would be re-uploading all of our global state every time we switched between drawing different objects.  Note that I have modified the shader from the book code to use vertex and pixel shader version 4.0, as I am splitting my development between a DirectX 11 and a DirectX 10 feature-level machine; since we are not using any 5.0-specific features, this works adequately.

#include "LightHelper.fx"
 
cbuffer cbPerFrame
{
    DirectionalLight gDirLight;
    PointLight gPointLight;
    SpotLight gSpotLight;
    float3 gEyePosW;
};

cbuffer cbPerObject
{
    float4x4 gWorld;
    float4x4 gWorldInvTranspose;
    float4x4 gWorldViewProj;
    Material gMaterial;
};

struct VertexIn
{
    float3 PosL    : POSITION;
    float3 NormalL : NORMAL;
};

struct VertexOut
{
    float4 PosH    : SV_POSITION;
    float3 PosW    : POSITION;
    float3 NormalW : NORMAL;
};

VertexOut VS(VertexIn vin)
{
    VertexOut vout;
    
    // Transform to world space.
    vout.PosW    = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
    vout.NormalW = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
        
    // Transform to homogeneous clip space.
    vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
    
    return vout;
}
  
float4 PS(VertexOut pin) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW); 

    float3 toEyeW = normalize(gEyePosW - pin.PosW);

    // Start with a sum of zero. 
    float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

    // Sum the light contribution from each light source.
    float4 A, D, S;

    ComputeDirectionalLight(gMaterial, gDirLight, pin.NormalW, toEyeW, A, D, S);
    ambient += A;  
    diffuse += D;
    spec    += S;

    ComputePointLight(gMaterial, gPointLight, pin.PosW, pin.NormalW, toEyeW, A, D, S);
    ambient += A;
    diffuse += D;
    spec    += S;

    ComputeSpotLight(gMaterial, gSpotLight, pin.PosW, pin.NormalW, toEyeW, A, D, S);
    ambient += A;
    diffuse += D;
    spec    += S;
       
    float4 litColor = ambient + diffuse + spec;

    // Common to take alpha from diffuse material.
    litColor.a = gMaterial.Diffuse.a;

    return litColor;
}

technique11 LightTech
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}

Next Time…

Up next, we will be writing a C# class to encapsulate a three-point light shader effect file, which we will be able to reuse for our ensuing examples.  This class will end up very similar to the XNA BasicEffect class, if you have ever worked with XNA.  We will also do some work on centralizing our effect and InputLayout instances, to facilitate more complicated examples.  Additionally, we will modify our Shapes Demo to use this new effect class and render a lit version of the scene.

[Figure: preview of the lit skull demo]

