Saturday, January 17, 2015 Eric Richards

Recently, I have been using OWIN a good deal for developing internal web applications. One of its chief benefits is that OWIN can host its own HTTP server, which lets me get out of the business of installing and configuring IIS on Windows, one of the main pain points when deploying the products I work on to our customers. Unfortunately, when I first started using OWIN, there was not a version of ASP.NET MVC available that was compatible with it. Most of my previous experience with programming web servers has been based on MVC (except for a brief stint in WebForms hell), so finding a similar framework that was compatible with OWIN was one of my first priorities.

In my search, I discovered Nancy, a fairly similar MVC-style framework that offered OWIN support. It was also capable of using the same Razor view engine as ASP.NET MVC, with some minor differences, so I was able to convert existing IIS ASP.NET MVC applications to OWIN/Nancy while keeping most of the existing views and front-end code. At some point I plan to write an article illustrating how to do this type of conversion, but for now, I'm going to examine one particular gotcha I discovered when converting my personal Netflix-style video application to OWIN/Nancy: serving HTML5 video files.
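For anyone who hasn't seen the OWIN self-hosting setup before, it boils down to just a few lines. Here's a rough sketch, not the code from my actual application; the port is a placeholder, and it assumes the Microsoft.Owin.Hosting and Nancy.Owin packages are installed:

```csharp
using System;
using Microsoft.Owin.Hosting;
using Nancy.Owin;
using Owin;

// Minimal OWIN startup that hands every request off to Nancy.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseNancy();
    }
}

public static class Program
{
    public static void Main()
    {
        // Self-host on an arbitrary local port - no IIS involved.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("Listening on http://localhost:8080 - press Enter to quit.");
            Console.ReadLine();
        }
    }
}
```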


Sunday, December 21, 2014 Eric Richards

A couple of weeks ago, as I was browsing Hacker News, I stumbled onto an article about creating spherical procedural maps by Andy Gainey. Procedural terrain/map generation is always something I find interesting, having grown up slightly obsessed with Civilization and its successors. Various methods of tiling a sphere to make a game grid have been bouncing around the back of my mind for a while now; unfortunately, finding time to explore some of those ideas has been problematic. But after reading through Mr. Gainey's piece and looking over the source for his demo, I got inspired to take a stab at something similar. My first attempt was a direct port of his JavaScript implementation, which got bogged down in the impedance mismatch of going from JS to C# and from WebGL to DirectX at the same time, so I stepped back and decided to come at the idea again, working in smaller steps.

The first step was to generate a spherical triangle mesh. For this purpose, the spherical mesh that we have previously worked with is not very well suited, as its topology is very inconsistent, particularly around the poles, as you can see in the screenshot below. Instead of this kind of latitude/longitude subdivision, we will use an alternate method, which starts from an icosahedron, subdivides each of its faces into four triangles, and finally projects the resulting vertices onto the sphere. This produces a much more regular tessellation, known as a geodesic grid, as you can see in the right screenshot below. This tessellation is not without artifacts: the vertices of the original icosahedron end up with five edges, while all other vertices have six, similar to a real-life soccer ball.

Latitude/longitude tessellation, with artifacts at the poles
Geodesic tessellation

Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0, Chapter 6, presents a method of generating a geodesic sphere, which I previously skipped over. The code for this example is based on his version, with some improvements to eliminate redundant vertices.
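The heart of the method is the subdivision step: split each triangle into four by inserting a vertex at the midpoint of each edge, cache those midpoints so that shared edges don't produce duplicate vertices, and renormalize so the new vertices land back on the unit sphere. Here's a rough C# sketch of that step, not the exact code from the repository; it assumes the input vertices already lie on the unit sphere:

```csharp
using System;
using System.Collections.Generic;
using SlimDX;

public static class GeodesicSphere
{
    // One level of subdivision: each triangle becomes four,
    // with new vertices pushed out onto the unit sphere.
    public static void Subdivide(List<Vector3> vertices, List<int> indices)
    {
        var midpointCache = new Dictionary<long, int>();

        // Returns the index of the midpoint of edge (i0, i1),
        // reusing the vertex if the shared edge was already split.
        Func<int, int, int> getMidpoint = (i0, i1) =>
        {
            var key = ((long)Math.Min(i0, i1) << 32) | (uint)Math.Max(i0, i1);
            int index;
            if (!midpointCache.TryGetValue(key, out index))
            {
                var mid = Vector3.Normalize((vertices[i0] + vertices[i1]) * 0.5f);
                index = vertices.Count;
                vertices.Add(mid);
                midpointCache.Add(key, index);
            }
            return index;
        };

        var newIndices = new List<int>();
        for (var i = 0; i < indices.Count; i += 3)
        {
            int a = indices[i], b = indices[i + 1], c = indices[i + 2];
            int ab = getMidpoint(a, b), bc = getMidpoint(b, c), ca = getMidpoint(c, a);

            // Each original triangle is replaced by four smaller ones.
            newIndices.AddRange(new[] { a, ab, ca });
            newIndices.AddRange(new[] { ab, b, bc });
            newIndices.AddRange(new[] { ca, bc, c });
            newIndices.AddRange(new[] { ab, bc, ca });
        }

        indices.Clear();
        indices.AddRange(newIndices);
    }
}
```

Repeating this a few times over the twenty faces of the icosahedron gives the geodesic grid shown above.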

The code for this example can be obtained from my GitHub repository.


Monday, November 24, 2014 Eric Richards

Moving along with Chapter 1 of the HLSL Development Cookbook, we're on to handling directional lighting. I've covered most of the theory behind directional lighting previously, so this is going to be quite brief.

To recap, in case anyone is unfamiliar with the term, a directional light is a light which illuminates the entire scene equally from a given direction. Typically this means a light source that is so large and so far away from the scene being rendered, such as the sun or moon, that any attenuation in light intensity (via the inverse-square law) and any variation in the direction from points in the scene to the light source are negligible. Thus, we can model a directional light simply with a direction vector, representing the direction of incoming light from the source, and the color of the light emitted by the source.

Generally, we model the light hitting a surface by breaking it into two components, diffuse and specular, according to an empirical lighting equation called the Phong reflection model. Diffuse light is light that reflects from a surface equally in all directions; its intensity is calculated from the angle between the surface normal and the vector from the surface to the light source. Specular light is light that is reflected off of glossy surfaces in a view-dependent direction. The direction of this specular reflection is controlled by the surface normal, the vector from the surface to the light, and the vector from the surface to the viewer of the scene, while the size and color of the specular highlights are controlled by properties of the material being lit: the specular exponent, approximating the "smoothness" of the object, and the specular color of the material. Many materials reflect specular light equally across all color frequencies, while others, mostly metals, absorb some frequencies more than others. For now, we're only going to consider the first class of materials.
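In HLSL, the whole calculation boils down to a handful of lines. Here is a rough sketch of the idea using my own helper function, not the book's code; it assumes the normal and the surface-to-eye vector are already normalized, and it folds the material's specular color into the light color for brevity:

```hlsl
// Simplified per-pixel directional lighting (Phong reflection model).
// dirToLight points from the surface toward the light source.
float3 CalcDirectionalLight(float3 normal, float3 toEye,
                            float3 dirToLight, float3 lightColor,
                            float3 diffuseColor, float specExponent)
{
    // Diffuse: proportional to the cosine of the angle between
    // the surface normal and the direction to the light.
    float nDotL = saturate(dot(normal, dirToLight));
    float3 diffuse = nDotL * lightColor * diffuseColor;

    // Specular: reflect the incoming light about the normal and
    // compare the result with the view direction.
    float3 reflected = reflect(-dirToLight, normal);
    float specFactor = pow(saturate(dot(reflected, toEye)), specExponent);
    float3 specular = specFactor * lightColor;

    return diffuse + specular;
}
```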

Below you can see a model being lit by directional light, in addition to the ambient lighting we used in the last example. Here, the directional light is coming downward and from the lower-left to the upper-right. Surfaces with normals facing more directly towards the light source are lit more brightly than surfaces which are angled partly away from the light, while surfaces facing away from the light are lit only by ambient lighting. In addition, you can see a series of brighter spots along the back of the rabbit model, where there are specular highlights.

Full code for this example can be downloaded from my GitHub repository.

Directional lighting


Thursday, November 13, 2014 Eric Richards

Well, it's been two rough months since the last post. Work has been crazy, and I've been wrestling with finishing up my never-ending Voronoi mapping project. So I decided to switch gears and return to the HLSL Development Cookbook. Now that I've got code in place to handle loading the .sdkmesh model files used by the book's examples, the hardest part is done. Now it is just a matter of converting plain HLSL vertex and pixel shaders over into the Effects Framework format I've been using, and porting the DXUT C++ code over to C# and my SlimDX application framework.

The first chapter of the HLSL Development Cookbook covers classic forward lighting, which is the style of lighting that we have used throughout the examples posted here thus far (see Directional, Point and Spot lights and three-point lighting). Later, the book covers some more complicated lighting schemes, but we'll get to that in due time.

Up first on the slate is ambient lighting. If you've been reading along with my posts, you may recall that we briefly discussed ambient lighting when defining our Light and Material classes. Ambient lighting is something of a hack that was invented to fake the effect of indirect lighting (light that hits a surface after bouncing off one or more other surfaces) without having to actually perform the computations to model the light rays. Historically, back in the days before programmable shaders, this ambient lighting was expressed as constant color values for the lights and materials in the scene being rendered. So far, I have followed this simple scheme in the shaders I adapted from Frank Luna's Introduction to 3D Game Programming with Direct3D 11.0. As you can see in the screenshot on the left below, this method results in a flat ambient color, which conceals the surface details of the mesh. Combined with the diffuse and specular terms in the traditional Phong reflection equation, this method works well, but on its own it is underwhelming.

HLSL Development Cookbook presents an alternative method of calculating ambient lighting, called hemispheric ambient lighting. This method takes advantage of programmable pixel shaders and allows us to calculate a varying ambient color per pixel, based on the interpolated surface normal and two constant color values representing the color of light bounced off the ground and off the ceiling/sky. As you can see in the screenshot on the right below, this lighting method preserves the surface details of the mesh (so long as different top and bottom colors are selected). As we'll see, this method of computing ambient lighting is also computationally cheap.

Old-style flat ambient lighting
Hemispherical ambient lighting
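The core of the hemispheric trick is a single lerp between the two colors, driven by how far the pixel's normal points upward. A rough HLSL sketch, using my own names rather than the book's exact code:

```hlsl
// Hemispheric ambient: blend between a ground color and a sky color
// based on how much the surface normal points "up".
float3 CalcHemisphericAmbient(float3 normal, float3 diffuseColor,
                              float3 ambientDown, float3 ambientUp)
{
    // Map normal.y from [-1, 1] to [0, 1].
    float up = normal.y * 0.5f + 0.5f;

    // Downward-facing normals get the ground color,
    // upward-facing normals get the sky color.
    float3 ambient = lerp(ambientDown, ambientUp, up);
    return ambient * diffuseColor;
}
```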

As always, the full source code for this post is available at my GitHub repository.


Thursday, September 11, 2014 Eric Richards

Many moons ago now, I picked up a copy of HLSL Development Cookbook by Doron Feinstein. I had intended to work my way through it after I finished up Luna's Introduction to 3D Game Programming with Direct3D 11.0, but winter and life kind of got in the way...

Another difficulty I had with this book was that the code samples made heavy use of DXUT, which was the semi-official Direct3D sample framework. With the Windows 8 transition, DXUT is sorta-kinda deprecated now, and besides, SlimDX doesn't really have support for it - SlimDX is more of a bare-metal binding to the underlying DirectX framework, which DXUT sits on top of.

So in any case, I was going to have to do a fair amount of rework to adapt the samples from this book to fit into the framework of code that I'd built up over the past year or so. Swapping the UI-related stuff out for WinForms controls hosted in my SlimDX app didn't seem as though it would be that hard, but a bigger stumbling block was the fact that all of the sample meshes provided with the code use the .sdkmesh format. SDKMesh is a 3D model format that used to be used in a lot of the DirectX sample code, after .X models kind of fell into disfavor (of course, now it appears they are using yet another model format, .CMO). SDKMesh is unfortunately not supported by Assimp, so I can't use Assimp.Net to load SDKMesh meshes. Fortunately, it is a relatively simple binary format. The documentation, at least that provided in the DirectX July 2010 SDK, is a little spotty and not totally correct, but it's possible to figure things out by looking at the source code for DXUT and another C# SDKMesh loader that appears to have disappeared from the internet since I first found it...
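Loading the file starts by pulling the header in with a BinaryReader. The sketch below shows the general shape of that; the field names and the padding are my best reconstruction from DXUT's SDKmesh.h, so treat them as assumptions to verify against the header file rather than gospel:

```csharp
using System.IO;

// Rough sketch of reading the start of an .sdkmesh file.
// Field names follow DXUT's SDKMESH_HEADER as best I can reconstruct it;
// verify the sizes and padding against SDKmesh.h before relying on this.
public class SdkMeshHeader
{
    public uint Version;
    public bool IsBigEndian;
    public ulong HeaderSize;
    public ulong NonBufferDataSize;
    public ulong BufferDataSize;

    public static SdkMeshHeader Read(BinaryReader reader)
    {
        var header = new SdkMeshHeader();
        header.Version = reader.ReadUInt32();
        header.IsBigEndian = reader.ReadByte() != 0;
        // Assumption: the on-disk layout mirrors the natively-aligned C++ struct,
        // so skip the padding bytes before the first 64-bit field.
        reader.ReadBytes(3);
        header.HeaderSize = reader.ReadUInt64();
        header.NonBufferDataSize = reader.ReadUInt64();
        header.BufferDataSize = reader.ReadUInt64();
        // The counts (vertex/index buffers, meshes, subsets, frames, materials)
        // and the 64-bit offsets to each data block follow.
        return header;
    }
}
```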

Well, this has sat in my todo pile for long enough, let's get started. As always, you can get the code for this example from my public GitHub repository.

Stanford bunny .sdkmesh model