Sunday, July 12, 2015 Eric Richards

Until recently, if you wanted to write a .NET web application and run it on Windows, that generally meant creating an ASP.NET project and then deploying it to IIS. For most use-cases, that was not that painful - IIS is pretty well tuned for supporting a typical request/response-driven web site. It is also very powerful and very configurable, which is both a blessing and a curse.

In my day job, we've built out several enterprise applications for interfacing with and extending instant messaging platforms like Microsoft Lync and IBM Sametime. Invariably, we need some kind of web interface for these applications, for configuring settings, displaying metrics, and so forth. Until the past year, we followed the standard pattern, with ASP.NET web applications on top of IIS. In our case, however, some pretty serious pain-points arose repeatedly:

  • Complicated Deployments: The software we sell to our customers is distributed as an installer that they run on servers within their corporate networks. The tooling we had for building our installers was not particularly powerful, and it was not well-suited to automatically installing the Windows features and IIS roles required for our applications. I've lost track of the number of support calls I have been on where a customer skipped installing prerequisite components or missed some of the required IIS roles. Also, having to install an array of IIS roles and features for a very light-weight web application really feels like overkill.
  • IIS Configuration Woes: Similarly, because IIS is so configurable, it's easy enough for some of the settings to get twiddled and break our applications. Particularly when one of our applications shares a server or IIS site with other applications (the Lync Server Web UI and SharePoint are common culprits), sorting out the interaction between global IIS settings, site-level settings, and settings for our applications can be needlessly complicated.
  • Mismatches between standard IIS behavior and our applications' needs: ASP.NET on IIS is generally oriented around a low-persistence request/response model. Some of our applications perform considerable amounts of background processing, which doesn't fit this model very well. By default, IIS application pools are set to terminate applications that have been idle (i.e. not served an HTTP request) for a relatively short amount of time, and also to recycle the application at set intervals. There are ways to get around this behavior: disabling the idle timeouts and recycling intervals (a sketch of scripting this follows the list), or employing some other service to periodically ping a keep-alive method on the application. Or you can split the background processing off from the UI component and deploy it as an additional service, but that introduces the complexity of communicating between the two processes and requires installing and configuring an additional service. We've done both, but neither is particularly fun.
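
For reference, the idle-timeout and recycling workaround can at least be scripted rather than clicked through in the IIS control panel. Here is a minimal sketch using the Microsoft.Web.Administration API; the application pool name is hypothetical:

using System;
using Microsoft.Web.Administration; // requires the IIS management assembly

class DisableRecycling {
    static void Main() {
        using (var manager = new ServerManager()) {
            // "MyAppPool" is a placeholder for the actual application pool name.
            var pool = manager.ApplicationPools["MyAppPool"];
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;       // never kill idle workers
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero; // never recycle on a schedule
            manager.CommitChanges();
        }
    }
}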

Clearly, we were not the only ones to run into these pain-points. Eventually, Microsoft embraced OWIN (the Open Web Interface for .NET), a specification for a much more light-weight and modular web application model than traditional ASP.NET. Along with OWIN came Project Katana, Microsoft's default implementation of the OWIN specification. While OWIN can be integrated into ASP.NET running on IIS, one of the huge selling points of OWIN and Katana is that they make it considerably easier to create a stand-alone, light-weight web server running out of a Windows Service or even a regular old desktop application.

Running a light-weight stand-alone web server offers a lot of benefits:

  • No dependencies on IIS or ASP.NET. All that is required is the .NET Framework, which drastically cuts back on the complexity of checking for and installing prerequisites at deployment.
  • Much simpler deployments. Since your application is basically just a .exe, deployments are much simpler: there is no need to register anything with IIS, set up application pools, or any of that mess. You can go as far as simply zipping up the application directory, unzipping it on the target machine, and running the executable to have a web application up and running.
  • Complete control over configuration. Generally with OWIN, all the configuration for your web server is specified explicitly in code. All of the IIS control panel and web.config fiddling goes away, and you can specify exactly the behavior you expect.
  • Application lifecycle is simpler. Your application services incoming HTTP requests using the OWIN HttpListener, which runs on its own background threads, while the rest of your application performs any other processing. There are no idle timeouts or automatic recycling; your application runs until you decide to stop it (see the sketch below).
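
To illustrate just how little is involved, here is a minimal sketch of a self-hosted OWIN console application, assuming the Microsoft.Owin.Hosting and Microsoft.Owin.Host.HttpListener packages; the port is arbitrary:

using System;
using Microsoft.Owin.Hosting;
using Owin;

class Program {
    static void Main() {
        // WebApp.Start spins up the OWIN HttpListener host on background threads.
        using (WebApp.Start("http://localhost:8080", app =>
            app.Run(context => context.Response.WriteAsync("Hello from OWIN!"))))
        {
            Console.WriteLine("Listening on http://localhost:8080 - press Enter to exit.");
            Console.ReadLine(); // the server runs until we decide to stop it
        }
    }
}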

This all sounds pretty excellent, and in practice it has been a pretty big win for us. However, one difficulty we encountered was in converting our existing ASP.NET MVC applications over to self-hosted OWIN versions. ASP.NET 5 has been available for some time as a release candidate, and should be available in a final version soon, but at the time we first investigated making the switch to OWIN, there was no version of ASP.NET MVC that could be used with the self-hosted OWIN HttpListener. One option would have been to rewrite our applications to use WebAPI on the back-end and serve static HTML, with JavaScript to load in data from the API, but we had relatively mature applications that followed the MVC way of doing things, populating Razor cshtml views from model classes server-side, and that kind of rewrite, plus all the necessary regression testing, was not particularly attractive.

So we cast about for an alternative that would let us reuse more of our existing code, and came upon the Nancy web framework. While there are some differences in execution, Nancy follows roughly the same conceptual model as ASP.NET MVC, and also provides a view engine with support for Razor cshtml views. It also had a mature and simple-to-use OWIN integration package, so after some experimentation to determine feasibility, we jumped in and started converting over some of our applications, and it has been a great success for us.

With that justification out of the way, I'm going to go ahead and present a very barebones example of how to set up a self-hosted OWIN web application that uses Nancy and runs as either a console application or a Windows service. The source code is available on GitHub at https://github.com/ericrrichards/NancyOwinTemplate, and you can also download this code as a Visual Studio 2013 project template.
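
As a rough sketch of the shape of such an application (the module and route below are illustrative, not the template's actual code), hosting Nancy on OWIN boils down to a NancyModule plus a one-line Startup class, using the Nancy.Owin package:

using Nancy;
using Owin;

// Nancy modules play roughly the role of ASP.NET MVC controllers.
public class HomeModule : NancyModule {
    public HomeModule() {
        // Could also return View["Index", model] to render a Razor cshtml view.
        Get["/"] = _ => "Hello from Nancy!";
    }
}

public class Startup {
    public void Configuration(IAppBuilder app) {
        app.UseNancy(); // wires Nancy into the OWIN pipeline
    }
}

Hosting then looks just like the earlier sketch, e.g. WebApp.Start<Startup>("http://localhost:8080").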


Thursday, June 25, 2015 Eric Richards

Recently, I've started work on a mobile application that interfaces with one of my employer's server-side products. Because this product is primarily a C# .NET application, we decided to take a look at Xamarin.Forms, so that we could leverage parts of our existing code-base and make use of Xamarin's cross-platform capabilities, rather than developing separate native clients in Java on Android, Objective-C on iOS, and C# on Windows Phone. Since we are primarily a C#/HTML/CSS/JS shop, the potential time-savings of reusing our existing .NET expertise, and only having to learn the Xamarin framework rather than two additional languages and three mobile development frameworks, were a huge plus - particularly as some of the technologies we use to interact with our server-side product are not officially supported on the native platforms to the same degree as their .NET implementations.

Time will tell if Xamarin.Forms is the correct long-term solution for our needs - on the one hand, our particular application does not need many device-specific capabilities beyond those Xamarin provides access to, but on the other, the licensing costs for Xamarin are somewhat steep. At least for the purposes of rapidly throwing together a proof-of-concept prototype and exploring the feasibility of the idea, Xamarin.Forms has been a great success - in the equivalent of one week of developer time, we were able to produce a fully-functional prototype.

While I was able to throw together something workable in a short period of time working from the Xamarin sample code, a variety of blog posts, and trawling the Xamarin Forms development forums, the task would have been much easier, and I would have avoided some missteps, if I had taken the time to read a brief overview of the technology beforehand. About midway through the prototype, I happened upon Xamarin Forms Succinctly, which is exactly the quick overview I was looking for. It is a quick read, at only 148 pages in the PDF version, but it provides a good surface treatment of the concepts and techniques for building simple Xamarin Forms applications - I printed out a copy, put it in a three-ring binder, and read through it over the course of a couple of evenings and lunch breaks. It helped a great deal in filling in some of the blanks I had missed in my more haphazard initial exploration, and it was a great help in fixing some of the issues and antipatterns in the initial prototype version of our app.

I would definitely recommend that anyone who is interested in investigating Xamarin Forms and seeing what the technology offers take a few hours and read through Xamarin Forms Succinctly. For the price (free, other than registering an account with Syncfusion) and the time investment required (minimal), it is hard to beat. Besides, it's a gateway into the large library of free, generally high-quality Succinctly ebooks, which cover a vast array of technical topics in a quick, accessible format.


Wednesday, May 20, 2015 Eric Richards

For the past few weeks, I've been once again noodling on the idea of starting a .NET port of a classic Id FPS. As a kid on my first computer, an off-brand 486 running DOS, I just hit the tail end of the good old days of shareware, and amongst all the floppy disks of kiddy and educational software, sliming Gruzzles couldn't really hold a candle to exploring Indiana Jones and the Last Crusade-esque Gothic castles and knifing Nazis.

While the original source code for Wolfenstein 3D has been available for some time, it is a bit of a slog trying to wade through C code that was written 20 years ago, with near and far pointers, blitting directly to VGA memory, and hand-rolled assembly routines - let alone trying to build the project successfully. Consequently, converting it over to C# is a bit of a struggle, particularly for some of the low-level pointer manipulation and for loading in binary assets - it is very helpful to be able to step through both codebases side by side in the debugger to figure out any discrepancies.

Because of these difficulties, I have started looking at the Chocolate Wolfenstein 3D project, by Fabien Sanglard. Mr. Sanglard is a great student of the Id Software engine source code, and has done several very nice writeups of the different engines open-sourced by Id. He was even planning on writing a book-length analysis of Wolfenstein 3D, which hopefully is still underway. Chocolate Wolfenstein 3D is a more modernized C++ conversion of the original Id code, with some of the more gnarly bits smoothed out, and using SDL for cross-platform window-management, graphics, input, and sound. Even better, it can be built and run using current versions of Visual Studio.

The only problem I had with the Chocolate Wolfenstein 3D GitHub repository is that it is missing some dependencies and requires a small amount of tweaking to get it to build and run in Visual Studio 2013. These steps are not particularly difficult, but if you simply clone the repo and hit F5, it doesn't work right out of the box. If you are working on a Mac, there is a very nice guide on setting up the project in Xcode, but I have not found a similar guide for Windows, so I decided to document the steps I went through and share them here.

Chocolate Wolfenstein 3D title screen


Wednesday, May 13, 2015 Eric Richards

If you have not used it yet, SignalR is a real-time web communication library which is now under the ASP.NET banner. It is a very cool technology that allows you to actively push data from a server application to your clients, without having to worry too much about the particular transport mechanism by which the content is delivered - SignalR ideally uses WebSockets, but degrades gracefully to older transport protocols (like AJAX long polling) if WebSocket connections are not supported by either the client or the server. This is very powerful, since it allows you to write a single implementation focused on your high-level needs, and SignalR handles most of the nitty-gritty work for you. In addition, while SignalR can be used as something of a dumb pipe (although not that dumb, since you do get object->JSON serialization out of the box) via PersistentConnection, the real power of SignalR lies in its Hubs concept. By connecting to a SignalR Hub, a client can call methods on the server, both to execute commands and to query for data. Similarly, the server can issue commands to the client by calling client-side callback methods.
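
To make the Hubs model concrete, here is a minimal sketch using the SignalR 2.x server and .NET client packages; the hub, method, and callback names are hypothetical:

using System;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Client;

// Server side: clients call Send, and the server calls back into every connected client.
public class ChatHub : Hub {
    public void Send(string name, string message) {
        // broadcastMessage is a client-side callback, resolved dynamically.
        Clients.All.broadcastMessage(name, message);
    }
}

// Client side, using the Microsoft.AspNet.SignalR.Client package:
public class ChatClient {
    public static void Run() {
        var connection = new HubConnection("http://localhost:8080");
        var hub = connection.CreateHubProxy("ChatHub");
        // Register a handler for the server-initiated callback...
        hub.On<string, string>("broadcastMessage",
            (name, msg) => Console.WriteLine("{0}: {1}", name, msg));
        connection.Start().Wait();
        // ...and call a method on the Hub.
        hub.Invoke("Send", "console", "hello").Wait();
    }
}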

This allows you to do some pretty cool things. Probably the canonical example is a web-based instant-messaging/chat room, which is dead simple to implement using SignalR. But really, any kind of web page that polls for new data on an interval is a good candidate for SignalR - say, real-time dashboards, a sports score ticker, or maybe some classes of multiplayer games.

One scenario that SignalR does not support very nicely is when the server needs to query its clients for information and have that data returned to the Hub. As currently implemented, a SignalR client can only define handlers for methods called by the Hub that are Action<T,...> delegates, which have a void return signature. So, while the code below will compile if it is included in a Hub method, data is actually a Task object wrapping the asynchronous client-side invocation, not the value the client produces. In short, there is no way to directly return a value from a client-side method back to the Hub.

var data = Clients.Client(SOME_ID).GetData();

Why would you want to do this kind of thing using SignalR? In my case, I recently encountered a situation where a third-party library implemented a wrapper around a COM application, and that wrapper was implemented as a singleton. I needed to run multiple instances of the underlying COM application in order to connect to an external service with different user accounts, but because the wrapper was a singleton, I could only make one connection from my main application process. However, I could get around this limitation by spawning an additional process for each instance of the COM wrapper that was required (fortunately, it was simply a C# singleton, and not limited by any kind of global mutex). While each child process could mostly operate independently, the main application needed to be able to issue commands to control the child processes, as well as retrieve information from them for various reporting features and to ensure consistency globally across the application.

I considered using a more traditional IPC method, like bidirectional sockets, or named pipes, but the simplicity and cleanness of using SignalR was more attractive to me than rolling my own message-passing protocol and coding a message loop on both sides. At least for issuing commands to the child processes, implementing a SignalR Hub on my main application to which the clients would connect was pretty quick and easy. Then I ran into the difficulty of how to get data back out of the child processes using SignalR.

In retrospect, I could have modified the code executing on the child processes into a more event-driven form, so that I could use the normal SignalR paradigm to push information to the Hub in the main application, instead of querying the child processes for it. However, this would have involved a complete revision of the architecture of the application (there were very similar components that ran within the main process, using the same interfaces and the bulk of the implementation), as well as necessitating an extensive amount of caching in the main process.

Instead, I settled on the slightly crazy pattern illustrated here (a rough code sketch follows the list of steps). In broad strokes, what I am doing is:

  1. A specially-tagged SignalR client owned by the main application calls a method on the Hub requesting a piece of data from a specific child process.
  2. The Hub resolves the requested child process to its SignalR connection Id, and calls a method to order the child process to assemble the requested data.
  3. The child process calculates the requested data, and pushes it to a different method on the Hub.
  4. The second Hub method stuffs the pushed data from the child process into a dictionary, keyed by the child process' identity.
  5. While this has been happening, the first Hub method has been blocking, waiting for the requested data to appear in the dictionary.
  6. When the data appears in the dictionary, it is popped off, removing the key from the dictionary and preparing it for the next invocation. There is also a timeout value, so that if there is some sort of interruption, a default value will be returned rather than blocking indefinitely.
  7. Finally, the first Hub method returns the piece of data to the main application.
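
Here is a rough sketch of what that Hub might look like; the names, the string payload type, and the timeout value are all illustrative:

using System.Collections.Concurrent;
using System.Threading;
using Microsoft.AspNet.SignalR;

public class IpcHub : Hub {
    // Connection ids for each child process, registered when the child connects.
    private static readonly ConcurrentDictionary<string, string> Connections =
        new ConcurrentDictionary<string, string>();
    // Step 4's dictionary: data pushed by child processes, keyed by child identity.
    private static readonly ConcurrentDictionary<string, string> Responses =
        new ConcurrentDictionary<string, string>();

    // Each child process calls this once after connecting, to register its identity.
    public void Register(string childId) {
        Connections[childId] = Context.ConnectionId;
    }

    // Step 1: the specially-tagged main-application client requests data from a child.
    public string RequestData(string childId) {
        string connectionId;
        if (!Connections.TryGetValue(childId, out connectionId)) {
            return null; // unknown child process
        }
        // Step 2: order the child process to assemble the requested data.
        Clients.Client(connectionId).getData();

        // Steps 5 and 6: block until the child's response appears, or time out.
        const int timeoutMs = 5000;
        const int pollMs = 100;
        var waited = 0;
        string result;
        while (!Responses.TryRemove(childId, out result)) {
            if (waited >= timeoutMs) {
                return null; // default value rather than blocking indefinitely
            }
            Thread.Sleep(pollMs);
            waited += pollMs;
        }
        return result; // step 7: hand the data back to the main application
    }

    // Step 3: the child process computes the requested data and pushes it here.
    public void PushData(string childId, string data) {
        Responses[childId] = data; // step 4
    }
}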

bidirectional signalR

The code for this example is available on GitHub at https://github.com/ericrrichards/crazy-signalr-ipc.


Wednesday, February 11, 2015 Eric Richards

Just a quick update today. I've updated the 3D model loading code to use the latest version of AssimpNet that is on NuGet now. The latest code is updated on GitHub.

The biggest change appears to be that the AssimpImporter/Exporter classes have been merged into a single AssimpContext class that can do both. Some of the GetXXX methods for retrieving vertex elements and textures also appear to have been replaced with properties, and the way that one hooks into the internal Assimp logging has changed as well. I haven't discovered many other changes yet, besides some additional file format support, but I haven't gotten around to exploring it that closely. It's mildly irritating that the interface has changed, but hey, at least it is being actively worked on (which is more than I can say for some of my stuff at times...).
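
For anyone else making the switch, a minimal sketch of loading a model through the merged class looks something like this; the file path and post-process flags are just examples:

using Assimp;

public static class ModelLoader {
    public static Scene Load(string path) {
        // AssimpContext replaces the old AssimpImporter/AssimpExporter pair.
        var context = new AssimpContext();
        return context.ImportFile(path, PostProcessSteps.Triangulate);
    }
}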

I've been quite busy with work lately, so I haven't had a huge amount of time to work on anything here. I'm also a little burned out on doing straight DX11 graphics stuff, so I think I'm going to try to transition into doing some more applied work for a while. I've got some stuff in the pipeline on making simple games with Direct2D, and ideally I'd like to get involved with One Game a Month, to give me a little additional motivation.

skinnedModels

