Tuesday, August 28, 2018 Eric Richards


MergeSorting a hand of playing cards

So we're still not up to an algorithm a day, but I'm gaining... So far, we've been looking at relatively simple quadratic (i.e. O(n²)) sorting algorithms. These have been incremental algorithms: we sort one element, then the next unsorted element, and so on. Another category is the divide-and-conquer algorithms. In this style of algorithm, we take our entire data set and partition it into smaller, simpler portions, solve each subportion, and then integrate the subportions back together to yield the full result.

Mergesort is a simple example of a divide-and-conquer algorithm, described in Chapter 2.3.1 of CLRS. Perhaps this should really be called divide-conquer-combine, as that more accurately describes what we are doing:

  1. Divide: Split our items in half, giving us a left set and a right set of roughly n/2 items each.
  2. Conquer: We then need to sort each of these smaller sets, so we recursively apply the divide step on each of the subsets. Trivially, once we have reached subsets of a single item, that subset is sorted, so that is our base case where we stop dividing.
  3. Combine: When the two subsets are sorted, we then merge them back together to produce the full sorted set.

You can see this procedure in action in the GIF to the left. Dividing into the left and right subsets is trivial, so the real key is the merging operation. Using cards as an example, this is quite intuitive: when we are merging, we look at the first card in the left set and the first card in the right set. Whichever card is smaller, we strip it off and add it to the merged set, and continue in that fashion until one of the sets is empty, then add all the remaining cards from the non-empty set.

Simple, right? Conceptually, it certainly is, and if I were going to do this with some higher-level constructs like LINQ, it would be a snap to translate almost directly into code, but we're going to do this the hard way, and so that makes it a little more complicated ☺.
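To pin down the idea before we get there, here is a minimal sketch of the whole algorithm, in the same extension-method style as the rest of this series. Fair warning: this is an illustrative version that freely allocates temporary arrays for the left and right halves, not necessarily the exact implementation we'll end up with, just the shape of the idea:

public static void MergeSort(this int[] a) {
    if (a.Length <= 1) return; // base case: a single item is already sorted
    // Divide: split the items into a left half and a right half
    var left = new int[a.Length / 2];
    var right = new int[a.Length - left.Length];
    Array.Copy(a, 0, left, 0, left.Length);
    Array.Copy(a, left.Length, right, 0, right.Length);
    // Conquer: recursively sort each half
    left.MergeSort();
    right.MergeSort();
    // Combine: repeatedly take the smaller of the two "top cards"
    int i = 0, j = 0, k = 0;
    while (i < left.Length && j < right.Length) {
        a[k++] = left[i] <= right[j] ? left[i++] : right[j++];
    }
    // one set is exhausted; copy over whatever remains in the other
    while (i < left.Length) { a[k++] = left[i++]; }
    while (j < right.Length) { a[k++] = right[j++]; }
}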


Tuesday, August 07, 2018 Eric Richards


Selection Sort

I am not good at this whole blogging thing anymore... My carpentry and remodeling skills have been put to the test over the last four months, however, as I've completely redone my kitchen and gutted and renovated my first-floor apartment. Now back into the saddle with some more algorithms!

Two posts ago, I started out with one of the fundamental sorting algorithms, Insertion Sort. Today we'll be looking at its close cousin, Selection Sort, which is described briefly in exercise 2.2-2 of CLRS.

If you recall, with Insertion Sort, we would iterate over our array, taking the current value and inserting it into its proper place among the elements that we have already sorted, bubbling it down until the portion of the array to the left of our current position is sorted.

Selection Sort works slightly differently. Our algorithm, in English, is as follows:

  1. Scan through the array to select the smallest value, and swap that value with the value in the first position.
  2. Then scan through the rest of the array to find the next smallest value, and swap that value with the value in the second position.
  3. Continue in this fashion until we reach the last element, at which point the array is sorted.

At first glance, this is very, very similar to Insertion Sort, but they are actually almost opposites. The key difference lies in the way that the comparisons are performed:

  • In Insertion Sort, we select the next unsorted element, and scan backwards through the sorted elements to find the position where the current element should be placed.
  • In Selection Sort, we select the next position in the array, and scan forward through the remaining unsorted elements to find the element that should go in that position.

This difference becomes important when we consider the best and worst case performance of the two algorithms.

  • For Insertion Sort, the best case is when we have an already sorted array and try to sort it again. In this case, we take each element and need only compare it against the preceding element, so that we have O(n) comparisons. The worst case is when we have a reverse-sorted array, in which case we will have to compare each new element against every previously sorted element, which gives us n(n-1)/2 comparisons, for a worst-case runtime of O(n²).
  • For Selection Sort, the best and worst case runtimes are the same. For every element, we have to scan all the remaining elements to find the smallest, so we must always perform n(n-1)/2 comparisons, which is always O(n²).
  • One other difference between Insertion and Selection Sort is in the number of swaps that the two algorithms need to perform. Selection Sort will always perform O(n) swaps, whereas Insertion Sort can perform anywhere from zero swaps, in the best case, up to O(n²) swaps, due to the way it bubbles unsorted values into the sorted portion of the array.

Implementation

Now that I've done a few of these, I'm not going to go through all the different variations and C#-isms for generics and custom comparators; I'll just show the straightforward implementation of Selection Sort on an array of integers. If you want to see the full code, including a recursive version, it is available on GitHub.

public static void SelectionSort(this int[] a) {
    for (var i = 0; i < a.Length-1; i++) {
        var smallest = a[i];
        var smallestI = i; // index of the smallest item
        for (var j = i+1; j < a.Length; j++) {
            if (a[j] < smallest) {
                smallest = a[j];
                smallestI = j;
            }
        }
        // swap the smallest value into the final position
        a[smallestI] = a[i];
        a[i] = smallest;
    }
}
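
Since it's an extension method, using it is just:

var numbers = new[] { 5, 2, 4, 6, 1, 3 };
numbers.SelectionSort();
// numbers is now { 1, 2, 3, 4, 5, 6 }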

Flying Spaghetti Monster willing, I'm done with renovation projects for a little bit, so I should have a little more time on my hands, and there won't be a multi-month layover before the next entry in this series, but you never know. Next up we'll be moving on to divide-and-conquer sorting algorithms, starting with Mergesort. Stay tuned.


Monday, April 02, 2018 Eric Richards

A birthday, crunch time at work, Easter, experiments with MonoGame, and some other life things have gotten in the way of me continuing this series as frequently as I had planned, but never fear, back into the breach I go. Last time around, we started things off with Insertion Sort, which is the focus of Chapter 2.1 in my copy of the 3rd Edition. The natural next step would be to continue on to other sorting algorithms: selection sort, merge sort, quicksort, etc., etc. And we'll get there. But first I wanted to make a little diversion into one of the exercises for this chapter: writing a linear search.

I'm going to reproduce the problem definition from the exercise here, just to get us started:

Input: A sequence of n numbers A = {a₁, a₂, ..., aₙ} and a value v.

Output: An index i such that v = A[i], or the special value NIL if v does not appear in A.

There's a bit more about writing loop invariants and proving that the algorithm is correct, if you were going to do this as a homework assignment. But that's incredibly dull, especially for something as self-evident as this. Let's get right to writing some code (github); this will be a short one.

Linear Search

There's not a whole lot to get fancy with in a linear search.

  • We have the value we're searching for, and an array of values.
  • We start at the beginning and compare the first item in the array with our search value; if it matches, we return that index.
  • If not, we move on to the second item, and repeat.
  • If we get to the end and haven't found the value, then it isn't present, and we return our NIL sentinel value.

Since C# has 0-based arrays, and doesn't allow (yet) for any trickery like using negative indices to slice from the end, -1 is a reasonable sentinel value (and matches the convention of the CLR Array.IndexOf function). So that leads pretty naturally to this, where we just use a for-loop:

public static int LinearSearch(this int[] a, int find) {
    for (var i = 0; i < a.Length; i++) {
        if (a[i] == find) {
            return i;
        }
    }
    return -1;
}
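
And calling it is equally unexciting:

var a = new[] { 31, 41, 59, 26, 41, 58 };
var found = a.LinearSearch(26);   // 3
var missing = a.LinearSearch(42); // -1, our NIL sentinel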


Wednesday, March 07, 2018 Eric Richards

The other day I was on HackerNews lurking as I do, and there were a number of people praising Introduction to Algorithms, the infamous CLRS book. Almost everybody who has been through a Computer Science degree has had to buy a copy of it and slog along through it at some point. It's basically the bible of algorithms and data structures classes, and as most books of that kind are, it's a weighty tome that costs about a week's work-study wages. Mine has mostly been gathering dust on the shelf for the past ten years...

I've got a little bit of a love-hate relationship with the book. My algorithms and data structures class in college was entirely theoretical - we didn't write a single line of code during the whole semester, we just did page after page of proofs. Moreover, the professor was one of those space-case academic types, who was probably a bang-up researcher, but couldn't teach a lick. My only enduring memories of that course are chalk scratching on the blackboard, as she transcribed the textbook word for word and symbol for symbol, and sitting in the 1902 Room in Baker Library with reams of scratch paper scattered across the table, flailing away doing induction proofs. Oh, and the week we spent going over that median-of-medians algorithm that I've still never seen a point to... I think I scratched out a B-, and it was easily the least enjoyable class of my entire degree.

On the other hand, I could see that there was a lot of really cool information in the text, if it was taught differently. I always thought that a hands-on version of the class, where you had to implement everything in C, rather than pseudocode and pencil, would be really fun, and moreover, really teach people how to program. Alas, we skipped a lot of the interesting material in the book, and overwhelmingly focused on proving run-time complexity bounds. Which I always thought was a little dumb, detached from real-life implementation details - there was very little consideration of trading space for time, or taking into account the capabilities of the hardware, or other real life concerns - but that is neither here nor there.

So this week, I decided to dust off my copy and give it another go. I'm mostly a C# programmer, and probably will remain one for the foreseeable future; it's a really nice language. However, I'm going to try to keep things pretty basic for the most part, to hew as closely as possible to the pseudocode presented in the text.


Tuesday, September 12, 2017 Eric Richards

TL;DR: the Microsoft Automatic Graph Layout library is pretty slick at automatically generating state transition diagrams.

One of the products that I work on for my day job is an instant message-based call center/helpdesk application. It's a pretty complicated beast, but at the heart of it, what it does is:

  • Allow users to contact a central endpoint.
  • Process the incoming request.
  • Route the request to a member of a predefined group that can assist the end-user.
  • Broker the conversation between the end-user and the helpdesk/call center agent.
  • Perform post-processing on the chat session once it has ended.

Internally, each incoming chat session uses a state machine to keep track of the session and its progression through the workflow. For each stage in the pipeline, we've got a State class that handles operations during that particular stage. There is also a table in our state machine which determines what the valid transitions between the states are, so that we can easily assert that the session is following the expected workflow.

This state machine started out fairly simple, with just a few states, but as always happens, we began to accrete features, and so more and more stages were added into the pipeline, and the complexity and number of state transitions grew. The state machine definition that originated as a hastily drawn scrawling on a sheet of printer paper blossomed into a Visio diagram, then eventually wilted into irrelevance as more and more states were added and documentation efforts fell behind. Like most manual processes, it became a pain point, and was neglected.

Things came to a head recently when it was discovered that a state transition had been added as a code path in certain situations, but the state machine transition table had not been updated accordingly, and so it was possible for chats to attempt a transition that the state machine would reject as invalid, leaving the chat session in limbo. Not ideal. Ideally, we'd have caught this with some kind of compile- or build-time static analysis, but getting that implemented is going to have to be an exercise for the future. Failing that, exhaustive unit tests validating the state transition rules were the easier task, but they ran into the difficulty that the hand-written design documentation had fallen out of date, which necessitated a tedious and time-consuming search through the code base to trace out the logic. Once that was complete, there remained the task of updating the Visio docs... which is less than fun to do by hand.

Earlier that week, I had run across a link to the Microsoft Automatic Graph Layout library, a nifty open-source project that makes it stupidly easy to define a graph and then either display it in a Windows Form or render it to an image file. While it may not be quite as nice looking as a hand-crafted Visio doc, it's not too unattractive, and better, can be completely automated. So we can just pipe our state transition definitions into it from the source code, and out comes a legible diagram. We can even run it as a post-build step to automatically keep the documentation up-to-date. Hooray!
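
As a rough sketch of what that looks like (the names and the transition-table shape here are illustrative, not our actual production code), you can build a graph from a list of (from, to) pairs and render it to a PNG with the GraphViewerGdi renderer along these lines:

using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using Microsoft.Msagl.Drawing;
using Microsoft.Msagl.GraphViewerGdi;

public static class StateDiagramRenderer {
    // transitions: the (from, to) pairs out of our state machine's transition table
    public static void RenderToPng(IEnumerable<(string From, string To)> transitions, string path) {
        var graph = new Graph("chat session state machine");
        // each valid transition becomes a directed edge; MSAGL creates nodes on demand
        foreach (var (from, to) in transitions) {
            graph.AddEdge(from, to);
        }
        // let MSAGL compute the layout, then draw it to a bitmap and save it
        var renderer = new GraphRenderer(graph);
        renderer.CalculateLayout();
        using (var bitmap = new Bitmap((int)graph.Width, (int)graph.Height, PixelFormat.Format32bppPArgb)) {
            renderer.Render(bitmap);
            bitmap.Save(path, ImageFormat.Png);
        }
    }
}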

