Permutations and Combinations

In the last couple of years I’ve regained interest in solidifying my math chops. One mathematical concept that I see popping up again and again is that of permutations and combinations. I tried searching around for quick explanations, but I wasn’t really pleased with what I found, so I thought I would share an approach that really helped me get it.

Permutations

Finding the number of permutations of a set of unique elements is simple, but tricky to understand at first. Once it “clicks,” you’ll think to yourself, “why did I think that was so hard?” (if you’re like me, anyway).

Given a set with unique elements, you can permute it by mixing up the elements. Finding the permutations of a set involves figuring out all the unique ways you can mix those elements. The key concept in a permutation is ordering: it matters how the elements are ordered. Thus, the permutation \left \{ab \right \} is distinct from \left \{ba \right \}. The question, “how many permutations are there of a set?”, is asking for all possible orderings of the elements in that set. Consider the set:

S = \left \{abc \right \}

What are all the permutations of S? One way to do this by hand is by picking an element to go first and enumerating the permutations of the remaining elements:

(a goes first)

\left \{abc \right \} \left \{acb \right \}

(b goes first)

\left \{bac \right \} \left \{bca \right \}

(c goes first)

\left \{cab \right \} \left \{cba \right \}

As you can see, there are 6 permutations of this set. One thing to notice is that there are 3 members in the set, and we could pick any of those to be the first element. More generally, given N elements in a set, there are N ways to pick the first element.

But what about the second element? Assuming all members are unique, we can take the remaining subset of N-1 elements and pick any of those to be the second. For example, let’s assume we’ve picked a already; we’re left with the subset of S without a:

\left \{bc \right \}

This particular set is trivial, but let’s stick with our plan:

(b goes first)

\left \{bc\right \}

(c goes first)

\left \{cb \right \}

We end up with only two permutations of this subset. More generally, we’ve got N-1 elements, and we can pick any of those to go first in our subset.

You may be spotting a pattern here, and that’s the key insight. We started with N elements, and we’re free to pick any one of those to go first. Then, for every one of those N choices, we have a subset of N-1 elements. Following our pattern, we can pick any of those N-1 elements to go first in that subset. If we keep going, we end up with N-2, N-3, and so on, all the way down to 1. When we reach a set of 1 element, we’re done. This is our base case.

The trickiest part is knowing how to combine these results.

Starting with N elements, we have N choices for the first one. For each of those N choices, we have N-1 sub-choices. These are multiplied together to give us the total number of choices, N * (N-1). But we aren’t done, because we have to consider the N-2 choices for each of the N-1 sub-choices, for each of the N original choices, which is N * (N-1) * (N-2). If you continue this pattern all the way down to 1, you get the familiar factorial function, N!. Going back to our example, we had 3 choices for our first element, which gave us the three lines above. For each of those 3 lines, we had a 2-element subset, whose permutations are merely the 2 possible “flippings” of those elements. We ended up with 3*2*1 = 6 permutations, just as we expected.

Thus, the answer to our question, “how many permutations are there of a set?” is N!, where N is the number of elements in our set.
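This recursive argument translates directly into code. Here’s a minimal Python sketch (my own illustration, not from any library) of the “pick a first element, then permute the rest” idea:

    # Enumerate permutations: pick each element to go first, then recurse
    # on the remaining N-1 elements; the base case is a single element.
    def permutations(elements):
        if len(elements) <= 1:
            return [list(elements)]
        results = []
        for i, first in enumerate(elements):
            rest = elements[:i] + elements[i + 1:]  # subset without `first`
            for perm in permutations(rest):         # (N-1)! sub-permutations
                results.append([first] + perm)
        return results

    print(len(permutations(list("abc"))))  # 6 == 3!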

Combinations

You may be familiar with the “n choose r” notation, which is denoted mathematically by the following equation:

\binom{n}{r} = \frac{n!}{r!(n-r)!}

This is a pretty scary looking equation, and it’s not easy to “reverse engineer” the meaning by just looking at it. Fortunately, there’s an easy way to understand it that builds on the idea of permutations presented in the previous section, so bear with me.

First of all, what is a combination? Remember how permutations were ordered sets? The key behind a combination is that it is unordered. How many ways can you choose a 2-person team from a group of 4 students? That’s a combination problem. We don’t care who’s first or second on the team, only who is on it. Thinking about it, combinations must somehow relate to permutations. More precisely, the combinations must be a subset of the permutations. Why? Well, if you have any combination of students on a team, you could generate permutations from it, by picking someone to go first, etc., like we learned above. It “feels” like there are duplicate sets being counted that we don’t care about, which is exactly the case. The equation above is basically computing all possible permutations and then “trimming” the duplicate results to give us the combinations.

With that insight, let’s examine that scary equation once again, just for a bit.

\binom{n}{r} = \frac{n!}{r!(n-r)!}

This equation is saying, “how many ways can we choose r elements from n elements?” Using our team example above, we’d be choosing a 2-person team from 4 total people. Let’s plug it in and see what we get:

\binom{4}{2} = \frac{4!}{2!(4-2)!} = \frac{24}{2*2} = 6

Given a set, \left \{abcd \right \}, it’s not too hard to pick out those 6 combinations:

\left \{ab\right \} \left \{ac \right \} \left \{ad \right \} \left \{bc \right \} \left \{bd \right \} \left \{cd \right \}

As you can see, each combination includes a unique group of people, in no particular order. We could visit each of these groups, permute them by picking all possible orderings, and combine the results to get 24 total permutations. I’ll leave that as an exercise for you. The key insight is that we can compute permutations from combinations. Each permutation is the same combination, just mixed up differently. Let’s just examine one of these combinations: \left \{ab\right \}.

Finding the permutations in this set is trivial. Here they are:

\left \{ab \right \} \left \{ba \right \}

There are 2! = 2 permutations here. These are the permutations of the combination \left \{ab\right \}. Hmmm, it seems like we’re ending up with twice as many results as we want. For each combination, we have a duplicate. We need to take our original n! permutation estimate and somehow remove these duplicates. We do this by dividing by r!. Think about it: if we have a 2-element set, we end up double-counting everything, so we need to divide by 2. If we had a 3-element set, we could have even more duplicates, and we would need to divide by 3! = 6.

Using our previous example, dividing 24 (the number of permutations of n elements) by 2 (the number of permutations of r elements) gives us 12–not 6. What gives?

We’ve still got the (n-r)! term in there. If you think about it, n-r counts the remaining elements not chosen in our combination. Let’s take a look at our team example once more to see why we need this.

Let’s say we’ve chosen the combination, \left \{ab \right \}. We know that this expands into 2 permutations, so we divide our total number of permutations (24 in this example) by 2, but we still have duplicates. The issue is that our n! permutations still include permutations of the remaining elements, which are incorrectly included in our count. Let’s examine our first combination once again:

\left \{ab \right \}

We know we can permute this, but we still have to combine it with the remaining unused elements, and together they form a permutation of the original set. Thus, even if we purge duplicates from our subset, we still end up with more duplicates. Assuming we’ve purged duplicates from our example subset, our combined set still includes the permuted unused elements:

\left \{abcd \right \} \left \{abdc \right \}

To correct this, we need to divide by the (n-r) element permutations, which gives us the missing (n-r)! term. If we divide our original n! permutations by these two terms, we finally get the correct result:

\binom{n}{r} = \frac{n!}{r!(n-r)!}
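If you want to sanity-check this argument numerically, here’s a quick Python sketch (my own illustration, using the standard itertools module): generate all n! orderings, keep only the first r elements of each, and trim the duplicates, exactly as the equation does.

    # Count combinations by generating every permutation and trimming the
    # duplicates: orderings that pick the same first r elements (in any
    # order, with any ordering of the leftovers) are one choice.
    from itertools import permutations

    def n_choose_r(elements, r):
        chosen = {frozenset(p[:r]) for p in permutations(elements)}
        return len(chosen)

    print(n_choose_r("abcd", 2))  # 6 == 4! / (2! * (4-2)!)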

Samba 4: A Case Study (Part 1)

I work in the IT department for the Computer Science Department at Taylor University. We do things a little differently here in terms of our lab machine setup. We dual-boot Ubuntu Linux and Windows 7, but we centralize our file serving and domain control through Samba on Linux, as opposed to a Windows Active Directory solution. Currently, we use Samba 3.5.x as our Primary Domain Controller with an OpenLDAP backend to host user accounts. This has been working, but it has issues. Samba 3 was designed for the NT-style domain, which is really starting to show its age when compared to modern Active Directory. For instance, to edit a group policy setting, we currently have to manually edit our Windows OS image and push it out to every machine (alternatively, we have an updater script that lets us do it without re-imaging, but it’s still a pain).

I started following Samba 4 development late last year; it had its first official release a couple of months ago, and I’m now in the process of building a production Active Directory cluster to replace the Samba 3 server. The official HOWTO is a little skimpy (listed here: http://wiki.samba.org/index.php/Samba_AD_DC_HOWTO), so I thought I would add some of my experience setting this stuff up. As a disclaimer, I’m no expert, so this is just my understanding based on messing around with it. If you notice something that’s not correct, please leave a comment or email me.

System Architecture

Before I go into details, I wanted to give a bit of an overview of the server architecture I went with. Although the Samba team would love for you to use Samba 4 exclusively, my personal experience tells me that it really should have a Windows DC attached to it (two would be better). I guess it depends on your specific needs, but one big reason we’re opting for Samba is the unification of file space and permissions between Linux and Windows.

One key thing to know about Active Directory is that it’s deeply tied into DNS. You can’t query for anything in the domain without a working DNS configuration. For a Linux-only solution, there are two options: use a heavily modified Bind configuration integrated with Samba (which has issues), or just choose the new default internal DNS server built by the Samba team.

Messing with Bind isn’t my cup of tea, and if you’re looking for help there, I’m not your guy. I’m not really excited about using the custom DNS solution either, for a couple of reasons. For one, it’s brand new, so it’s a security risk, and it lacks the stability of industry standard DNS solutions. It seems like the Samba team just got fed up hacking Bind to match their needs, so they wrote their own. That’s all well and good, but I’d rather use the Windows DNS solution. It’s turnkey, stable, and still allows for the Linux integration I need.

I fiddled around with a couple of different server configurations. I first tried using S4 as the initial DC. I built a Windows Server 2008 R2 machine (sidenote: don’t bother with Windows Server 2012, it’s not supported yet) and tried joining it to the Samba domain. I kept running into strange errors in dcpromo.exe, so that didn’t work.

In my second attempt, I built a Windows server box and joined the S4 server to it, using the instructions listed here: http://wiki.samba.org/index.php/Samba4/HOWTO/Join_a_domain_as_a_DC

This worked pretty well. I was able to join successfully and get replication working; I really like having the Windows GUI for DNS and Group Policy settings accessible from Windows. Because I don’t quite trust S4 yet, I added a second Windows DC to the cluster. I’ve noticed that the S4 server screws up some features. For instance, I can’t demote any of the domain controllers (I get weird errors), so I have to manually remove the connections and DNS entries. This isn’t the end of the world for us because we have a small cluster, but for larger clusters it would be a big show stopper. I’ve seen some chatter about this on the Samba mailing list, but no solutions. There are other minor issues I’ve run into, but I won’t list them here; I’ll probably make a separate post for that.

Those hiccups aside, I’m able to replicate the directory entries between all domain controllers. I can also create accounts on either the S4 or Windows machines successfully. I created a couple of fresh Windows 7 builds and configured some group policy settings for them (including folder redirection and roaming profiles). It didn’t take too long to get that working.

There are a lot of details I’ve come across in setting this up, so I’m going to try to spread them out over a few posts. Stay tuned for updates! If you have any questions about things, feel free to comment or email me.

Gaussian Blur Effect

Note: This is taken from the MetaVoxel development blog, http://www.metavoxelgame.com

I started a task last week to add a simple Gaussian blur effect to the background blocks in the scene. Before, we were just drawing them with no filtering, so they looked “pixelated.” I tried using bilinear filtering, but that looks pretty ugly too. It ended up taking way too much time, but I learned a lot about image compositing and running convolution kernels on the GPU. Here are the results:

(Screenshots of the blur results.)

The algorithm splits the scene up into layers, similar to how you might do it in Photoshop. The “blur” layer is saved to a separate render target and filtered using a two-pass Gaussian blur. This makes for a nice softening effect. The layers are then composited together in a final merge step and rendered to the screen.
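For those curious about the two-pass part: a Gaussian kernel is separable, so you can blur with a 1D kernel horizontally and then vertically instead of running a full 2D kernel. Here’s a rough NumPy sketch of computing the normalized 1D weights (just an illustration, not the game’s actual code):

    import numpy as np

    # 1D kernel for a separable (two-pass) Gaussian blur; running it once
    # per axis is equivalent to a full 2D Gaussian but much cheaper.
    def gaussian_weights(radius, sigma):
        xs = np.arange(-radius, radius + 1, dtype=np.float64)
        weights = np.exp(-(xs * xs) / (2.0 * sigma * sigma))
        return weights / weights.sum()  # normalize to preserve brightness

    print(gaussian_weights(2, 1.0))  # 5 taps, symmetric, sums to 1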

For those who care about the details, I ran into some issues with compositing, and it turned out that I’d been thinking about alpha blending all wrong. Most resources I’ve seen out there use linear interpolation for blending, based on the source alpha. The equation looks like:

dest = src_alpha * src_color + (1 - src_alpha) * dst_color

In OpenGL, this is done through glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ).

There are some issues with this blend mode. For one, how do you represent complete transparency? Is it (0, 0, 0, 0)? Or (255, 255, 255, 0)? You can pick any arbitrary “color” for your transparent surface, which doesn’t really make physical sense. Even worse, if you render a texture into a transparent render target, you end up with discolored fringes:

(Screenshot of the discolored fringing.)

The solution is to use premultiplied alpha. We instead use the blend mode:

dest = src_color + (1 - src_alpha) * dst_color

This way, (0, 0, 0, 0) is the only true transparent color. It reacts properly to filtering and avoids the ugly fringes. Once I switched the alpha blending to use premultiplied alpha, I was able to set the layer render targets to fully transparent, render into them, and then blend the layers together correctly.

This requires a bit of overhead for texture loading, because the input color is multiplied by the alpha channel. The results look very good though.
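To make that concrete, here’s a rough sketch of the load-time step and the matching blend mode (NumPy and PyOpenGL assumed; this is an illustration, not the engine’s actual loader):

    import numpy as np
    from OpenGL.GL import glBlendFunc, GL_ONE, GL_ONE_MINUS_SRC_ALPHA

    # Premultiply at load: rgb *= alpha, for an (H, W, 4) uint8 image.
    def premultiply_alpha(rgba):
        pixels = rgba.astype(np.float32) / 255.0
        pixels[..., :3] *= pixels[..., 3:4]
        return (pixels * 255.0 + 0.5).astype(np.uint8)

    # Premultiplied blending: dest = src_color + (1 - src_alpha) * dst_color
    def enable_premultiplied_blending():
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)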

Buffer Streaming in OpenGL

I spent a bit of time recently designing a sprite batching system using the (more) modern OpenGL 3.x core functionality rather than the old immediate mode stuff. I thought I would share some of my observations about working with it. My first approach to the problem consisted of allocating a single, large dynamic vertex buffer object, which I mapped via glMapBuffer() to copy in data for each batch of sprites. I used a static index buffer that I locked once and filled at initialization time, since I only ever needed to draw quads. This worked fine, except it was wicked slow.

For one, MetaVoxel uses alpha blending very heavily, requiring everything to be drawn back to front. As you might expect, this results in an output-sensitive batch size: a batch can only keep growing as long as consecutive quads share identical render state. Sorting by texture isn’t possible, unfortunately. Additionally, each time I mapped the buffer, I mapped the entire thing all at once, even for small batches. Anyway, it didn’t take much to bring things to a crawl. Profiling revealed that the vast majority of the time was spent in the driver, waiting on glMapBuffer.

Clearly, I was doing something wrong. The key issue turned out to be buffer synchronization between the CPU and GPU. I found some great resources below that go into detail explaining how to optimize this.

http://hacksoflife.blogspot.com/2012/04/beyond-glmapbuffer.html

http://www.opengl.org/wiki/Buffer_Object_Streaming

I won’t rehash the details of how buffer orphaning works; see the above articles if you’re interested. The basic idea is that the driver is very conservative and will happily stall waiting for a buffer to flush to the GPU before letting you write to it again. You either need to coax the driver to allocate a new buffer for you, or implement a ring buffer scheme yourself. I ended up implementing the following techniques that got things running smoothly.

  • I used glMapBufferRange to map only the required amount of data per batch, specifying GL_MAP_INVALIDATE_RANGE_BIT.
  • I created a ring buffer of vertex buffer objects, mapping each one in sequence with each new batch. Rather than coaxing the driver to give me a fresh buffer each time, I implemented the rotation myself (see the sketch below).
  • I specified GL_STREAM_DRAW as the driver hint.

When I tested this on several machines, I saw vastly better performance, indicating less synchronization between the CPU and GPU.
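To make the ring buffer idea concrete, here’s a rough sketch of the scheme (PyOpenGL and ctypes assumed; the ring size and buffer capacity are illustrative placeholders, not what I actually shipped):

    import ctypes
    from OpenGL.GL import (
        glGenBuffers, glBindBuffer, glBufferData, glMapBufferRange,
        glUnmapBuffer, GL_ARRAY_BUFFER, GL_STREAM_DRAW,
        GL_MAP_WRITE_BIT, GL_MAP_INVALIDATE_RANGE_BIT,
    )

    RING_SIZE = 3              # enough buffers that the CPU rarely catches the GPU
    BUFFER_BYTES = 512 * 1024  # capacity of each VBO (illustrative)

    class VertexRing:
        def __init__(self):
            self.vbos = [int(b) for b in glGenBuffers(RING_SIZE)]
            self.index = 0
            for vbo in self.vbos:
                glBindBuffer(GL_ARRAY_BUFFER, vbo)
                # Allocate storage once, with the streaming usage hint.
                glBufferData(GL_ARRAY_BUFFER, BUFFER_BYTES, None, GL_STREAM_DRAW)

        def upload(self, batch_bytes):
            """Write one batch into the next VBO in the ring; returns that VBO."""
            self.index = (self.index + 1) % RING_SIZE
            vbo = self.vbos[self.index]
            glBindBuffer(GL_ARRAY_BUFFER, vbo)
            # Map only what this batch needs, telling the driver we don't
            # care about the old contents of the range (no synchronization).
            ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, len(batch_bytes),
                                   GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT)
            ctypes.memmove(ptr, batch_bytes, len(batch_bytes))
            glUnmapBuffer(GL_ARRAY_BUFFER)
            return vbo

Each batch rotates to the next VBO, so by the time a buffer comes back around to be mapped again, the GPU has almost certainly finished reading it.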

Simulating an Inverse Kinematics Chain

Inverse kinematics is a fun application of linear algebra and calculus that is extremely relevant in computer animation. Given a rigged skeleton, inverse kinematics can solve for the joint angles (or similar parameters for other types of joints) necessary to minimize the distance between the end effector (i.e., the joint at the end of the chain) and the target. Solving this essentially boils down to the following equation:

\vec{e}=J\Delta\vec{\theta}

where \vec{e} is called the “error vector” and describes the velocity of the end effector necessary to converge on the target; \Delta\vec{\theta} is the vector of joint angle velocities that will produce that end effector velocity (this is the unknown part); and J is the Jacobian matrix, which encodes the change of basis from joint angle space to Cartesian coordinate space for the partial derivatives. Alternatively, you can view J as a transformation matrix that relates the rate of change of the joint angles to the rate of change of the Cartesian coordinates of the end effector joint.

It is common to have more columns than rows in a system like this (especially in a chain), because there will be few end effectors and many joints. Because the matrix is not invertible, our next best option is the Moore-Penrose pseudoinverse, which gives a least-squares solution to the problem. This is desirable because there are many cases where the chain cannot converge on the target, so we want the best possible attempt. The result is the following equation:

\Delta\vec{\theta}=J^T(JJ^T)^{-1}\vec{e}

This form of the pseudoinverse assumes that J has full row rank; otherwise, JJ^T will be singular and non-invertible. This presents a problem because singularities do exist in certain joint configurations. A common solution is to damp the matrix by a small factor to ensure invertibility. This method is called the Damped Least Squares method, and it requires only a simple change to the previous equation:

\Delta\vec{\theta}=J^T(JJ^T + \lambda I)^{-1}\vec{e}

Here, \lambda is just an arbitrarily chosen constant that keeps the matrix from ever going singular. The bigger the constant, the less realistic the motion near singular configurations; on the other hand, a bigger constant gives the equations more numerical stability. The demo I wrote allows you to set an arbitrary \lambda value.
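In code, one damped least squares step is only a few lines. Here’s a NumPy sketch for a planar chain of 1-DOF revolute joints (it mirrors the math above but isn’t the demo’s exact code; angles, lengths, and target are NumPy arrays):

    import numpy as np

    def dls_step(angles, lengths, target, lam=0.1):
        # Forward kinematics: absolute joint angles, then joint positions.
        thetas = np.cumsum(angles)
        segs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1) * lengths[:, None]
        joints = np.vstack([[0.0, 0.0], np.cumsum(segs, axis=0)])
        effector = joints[-1]

        # Jacobian column for 2D revolute joint i: the vector from joint i
        # to the effector, rotated 90 degrees.
        to_eff = effector - joints[:-1]
        J = np.stack([-to_eff[:, 1], to_eff[:, 0]])  # shape (2, N)

        e = target - effector  # error vector
        # dtheta = J^T (J J^T + lambda I)^-1 e
        dtheta = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(2), e)
        return angles + dtheta

Iterating this step until \vec{e} is small walks the chain toward the target, or as close as the least-squares solution allows.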

I also implemented two other solvers, a Jacobian Transpose solver and a Cyclic Coordinate Descent (CCD) solver. The former cheats by using the transpose of the Jacobian rather than the pseudoinverse to solve the system. It’s a simple solution, but it results in a lot of jittery motion and isn’t very realistic. CCD essentially loops through each joint multiple times, orienting the child chain toward the target at each step. With enough iterations, this converges on a solution, but it only works in situations where the joints are 1-DOF. It is quite simple to understand, and there are several articles online that explain it well.

Finally, the demo is written in Python using the pyglet library. User interaction is almost exclusively through a console interface (activated with ~). Included in the zip is a text file with the syntax for these commands.

Source Archive

For a more in-depth introduction to this topic, I found this paper extremely helpful.

Some Thoughts on Software Design

NOTE: This was taken from my blog on gamedev.net. I’m migrating things over slowly.

Software Engineering, as I understand it, involves the concept of software design. Anyone can program, and I would even venture to say that programming is the easy part. Good software design is where things get hairy. I consider myself a seasoned programmer, but a somewhat fledgling Software Engineer, a state that I think is fairly commonplace among academic and hobbyist developers. Bad habits formed out of ignorance can carry over into the industry and wreak havoc on bigger projects. I too struggle when it comes to designing and implementing larger systems (like a game engine), although I am learning a lot. Though I am still inexperienced, I thought I might share some of my realizations so far. If you consider yourself a seasoned software engineer, feel free to comment/correct as you see fit.

Formulate Requirements

First of all, trying to design a system without knowing the requirements will drive you insane if you expect any semblance of quality in your architecture. I know this because I’ve tried. My previous attempt at creating a game engine went down in flaming glory because the end result wasn’t as flexible and usable as I had hoped. Before you write a single line of code, I implore you, write a design document for your project. Describe what the system needs to do in detail. Why is this so important? Without this step, you’re essentially flying blind. You will refactor your code three times as often, bloat your code base with features you don’t need, and, if you’re like me, agonize over finding the Right Way to do it, except you won’t find it, because you don’t know what you need.

Case in point: in my first engine, I rewrote my resource manager four times. Not a simple refactor, mind you; a complete rewrite. I included features like loading from a separate thread, which is a great feature, except that I didn’t really need it. I just thought it was cool and that every engine should use it. Granted, I learned a lot about what I did need during those four rewrites, but I would have been better off starting with the essentials and refactoring from there. I went into it without fleshing out how the end product would use the system; as a result, my design suffered from code bloat.

Build a (Good) Foundation

I’ve heard differing opinions on this, so take it with a grain of salt. I see two specific mistakes that neophyte developers (including myself) make when they are just starting a big project. Either they try to sketch out the entire system (a large one, mind you) on paper, in detail, including every major class and how it will interact with other classes; or they don’t attempt to design anything and just start coding. The latter is just pure laziness in my opinion; the chances of designing a good system with that strategy are akin to winning the lottery (unless you’re John Carmack). The former, however, is just as destructive. From my experience, trying to build exclusively from the top down is paralyzing. You scrap design after design trying to find the perfect one (hint: it doesn’t exist).

As with all things in life, good design comes with moderation. Begin with your requirements and write some hypothetical sample code of how you will use the system. One good way I’ve found is to write a bogus program (or part of one) in pseudo code, testing out various interfaces and ideas. At the same time, begin building the foundation for your system. If it’s a game, start with the basics, like a basic map loader. Just get something working.

Break Down Work into Manageable Chunks

As you build the components of your system, not only will the next step become clearer, but so will the requirements. You’ll realize you need something you couldn’t have known about early on in the process. This allows you to refine your requirements as well as your code. By starting simple, you will save time refactoring and gain momentum and efficiency. I see many developers attempt to write hundreds of lines of code without ever compiling; they proceed to spend hours trying to get it to even compile, and then twice as many working out the bugs. This is not good practice. A better method is unit testing: writing small chunks of code and rigorously testing them before moving on to another component. This gives you confidence in the components of your system as you build on them, and it boosts your morale as you see small parts working independently. Furthermore, it simplifies bug fixing, since you can focus on one part of the system exclusively.
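To make that concrete, here’s a toy example of the workflow using Python’s built-in unittest module (the clamp helper is purely hypothetical):

    import unittest

    def clamp(x, lo, hi):
        """Small, independently testable helper."""
        return max(lo, min(hi, x))

    class ClampTests(unittest.TestCase):
        def test_within_range(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_clamps_low_and_high(self):
            self.assertEqual(clamp(-1, 0, 10), 0)
            self.assertEqual(clamp(42, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()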

Pick an Achievable Goal and Stick to It

This is probably the most important point. Avoid the pain and suffering of trying to design the perfect system for any product. It won’t happen. Always know your requirements and design for those only. This is especially true if you are trying something new. I don’t personally know the story of how the guys at Epic designed Unreal Engine 3, but I would imagine they built it up using the good parts of their games. Not only that, but the people developing Unreal Engine 4 are probably the same ones who built the previous three engines (at least the lead developers). In short, defer your amazing do-it-all project until you actually know what you’re doing (I say this to myself as well). Most of us started out wanting to develop the best MMO on the planet; we quickly realized this was foolish. However, some of us are repeating that foolishness on a smaller scale: we’re trying to build something we have little experience with, and make it better than everything else out there. That’s just my opinion.

How does this apply to me? Well, a lot. I’m trying to trim down my grandiose plans to something manageable. For instance, what are my goals? I’ve spoken about those in recent blog entries. If you go back to my first blog, you’ll find that I’ve trimmed it down a bit since then. I may trim it down even more. As a humble hobbyist with only 10 or so hours a week to devote to doing what I love, this has to be the way it is. It’s just reality.

Anyway, I hope this was helpful. Again, it’s just my opinion, and I’d love to hear your feedback if you agree or disagree. Thanks!

Build a strong math foundation!

A strong math background is a vital tool for game development. That may sound completely obvious to you, but it wasn’t to the 12-to-18-year-old me. I was a good student and enjoyed my math classes and all, but the light just wasn’t on. I never took a formal linear algebra class, so the best I had was the primer you get in every game development book. It’s enough to understand basic vector algebra and affine transformations, but not much more. I mostly just used the equations and maintained a very surface-level understanding. The dangerous part was that I somehow fostered a false confidence in my math skills. I was good until I started prodding.

Man, was I missing out.

Last year, I had the opportunity to interview for a game development studio. Sure enough, my interviewer gave me a vector algebra problem, and I totally choked. I eventually made it through, but not without lots of coaxing and hand holding. It was a pivotal moment for me: I realized just how much I was holding myself back by not building a strong foundation. So I decided to change!

I immediately picked up Introduction to Linear Algebra by Gilbert Strang and began working through his M.I.T. course online. I also registered for Calculus III for Fall 2012, and for Differential Equations and Computational Geometry in my last semester. I picked up Real-Time Rendering and Physically Based Rendering, which are two fantastic and math-heavy books. Linear algebra and Calculus III have been incredible. I’m able to dissect problems and understand their context like never before.

As an example, consider the problem of tangent space normal mapping. I never understood the formal concepts of linear transformations and basis vectors. A matrix was just numbers to me; now it’s a familiar construct with a plethora of fascinating properties and elements (like eigenvalues and eigenvectors!). Computing the tangent, bitangent, and normal vectors and transforming the lights and viewpoint into tangent space is a walk in the park now. No game development book primer is a worthy replacement for a rigorous math course.
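For instance, once you see the TBN matrix as a change of basis, transforming a light direction into tangent space stops being magic. A toy NumPy sketch (my own illustration):

    import numpy as np

    # The TBN matrix's columns are the tangent, bitangent, and normal
    # expressed in world space. For an orthonormal basis the inverse is
    # the transpose, so world -> tangent space is a multiply by TBN^T.
    def to_tangent_space(v, tangent, bitangent, normal):
        tbn = np.column_stack([tangent, bitangent, normal])
        return tbn.T @ v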

So do yourself a favor. Learn the math! And really learn it.

Most recently, I’ve been studying the Singular Value Decomposition of a matrix, as well as its pseudoinverse. These constructs are ways to break down matrices into basic and powerful elements. A great application of this concept is inverse kinematics. In my next post, I’ll explain how I solved a basic inverse kinematics chain in 2D using Python, linear algebra, and calculus.