The Architect´s Napkin

Software Architecture on the Back of a Napkin
Flowing Bowling Game Kata I

Ron Jeffries challenged me to show how Flow-Design and Event-Based Components can help software development. This is the problem he posed in the Software Craftsmanship discussion group:

Solve bowling scoring. Here is the specification. Note that this is a simpler
version than the one Bob Martin often uses. I'll take questions if you have any.

  Given a list of the rolls of a legal game of ten pin bowling,
  which you may assume are provided without error or omission,
  produce the total, final, score of the game.

    Twenty zeros produce zero.
    Twenty fours produce 80.
    Twelve tens produce 300.

Since I don´t know much about bowling I consulted the KataBowling description as a second source. I haven´t done the kata before and have not looked at any of the kata solutions on display on the internet. That´s fortunate for a fresh and unbiased start – but it´s also a pain, since I simply don´t like bowling. But, well, I don´t want to complain. Here we go…

Understanding the problem

I believe in thinking before coding. And the first thing to think about is the problem. Before I can start to code I need to understand the problem, which also means having at least an idea of how to solve it.

It´s too bad I can´t show you the process of actually gaining an understanding of the problem. But what I can show you is data. Here´s a sample bowling game score sheet:


This shows you how I interpret the rules I read. Not just as a test case, but with some explanations as to why the scores are calculated like that.

The input to the solution I´m supposed to write would look as follows for this game:


It´s just the number of pins knocked down with each roll.

The combination of this list and the total score (here: 131) describes an acceptance test. Ron also provided me with some acceptance tests (see above).
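In case the score sheet image does not render for you, here´s the same game worked out frame by frame as a small C# snippet. The rolls match the acceptance test string "3451a45267354aa375" further down; the per-frame scoring is my reading of the rules:

```csharp
using System;
using System.Linq;

class SampleGame
{
    static void Main()
    {
        // Rolls of the sample game: 3,4 | 5,1 | X | 4,5 | 2,6 | 7,/ | 5,4 | X | X | 3,/,5
        var frameScores = new[] {
            3 + 4,        // frame  1: open frame                       =  7
            5 + 1,        // frame  2: open frame                       =  6
            10 + 4 + 5,   // frame  3: strike + the next two rolls      = 19
            4 + 5,        // frame  4: open frame                       =  9
            2 + 6,        // frame  5: open frame                       =  8
            7 + 3 + 5,    // frame  6: spare + the next roll            = 15
            5 + 4,        // frame  7: open frame                       =  9
            10 + 10 + 3,  // frame  8: strike + the next two rolls      = 23
            10 + 3 + 7,   // frame  9: strike + the next two rolls      = 20
            3 + 7 + 5     // frame 10: spare + one extra roll, no bonus = 15
        };
        Console.WriteLine(frameScores.Sum()); // prints 131
    }
}
```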

Please note: Acceptance tests given by the customer are not really enough. You should reply to your customer with a couple of acceptance test suggestions of your own. That´s what my game score sheet is about. Because only by defining your own acceptance test cases and showing them to the customer can you be sure to demonstrate that you understand the problem.

If you explain something to someone – e.g. basic mathematical operations like +, –, *, / – and ask her, “Do you understand?”, and she nods, that means nothing. You can only know whether somebody understands what you´re saying by getting her to explain what she understood – or by asking her questions.

The above description/acceptance test case is my reply to Ron. Hope he agrees I understood the problem correctly.

For the moment let me assume I did. What´s next?

“User Interface”

After I´ve made sure I understand a problem, I like to clarify how the customer (here: Ron) wants to interact with my solution. Software is about transforming input into output. So how is the input provided? How should the output be returned? Often this entails user interaction, and you need to talk with your customer about some kind of user interface. In this case, though, no UI is needed; at least Ron has not mentioned any. So I assume he´ll be satisfied with some kind of API. (As is the case for most Coding Katas.)

The API I´d like to suggest is this (Warning: I´ll be using C# as my solution language):

public class BowlingGame {
    public static int CalculateTotal(IEnumerable<int> rolls) {…}
}

Using it would look like this:

var total = BowlingGame.CalculateTotal(new[]{3, 4, 5, …, 10, 3, 7, 5});

Acceptance Test Code

With acceptance tests and an API in hand, I set up acceptance test code. This makes sure I do not deliver a solution that fails to meet the minimum criteria agreed upon with the customer. Employing NUnit, this can look like this:

[TestCase("00000000000000000000", 0)]
[TestCase("44444444444444444444", 80)]
[TestCase("AAAAAAAAAAAA", 300)]
[TestCase("192837465555647382915", 154)] // all spares
[TestCase("000000000000000000195", 15)]  // focus: just one more roll after spare in 10th frame
[TestCase("000000000000000000a34", 17)]  // focus: two more rolls after strike in 10th frame
[TestCase("3451a45267354aa375", 131)]    // acceptance test case from blog entry
public void AcceptanceTest(string rolls, int expectedTotal) {
    var total = BowlingGame.CalculateTotal(String2Rolls(rolls));
    Assert.AreEqual(expectedTotal, total);
}

The first three test cases are Ron´s. Then follow additional ones I derived from my understanding of the rules.
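By the way, the String2Rolls helper used in the test above is not listed in this post. A minimal sketch of what it might look like – assuming each digit character stands for the pins of one roll, and 'a'/'A' stands for a strike´s 10:

```csharp
using System.Collections.Generic;
using System.Linq;

internal static class AcceptanceTestHelpers
{
    // Assumed encoding: '0'..'9' -> 0..9 pins, 'a' or 'A' -> 10 (a strike).
    public static IEnumerable<int> String2Rolls(string rolls)
    {
        return rolls.Select(c => char.ToLowerInvariant(c) == 'a' ? 10 : c - '0');
    }
}
```

With this, "a34" becomes the rolls 10, 3, 4.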

As soon as my solution delivers correct results for the acceptance test input data I´m ready to ship.

If I´d continue with TDD to design the implementation, I´d start from here. But I don´t. I continue with some more thinking…

Modelling the solution I

Instead of sitting down to code, I like to close in on a solution using a stepwise refinement process. Here´s my first step: I draw the solution at the highest level of abstraction possible. The whole solution is a single black box:


There is a list of integers flowing into the one method representing the solution; and a single integer is flowing out of it as the result. Seen from far away that´s how the solution works, how in fact any solution works: rolls are transformed into a total score, input is transformed into output by a single action.

Given an action I can decide if I want to switch to coding mode. If I feel the solution will be short and I feel comfortable writing it down, I´d switch. But if not I´d rather continue modelling with a graphic language. It´s so much easier to change a mental model of a solution if it´s just a diagram.

In this case I decide to continue modelling, not only because this is the purpose of the exercise, but also because I already have a clear vision of a more detailed design. Understanding the problem led me to distinguish three operations:

Calculating the total score consists of adding the pins in each frame, adding bonus points for spares and adding bonus points for strikes.

I don´t think these operations are far-fetched. I find them pretty obvious from reading the requirements. I´d consider myself as not having understood the problem if I had no idea of these aspects of a solution.

But since I´m very sure these operations contribute towards the solution, I use them to refine the all-encompassing action:


Now there are three functional units to put “domain logic” in – and one at the top just wiring them together. This is a stratified design: an operation on a higher level of abstraction is assembled from operations on a lower level of abstraction. A very well-known approach to building complicated stuff, used all over the world.

But wait: a list of integers enters the root functional unit, but a list of frames goes into the first operation. This does not look right. Where are the frames coming from? And where is the total score calculated? Two more functional units are needed:


This looks better: consistent and complete. It´s not even very technical; I could explain it to the customer to make clear what kind of solution I envision.

Should I start coding now? I guess that would be ok. All functional units seem to be quite small. And they represent crucial domain terminology.

The basic idea of this solution is to transform the initial list of rolls into a set of frames – then enrich these frames with a score – and finally sum all frame scores. Hm… this sounds good. I should go back to the model and make it mirror this straightforward explanation:


That´s yo-yo modelling, I´d say :-) First top-down, then bottom-up.

Now I can explain the solution to my fellow developers on three different levels. It´s complete on each level, but each level is lacking detail. That´s on purpose. Modelling with Flow-Design is about abstraction; and abstraction is about hiding details. That´s why Flow-Design does not want to duplicate what programming languages do. It´s not flow-charts, it´s data-flow. It´s not imperative, it´s declarative. The transformation depicted will magically happen – just like transforming data from sectors on a hard disk into records in memory magically happens when using SQL.

And what about the data? Usual object orientation starts by focusing on the data.

With Flow-Design data is not the focus, even though it´s about data-flow. Sure, in the end the details of the data flowing need to be specified. But usually data is not the problem; how data should be structured is mostly pretty obvious. That´s probably one reason why you´d start by modelling data structures: you feel comfortable, you get something done.

In the end, though, data is not really the problem. Transformation is. As programmers we´re hired to implement transformations. And since well-known advice is to start with the higher-risk tasks, I´d argue it´s good advice to start with the transformations when programming. Either the data structures are well known – or the transformations drive the data structures (like TDD is supposed to drive design).

To start programming by identifying data classes from nouns in a requirements document might even be premature optimization. So be cautious about it.

But, yes, I need to define Frame before I start coding. Here it is:

class Frame {
    public int[] Rolls = new int[2];
    public int Score;
}

More´s not necessary, I´d say. It´s devoid of functionality. Why? To be honest: I don´t know which functionality I should attribute to Frame. What´s the responsibility of Frame except to hold data? The responsibilities I feel sure about are modelled in the above diagram. To me it feels very natural not to force them onto Frame.

Modelling the solution II

Although the above model seems up to the task, I don´t want to keep it a secret: there is another way to model the solution. Maybe you even thought of this alternative first. It replaces the enrichment action sequence with parallel actions:


This is not to suggest anything will actually run in parallel (although it could). It´s only to make clear the independence of the actions that sum pins and bonuses.

Also note how frames enter the summation actions – but integers are leaving them. The list of frames will not get enriched. It´s just input to be traversed to calculate the output.

To me that sounds even better than the first model.

Nevertheless I´ll implement the first one because it lends itself nicely to a translation into very plain C# code as you´ll see. The second model I´ll leave to my colleague Stefan Lieser who´s working with me on Flow-Design and Event-based Components. I´ll describe it later in this blog.
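Just to make the idea concrete, here´s a rough self-contained sketch of how the second model could translate into code. Warning: everything in it – the names Sum_pins, Sum_spare_bonus and Sum_strike_bonus, and representing a frame as a plain list of its rolls – is my assumption, not the actual translation that will follow later:

```csharp
using System.Collections.Generic;
using System.Linq;

static class ParallelModelSketch
{
    // Group rolls into 10 frames; the 10th frame keeps its extra bonus rolls.
    static List<List<int>> ToFrames(IEnumerable<int> rolls)
    {
        var frames = new List<List<int>>();
        var current = new List<int>();
        foreach (var roll in rolls)
        {
            current.Add(roll);
            var frameDone = frames.Count < 9 &&
                            (current[0] == 10 || current.Count == 2);
            if (frameDone) { frames.Add(current); current = new List<int>(); }
        }
        if (current.Count > 0) frames.Add(current);
        return frames;
    }

    // The three independent summation actions: frames flow in, an int leaves each.
    static int Sum_pins(List<List<int>> frames) =>
        frames.Sum(f => f.Sum());

    static int Sum_spare_bonus(List<List<int>> frames)
    {
        var bonus = 0;
        for (var i = 0; i < 9; i++)
            if (frames[i].Count == 2 && frames[i].Sum() == 10)
                bonus += frames[i + 1][0]; // spare bonus: pins of the next roll
        return bonus;
    }

    static int Sum_strike_bonus(List<List<int>> frames)
    {
        var bonus = 0;
        for (var i = 0; i < 9; i++)
            if (frames[i][0] == 10) // strike bonus: pins of the next two rolls
                bonus += frames.Skip(i + 1).SelectMany(f => f).Take(2).Sum();
        return bonus;
    }

    public static int CalculateTotal(IEnumerable<int> rolls)
    {
        var frames = ToFrames(rolls);
        return Sum_pins(frames) + Sum_spare_bonus(frames) + Sum_strike_bonus(frames);
    }
}
```

Note how the frames never get enriched: a perfect game – twelve tens – yields 120 pins plus 180 strike bonus, i.e. 300, as required by Ron´s acceptance tests.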

For now I hope you feel with me, at least a little bit, how easy it is to reason about different approaches when looking at a picture. Imagine juggling models only in your head. Or imagine sitting at your IDE and coding away using TDD. You´ll most certainly focus on just a single solution. TDD will drive you in one direction without showing you the alternatives. Or if alternatives show up, you´d need to experiment with them in code. Sure, those would be executable experiments – but it would also be quite tedious to explore them. You simply cannot type as fast as you can draw or think. (At least I cannot.)

So although “bubbles don´t crash” and there is no guarantee that the solutions are comprehensive, I prefer to model them explicitly like this first. It´s sufficiently coarse-grained to be swift. And it´s sufficiently fine-grained to make coding easier.

Translating the model

Enough scribbling. On to some code.

How should I start coding the first model?

Well, any way I like. I can start top-down or bottom-up. The model tells me which functional units to code. I don´t need to find them through refactoring; I know which ones are needed – at least at a certain level of abstraction.

So I randomly pick Add_pins_in_frame to implement first. Here´s my test – yes, I´m coding test-first:

[Test]
public void Calc_basic_scores() {
    var frames = new[] { new Frame { Rolls = new[] { 1, 2 } },
                         new Frame { Rolls = new[] { 3, 5 } } };
    frames = frames.Add_pins_in_frame().ToArray();
    Assert.AreEqual(new[] { 3, 8 },
                    frames.Select(f => f.Score).ToArray());
}

I chose to implement the action as a C# extension method. As you´ll see this will make the code very readable:

internal static class BowlingGameExtensions_Scoring {
    public static IEnumerable<Frame> Add_pins_in_frame(this IEnumerable<Frame> frames) {
        foreach (var f in frames) {
            f.Score = f.Rolls[0] + f.Rolls[1];
            yield return f;
        }
    }
}

This is easy enough, isn´t it? A small functional unit, readily understandable, with single responsibility.

The other operations look much the same, so I spare you listing them here. They are all independent of each other and thus easy to test.
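Still, to give you an idea, here´s a sketch of one of them. Caution: the name Add_bonus_for_spares merely follows the naming pattern above, and the look-ahead handling is my assumption – a spare in the 10th frame would have to be handled elsewhere (e.g. when building the frames):

```csharp
using System.Collections.Generic;

// Frame as defined above, repeated here so the sketch is self-contained.
class Frame
{
    public int[] Rolls = new int[2];
    public int Score;
}

internal static class BowlingGameExtensions_Spares
{
    // A spare frame (two rolls totalling 10) earns the first roll of the
    // following frame as a bonus on top of its pins.
    public static IEnumerable<Frame> Add_bonus_for_spares(this IEnumerable<Frame> frames)
    {
        Frame previous = null;
        foreach (var f in frames)
        {
            if (previous != null)
            {
                var isSpare = previous.Rolls[0] < 10 &&
                              previous.Rolls[0] + previous.Rolls[1] == 10;
                if (isSpare) previous.Score += f.Rolls[0];
                yield return previous;
            }
            previous = f;
        }
        if (previous != null) yield return previous; // last frame: no look-ahead
    }
}
```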

But what about the composite functional units like the root? I implement them too, although they hardly do anything. Their only purpose is to “wire up” the actions they contain. Here´s the API function for which you saw the acceptance test above:

public class BowlingGame {
    public static int CalculateTotal(IEnumerable<int> rolls) {
        return rolls.ToFrames()
                    .Enrich_frames_with_score()
                    .Sum(f => f.Score);
    }
}

How easy to read is this for you? How close to the model is this?

Or here the other composite action:

internal static class BowlingGameExtensions_Scoring {
    public static IEnumerable<Frame> Enrich_frames_with_score(this IEnumerable<Frame> frames) {
        return frames.Add_pins_in_frame()
                     .Add_bonus_for_spares()
                     .Add_bonus_for_strikes();
    }
}

This too faithfully represents the model. In fact I “mechanically” translated it from the model. Each functional unit in the model either becomes an operation, which is fleshed out test-first, or a composite that just plugs together other functional units in the most legible way possible. There´s never a control statement in the implementation of a composite functional unit.

There is a clear separation of concerns between operations and composites. The former need creative implementation, the latter only need mechanical implementation. They can even be generated from the model as Stefan Lieser´s solution will show.

Intermediate conclusion

Solving Ron´s problem this way was very straightforward. The hardest part was “decoding” the requirements. But once I understood the bowling score rules it took me just a couple of minutes to come up with the models.

As said above, even the nicest model diagram is no guarantee of correctness or sufficiency. Nevertheless it provides a lot of value:

  • I was able to talk about my mental model with my colleague.
  • We were able to weigh the different approaches against each other.
  • The model described the most important part of any solution: the transformation. Flow-Design models are about functionality. They put action first and structure second.
  • The model provided me with small, focused, independent functional units to implement. No refactoring was necessary.
  • The model sports a fundamental and important separation of concerns between “doing” (operations) and “coordination” (composites).
  • The model is easy to understand on different levels of abstraction “at a glance”. (Which still requires understanding the visual notation and the domain terminology. My grandma sure would not understand these models.)
  • The model is fully present in the code, i.e. each model element has a corresponding code artifact.

But this is only one possible way of translating Flow-Designs into code. It´s the easiest way, because no tooling is required and the result is readily understood. However, the code lacks the capability to reproduce the model; code and model can go out of sync. Also, it can become tedious to write “coordinating” code by hand.

That´s why the next article is going to show you a translation of model into Event-based Components.

PS: In case you think this solution is overengineered: I partly agree. It´s a tiny problem that could also easily have been solved without an explicit design like this. But Ron posed this problem to challenge me, so I was required to use Flow-Design.

On the other hand, you never really know. A problem might look small – but in the end, once you understand it, it might not be. Also, this approach led to code that´s easy to evolve, since it´s already refactored from the outset. The functionality is obvious and easy to communicate. Reasoning about where to apply changes, if necessary, is easy.

TDD might have resulted in a similarly fine-grained design. But that design would never have been visualized, so all reasoning would have to work on code alone. At least I find that cumbersome.

posted on Tuesday, July 5, 2011 1:49 AM | Filed Under [ Event-Based Components ]

