The Architect's Napkin

Software Architecture on the Back of a Napkin

Evolution of a Basic Software Anatomy

In the beginning there was, well, chaos. Software had no particular anatomy, i.e. no agreed-upon fundamental structure. It consisted of several different “modules” which were dependent on each other in arbitrary ways:

[Figure: “modules” entangled by arbitrary mutual dependencies]

(Please note the line-end symbol I'm using to denote dependencies. You'll see in a minute why I'm deviating from the traditional arrow.)

Then came along the multi-layer architecture. A very successful pattern to bring order into chaos. Its benefits were twofold:

  1. Multi-layer architecture separated fundamental concerns recurring in every software.
  2. Multi-layer architecture aligned dependencies clearly from top to bottom.

[Figure: multi-layer architecture with dependencies aligned from top to bottom]

How many layers there are in a multi-layer architecture does not really matter. It's about the Separation of Concerns (SoC) principle and disentangling dependencies.

This was better than before – but it led to a strange effect: business logic was now dependent on infrastructure. Technically this was overcome sooner or later by applying the Inversion of Control (IoC) principle. That way the design-time dependencies between layers were separated from the runtime dependencies.

[Figure: layers with the Inversion of Control principle applied]
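To make that concrete, here is a minimal sketch in TypeScript (all names are hypothetical, not from any particular codebase): the business logic owns the interface, the infrastructure implements it, so the design-time dependency points “up” while the runtime call still goes “down”.

    // Business-logic layer: owns the abstraction, knows no technology.
    interface OrderStore {
      save(orderId: string): void;
    }

    class OrderProcessing {
      constructor(private store: OrderStore) {}

      placeOrder(orderId: string): void {
        // At runtime, control still flows "down" into infrastructure...
        this.store.save(orderId);
      }
    }

    // Infrastructure layer: ...but its design-time dependency points "up",
    // to the interface defined by the business logic.
    class SqlOrderStore implements OrderStore {
      save(orderId: string): void {
        console.log(`persisting order ${orderId}`); // stand-in for real persistence
      }
    }

    // Composition root: the only place that knows both sides.
    new OrderProcessing(new SqlOrderStore()).placeOrder("42");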

This seemed to work – except that now the implementation did not really mirror the design anymore. Also, the layers and their very straightforward dependencies no longer matched a growing number of aspects.

So the next evolutionary step in software anatomy moved away from layers and top-bottom thinking to rings. Robert C. Martin summed up a couple of these architectural approaches in his Clean Architecture:

[Figure: the concentric rings of Robert C. Martin's Clean Architecture]

It keeps and even refines the separation of concerns, but changes the direction of the dependencies. They point from technical to non-technical, from infrastructure to domain. The maxim is: don't let domain-specific code depend on technologies. This furthers the decoupling between concerns.

This leads to implementations like this:

For example, consider that the use case needs to call the presenter. However, this call must not be direct because that would violate “The Dependency Rule”: No name in an outer circle can be mentioned by an inner circle. So we have the use case call an interface (Shown here as Use Case Output Port) in the inner circle, and have the presenter in the outer circle implement it.

The same technique is used to cross all the boundaries in the architectures. We take advantage of dynamic polymorphism to create source code dependencies that oppose the flow of control so that we can conform to “The Dependency Rule” no matter what direction the flow of control is going in.

For Robert C. Martin the rings represent implementations as well as interfaces, and calling an outer-ring “module” implementation from an inner-ring “module” implementation at runtime is OK, as long as the design-time dependencies on interfaces point only inward.

While the Clean Architecture diagram looks easy, the actual code to me seems somewhat complicated at times.
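Here is roughly what that indirection looks like, as a minimal TypeScript sketch with hypothetical names: the use case in the inner ring calls only an output port interface it owns; the presenter in the outer ring implements it.

    // Inner ring: the use case owns its output port.
    interface InvoiceOutputPort {
      present(viewModel: string): void;
    }

    class ShowInvoiceUseCase {
      constructor(private output: InvoiceOutputPort) {}

      execute(invoiceId: string): void {
        // Control flows outward, but the source code dependency
        // points inward: only the port is named here.
        this.output.present(`Invoice ${invoiceId}: 100 EUR`);
      }
    }

    // Outer ring: the presenter implements the inner ring's port.
    class ConsolePresenter implements InvoiceOutputPort {
      present(viewModel: string): void {
        console.log(viewModel);
      }
    }

    new ShowInvoiceUseCase(new ConsolePresenter()).execute("A-1");

Every boundary crossed against the flow of control needs such a port/implementation pair – which is where the complexity creeps in.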

Suggestion for a next evolutionary step

So far the evolution of software anatomy has two constants: it's about separating concerns and aligning dependencies. Both are good in terms of decoupling and testability etc. – but my feeling is we're hitting a glass ceiling. What could be the next evolutionary step? Even more alignment of dependencies?

No. My suggestion is to remove dependencies from the primary picture of software anatomy altogether. Dependencies are important, we can't get rid of them – but we should stop staring at them.

Here's what I think is the basic anatomy of software (which I call a “software cell”):

[Figure: the “software cell” – a core surrounded by portals and providers]

The arrows here do not (!) mean dependencies. They depict data flow. None of the “modules” (rectangles, triangles, core circle) depends on another to request a service. There are no client-service relationships. All “modules” are peers in that they do not (!) even know each other.

The elements of my view roughly match the Clean Architecture like this:

[Figure: software cell elements mapped onto the Clean Architecture rings]

Portals and providers form a membrane around the core. The membrane is responsible for isolating the core from the environment. Portals and providers encapsulate infrastructure technologies for communication between environment and core. The core, on the other hand, represents the domain of the software. It's about use cases, if you want, and domain objects.

My focus when designing software is on functionality. So all “modules” you see are functional units. They do, process, transform, calculate, perform. They are about actions and behavior.

In my view, the primary purpose of software design is to wire up functional units in such a way that a desired overall behavior (functional as well as non-functional) is achieved. In short, it's about building “domain processes” (supported by infrastructure). That's why I focus on data flow, not on control flow. It's more along the lines of Functional Programming, and less like Object Oriented Programming.

Here's how I would zoom in and depict some “domain process”:

[Figure: a “domain process” – data flowing from a portal through a chain of functional units and back]

Some user interacts with a portal. The portal issues a processing request. Some “chain” of functional units works on this request. They transform the request payload, maybe load some data from resources in the environment, maybe cause some side effect in some resources in the environment. Finally they produce some kind of result which is presented to the user in a portal.

None of these “process steps” knows the others. They follow the Principle of Mutual Oblivion (PoMO). That makes them easy to test. And it makes it easy to change the process, because any data flow can be rerouted without the producer or consumer being aware of it.
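As a minimal TypeScript sketch (the step names are invented), such a process could look like this: every step knows only its input and output data, never its neighbors, so the flow can be rerouted without touching any step.

    // Messages flowing through the process.
    type Request = { text: string };
    type Result = { text: string; wordCount: number };

    // Process steps: mutually oblivious functional units.
    const normalize = (req: Request): Request =>
      ({ text: req.text.trim().toLowerCase() });

    const countWords = (req: Request): Result => {
      const words = req.text.split(/\s+/).filter(w => w.length > 0);
      return { text: req.text, wordCount: words.length };
    };

    // The flow itself: data streaming from unit to unit. Inserting
    // another step here touches neither of the existing steps.
    const flow = (input: string): Result =>
      countWords(normalize({ text: input }));

    console.log(flow("  Hello software cells  ")); // wordCount: 3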

In the picture of the Clean Architecture Robert C. Martin seems to hint at something like this when he defines “Use Case Ports”. But it's not explicit. That, however, I find important: make the flow explicit and radically decouple responsibilities.

Two pieces are missing from this puzzle: What about the data? And what about wiring up the functional units?

Well, you got me ;-) Dependencies are returning. As I said, we need them. But differently than before, I'd say.

Functional units in data flows like the one above surely share data, which means they depend on it:

[Figure: functional units sharing a data type they both depend on]

If data is kept simple, though, such dependencies are not very dangerous. (See how useful it is to have two symbols for relationships between functional units: one for dependencies and one for data flow.)
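“Simple” data could look like this sketch (a hypothetical type, not from the original): a plain record, pure structure without behavior.

    // A data type shared by two process steps:
    // pure structure, no behavior, hence only loose coupling.
    type Invoice = {
      readonly id: string;
      readonly lines: ReadonlyArray<{ item: string; price: number }>;
    };

    const invoice: Invoice = { id: "A-1", lines: [{ item: "pencil", price: 3.3 }] };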

So far, wiring up the flows just happens – like building dependency hierarchies at runtime just happens. Usually the code that injects instances of layer implementations at runtime is not shown. But it's there, and a DI container knows all the interfaces and their implementations.

For the next evolutionary step of software anatomy, however, I find it important to officially introduce the authority responsible for such wiring up: some integrator.

[Figure: an integrator wiring up the functional units of a flow]

If integration is kept simple, though, such dependencies are not very dangerous either. “Simple” here means (as above with data): it does not contain logic, i.e. no expressions or control statements. If this Integration Operation Segregation Principle (IOSP) is followed, integration code might be difficult to test due to its dependencies – but it's very simple to write and to check during a review.
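A minimal sketch of the IOSP in TypeScript (names invented for illustration): the operations contain all the logic; the integrator contains none and only wires them into a flow.

    // Operations: all expressions and control statements live here.
    const parsePrices = (csv: string): number[] =>
      csv.split(",").map(s => parseFloat(s.trim()));

    const sum = (values: number[]): number =>
      values.reduce((a, b) => a + b, 0);

    const render = (total: number): string =>
      `Total: ${total.toFixed(2)} EUR`;

    // Integration: no logic at all, just composition of operations.
    // Trivial to check in a review, and the operations stay trivial to test.
    const billingFlow = (csv: string): string =>
      render(sum(parsePrices(csv)));

    console.log(billingFlow("1.10, 2.20, 3.30")); // Total: 6.60 EUR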

Stepping back you can see that my dependency story is different from the ones so far:

  • There are no dependencies between functional aspects. They don't do request/response service calls on each other, but are connected by data flows.
  • There are only dependencies between fundamental organizational concerns completely orthogonal to any domain: integration, operation, and data.

[Figure: dependencies only between the orthogonal concerns integration, operation, and data]

This evolved anatomy of software does not get rid of dependencies. You will continue to use your IoC and DI containers ;-) But it will make testing of the “work horse code” (operations) easier, much easier. And the need for mock frameworks will decrease. At least that's my experience from some five years of designing software like this.

Also, as you'll find if you try this out, the specifications of classes will change. Even with IoC a class will be defined by 1+n interfaces: the interface it implements plus all the interfaces of the “service classes” it uses.

But with software cells and flows the class specification consists of only one interface: the interface the class implements. That's it. Well, at least that's true for the operation classes which follow the PoMO. That's useful because those classes are heavy with logic, so you want to make it as simple as possible to specify and test them.
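Here is the contrast as a sketch with hypothetical names:

    // With IoC: 1+n interfaces specify the class, its own plus
    // those of every "service class" it uses.
    interface IReporting { report(orderId: string): string; }
    interface IOrderRepository { load(orderId: string): string; }
    interface ILogger { log(message: string): void; }

    class Reporting implements IReporting {
      constructor(private repo: IOrderRepository, private logger: ILogger) {}

      report(orderId: string): string {
        this.logger.log(`reporting on ${orderId}`);
        return `Report: ${this.repo.load(orderId)}`;
      }
    }

    // With PoMO: one interface is the whole specification.
    // Data in, data out; no mocks needed to test the logic.
    interface IFormatReport { format(order: string): string; }

    class FormatReport implements IFormatReport {
      format(order: string): string {
        return `Report: ${order}`;
      }
    }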

Conclusion

The evolution of a basic software anatomy has come far – but there is still room for improvement. As long as everything revolves around dependencies between technological and domain aspects, there is unholy coupling. So the next consistent move is to get rid of those dependencies – and relegate them to the realm of organizational concerns. In my opinion the overall structure becomes much simpler. Decluttered. More decoupled.

Why not give it a chance?

 

PS: For more details on flows, PoMO, and IOSP see my blog series here – or make yourself comfortable in a chair next to your fireplace and read my Leanpub eBook on it :-)

posted on Saturday, March 22, 2014 10:42 PM | Filed Under [ Software architecture ]

Feedback


# re: Evolution of a Basic Software Anatomy

You might check out Hexagonal Architecture. It has a lot of similarities to your approach. There's been a lot of talk about applying it to web applications lately, as well. http://alistair.cockburn.us/Hexagonal+architecture
4/1/2014 1:13 PM | corey haines

# re: Evolution of a Basic Software Anatomy

Hi Ralf.
Happy New Year! Great article. Very inspiring!

Do you have a code example of a business application like a .Net MVC application with a domain model and a database back-end designed and implemented with the flow design and the evolved anatomy? I would like to compare it to the onion architecture which I so far have used for my .Net MVC applications.

Thanks in advance,
Anders
1/7/2015 5:05 PM | Anders Baumann