Costs of Low Quality

Dylan Smith | ALM / Architecture / TFS

So I started my new job at Anvil Digital a few weeks ago.  One of the first things I’ve been doing is taking a look at the current projects and processes and finding weaknesses and opportunities where improvements can be made.  One of the projects under way is the development/maintenance of a large-ish .Net application.  When I took a look under the covers at the design/architecture, I noticed there is clearly a lot of room for improvement.  In this specific instance, the application has grown from several smaller applications that were merged together along the way, and it has grown drastically beyond what the original developers expected.  Compounding the problem, there has been no real strategy in place for the application design or architecture, so each of the 8-10 developers working on the application has gone off and done their own thing when it comes to design.  Some of the approaches taken are reasonably well implemented, some not so much; but the key point is that there is little to no consistency between the various areas of the code-base.

I’ll go into a bit more detail on the quality problems as I see them in a minute.  But what I’m really interested in discussing in this post is: what is the cost of having poor internal quality?  How do you justify the cost of taking time to improve quality through refactoring or other methods?  I have some thoughts on this that I’ll share, but first let me give some examples of the quality weaknesses as I see them in this specific application:

  • The application doesn’t really take advantage of OO at all.  The key concepts in the application do not have corresponding classes to represent them, very little attention has been paid to having cohesive classes that encapsulate specific behavior and data, and the abstractions/concepts leak all over the place (see the first sketch after this list).
  • The various components that make up the application are not layered very well.  For example, there are many cases of circular dependencies that make compiling the application very tricky at times (the second sketch after this list shows one way to break such a cycle).
  • There are many cases of “god classes”.  I’ve seen single classes with over 12,000 lines of code, which is definitely not following the Single Responsibility Principle.
  • Another symptom of the poor quality is that building the application throws 500+ compiler warnings, and I think some of those are just saying that the maximum warning threshold has been reached, so there are probably many more not being reported.
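
To make the first and third bullets a bit more concrete, here’s a minimal before/after sketch in C#.  All the names in it (MainForm, Order, the discount rule) are invented for illustration; they are not from the actual code-base:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;

// Hypothetical "before": order logic smeared across a 12,000-line form class.
public class MainForm
{
    public decimal CalcOrderTotal(DataRow order)
    {
        decimal total = 0m;
        foreach (DataRow line in order.GetChildRows("OrderLines"))
            total += (decimal)line["Price"] * (int)line["Qty"];
        if (total > 1000m)
            total *= 0.95m; // volume discount rule buried in the UI layer
        return total;
    }
    // ...thousands more lines of unrelated responsibilities...
}

// "After": the Order concept gets its own cohesive class, so its data and
// rules live in exactly one place instead of leaking across the code-base.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public void AddLine(OrderLine line)
    {
        _lines.Add(line);
    }

    public decimal Total
    {
        get
        {
            decimal total = _lines.Sum(l => l.Price * l.Quantity);
            return total > 1000m ? total * 0.95m : total;
        }
    }
}

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}
```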

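The layering problem in the second bullet can usually be attacked with dependency inversion.  Again, the types here are hypothetical, but the shape of the fix is general: replace one direction of a two-way dependency with an interface, and the cycle disappears:

```csharp
// Hypothetical cycle: Shipping calls Billing to raise invoices, and Billing
// calls Shipping to dispatch orders, so neither assembly can be built first.
// Inverting one direction with an interface breaks the cycle.

// Lives with the Shipping code; it has no reference to Billing.
public interface IInvoiceNotifier
{
    void OrderShipped(int orderId);
}

public class ShippingService
{
    private readonly IInvoiceNotifier _notifier;

    public ShippingService(IInvoiceNotifier notifier)
    {
        _notifier = notifier;
    }

    public void Ship(int orderId)
    {
        // ...dispatch the order...
        _notifier.OrderShipped(orderId); // callback through the abstraction
    }
}

// Billing references Shipping (one direction only) and implements the interface.
public class BillingService : IInvoiceNotifier
{
    public void OrderShipped(int orderId)
    {
        // ...raise the invoice for orderId...
    }
}
```
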
So what is the cost of having poor internal quality like this?  I know there’s been some discussion recently in the blogosphere by Joe Rainsberger on the Speed / Quality Barrier, Ron Jeffries on Quality-Speed Tradeoff - You're kidding yourself, and also by Uncle Bob responding to Ron's post.  In my current context I wanted to focus on improving the internal quality of the application in a significant way in the not-too-distant future (referring to Joe’s post, I wanted to invest in crossing over the Speed/Quality barrier).  To accomplish this I wanted to dedicate 1-2 developers full-time to specific tasks with the goal of improving the level of quality within the application (in my next blog post I’ll talk about the specific approach we’re planning to take to attack this problem).  My first challenge, of course, was justifying taking those developers off of the typical work of implementing new features and fixing defects.  I’m curious how other people in this position justify it.  Do you do as Ron does and have your developers focus on technical debt, making tiny improvements every time they touch an area of code?  What if you want to drive up the quality level faster than that allows?  How do you justify pulling developers off of work that delivers clearly visible customer value to focus on internal quality, which doesn’t deliver immediate customer value (at least not in a highly visible way like new features or defect resolution)?

 

When I was trying to justify the cost of dedicating 1-2 developers to focus on improving quality, I identified 3 key costs of poor internal quality (or, put another way, benefits we could realize by improving the quality):

  1. Fragile Code Base - Changes in one area often break things in one or more other areas unexpectedly. This results in either many regressions, or an excessively long QA cycle to prevent regressions. It can also cause programmers to “fear” making some changes due to the unknown impact on the system as a whole, which tends to result in changes made in such a way as to isolate them from the rest of the code as much as possible (i.e., “hacks”), which just serves to further deteriorate the overall application quality.
  2. Slow Velocity - Any changes to the system, both enhancements and bug fixes, take much longer to implement than they would in a system with a strong design. Part of this is due to the fragile nature of the system as described above. More importantly, though, the amount of time spent just trying to understand the code increases drastically with a weak system design.
  3. Performance Issues - Another common trend I’ve seen in systems with weak designs is that widespread performance issues are very common. A system with a strong design will tend to centralize behavior and logic, ensuring it’s only run when necessary. In systems with weak designs, the same logic and behavior is often not only duplicated but executed many different times, possibly in slightly different ways, since one part of the code doesn’t know what the rest is doing (see the sketch after this list). These performance issues can become extremely difficult to resolve, since properly addressing the root cause often requires massive refactoring.
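
To illustrate the third point, here’s a minimal sketch assuming a hypothetical TaxCalculator.  Centralizing an expensive calculation behind one class means it runs at most once, and every caller sees the same answer, instead of each screen re-running its own slightly different copy:

```csharp
using System.Collections.Generic;

public class TaxCalculator
{
    private readonly Dictionary<int, decimal> _cache = new Dictionary<int, decimal>();

    // Every caller goes through this one method, so the expensive
    // calculation runs at most once per order.
    public decimal GetTax(int orderId)
    {
        decimal tax;
        if (!_cache.TryGetValue(orderId, out tax))
        {
            tax = ComputeTax(orderId);
            _cache[orderId] = tax;
        }
        return tax;
    }

    private decimal ComputeTax(int orderId)
    {
        // ...imagine an expensive query or rules-engine call here...
        return 0m;
    }
}
```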

In my specific scenario I then provided some anecdotal evidence of where we were witnessing these effects.

Anybody else out there have any thoughts about how to deal with applications with weak internal quality?  And depending on the answer to that question, thoughts on how to justify the costs associated with improving the quality?

Posted on Monday, February 9, 2009 2:15 PM


Comments on this post: Costs of Low Quality

# re: Costs of Low Quality
Dylan,

Very interesting post; I really cannot reply here in this context and do the subject justice, but I will try.

Management needs solutions. Do not focus on explanations of the problems (unless you have a technical manager who really knows what you are talking about). Focus on what NEEDS to be done. Provide a timeline for how to do it.

Present the alternatives:
1. You can tell them that this particular project will have difficult bugs to fix, with "unknown" time to fix them, and that adding features will take "unknown" effort. The poor coding practices will cause additional poor code to be developed whenever the project is worked on, and will likely leak into other projects.

2. Under this scenario, you bring the code to a state where bug fixes are predictable, and new features are added on predictable schedules with predictable results.

They will choose door #2.

Now, the hard part: what culture exists at this company that allowed this to occur in the first place? Figure that out. You may have to start with a coding standard; you may have to start with some training. You need to get to a place where check-in policies prohibit this kind of code, and code reviews/design reviews clearly would not allow it.

If you can tell WHO in the organization wrote a particular piece of code just by looking at it, you have lots of work to do. All the code should look similar in quality, design, organization, etc., because everyone on the team has their eye on the same prize.

So, to add to your list:

4. Predictability. With the code you have now, the cost of fixing a bug or adding a feature is completely unpredictable, and changes are likely to result in bug injection.

5. "Hero Model". Modifications of the code will require the "heor" - the code jockey who wrote it. Management cannot stick just anybody on that code. They hate that. [you need to write this another way, but the jist is that the code does not follow the company standards therefore you have no flexibility in who to assign to work on it.
Left by Bill on Feb 09, 2009 5:42 PM

# re: Costs of Low Quality
Great additions Bill. I especially like the Predictability point. I have definitely witnessed that being true.

In my case my manager is technical, but it's kind of moot because for the most part I have the authority to act on things like this without seeking approval. But even if I don't need to seek approval I still like to be able to justify the investment in quality, even if I'm only justifying it to myself.

Left by Dylan on Feb 09, 2009 10:42 PM

# re: Costs of Low Quality
Does the system make money? Does it work?

What is the expected time horizon of the system?
What is the current churn rate of features being added to the system?

These questions all need to be asked to associate ROI with a given piece of code before one attempts to fix a "problem" system.

Not that fixing it is a bad thing; just remember that, from a business point of view, much value can be derived from a steaming pile of shit.
Left by Greg Young on Jun 09, 2010 12:49 PM
