Pragmatic Testing

I have recently moved from an environment that treated automated testing (tests written in code) as a necessary evil, if it was done at all, to one that is test driven at all levels. The transition has been interesting to say the least, and has resulted in more than a few vigorous discussions - this article describes my approach to testing.

One of the first things I should do is get the terminology out of the way - I (and many others) have used the term 'unit tests' as a catch-all for any automated test written in code and integrated into the build process. This isn't entirely correct though; unit tests are a very specific part of the whole testing environment, and working from different definitions just added to the initial confusion. A common way to visualise the type and purpose of the various tests is the testing triangle:

The Testing triangle

At the top we have the functional tests - for a web or desktop application that means driving it through the UI and making sure everything (or at least the thing you just changed) still works the way you expect it to. I would say every developer does this automatically - hit run in the IDE (or compile and load the firmware) and put the software through its paces to make sure you get the result you expect.

Next we have the integration tests - additional code, run as part of the build process, that exercises the modules making up the application and ensures that they play well together.

Finally, at the bottom layer, we have unit tests. These test each module in isolation by simulating the module's dependencies with mock objects and exercising only the code in the module itself.

... mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy ... Wikipedia
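To make the distinction concrete, here is a minimal sketch of a unit test in Python using the standard library's `unittest.mock`. The `ReportGenerator` class and its data source are hypothetical stand-ins invented for illustration, not taken from any real project.

```python
from unittest.mock import Mock

class ReportGenerator:
    """Hypothetical module under test: summarises rows from a data source."""
    def __init__(self, source):
        self.source = source

    def summary(self):
        rows = self.source.fetch_rows()
        return f"{len(rows)} rows"

def test_summary_counts_rows():
    # The real data source (a database, say) is replaced by a mock,
    # so only ReportGenerator's own code is exercised.
    source = Mock()
    source.fetch_rows.return_value = [1, 2, 3]
    assert ReportGenerator(source).summary() == "3 rows"
    source.fetch_rows.assert_called_once()

test_summary_counts_rows()
```

The mock plays the role of the crash test dummy from the quote above: the test never touches a real database, only the module's own logic.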

When I was talking about unit tests it seems I was actually talking about integration tests; these made up most of the test code in my previous environments. This approach had worked quite well so I was at a bit of a loss as to why it was considered such a bad thing. It turns out it's a matter of scale.

There are two main goals in commercial software development from a company's point of view:

  1. A working product.
  2. Minimal development cost.

Any cost incurred, be it additional development time or direct monetary cost, has to contribute to the first goal or it shouldn't be incurred. An important thing to note here is that 'working product' is judged from the end user's perspective; internal architecture and code quality are not factors. A mess of spaghetti code that does what the user expects of it is perfectly acceptable, regardless of how much you might cringe at it as a developer.

With these goals it is very hard to justify unit tests because they are not intended to find bugs. This realisation was a big eye opener for me and it's worth repeating - unit tests are not for finding bugs.

... understand what role unit tests play within the Test Driven Development (TDD) process, and squash any misconception that unit tests have anything to do with testing for bugs.

So what are they for then? If they do not find bugs in the end product why would anyone devote time to writing them?

Good code or bad tests?

The goal of unit tests is to improve the quality of the code being tested - the idea is that code that is easy to test tends to be well designed and to adhere to the SOLID principles. This in turn slows the accumulation of technical debt and should make the code easier to modify in the future.
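The pressure to make code testable shapes its design. The sketch below shows the effect with a hypothetical `Notifier`; the class names, host, and addresses are made up for illustration. The first version constructs its collaborator internally and is awkward to unit test; the second has the collaborator injected, which is the shape dependency inversion pushes you toward.

```python
from unittest.mock import Mock

class SmtpClient:
    """Stand-in for a real mail client with side effects."""
    def __init__(self, host):
        self.host = host

    def send(self, to, message):
        raise RuntimeError("would really send mail")

class HardNotifier:
    """Hard to unit test: the dependency is constructed inside the
    method, so a test cannot stop the real mail from going out."""
    def alert(self, message):
        SmtpClient("mail.example.com").send("ops@example.com", message)

class Notifier:
    """Easy to unit test: the dependency is injected, so a mock can
    stand in for it."""
    def __init__(self, client):
        self.client = client

    def alert(self, message):
        self.client.send("ops@example.com", message)

# A unit test of Notifier needs nothing but a mock in place of the client.
client = Mock()
Notifier(client).alert("disk full")
client.send.assert_called_once_with("ops@example.com", "disk full")
```

Whether that design improvement pays for itself is exactly the commercial question discussed next.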

This, however, doesn't help the immediate goals of a commercial application. Delaying the current release to make future releases easier is a hard argument to make. Unless code quality is having an immediate effect on the functionality of the software you are not going to get much traction.

Quality cost of code

Here is where the scale factor comes in - for smaller applications (tens of thousands of lines of code) with smaller teams (three or four people in the same room), code quality issues can be worked around and unit testing is difficult to justify. For larger applications (hundreds of thousands of lines of code) and larger, distributed teams, technical debt accrues rapidly and starts to have an immediate effect - unit testing becomes justifiable; the additional resources it requires are balanced by a reduction in the resources needed to work around code quality issues.

The trouble is that all applications start small - at some point someone writes the very first line of code; a massive social networking site starts with a late night hacking session. Retrofitting unit tests to this code once they become valuable can be a nightmare. On the other hand, if Zuckerberg had spent an additional few months making sure that everything was unit tested, a different product might have gained the first mover advantage.

So what is the solution? I would argue that for smaller projects (and for the initial stages of a larger project) integration tests provide far more value than unit tests. There are a number of advantages to integration tests:

  • They use much the same frameworks and infrastructure as unit tests so those elements are in place to add unit tests at a later date.
  • They are intended to find bugs - they directly contribute to the goal of a working final product.
  • They still test the functionality of individual modules so design issues are exposed earlier rather than later.
  • They involve less code to implement - tests still exercise a single module specifically, but with its real dependencies in place, so you don't have to write additional code to mock those dependencies just for the purpose of testing.
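The last point can be sketched with the same style of example as before. Here a hypothetical `Cache` module is tested with its real `Store` dependency wired in; both names are invented for illustration. No mock code is needed, and the test checks that the two modules actually work together.

```python
class Store:
    """Simple key-value store - the real dependency."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class Cache:
    """Module under test: memoises computed values in a Store."""
    def __init__(self, store):
        self.store = store

    def remember(self, key, compute):
        value = self.store.get(key)
        if value is None:
            value = compute()
            self.store.put(key, value)
        return value

def test_cache_with_real_store():
    # Integration-style test: the real Store is wired in, so this
    # verifies Cache and Store cooperate - with no mocking code at all.
    cache = Cache(Store())
    assert cache.remember("answer", lambda: 42) == 42
    # The second call must come from the store, not the compute function.
    assert cache.remember("answer", lambda: 0) == 42

test_cache_with_real_store()
```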

This is the approach I have been using for some time and it has been very effective. Very few of my previous projects grew large enough to justify unit testing, but looking back over them now with a slightly different viewpoint I can see that it wouldn't be that difficult to add unit tests if they were required.

So does this mean you should stop writing unit tests altogether? Absolutely not - there is no silver bullet, no single approach that will solve all your problems. It might however be worth looking at what benefits you are getting from the effort expended - perhaps a change in focus could improve delivery time without sacrificing reliability.