Unit Tests - Love, Ignorance and Getting Real

February 09, 2010 / category: Testing / 4 comments

Disclaimer: most of the following observations refer to web application development. It shouldn't be difficult, though, to find their counterparts in the desktop world.

Introduction

Unit tests are often the subject of heated debate. People either love them or hate them; there isn't much space left between these two attitudes. Since I realized that my beliefs have evolved from love into something in between, I decided to sum up the two attitudes I encounter most often and describe thoroughly the third one, which I'm inclined to agree with. I've called these attitudes, respectively: "Love", "Ignorance" and "Getting Real".

My eBook: “Memoirs of a Software Team Leader”
Read more »


First, let's go into what lovers and their antagonists say about unit tests.

Love

100% code coverage is a must. I've never wondered why, anyway. It's a religion, and one can't argue with commandments.

Write tests for everything, including controllers, HTML templates etc.

If even a tiny part of the code cannot be easily tested, use mocks and create complicated infrastructure to avoid leaving the code untested. Use any means necessary just to fill the coverage report up.

A test is not a unit test if it talks to the database.

If your code can be thoroughly unit tested, it's also reusable and well-grained.

Writing unit tests is the only way to make refactoring possible.

Ignorance

Why should I learn to write unit tests? Anyway, why should I learn OOP, ORM, new libraries or anything new? My software works well without these things. I know how to write the code. There's nothing wrong with sending raw SQL queries. Whoops, what is that SQL injection you're talking about?

All those pragmatic guidelines are just a waste of my time.

We hire some testers, it's their job to find bugs, not mine. I'm getting paid for writing software, not for testing it.

Getting Real

Because I don't agree with most of the statements mentioned above, I decided to enter into polemics with their imaginary authors. I've called my attitude "Getting Real", which shares its name with the book by 37signals, though I wasn't actually thinking of it when picking the name.

Now, let me express my opinion.

"100% code coverage is a must".

Pay attention to Return on Investment (ROI). Tests for a math parser are a good example: they cover low-level functionality that can be tested easily and reliably. The test code is not much longer than the parser code itself, and it isn't long in absolute terms either. In this case, there's no frequently changing API, and you're sure that the tests do the things you would otherwise have to do manually. Can you imagine finishing a math parser without verifying that adding two numbers works? I certainly can't. It's basic, core functionality, and therefore too important to skip.
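To make the ROI point concrete, here's a minimal sketch. `MathParser` is a hypothetical, deliberately tiny implementation (it only handles `a + b` and `a - b`), just so the example stays self-contained; the interesting part is how short and stable the test is relative to the code it covers.

```ruby
# A sketch of a high-ROI, low-level unit test. MathParser is a
# hypothetical, tiny parser handling only "a + b" and "a - b".
class MathParser
  def evaluate(expression)
    left, op, right = expression.split
    case op
    when "+" then Integer(left) + Integer(right)
    when "-" then Integer(left) - Integer(right)
    else raise ArgumentError, "unsupported operator: #{op}"
    end
  end
end

require "minitest/autorun"

class MathParserTest < Minitest::Test
  # The most basic, core functionality: if this fails,
  # nothing else about the parser matters.
  def test_addition
    assert_equal(5, MathParser.new.evaluate("2 + 3"))
  end

  def test_subtraction
    assert_equal(-1, MathParser.new.evaluate("2 - 3"))
  end
end
```

The parser's API (a single `evaluate` method) is unlikely to change, so these tests won't need rewriting every week.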

On the other hand, no one will die if you don't test whether a post title is formatted properly. And even if there's something wrong with it, the application will still work, and you can always fix it later.

Use tests to cover the most important functions of your application. You may prepare Selenium tests for critical functions like registration, payment etc. Test whether users can actually do the things they came to your website for. Pick core functionalities. Choose the method that places an order; don't test creating/updating/deleting an order record in the database table. Check the method you're actually going to call. If your method is too large to be tested from the outside, consider splitting it into smaller parts.
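As an illustration of "choose the method that places an order", here's a hypothetical sketch (the `Order` class, `place` method and the plain hash standing in for stock storage are all made up): the test targets the business-level operation users depend on, not the record plumbing underneath it.

```ruby
# Hypothetical sketch: exercise the business-level operation
# (placing an order), not the raw create/update/delete plumbing.
# A plain hash stands in for the stock storage.
class OutOfStock < StandardError; end

class Order
  attr_reader :items, :status

  def initialize(stock)
    @stock = stock
    @items = []
    @status = :open
  end

  # The core method worth covering with a unit test.
  def place(item, quantity)
    raise OutOfStock if @stock.fetch(item, 0) < quantity
    @stock[item] -= quantity
    @items << [item, quantity]
    @status = :placed
    self
  end
end

order = Order.new({ "book" => 3 })
order.place("book", 2)
order.status # => :placed
```

A test asserting that `place` sets the status, records the item and rejects over-ordering covers the behavior every caller relies on; the row-level persistence details can change underneath it freely.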

"Write tests for everything, including controllers, HTML templates etc.".

Don't do it. Every template or CSS file is subject to frequent changes (e.g. adding a new CSS class or altering the document's structure). Tests that reflect such details as the sequence of HTML elements require frequent updating or even rewriting. Maintaining such tests may become inconvenient.

However, such tests may be useful in some circumstances, for example when writing Nagios checks. Verifying that the application still works (is it responding? does it provide basic functionality like displaying the main page, registering, or signing in?) will probably require some pottering with HTML tags and HTTP redirects. But I don't think that reflecting template structure in test code on a large scale is a good idea.

"If even a tiny part of the code cannot be easily tested, use mocks and create complicated infrastructure to avoid leaving the code untested".

Don't overuse mocks and test facilities. If unit tests require creating a complicated class hierarchy, it may mean that something (or someone) has gone too far. Creating mocks may also become art for art's sake. Mocks won't make you absolutely sure that your code will work properly in the production environment. Will it integrate seamlessly with the mass mailing API provided by another company? A mock won't tell you whether the e-mail messages you've prepared are fit to be sent. It will only tell you that the "prepare_message" method was called x times. (On the other hand, a mock will also warn you if you call a heavy calculation method more times than expected.)
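To make that limitation concrete, here's a hand-rolled sketch (`MailerMock`, `Newsletter` and the `prepare_message` signature are hypothetical names, not a real API): the mock can count how many times `prepare_message` was called, but it says nothing about whether a real mailing service would accept the messages.

```ruby
# A hand-rolled mock: it records how often prepare_message is
# called, but cannot tell you whether the real mass mailing API
# would accept the messages it pretends to prepare.
class MailerMock
  attr_reader :prepare_message_calls

  def initialize
    @prepare_message_calls = 0
  end

  def prepare_message(recipient, body)
    @prepare_message_calls += 1
    nil # a real mailer would build or queue a message here
  end
end

class Newsletter
  def initialize(mailer)
    @mailer = mailer
  end

  def send_to(recipients)
    recipients.each { |r| @mailer.prepare_message(r, "Hello!") }
  end
end

mock = MailerMock.new
Newsletter.new(mock).send_to(["a@example.com", "b@example.com"])
mock.prepare_message_calls # => 2, and that count is all the mock can verify
```

The call count is useful (it would catch `send_to` accidentally mailing everyone twice), but integration with the real service still has to be checked elsewhere.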

Furthermore, it's not only test code that may grow too complex. To some extent, you should avoid complicating or changing application code only to make it testable.

"A test is not a unit test if it talks to the database".

You're right, it's not a unit test (in fact, it should be considered an integration test), but I don't know any simpler way to check how my code will work in its target environment. Database simulators could help, but they may not be fully DB-compliant. How would you test transaction-dependent code or reconstruct the differences between the MyISAM and InnoDB storage engines?

I write database models to work with a database, not with an artificial storage mechanism. There are libraries available (including DataMapper) that smooth over the differences between databases, key-value stores etc.

Think practically. Tests are there to help you, not to make you struggle with them. Don't make a martyr of yourself, and don't try to write a database-agnostic application. Are you going to change your database system every week? I don't think so. And even if you did change your database for some reason (e.g. because of license or performance issues), it would require more work than just pushing a button, and the old tests wouldn't work under the new conditions anyway. However, if your code is meant to use different storage mechanisms (a database in most cases, an XML file from time to time), the tests should reflect that. But again, don't make a martyr of yourself if you're not going to gain much.

On the other hand, if your tests take too long to run because of numerous database operations, consider using a database simulator, at least for those tests which don't depend on transactions and other database-specific capabilities.

"If your code can be thoroughly unit tested, it's also reusable and well-grained."

You're absolutely right. In most cases, if you can't reuse a method or a class, it indicates that it has grown too large and does more than it was meant to. That is, calling such a method may cause unwanted side effects.

Performing complicated database operations in a controller is a good example. How can you reuse such code in another controller? Extracting the database-dependent code into a model seems the only reasonable choice, and writing unit tests from the beginning of the development process will help you achieve this goal.
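A sketch of that extraction (all names are hypothetical, and an array of hashes stands in for rows fetched from a database): the query logic moves out of the controller action into a model method that any controller, and any unit test, can call directly.

```ruby
# Hypothetical sketch: the selection logic lives in the model,
# not in a controller action, so it can be reused and unit tested.
# An array of hashes stands in for rows fetched from a database.
class Post
  RECENT_LIMIT = 10

  # Reusable query logic: published posts, newest first.
  def self.recent_published(rows)
    rows.select { |r| r[:published] }
        .sort_by { |r| r[:created_at] }
        .reverse
        .first(RECENT_LIMIT)
  end
end

# Any controller (or test) can now call:
rows = [
  { title: "Old",   published: true,  created_at: 1 },
  { title: "Draft", published: false, created_at: 2 },
  { title: "New",   published: true,  created_at: 3 }
]
Post.recent_published(rows).map { |r| r[:title] } # => ["New", "Old"]
```

Had this selection been written inline in one controller action, a second controller would have had to duplicate it, and testing it would have required driving the whole HTTP stack.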

"Writing unit tests is the only way to make refactoring possible".

I can imagine refactoring without unit tests, but I agree they are powerful allies. Whether you're refactoring or just introducing a small change, unit tests will save you from checking everything manually. With unit tests, you can feel more confident. Without them, it's almost certain you're going to miss something.

I accept the fact that writing unit tests requires additional work. But once I finish a feature, then as long as the requirements don't change, I don't have to look back and come up with bugfixes from time to time. And when the time for a requirements change comes, refactoring should be faster and (there's no contradiction here!) safer.

This time, I almost completely agree with the "Love" attitude believer. All I have to add is that you must balance the time spent on writing unit tests against the time required for delivering new functionality. Take priorities into consideration (core functionalities should go first) and find your own golden mean.

"Why should I learn to write unit tests? Why should I learn anything new?"

What can I say... Time flows, and standing still as a developer, refusing to learn new libraries and technologies, may not end well. However, if sticking to one particular technology satisfies you, then your attitude is right for you.

"My software works well without unit tests".

You may be right, especially if you're an experienced developer and you test your code manually after every change (note that you're doing the job that unit tests could do for you). But everyone makes mistakes sometimes, and unit tests can protect you from small, insidious bugs, like accidentally removing a function that is called only when a user takes a specific, rarely-used path through your application.

"We hire some testers, it's their job to find bugs, not mine".

That may be true when you're working on an already-deployed, well-tested application. But when you write an entire application (or at least a new module) from scratch without unit tests, you need much more external testing to catch bugs that you could have caught yourself.

Sometimes you're the only person who knows what may go wrong: testers often can't see low-level, application-logic problems. What if it later turns out that a bug has caused some real damage, and you were the only person who could have prevented it?

Remember, sometimes it's not only about the money, it's about reputation. Especially when it comes to paid services.

A good practice is to add a unit test for every bug found. It will help you not only during bugfixing, but also later, by ensuring that the particular problem doesn't reappear.
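A sketch of that practice, built around a made-up bug: suppose a `slugify` helper (a hypothetical name, not from this article) used to produce an empty slug for titles consisting only of punctuation. The regression test pins the fix down so the problem can't silently return.

```ruby
# Hypothetical regression-test sketch. The bug: titles made up
# entirely of punctuation used to yield an empty slug. The fix
# adds a fallback, and the test keeps the bug from coming back.
def slugify(title)
  slug = title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  slug.empty? ? "untitled" : slug # fallback added when fixing the bug
end

require "minitest/autorun"

class SlugifyRegressionTest < Minitest::Test
  # Named after the bug it guards against.
  def test_punctuation_only_title_gets_fallback_slug
    assert_equal("untitled", slugify("!!!"))
  end

  def test_ordinary_title_still_works
    assert_equal("hello-world", slugify("Hello, World!"))
  end
end
```

The first test documents the exact failure that was reported; if someone later "simplifies" the helper and drops the fallback, the suite fails immediately instead of the bug resurfacing in production.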

A Final Thought

Stop for a moment, take a deep breath and answer a simple question: Which side are you on?

Maybe it's time for a little change...

Comments

There are 4 comments.

holek
February 12, 2010 12:39 PM

Nicely written.

Congrats.

Lukasz Wrobel
February 12, 2010 05:35 PM

Thanks, I hope you find my article thought-provoking.

sourav tripathy
September 16, 2010 09:50 AM

Well written!

dsueiro
May 11, 2011 06:50 PM

100% coverage and testing everything (as in your example about testing how the title is formatted) are not the same thing.

You can get 100% coverage without having tested every important case (whatever the criteria for deciding what should be tested).

On the other hand, if you're dealing with an interpreted language, it's nice to have a test that discovers a typo making your program invoke a nonexistent function/method.

Furthermore, testing metadata (either in a database or in configuration files), or at least verifying its consistency/correctness, should be considered in some cases. I guess that's not usually done, though.

IMHO you should do whatever is reasonable to catch bugs before the user does.


