Test Driven Thermostat Development

I’ve recently become very excited about test driven development (TDD) and have come to really integrate it into my daily work on Thermostat. The idea behind TDD is to only ever write production code to fix a failing test, which implies that you write the test *first*. I found that the obvious advantage, a fully tested codebase, is really only a side effect (a very nice one, though). What seems more important is the thought process and change in work style that TDD provokes. It forces you to think first about what you actually want. It encourages good design in the code you write: it encourages you to write clean code (because messy code is difficult to test), and it encourages you to write code with few dependencies that separates concerns nicely (because otherwise, you guessed it, it’s difficult to test). It forces you to focus on one thing at a time. In the long run, this results in a continuously clean code base, a low bug count and therefore high productivity.

One thing that must not be neglected in TDD is refactoring. TDD effectively turns the classical design-code-test work cycle around. Instead of designing our software first, then writing the code, and then testing it, we write the test first, then write the code to make that test pass, and then refactor our code to yield the best possible design for that code. Test-code-design (or test-code-refactor if you want). And it works really well. But one must not forget that last step, refactoring. This is what keeps the code clean all the time. Some people tend to shy away from refactoring because ‘it might break stuff’. But not here! We have the tests in place as safety nets. Consider this: codebases need to change all the time. They need to adapt to new requirements, and the design of the code needs to evolve to support new ideas. We need to be able to change. And we prove that we are able to change things by changing them all the time! YAY. And it works surprisingly well.
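As a tiny, made-up illustration of that test-code-refactor cycle (this class is invented for this post, it is not actual Thermostat code): the test would be written first and fail, the simplest passing implementation would follow, and the refactoring step would then clean it up, with the test as the safety net.

```java
// Hypothetical illustration of the test-code-refactor cycle.
//
// Step 1 (test):     a test asserting celsiusToFahrenheit(100) == 212 is
//                    written first; it fails (it doesn't even compile,
//                    because this class does not exist yet).
// Step 2 (code):     write the simplest production code that passes.
// Step 3 (refactor): clean up (better names, extracted constants) and
//                    rerun the test as a safety net.
class TemperatureConverter {

    // Constants extracted during the refactor step for readability.
    private static final double SCALE = 9.0 / 5.0;
    private static final double OFFSET = 32.0;

    static double celsiusToFahrenheit(double celsius) {
        return celsius * SCALE + OFFSET;
    }
}
```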

Just recently we had a nice example of how this way of working yields significant improvements in the design. Consider the MainWindowFacadeImpl class and its associated MainWindowFacadeImplTest. That’s basically a controller in an MVC triad. There was one thing that was bugging me a lot: the use of a spy (partial mock) for the class under test. I.e. we wanted the real object, but mocked just a little bit of it. That was because we wanted to verify that stop() would be called under certain conditions (when a shutdown event got fired from the view). That stop() method would then stop the timer that sends updates to the view. (You don’t need to understand what exactly this stuff does, just that there’s a timer that fires periodically to update some data in the view.) Also, how should I go about testing the timer itself? I basically wanted to verify that, upon calling start(), the view would get updated every 10 seconds. Now, I could write a test that calls start(), then waits 10 seconds (or a little more) and verifies that the view got updated, then waits another 10 seconds and verifies again. This would have made the test rather long-running, though. And one of the rules for TDD is: tests should execute fast (because we want to run them all the time, not wait minutes for them to complete). In short, that timer caused all sorts of trouble. In order to test it, I’d need to create a partial mock out of the class under test and write brittle threading-aware tests. Not good.

So I took a step back and thought about what was actually going on. Why is this so difficult to test? And it occurred to me that this controller class was mixing at least two concerns: the actual timing of stuff (including the associated threading, etc.), and the actual timer *action*, that is, updating the view. Clearly, the latter is a valid thing for the controller to handle. But the timer? Maybe we can externalize it somehow. (And you need to know that we have those timers in other places as well, scattered throughout the code.)

What we actually needed here is a way to provide (and inject) timing capabilities to the controller and other APIs. So I created a TimerFactory that creates Timer objects. Those can be configured by the controller, and started and stopped. They are very similar to java.util.Timer or java.util.concurrent.ScheduledExecutorService (on purpose), but more object oriented, I find. Such a TimerFactory would be made available in the ApplicationContext (kind of a global, application-wide scope), so that any controller or other code that needs a timer can grab one from there. I made one implementation of that TimerFactory, called ThreadPoolTimerFactory, which is backed by a ScheduledExecutorService. This relatively simple change in the design yielded a number of big advantages:

  • The timing code is now separated nicely from the timer action. For testing purposes, I can now inject a mock Timer into the application, one that I can actually *control* from the test. I.e. I can fire the timer action at my sole discretion, instead of relying on some nondeterministic threading behaviour. And I don’t need to wait 10 seconds to do that.
  • I can actually test the timer itself in one place, rather than writing threading-aware tests for every component that uses such a timer.
  • There is only one thread (or a very small pool of threads) supporting all the timing needs of the various components of the application, rather than one timer thread per component.
  • Finally, I am now able to get rid of that partial mock. Instead of testing that controller.stop() has been called, I can now more easily verify that stop() has been called on the timer (because I injected a mock timer). Bingo!
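To make the idea concrete, here is a minimal sketch of what such an API could look like, backed by a ScheduledExecutorService. The interface and method names are my illustration of the design described above, not necessarily Thermostat’s actual TimerFactory API.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the Timer/TimerFactory idea; names and method
// signatures are assumptions made for this example.
interface Timer {
    void setAction(Runnable action);
    void setDelay(long delaySeconds);
    void start();
    void stop();
}

interface TimerFactory {
    Timer createTimer();
}

// One implementation, backed by a shared ScheduledExecutorService, so a
// single small thread pool serves all timers in the application.
class ThreadPoolTimerFactory implements TimerFactory {
    private final ScheduledExecutorService executor;

    ThreadPoolTimerFactory(int poolSize) {
        this.executor = Executors.newScheduledThreadPool(poolSize);
    }

    @Override
    public Timer createTimer() {
        return new Timer() {
            private Runnable action;
            private long delaySeconds;
            private ScheduledFuture<?> future;

            public void setAction(Runnable action) { this.action = action; }
            public void setDelay(long delaySeconds) { this.delaySeconds = delaySeconds; }

            public void start() {
                // Fire immediately, then repeat at the configured period.
                future = executor.scheduleAtFixedRate(
                        action, 0, delaySeconds, TimeUnit.SECONDS);
            }

            public void stop() {
                if (future != null) {
                    future.cancel(false);
                }
            }
        };
    }
}
```

A controller would then get the factory from the ApplicationContext, configure a Timer with its update action and period, and call start()/stop() as needed.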

Now that was quite a little success. And all only because I was unnerved by that little, seemingly innocent use of a partial mock. Some people would probably dismiss such an issue and say, what the hell? As long as it works! But it *is* a code smell, and code smells are not to be taken lightly. You understand this when you’ve worked with code bases that ended up as a god-awful pile of sh***. And it shows why the ‘refactoring’ step in test-code-refactor is so important. Paraphrasing a popular saying: quality is not some magical thing sent down by a great god (or Flying Spaghetti Monster, if you will) in the sky, it is hundreds of little acts of care.
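The deterministic-testing point above can be sketched with a hand-rolled controllable timer (in real code a mocking library like Mockito could play the same role). All the names here are illustrative stand-ins for this post, not Thermostat’s actual classes:

```java
// Illustrative sketch: a test double that lets the test fire the timer
// action on demand, instead of waiting for real time to pass.
interface Timer {
    void setAction(Runnable action);
    void start();
    void stop();
}

class ControllableTimer implements Timer {
    Runnable action;
    boolean started;
    boolean stopped;

    public void setAction(Runnable action) { this.action = action; }
    public void start() { started = true; }
    public void stop()  { stopped = true; }

    // Fire the timer action deterministically, no sleeping involved.
    void fire() { action.run(); }
}

// Minimal stand-ins for the view and the controller under test.
interface View { void update(); }

class Controller {
    private final Timer timer;

    Controller(Timer timer, View view) {
        this.timer = timer;
        timer.setAction(view::update);  // the timer action updates the view
    }

    void start() { timer.start(); }
    void stop()  { timer.stop(); }  // a shutdown event handler would call this
}
```

With the mock timer injected, the test can call fire() twice and assert two view updates, and can verify stop() was invoked on the timer, with no spy on the controller and no waiting.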

One Response to Test Driven Thermostat Development

  1. Pingback: Roman Kennke: Test Driven Thermostat Development | Java Coder Resources
