Wednesday, 12 May 2010

Developer Testing

When I took over a "failing" development team in a high-profile banking project in London, I introduced a simple form for handing over code from the development team to the system testing team. Apart from details such as why the change was being made and how the code should be transported from the Dev to the Test environments (and properly installed), the form included a tick-box to say that developer testing had been done (and fields to specify where the test code and test output were archived). Prior to this the developers had complained of being pressured to hand over untested code in a management rush to get development work "completed".

I told the team that I wanted the form filled in honestly and openly. I told them that if they had done no testing because they had been pressured into delivering before they had time to test, they should write that fact on the form and must not tick the "tested" box. I told them they had my full support if anybody came back and complained about the quality of code they had been unable to test because of management pressure. Of course, I also told them I wanted to know if they felt they were being pressured to skip testing. And finally, I told them that I expected them to include appropriate time for appropriate amounts of testing in any plans they put forward.

In response, did I get queues of developers telling me they were being pressured to skip testing? Did I get queues of system testers or project managers telling me that my team's code was of low quality? No, I didn't. And I didn't get project managers complaining that developers were showing testing steps in their development plans.

What's my point? Twofold. 

Firstly, developer testing is good, and developer testing is necessary. It should not be seen as an optional part of the project plan. There are plenty of metrics showing that the cost of resolving a problem is lower when it is caught earlier in the software development life-cycle. In other words, it is quicker and cheaper to fix a bug found in developer testing than one found in system testing, and both are much quicker and cheaper to fix than a bug found once the code has gone into production.

Secondly, everybody aspires to better behaviour when that behaviour is made visible. By insisting on the use of the tick-box on the hand-over form, I made the developers be honest and open about the amount of constructive testing they'd done, but I also made the system testing team and the project managers think twice about applying inappropriate pressure on the developers.

And thirdly(!), I gave the developers the confidence to do what they had hitherto known to be the right course of action, but had felt they lacked the support from their own management to push the point.

So, it was a win-win outcome, right? Yes, I believe it was. Whilst it may have appeared that some developments took a little longer than they might have done before I took the helm, the truth of the matter is that the code that went into production was much more robust and caused far fewer production problems. And the cost-saving from that reduction in production problems, and the associated increase in the users' confidence in the system, was priceless.