Test driven development is the new mantra of modern-day development. With today's complex software, it has become a necessity rather than a choice. It's not that software in earlier days was less complex, but many new dimensions of complexity have been introduced in modern development.
People have different notions of Test driven development. On one extreme is the philosophy of writing test cases before writing the code: write a test which does not even compile, and then write the actual program so that, at the end of the day, the test starts passing. On the other extreme is, of course, not writing any test cases at all, on the assumption that one is a star programmer who can never go wrong. (A lot of developers carry this halo behind their heads, and unfortunately they only see the reflection of that halo in the mirror.) When the code is done, it is thrown across the wall to a battery of testers who become responsible for finding the flaws in it. My personal experience has always been about striking the right balance between the two extremes. This theory of balance is very effective. Maybe a Libran inclination. Humans have a tendency to exaggerate, as in Tulipomania.
So what is a good definition of Test driven development? I would say it's about having a good set of test cases at the end of the development cycle of a phase, iteration or unit of work. It's like the ACID property: during the development process you may or may not have test cases, and the logic is all in a fluid state, but when you are ready to commit the transaction of that phase, you should have a working set of code with an adequate testing suite.
For me, the following are the practices of a good Test driven development approach:
- Measure, Measure, Measure. It's my favourite, and it's applicable to every facet of life. Unless one can measure, one cannot understand the implications of one's actions. For TDD, it's also important to measure code coverage, that is, how much of the code your test case suite exercises.
- Write test cases which are atomic in nature. In simple terms, it means that if one bug has been introduced in the program, it should result in exactly one test case failing. If one refactoring has been done, only one test case should fail, no more, no less. In practice, this is very hard to achieve.
- Have fewer test cases. I can see you ready to jump on me; it's counter-intuitive. But I would strive for a smaller number of test cases with near 100% coverage, in terms of both line and conditional coverage. There is no point in writing hundreds of test cases which are all doing the same thing. Management loves test case metrics, and the higher the number, the better. But is there a point in writing 50 test cases for a program which measures the difference between two dates, even if they are a million years apart? Strive for a lower footprint of code, both in the main logic and in the test cases. The less code there is, the lower the chances of bugs. Good coverage with the least number of test cases for a date difference could be:
- Between two dates, both falling before the 28th of the month.
- Between two dates, one falling on the 31st of a month. Two test cases: one starting on the 31st and another ending on the 31st.
- Between two dates, one falling on the 29th of February. Two test cases: one starting on 29th February and another ending on 29th February.
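The edge cases above fit in a handful of assertions. A minimal sketch, assuming a hypothetical `days_between` helper built on Python's `datetime` module (the function name and dates are my own, not from any particular codebase):

```python
from datetime import date

def days_between(start, end):
    """Return the number of days between two dates."""
    return (end - start).days

# Both dates before the 28th of the month.
assert days_between(date(2023, 3, 10), date(2023, 3, 20)) == 10
# Starting on the 31st of a month.
assert days_between(date(2023, 1, 31), date(2023, 2, 2)) == 2
# Ending on the 31st of a month.
assert days_between(date(2023, 3, 28), date(2023, 3, 31)) == 3
# Starting on 29th February (leap year).
assert days_between(date(2024, 2, 29), date(2024, 3, 1)) == 1
# Ending on 29th February.
assert days_between(date(2024, 2, 26), date(2024, 2, 29)) == 3
```

Five small tests, and between them they touch the ordinary path, the month boundary and the leap-year boundary.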
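The atomicity point above can be sketched in code. In this hypothetical example (the `parse_age` function is invented for illustration), each test exercises exactly one behaviour, so a single bug fails a single test and points straight at the cause:

```python
# Hypothetical function under test: parse a non-negative age from a string.
def parse_age(text):
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

# Atomic tests: one behaviour per test, no overlap.
def test_parses_plain_number():
    assert parse_age("42") == 42

def test_rejects_negative():
    try:
        parse_age("-1")
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_rejects_non_numeric():
    try:
        parse_age("abc")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

A non-atomic alternative would cram all three checks into one test; it would still catch the bugs, but a failure would tell you only that *something* in `parse_age` broke.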
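On measuring coverage: here is a small sketch of why conditional (branch) coverage matters, not just line coverage. The `abs_diff` function is made up for illustration; with a single test, every line may execute while one branch still goes unexercised:

```python
# Hypothetical function with two branches.
def abs_diff(a, b):
    """Return the absolute difference between two numbers."""
    if a >= b:
        return a - b
    return b - a

# One test per branch keeps the coverage figure honest:
assert abs_diff(5, 3) == 2   # a >= b branch
assert abs_diff(3, 5) == 2   # a < b branch
```

Assuming the third-party coverage.py tool is installed, something like `coverage run --branch tests.py` followed by `coverage report` will show both line and branch figures for a suite like this.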