Automated Testing
Jul. 13th, 2007 12:32 pm

As I head into my severalth week of working on test harnesses, it seems appropriate to pass on a few tips on the subject.
On the one hand, automated testing is a damned good thing, especially if you intend to have any sort of rapid release schedule. The last time we did a full ASAP release, I believe we had about 2000 manual regression tests to get through, which can (to say the least) slow things down. Being able to automate many of those tests can give you a lot of confidence in the product quickly, and helps find bugs that might otherwise go undetected for weeks.
That said, *good* automated testing does not come cheap. I seem to be coming to a rule of thumb that a decent test suite will be at least as many lines of code as the product itself -- and a really thorough suite will probably be several times as many.
Moreover, automated testing is *programming*, dammit. Too many people think that, because it's QA, it's therefore somehow easier and less technically sophisticated. My experiences over the past few years say otherwise: not only is the amount of code comparable to the size of the product, but the technical sophistication you need for the test harness is similar to the complexity of the product. A simple semantics-only product can probably get away with a simple test harness, but an architecturally complex product needs a complex harness. In particular, if the product involves systems integration, so does the test harness -- and the harness's needs are often much nastier than those of the product itself. Doing all that, while still keeping the harness straightforward enough for less-senior programmers to easily write tests in it, calls for real artistry sometimes.
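To make that last point concrete, here's a minimal sketch of the shape I mean -- every name here is hypothetical, and the plumbing is drastically simplified. The harness owns the nasty setup and teardown; the test author writes nothing but a short, declarative function.

    # A toy harness: the Harness class owns the complicated parts
    # (environment setup, connections, teardown), while the @test
    # decorator keeps authoring a test trivially simple.
    import traceback

    _registry = []

    def test(fn):
        """Register a test function with the harness."""
        _registry.append(fn)
        return fn

    class Harness:
        def setup(self):
            # In a real system this is where the systems-integration
            # pain lives: starting servers, seeding databases,
            # stubbing out external services.
            self.env = {"db": {}}

        def teardown(self):
            self.env = None

        def run(self):
            passed = failed = 0
            for fn in _registry:
                self.setup()
                try:
                    fn(self.env)
                    passed += 1
                    print("PASS", fn.__name__)
                except Exception:
                    failed += 1
                    print("FAIL", fn.__name__)
                    traceback.print_exc()
                finally:
                    self.teardown()
            print(passed, "passed,", failed, "failed")

    # What the less-senior programmer actually writes:
    @test
    def store_and_fetch(env):
        env["db"]["key"] = "value"
        assert env["db"]["key"] == "value"

    if __name__ == "__main__":
        Harness().run()

The artistry is in keeping that last part -- the test itself -- looking this simple, no matter how ugly setup() gets underneath.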
So I encourage automated testing for all interesting programs. But have some respect for it...