jducoeur ([personal profile] jducoeur) wrote 2007-07-13 12:32 pm

Automated Testing

As I head into my severalth week of working on test harnesses, it seems appropriate to pass on a few tips on the subject.

On the one hand, automated testing is a damned good thing, especially if you intend to have any sort of rapid release schedule. The last time we did a full ASAP release, I believe we had about 2000 manual regression tests to get through, which can (to say the least) slow things down. Being able to automate even a good fraction of those tests can give you a lot of confidence in the product quickly, and can help find bugs that would otherwise go undetected for weeks.

That said, *good* automated testing does not come cheap. I seem to be coming to a rule of thumb that a decent test suite will be at least as many lines of code as the product itself -- and a really thorough suite will probably be several times as many.

Moreover, automated testing is *programming*, dammit. Too many people think that, because it's QA, it's somehow easier and less technically sophisticated. My experiences over the past few years say otherwise: not only is the amount of code comparable to the size of the product, but the technical sophistication you need for the test harness is similar to the complexity of the product. A product with simple semantics can probably get away with a simple test harness, but an architecturally complex product needs a complex harness. In particular, if the product involves systems integration, so does the test harness -- and the harness's needs are often much nastier than those of the product itself. Doing all that, while still keeping the harness straightforward enough for less-senior programmers to easily write tests in it, calls for real artistry sometimes.
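To make that layering concrete, here is a minimal sketch (Python, purely for illustration; the Harness class, its methods, and the endpoint are all hypothetical, not from any product mentioned here): the harness owns the integration plumbing, and test authors see only a small, declarative surface.

    import contextlib

    class Harness:
        """Owns environment setup, teardown, and integration plumbing."""
        def __init__(self, config):
            self.config = config

        @contextlib.contextmanager
        def session(self):
            # A real harness would start services, open connections,
            # and seed test data here.
            client = self._connect()
            try:
                yield client
            finally:
                self._teardown(client)

        def _connect(self):
            # Stand-in for the messy systems-integration work.
            return {"endpoint": self.config["endpoint"]}

        def _teardown(self, client):
            pass  # release whatever _connect acquired

    # What a less-senior test author actually writes:
    def test_login(harness):
        with harness.session() as client:
            assert client["endpoint"]  # real assertions would go here

    if __name__ == "__main__":
        test_login(Harness({"endpoint": "http://localhost:8080"}))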

So I encourage automated testing for all interesting programs. But have some respect for it...

[identity profile] goldsquare.livejournal.com 2007-07-13 04:47 pm (UTC)(link)
Actually, not to toot my own horn, but the code for automated testing has to be MORE sophisticated and robust than the product itself. It wastes a great deal of time and energy to hunt down complex bugs that turn out, in the end, to lie in the test or harness itself. My code tests every call and return, in every way I can find, and is always instrumented to produce as copious an error message as possible for any failure.
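A hedged sketch of what that instrumentation might look like (Python for illustration; checked_call is an invented helper, not from any particular harness): every call is wrapped so that a failure reports the function, its arguments, and the actual result, rather than a bare "assertion failed".

    def checked_call(fn, *args, expect=None, **kwargs):
        """Invoke fn and, on an unexpected result, raise an error
        carrying the call, its arguments, and the actual return value."""
        result = fn(*args, **kwargs)
        if expect is not None and result != expect:
            raise AssertionError(
                "%s(args=%r, kwargs=%r) returned %r, expected %r"
                % (fn.__name__, args, kwargs, result, expect))
        return result

    checked_call(divmod, 7, 3, expect=(2, 1))    # passes quietly
    # checked_call(divmod, 7, 3, expect=(2, 2))  # fails with full context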

This is many times truer of multi-threaded code harnesses, which can be quite a bear to deal with. When you consider that there are also many aspects to tests (I want them to be composable, myself, and the test tools to be as reusable as possible), that it is useful to mark results as "Expected Pass", "Failure", and "Expected Failure: Bug No. ###", and that the harness has other usability requirements, such as distinguishing its own internal errors from errors in the product under test... it adds up to quite a list of requirements.
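One way to picture that "Expected Failure: Bug No. ###" bookkeeping (again a sketch; the decorator, verdict strings, and bug number below are my invention, though tools such as pytest's xfail marker do much the same): each test can carry a known bug number, and the runner reports a distinct verdict when a known-bad test fails as predicted -- or when it unexpectedly passes.

    def expected_failure(bug_no):
        """Mark a test as known-bad, tracked under the given bug number."""
        def mark(test_fn):
            test_fn.expected_bug = bug_no
            return test_fn
        return mark

    def run(test_fn):
        bug = getattr(test_fn, "expected_bug", None)
        try:
            test_fn()
        except AssertionError:
            return "Expected Failure: Bug No. %s" % bug if bug else "Failure"
        return "Unexpected Pass (bug fixed?)" if bug else "Pass"

    @expected_failure("1234")          # hypothetical bug number
    def test_known_bad():
        assert 1 + 1 == 3              # fails today, tracked as bug 1234

    print(run(test_known_bad))         # -> Expected Failure: Bug No. 1234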

I consider myself, as a QA engineer who codes and who has written several test suites and used more, to be as good a programmer as any. I just don't work on things that ship.

Welcome to my world. :-) And the question: who tests the testers? :-)