Automated Testing
Jul. 13th, 2007 12:32 pm
As I head into my severalth week of working on test harnesses, it seems appropriate to pass on a few tips on the subject.
On the one hand, automated testing is a damned good thing, especially if you intend to have any sort of rapid release schedule. The last time we did a full ASAP release, I believe we had about 2000 manual regression tests to get through, which can (to say the least) slow things down. Being able to automate at least many of those tests can give you a lot of confidence in the product quickly, and can help to find bugs that can otherwise go undetected for weeks.
That said, *good* automated testing does not come cheap. I seem to be coming to a rule of thumb that a decent test suite will be at least as many lines of code as the product itself -- and a really thorough suite will probably be several times as many.
Moreover, automated testing is *programming*, dammit. Too many people think that, because it's QA, it's therefore somehow easier and less technically sophisticated. My experiences over the past few years say otherwise: not only is the amount of code comparable to the size of the product, but the technical sophistication you need for the test harness is similar to the complexity of the product. A simple semantics-only product can probably get away with a simple test harness, but an architecturally complex product needs a complex harness. In particular, if the product involves systems integration, so does the test harness -- and the harness's needs are often much nastier than those of the product itself. Doing all that, while still keeping the harness straightforward enough for less-senior programmers to easily write tests in it, sometimes calls for real artistry.
So I encourage automated testing for all interesting programs. But have some respect for it...
(no subject)
Date: 2007-07-13 04:47 pm (UTC)

This is many times truer of multi-threaded code harnesses, which can be quite a bear to deal with. And there are many aspects to the tests themselves: I want them to be composable, and the test tools to be as reusable as possible; it is useful to mark results as "Expected Pass", "Failure", and "Expected Failure: Bug No. ###"; and the harness needs to distinguish its own internal errors from errors in the product under test. It has quite a list of requirements.
I consider myself, as a QA engineer who codes and who has written several test suites and used more, to be as good a programmer as any. I just don't work on things that ship.
Welcome to my world. :-) And the question: who tests the testers? :-)
(no subject)
Date: 2007-07-13 09:33 pm (UTC)

Developers who create automated unit tests for their code, before it even arrives in the hands of the Test group, realize the scope, complexity, and sheer creativity necessary to produce good test automation.
A good book on automated unit testing, which also applies to broader functional test automation, is "xUnit Test Patterns: Refactoring Test Code" by Gerard Meszaros:
http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Addison-Wesley/dp/0131495054
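For readers who haven't met the xUnit style the book describes, here's a small example using Python's unittest, a member of the xUnit family. The `Stack` class is invented purely to have something to test:

```python
# xUnit-style example using Python's standard unittest module.
import unittest

class Stack:
    """Trivial product code, invented for this example."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    def setUp(self):
        # xUnit "fresh fixture": each test gets its own Stack.
        self.stack = Stack()

    def test_push_then_pop_returns_item(self):
        self.stack.push(42)
        self.assertEqual(self.stack.pop(), 42)

    def test_pop_on_empty_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

if __name__ == "__main__":
    unittest.main()
```

Much of Meszaros's book is about keeping exactly this kind of code clean as it grows -- fixtures, test doubles, and the smells that creep in when a suite reaches product-sized proportions.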
(no subject)
Date: 2007-07-14 12:34 pm (UTC)

The real joy of automated testing is when it catches a break in an area no one thought to look at, because it had already been tested and no one expected the changes to affect it.