Actually, not to toot my own horn, but the code for automated testing has to be MORE sophisticated and robust than the product itself. Otherwise you waste a great deal of time and energy hunting down complex bugs that turn out, in the end, to be the fault of the test or harness. My code tests every call and return, in every way I can find, and is always instrumented to produce as copious an error message as possible for any failure.
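To make that concrete, here's a rough sketch in Python of the kind of instrumented call-checking I mean. All the names here (checked_call, note) are mine, made up for illustration, not from any particular harness:

    import traceback

    def checked_call(func, args, expect, note=""):
        """Call func(*args) and verify its return value, reporting as
        much context as possible when anything goes wrong."""
        try:
            result = func(*args)
        except Exception as exc:
            # Unexpected exception: report what was called, with what,
            # what it raised, and the full traceback.
            raise AssertionError(
                "%s%r raised %s: %s [%s]\n%s"
                % (func.__name__, tuple(args), type(exc).__name__,
                   exc, note, traceback.format_exc()))
        if result != expect:
            # Wrong return value: report actual vs. expected.
            raise AssertionError(
                "%s%r returned %r, expected %r [%s]"
                % (func.__name__, tuple(args), result, expect, note))
        return result

    # Example: checked_call(divmod, (7, 3), (2, 1), note="smoke test")

The point is that a bare "test failed" costs you a debugging session, while a message that names the call, the arguments, and the expected and actual results often costs you nothing.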
This is many times truer of multi-threaded test harnesses, which can be quite a bear to deal with. And there are many other aspects to tests: I want them to be composable, myself, and the test tools to be as reusable as possible. It is also useful to mark results as "Expected Pass", "Failure", or "Expected Failure: Bug No. ###", to meet the harness's other usability requirements, and to distinguish internal harness errors from errors in the product under test. It adds up to quite a list of requirements.
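A minimal sketch of how those outcome categories might be encoded, again with hypothetical names of my own choosing:

    from enum import Enum

    class Outcome(Enum):
        PASS = "Expected Pass"
        FAIL = "Failure"
        XFAIL = "Expected Failure"       # known, tracked bug
        HARNESS_ERROR = "Harness Error"  # our bug, not the product's

    def classify(passed, known_bug=None, harness_exception=None):
        """Map a raw test result onto an outcome category; known_bug is
        a bug-tracker number for a failure we already expect."""
        if harness_exception is not None:
            return Outcome.HARNESS_ERROR  # don't blame the product for our bugs
        if passed:
            return Outcome.PASS
        if known_bug is not None:
            return Outcome.XFAIL          # reported as "Expected Failure: Bug No. <n>"
        return Outcome.FAIL

Keeping "expected failure" and "harness error" as first-class outcomes is what lets a nightly run stay readable: only NEW failures demand attention.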
As a QA engineer who codes, and who has written several test suites and used many more, I consider myself as good a programmer as any. I just don't work on things that ship.
Welcome to my world. :-) And the question: who tests the testers? :-)