We should call them improvement reports

My career as a software developer has always been fraught with danger. I’ve been involved in several software projects now, and they all come with their challenges. The development cycle involves gathering user requirements, planning, development and testing. The testing here refers to my own testing. Where I work now, there’s a whole department dedicated to testing the applications developers produce.

Gathering user requirements is fairly simple. A couple of meetings, a few phone calls and some emails usually do the trick. Smiling helps. How simple the planning is will vary, depending on deadlines (users will always say “as soon as possible”; smile some more), the complexity of the tasks involved and the available manpower. For me, development is usually a breeze. I’ve shaken my fist in the air (and mentally sworn words that cannot be repeated here) whenever I do work with PowerBuilder, but that’s a different story. The internal tests are also easy. Enter a few values, make sure they are calculated and stored in the database correctly, and check that the reports are designed according to the user’s requirements.

The tests done by the dedicated testing department are, well, nerve-racking. I’ve had to duplicate the production environment onto the test environment. I’ve had to upload data, download data, change data and massage data from text to Excel and vice versa. I’ve had to refer to the finalised user requirement document to confirm (and defend) the correctness of my code.

There’ll be phone calls and emails to and fro, to confirm this, to help set up that, to ask why that number is supposed to be that number. I’ll probably be working on another project while smiling on the phone and scribbling notes on my already messy desk.

And I haven’t even gotten to the fun part.

At the end of the test period, I will be sent *drum roll*, defect reports. These evil documents contain what the testers have deemed unforgivable deviations from the righteousness of the user requirement document. They identify the defect in my program, point their righteous fingers at me, and cackle “You made a mistake. *dance around* Your program doesn’t work. *dance around some more* It’s all your fault”.

The thing about defect reports is that, well, they contain defects. When I pointed out that testers only focus on generating defect reports, I was met with blank stares and the implicit “Of course they look out for defects. That’s what they’re supposed to do!”.

What you focus on expands.

When a tester’s appraisal is determined in part by the number of defect reports the tester produces, what happens? That tester will produce tons of defect reports, on every bug, however slight the program’s misbehaviour. This skews the perception of the program’s accuracy. Then, when the defect report comes to me, I do whatever it takes to downplay the report, to deny the existence of the purported defect, uh, I mean, inconsequential oversight, trying to skew the perception back into balance.

This sparring over responsibility is damaging to the project, unhelpful to the user, and definitely a waste of the company’s resources. “Are you telling me you spent two hours typing that report about why there should be an ‘s’ for plural, when you could have been giving suggestions on how to improve the user experience?” “Are you telling me you spent two hours going through every display message in the program code to make sure the ‘s’ was appended correctly, when you could have been implementing better features?”

Instead of keeping track of defects, we should be focusing on improvements. Yes, there are bugs. And yes, I will eliminate them. I also appreciate being told about them. I’m just a programmer. If you are a tester, you’ll be closer to being a user than I’ll ever be. Other than complaining about how lousy my program is, can you add anything to improve the user experience?