Looking for flaws

Or checking for correctness? When you test your code, do you look for all the wrong things that can happen, or do you make sure it’s working properly first?

[Image: The weak link, by James Steidl @ iStockphoto]

Of course, you should be doing both. But you’ve got to do one or the other first. You can’t do both at the same time, because they’re completely different. One is where you don’t know what will happen, like rolling your head on the keyboard to come up with random input. The other is where you know exactly what has to happen, like a textbox accepting numeric input, and only between 10 and 23.

Correctness is defined here as “working according to what’s reasonably expected”. If you have specs, there you have it. Negatives can count as correct too. For example, “must not show information of other users” is a negative requirement. If the application indeed does not show other users’ information, then it’s working correctly according to the requirement.

The fastest way to complete a project is to code according to specs, test for correctness, then test for flaws. Because the specifications are known, correctness covers well-defined, finite areas, which provide a basis for deadline estimation.

Flaws, on the other hand, are practically infinite. You can always find something to change for the better. When you test for flaws first, you will find tons of minor, irritating aspects of the application to change, none of which brings you significantly closer to what’s required.

You’ll only stop because the deadline has mysteriously gotten closer, so you decide it’s better to stop nitpicking and actually start testing for the requirements.

Let’s take the example given before, a textbox accepting numeric input, and only between 10 and 23. Reading through the specs, you find out that it’s for hours of operation, as in anything between 10 am and 11 pm. So the value obtained from the textbox is used as an integer.

So you test by punching in numbers and making sure only 10 through 23 are accepted. Numbers outside the range give an error. You also make sure letters and symbols give errors. This satisfies the correctness requirement.

Then you test for flaws. Things like 12.0 or +19 are accepted, but preferably shouldn’t be.
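A minimal sketch of the validation rule above, in Python for illustration (the function name is mine, not from any real codebase). It treats the correctness requirement and the flaw cases with one check: only plain digit strings from 10 to 23 pass, so 12.0, +19, letters and symbols are all rejected.

```python
def is_valid_hour(text):
    """Accept only plain integer strings from 10 to 23 inclusive.

    str.isdigit() rejects signs, decimals, letters and symbols, so
    inputs like "+19", "12.0" and "abc" fail before the range check.
    """
    if not text.isdigit():
        return False
    return 10 <= int(text) <= 23
```

So `is_valid_hour("15")` passes, while `is_valid_hour("+19")` and `is_valid_hour("12.0")` fail, closing the flaw cases along with the out-of-range ones.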

Alright, as a contrived example, the textbox input requirement seems rather lame. But I hope you got the point. Testing for correctness means you get a working application quickly. Then when you test for flaws, you’re just adding on to the correctness.

The thing is, some people must overcome whatever misgivings they have about your application before they even check how cool it is and how correct it is. These people are usually your users, or your managers, or whoever uses the application but doesn’t really give a rodent’s behind about your code.

I’ve met some of these people. And I understand their position and viewpoint. They don’t quite like this colour, or they want that button somewhere else. I get it. There’s a reason why I understand this behaviour, and even expect it of them.

They already assume your application is working correctly. Hence they go nitpicking. They have a right to nitpick. In their minds, the application is working according to specifications, and they’re tuning it to better suit them.

Now, the authors of defect reports… I don’t understand them. Dedicated testers are supposed to give feedback on the application, covering both correctness and flaws, with correctness as the higher priority. They should be the safety net, catching the uncommon cases that escaped the programmer but still matter for the correctness of the application.

Yet the testers I know seemingly focus exclusively on the flaws, with zero regard for correctness. Once, my colleague had to come up with a suitable image for a trashcan, for use as a delete button in a datagrid. The original image was done by me, bluish in theme. My colleague even apologised to me that he wasn’t using my image. It was fine. The image wasn’t artist-grade, but it served its purpose.

He had to do it, or the testers would refuse to test the intended web form at all. They didn’t want to test if a new record could be correctly inserted into the database. They didn’t want to test if they could update existing records correctly. They didn’t want to do anything on the web form, refusing to understand how it worked, until they got their trashcan image.

One image after another was sent. They didn’t like the look. They didn’t like the colour. They didn’t think it looked like a trashcan. Finally I suggested to my colleague that he just ask them for an image instead.

They couldn’t. They didn’t like the images submitted to them, yet they couldn’t give us an image they approved of. After wasting many days over this trivial matter, my colleague came up with an image, digitally manipulated to fit what he hoped would suit the testers’ perceived requirements.

It was ugly. It was dull red, like someone had a nosebleed and sneezed, and one of the blood drops happened to hit the floor in a vague squarish pattern. Then a digital photograph was taken of it, and the result had a slight yellowish-green cast to it. And because of the looming deadline, the testers finally went with it.

I am working hard with the users to bring the application to live status as soon as possible. The sooner it goes live, the sooner users get to benefit. The testers, on the other hand, seem bent on delaying the project. Their fears of causing calculation errors, of their approved interface design being perceived as unfriendly, and of maybe tens of other irrational things have stopped them from growing. And their fears have essentially stopped the company from growing.

It’s about mindset. Do you expect greatness and code and test for correctness? Or do you fear and nitpick on minutiae?

  1. Ben

    One thing I have found throughout my programming career is that no matter how meticulously you debug and check your application, there is always a user out there who can find a flaw you missed.

  2. Vincent Tan

    Which is why I focus on getting applications working correctly first. Correctness has a finite set of conditions. Either something is correct or not. It’s objective.

    Flaws are usually subjective, and as such, there is practically an infinite set of conditions. Everyone has a different opinion.

    I don’t mind people telling me something’s working wrongly. It’s when they want me to change the wording of a message without/before telling me if the application’s working, that I get a tad upset.

  3. zouze

    You have mentioned that you have your specs as the basis for writing the program, but you did not specify their contents. The best approach is, in the specs, you let your users specify the business scenarios, and they should give test data for each of the scenarios mentioned. That’s the main purpose of the test case scenarios portion of the technical specs, so you can deliver what users expect the program will do. Any other scenario that comes out as an error during user-acceptance testing should not be the fault of the programmer anymore… but somehow, this does not work if your users are completely dumb.

  4. Vincent Tan

    Yes you are right, zouze, I did leave out what my tech specs contain. I’ve been in projects where test data was included, and must be tested against. I’ve also been in projects where test data was generated by other people, to be tested by other people.

    So I left it out because I thought it would detract from what I wanted to highlight. I think another article is in order…


  5. Ben Barden

    I have worked as a developer and as a tester, so I can see both sides of this.

    When I first started testing, I went on a training course with a dozen others from different companies. The very first thing the instructor asked was: what is the objective of testing? I said it was to ensure that your code is correct. He disagreed and said that the objective of testing is to find faults. I think I prefer your comments above – that you should test for correctness first, and flaws second.

    The testing that I did focused on test cases. In your example listed above, one test case is that the text box accepts values between 10 and 23. Another test case might be that the text box does not accept values outside this range. Some testers combine these two as “the text box works” (bad description!) but I prefer to split them up.

    Now, each test case has a clear pass/fail status. If it fails, then it means the actual result differs from the expected result. The actual result is documented and a bug is logged.
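    A toy sketch of these split test cases as data with an explicit pass/fail status per case (the names and structure here are illustrative, not any real test tool’s format), reusing the article’s hour-textbox rule:

```python
# Illustrative validation rule from the article's textbox example:
# plain digits only, value between 10 and 23 inclusive.
def accepts_hour(text):
    return text.isdigit() and 10 <= int(text) <= 23

# Each test case is split out and gets its own clear pass/fail status.
test_cases = [
    ("accepts in-range value",    "15",  True),
    ("rejects below-range value", "9",   False),
    ("rejects above-range value", "24",  False),
    ("rejects non-numeric value", "abc", False),
]

results = {}
for name, value, expected in test_cases:
    actual = accepts_hour(value)
    results[name] = "pass" if actual == expected else "fail"

for name, status in results.items():
    print(f"{status.upper()}: {name}")
```

    When a case fails, the actual result differs from the expected one and would be documented in a bug report, exactly as described above.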

    You can also do exploratory testing, i.e. “playing” with the application without using any test scripts to see what happens. We actually did this before we did any other testing because it was a sanity check to see if the application even functioned correctly. We had a large complex system and sometimes things didn’t go quite right in the deployment steps, so we did unscripted testing before spending any time on the proper test scripts.

    I worked with some people who were also quite new to testing, and I do remember someone who didn’t want to carry on testing if he found a bug. However, I persuaded him to keep going so we could potentially fix several bugs at a time instead of doing them one by one. Another person was more than happy to log every possible error she found. But this was fine, because she ran the test scripts and indicated which ones passed and which failed.

    Refusing to proceed until one bug is fixed certainly sounds counter-productive, but to be honest, it sounds more like the testers you dealt with did not really understand how structured testing works. A tool like TestDirector can really help when it comes to running test scripts (either manual or automated) and it’s a good way to organise the bugs found in testing.

  6. Vincent Tan

    The testers I work with need to physically type in stuff and click buttons in the application as their way of testing.

    Actually, I don’t quite understand how test suite programs such as TestDirector work myself… I’ve never been exposed to them before. I guess I’ve got a lot to learn about this…

    I’ve been trying to see if I can work up a suggestion to management for a bug/feature tracking software like FogBugz. Budget constraints, small team, no justification…

  7. Ben Barden

    TestDirector is a bit clunky – it isn’t quite as easy to use as it should be – but it really does a good job of recording information relating to testing. Towards the end of my time at my last job, I used WinRunner to write some automated test scripts, and the results were recorded in TestDirector when we ran the WinRunner scripts. Very interesting stuff.

    The drawback is that this software does not come cheap. If you only need bug tracking at this stage, it might be easier to look for an open source solution. You could try Bugzilla. I’ve only used it briefly and I haven’t installed it, but it might help you. Or you might be able to find a free PHP script that will do the same kind of thing. Of course, quality varies a lot, so you might have to do a bit of searching to find the tool that’s right for you.
