I was in my 4th year in university. The course was on digital image processing, touching on both theory and application in equal measure. There were only 3 students, including me.

The course was interesting, albeit mind-numbing when some of the equations marched into the lecture. The programming assignments were more fun, since we got to apply the theories. One of them was a rotating-an-image assignment, which formed the basis of my bilinear interpolation code. That was fun.

There’s this assignment where the professor gave us a set of texture images as samples. I can’t remember how many there were, so let’s say there were 200 of them. Then he gave us, say, 50 images. **The assignment was to match those 50 images with the controlled set of textures**. All textures were greyscale to simplify the assignment.

The 50 unknowns didn’t match pixel for pixel with the controlled samples. But they were of the same textures. For example, the controlled samples had one of a marbled floor. One of the unknown images was taken of that same marbled floor, but at a different position. Of course, the professor could have given us red herrings to match, but he said all 50 were taken from the sample set.

Then there’s the fact that he wanted to play with his new camera back then (he admitted to it), and took lots of pictures to give us as assignments… There was an assignment with a picture of a rubber ducky…

I can’t remember exactly all the tests I used to match the textures. What I did was come up with a theory/test, and compute that test for all the samples. Then I did the same thing for the unknown textures. Then I matched the unknowns with the knowns. If an unknown was within some threshold of acceptance, it was deemed matched to the respective sample texture.

Basically, I’m matching the textures using heuristics.
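That overall loop can be sketched like so. This is a minimal sketch in Python rather than the MATLAB we actually used, and all the names (`match_unknowns`, the feature dictionaries) are my invention here, not from the original assignment:

```python
import numpy as np

def match_unknowns(sample_feats, unknown_feats, threshold):
    """For each unknown texture, find the sample whose feature vector
    is closest by mean squared error; accept the match only if that
    error falls under the acceptance threshold."""
    matches = {}
    for name, uf in unknown_feats.items():
        errors = {s: float(np.mean((uf - sf) ** 2))
                  for s, sf in sample_feats.items()}
        best = min(errors, key=errors.get)
        matches[name] = best if errors[best] < threshold else None
    return matches
```

Swap in any feature extractor (histogram, FFT, whatever) and the same loop applies.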

One of the tests used histograms. Basically, I charted from 0 to 255 the number of pixels with a specific greyscale value. Pure white pixels have a value of 255, and pure black pixels a value of 0. Then I matched the unknowns with the samples using mean squared error. If the least error among the samples was below some threshold I set, then that sample was the matched texture.
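A minimal version of that histogram test, again sketched in Python instead of the original MATLAB (the normalisation step is my addition, so the comparison doesn’t depend on image size):

```python
import numpy as np

def grey_histogram(img):
    # Count pixels at each greyscale value 0..255, then normalise
    # to fractions so image size doesn't skew the comparison.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return hist / img.size

def histogram_error(a, b):
    # Mean squared error between the two normalised histograms.
    return float(np.mean((grey_histogram(a) - grey_histogram(b)) ** 2))
```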

I had another test involving Fast Fourier Transforms (FFT). I think I discarded the imaginary components and matched the unknowns using just the real part.
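If memory serves, that amounted to something like the sketch below (real part only, as described; comparing magnitudes instead would be a shift-tolerant alternative, but that’s not what I remember doing):

```python
import numpy as np

def fft_feature(img):
    # 2-D FFT of the greyscale image; keep only the real part,
    # discarding the imaginary components.
    return np.real(np.fft.fft2(img))

def fft_error(a, b):
    # Mean squared error between the two real-valued spectra.
    return float(np.mean((fft_feature(a) - fft_feature(b)) ** 2))
```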

There’s another test involving median filtering. The idea was to capture the groups of neighbouring pixels as some usable data. So instead of a 128 by 128 pixel sample, I reduced it to a 16 by 16 matrix. You know, this one’s a bit iffy… I can’t remember whether I actually did it, or I just came up with it writing this…
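If I did do it, it would have looked roughly like this block-median reduction: collapse each 8 by 8 block of a 128 by 128 image to its median, leaving a 16 by 16 summary. Purely a reconstruction of the idea, not the original code:

```python
import numpy as np

def block_median(img, out_size=16):
    # Collapse each (n x n) block of a square image to its median,
    # producing an out_size x out_size summary of local structure.
    n = img.shape[0] // out_size
    blocks = img.reshape(out_size, n, out_size, n)
    return np.median(blocks, axis=(1, 3))
```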

Anyway, there’s a test to capture “pattern” data. The histogram test involves all pixels. The median filter test (if I actually did one) clusters pixel information in groups. Let me see if I can explain this better…

In the image above, the top right corner has more black swirly thingies close together than other parts of the image. The histogram test cannot detect that the top right corner has more black. It can only detect how much black in total there is in the image. Positional information is lost. Hence the need for a pattern test.

The histogram test is objective. Test results are verifiable and repeatable. However, matching the unknown textures requires that I set a threshold. This is where the tests become subjective. Who’s to say a particular threshold value is more accurate than another?

In the end, I think I had 5 or 6 tests, and got a 94 (or was it 96?) percent accuracy rate. I was tweaking my threshold values so I could yield higher accuracy rates. See how subjective those tests of mine were? *smile*

The programming language of choice was MATLAB (yes, Will?), as dictated by the professor. So everything was coded in MATLAB. Which was good, because I’d hate to implement FFT on my own…

There’s something else too. I weighted those test results. Say test A was supposedly more accurate than test B. Then I gave the results of test A more weight in my final calculation. Thus, roughly speaking, if 3 tests out of 6 said texture A was the one, then that’s the one. It could also mean 2 tests had more sway if both carried high weights, and the other 4 tests weren’t conclusive enough.
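Roughly, that combination step was a weighted vote. A sketch (the test names and weights below are invented for illustration):

```python
def weighted_vote(per_test_picks, weights):
    # per_test_picks: {test name: matched sample name, or None if
    # that test was inconclusive}; weights: {test name: weight}.
    scores = {}
    for test, pick in per_test_picks.items():
        if pick is not None:
            scores[pick] = scores.get(pick, 0.0) + weights[test]
    # The texture with the highest weighted score wins outright.
    return max(scores, key=scores.get) if scores else None
```

Two heavily weighted tests agreeing can outvote several lighter ones, which is the “more sway” effect described above.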

One of my classmates got higher accuracy rates (97 or 98 percent) than I did, no matter how much I tweaked threshold values and weights, no matter how many kinds of tests I added (or took out).

But here’s the thing, and I want you to note this. Given a larger sample size, and a different set of unknown textures to match, my set of tests might actually yield *better* results than those of that irritatingly smug classmate of mine.

Here’s another takeaway. **No one test can conclusively confirm and match the unknowns** (even with some error margin). It took a few tests working in concert to obtain a relatively high accuracy rate. Think about *that*.