Digital image processing is mostly for aesthetics

I intended to write an explanation of some digital image processing methods, followed by the point of this article’s existence (see title). Since I can’t remember much about the topic any more, I rummaged through my pile of stuff to find my university textbook. Found it, and skimmed the contents pages to see what I needed to read up on.

I am ashamed to say, I have forgotten a lot about the topic already… This is hard! There are graphs, functions, formulas, integrals, summations and matrices.

The book is “Digital Image Processing” (second edition) by Rafael C. Gonzalez and Richard E. Woods, published by Prentice Hall. Unfortunately, it’s not for sale in the USA, Mexico or Canada, so if you’re from those countries, sorry. But it’s really good.

[UPDATE: Commenter Will has pointed out that Gonzalez’s and Woods’ book is in fact available in the United States. Go get it.]

Anyway, I was planning on stunning you with my brilliance on histograms, median filtering, Gaussian blurs, pixel neighbours and the like. Sadly, I’m woefully ill-equipped to do that… At least, not without doing some serious reading and research first. Well, I can’t do that and still write about it in one night.

But I want to tell you about something my professor said. There are many things we do to process images. We apply median filtering to remove noise. We apply other operations to sharpen or blur images. Images can be converted to black and white, or given a faded tone to simulate old photographs.

There are other kinds of information we can extract after processing images, such as the detection of edges, shapes and even faces. But as far as I can tell, these operations are mostly subjective. Subjective because it is ultimately up to us, humans, to dictate whether the final image is what’s required.

For example, noise reduction. Simply put, image noise is the small specks of “something” in the image. They could be caused by an inferior camera, or simply by the environment (such as in space images). Median filtering means that for each pixel, you look at its neighbouring pixels, sort their values, and take the middle (median) value as the new pixel value. A lone speck that differs wildly from its neighbours gets replaced by something closer to the surrounding colour.
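To make that concrete, here’s a rough sketch of a median filter in Python. It’s my own toy illustration, not from the book, and it assumes a greyscale image stored as a plain list of lists of pixel values (0–255):

```python
# A minimal sketch of a 3x3 median filter on a greyscale image.
# The image data below is made up purely for illustration.

def median_filter(image):
    """Replace each pixel with the median of its 3x3 neighbourhood."""
    height, width = len(image), len(image[0])
    result = [row[:] for row in image]  # copy; border pixels are left untouched
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            neighbourhood = [image[y + dy][x + dx]
                             for dy in (-1, 0, 1)
                             for dx in (-1, 0, 1)]
            neighbourhood.sort()
            result[y][x] = neighbourhood[4]  # middle of the 9 sorted values
    return result

# A 5x5 patch that is mostly grey (100) with one noisy "speck" (255).
noisy = [
    [100, 100, 100, 100, 100],
    [100, 100, 100, 100, 100],
    [100, 100, 255, 100, 100],  # the speck
    [100, 100, 100, 100, 100],
    [100, 100, 100, 100, 100],
]
print(median_filter(noisy)[2][2])  # prints 100 -- the speck is gone
```

Real implementations handle the image borders and colour channels too, but the idea is the same.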

But the median filter can be applied at different strengths (a larger neighbourhood, or repeated passes). The software cannot tell you how strong the filter should be. It can guess, it can have default values, it can apply the strength most commonly used. But it’s ultimately up to the user to decide when enough is enough. Perhaps at the highest strength, the algorithm produces an image that’s somehow pleasing (or useful) to the user, even if that result isn’t what the user wanted originally.

And this was what my professor said. Image processing is mostly about aesthetics, digital or analog. Digital computations just make it faster and easier to play around with the images. You can physically process images to produce sepia tones. Or you could do it digitally.
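If you take the digital route for sepia, it’s just a per-pixel colour transform. Here’s a rough Python sketch using one commonly quoted set of sepia weights (again my own illustration, not from the book):

```python
def sepia(pixel):
    """Map an (r, g, b) pixel to a sepia tone using one commonly quoted
    set of weights, clamping the results to the 0-255 range."""
    r, g, b = pixel
    new_r = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
    new_g = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
    new_b = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
    return (new_r, new_g, new_b)

print(sepia((120, 120, 120)))  # a mid-grey pixel picks up a warm brown cast
```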

I’m introducing the concept of image processing because I have another story to tell you. And it’ll make more sense if you know about image processing first.

Have you written code to do image processing before? Please share your experience with your comments.