Python image processing

By leonardo maffi, V.1.1, Feb 14 2008
Keywords: image processing, Python, PIL, programming

[Go back to the article index]

Software of this article:

Some computer languages can be compiled to very fast programs. While Python doesn't shine in that regard, it's very good for its flexibility and the large availability of third party libraries (many are present in its standard library too). The availability of good external libraries is sometimes even more important than the language itself. Python's flexibility allows explorative programming, which can be quite useful for certain purposes, like certain kinds of data/text/signal/image processing.

The following is a Jpeg image that is masked with some random white and black pixels:

With the help of the Python Imaging Library PIL (and a few other things), it's very easy and fast to explore ways to filter the image. This image filtering problem is easy, because it's rather simple to tell the noise from the signal, but it's not immediate either: the original image is in the compressed Jpeg format, so those white points are noisy because of the lossy nature of Jpeg compression, and around them there are more Jpeg artifacts. A face is a very easy subject, because the human visual system needs only a little data (probably as little as a 13x13 image in 8 greys, that's less than 64 bytes) to tell faces apart.

In a short time I have written the following code; it contains a few wrappers that help reduce the processing code to a minimum, allowing a faster exploration of image processing ideas:

from random import choice
from colorsys import rgb_to_yiq
import Image # Python PIL
from select import median # my libs
import psyco; psyco.full()

class InImage:
    def __init__(self, filename):
        self.inim = Image.open(filename)
        self.nx,self.ny = self.inim.size
        self.inmat = self.inim.load()
    def __getitem__(self, (x, y)):
        return self.inmat[x % self.nx, y % self.ny]
    def all_coords(self):
        for y in xrange(self.ny):
            for x in xrange(self.nx):
                yield x, y
    def neigh(self, x, y, lenx=3, leny=3):
        assert lenx & 1 and leny & 1
        for posy in xrange(-leny/2+1, leny/2+1):
            for posx in xrange(-lenx/2+1, lenx/2+1):
                yield self[x+posx, y+posy]
    def neigh33(self, x, y):
        return [self[x-1, y-1], self[x, y-1], self[x+1, y-1],
                self[x-1, y],   self[x, y],   self[x+1, y],
                self[x-1, y+1], self[x, y+1], self[x+1, y+1]]

class OutImage:
    def __init__(self, in_image):
        self.nx,self.ny = in_image.nx, in_image.ny
        self.outim = in_image.inim.copy()
        self.outmat = self.outim.load()
    def __setitem__(self, (x, y), value):
        self.outmat[x % self.nx, y % self.ny] = value
    def save(self, filename, *args, **kwds):
        self.outim.save(filename, *args, **kwds)

def luminance((r,g,b)):
    return rgb_to_yiq(r/255.0, g/255.0, b/255.0)[0]

def main():
    m1 = InImage("face_part.jpg")
    m2 = OutImage(m1)
    K = 0.8

    lastPixel = (0, 0, 0)
    for x,y in m1.all_coords():
        #m2[x,y] = sorted(m1.neigh33(x, y), key=luminance)[4] # 3x3 median
        #m2[x,y] = max(m1.neigh33(x, y), key=luminance) # 3x3 max
        #m2[x,y] = min(m1.neigh33(x, y), key=luminance) # 3x3 min
        if luminance(m1[x, y]) < K:
            lastPixel = m1[x, y]
            nonwhite = [(luminance(p), p) for p in m1.neigh33(x, y) if luminance(p) < K]
            if nonwhite:
                lastPixel = median(nonwhite)[1]
            m2[x,y] = lastPixel

    m2.save("face_part_elab.jpg", quality=99)

main()


In the code m1 is the input image, and m2 is the output one. OutImage copies the whole input image (that's a fast operation, because it's done in C), so you can skip plotting the pixels that you don't change. The luminance() function uses the really simple color conversion functions of the Python standard library; using just the Y channel is rough, but it's enough for this simple processing. I have wrapped both __getitem__ and __setitem__ so they use modular coordinates; this is a slow but high-level way to solve the border problem that most neighborhood filters have.
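The modular-coordinate trick can be sketched on its own (a minimal sketch in modern Python, using a tiny made-up list-of-lists image instead of PIL):

```python
# Minimal sketch of the modular-coordinate trick: out-of-range
# coordinates wrap around, so a neighborhood filter never falls
# off the image border.
class WrapImage:
    def __init__(self, rows):
        self.rows = rows
        self.ny = len(rows)
        self.nx = len(rows[0])
    def __getitem__(self, xy):
        x, y = xy  # Python 3 has no tuple parameters, unpack manually
        return self.rows[y % self.ny][x % self.nx]

img = WrapImage([[1, 2],
                 [3, 4]])
print(img[0, 0])   # 1, a normal in-range access
print(img[-1, 0])  # 2, x wraps around to the last column
print(img[3, 3])   # 4, both coordinates wrap around
```

The same `% self.nx` / `% self.ny` idea is what the `InImage.__getitem__` and `OutImage.__setitem__` wrappers above do.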

To speed up the exploration I have cut a small piece of the original image (using jpegCrop; it's a lossless operation because no Jpeg DCT is recomputed):

Now it's really easy to use Python to do some of the usual image processing, like a median on a 3x3 neighborhood:

m2[x,y] = sorted(m1.neigh33(x, y), key=luminance)[4]

Minimum in the same 3x3 neighborhood:

m2[x,y] = min(m1.neigh33(x, y), key=luminance)

Max 3x3:

m2[x,y] = max(m1.neigh33(x, y), key=luminance)
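These three one-liners rely only on sorted/min/max with a key function, so they can be tried standalone (a sketch in modern Python, with a made-up 3x3 neighborhood of RGB tuples):

```python
from colorsys import rgb_to_yiq

def luminance(rgb):
    # Y channel of the YIQ transform, as in the article's code
    r, g, b = rgb
    return rgb_to_yiq(r / 255.0, g / 255.0, b / 255.0)[0]

# A hypothetical 3x3 neighborhood: eight dark pixels and one
# white noise pixel.
neigh = [(10, 10, 10)] * 8 + [(255, 255, 255)]

darkest = min(neigh, key=luminance)    # 3x3 min: (10, 10, 10)
brightest = max(neigh, key=luminance)  # 3x3 max: (255, 255, 255)
med = sorted(neigh, key=luminance)[4]  # 3x3 median, the 5th of 9 sorted
print(darkest, brightest, med)
```

Note how a single white outlier survives the max filter but is removed by the median, which is why the median is the usual choice against salt-and-pepper noise.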

The filtering algorithm that can be seen in the code is quite simple: if a pixel isn't very bright, skip it, because it's probably not noise. Otherwise, among its 3x3 neighborhood take only the non-white pixels and plot their median, sorting them again according to their luminance. If all the neighbors are white, it uses the color of the last plotted pixel (you can see this effect on the nails). This is the result on the whole image:
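That decision logic can also be sketched standalone on plain luminance values in [0, 1], without PIL (a minimal sketch in modern Python; filter_pixel is a hypothetical helper, not part of the article's code):

```python
def filter_pixel(center, neighborhood, last_pixel, K=0.8):
    """Sketch of the despeckling rule on luminance values in [0, 1]:
    dark pixels pass through untouched; bright (probably noisy)
    pixels are replaced by the median of their non-white neighbors,
    falling back to the last plotted pixel when every neighbor is
    white."""
    if center < K:
        return center  # not bright enough to be noise: keep it
    nonwhite = sorted(v for v in neighborhood if v < K)
    if nonwhite:
        return nonwhite[len(nonwhite) // 2]  # median of the survivors
    return last_pixel  # all neighbors are white: repeat the last pixel

print(filter_pixel(0.3, [0.9] * 9, 0.2))   # 0.3, dark center kept
print(filter_pixel(0.95, [0.9, 0.1, 0.3, 0.2,
                          0.9, 0.9, 0.9, 0.9, 0.9], 0.2))  # 0.2, median of 0.1/0.2/0.3
print(filter_pixel(0.95, [0.9] * 9, 0.2))  # 0.2, the last-pixel fallback
```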

The algorithm can be improved to take care of the few black dots present in the original image. Some zones are gray, maybe because around the white pixels the Jpeg compression didn't encode the two color channels well. To fix that problem you can color them by hand with a painting program in a few minutes, or you can look at the saturation & value in some neighborhood of the pixel (using a good enough chromatic transform, though CIECAM97 may be overkill for this job).
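As a rough probe of that saturation idea, the standard library's colorsys.rgb_to_hsv is a crude but readily available chromatic transform (a sketch in modern Python; the pixel values here are made up for illustration):

```python
from colorsys import rgb_to_hsv

def saturation(rgb):
    # The S channel of the HSV transform: 0 means pure gray,
    # higher values mean more chroma.
    r, g, b = rgb
    return rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[1]

gray = (120, 120, 120)  # a washed-out zone near a white noise dot
skin = (220, 170, 140)  # a hypothetical colored skin tone

print(saturation(gray))             # 0.0, no chroma at all
print(round(saturation(skin), 3))  # about 0.364
```

A repair pass could then recolor low-saturation pixels from the hue of their most saturated neighbors, though that's just one possible approach.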
