
Saturday, May 10, 2014

My new textbook and job

So many things have happened in the last few months that I have not had time to add new entries to my blog. The first is the textbook that I co-authored with Sridevi Pudipeddi, which was released at the end of February. It was a marathon run to complete the manuscript and get the book ready for publishing. I also moved to a new job in California in April.


During my work as an image processing consultant at the Minnesota Supercomputing Institute, I worked with students in various disciplines of science. In all these cases, images were acquired using x-ray, CT, MRI, electron microscopy, or optical microscopy. It is important that students have knowledge of both the physical methods of obtaining images and the analytical processing methods needed to understand the science behind the images. Thus, a course in image acquisition and processing has broad appeal across the STEM disciplines and is useful for transforming undergraduate and graduate curricula to better prepare students for their future.

There are books that discuss image acquisition alone and books that discuss image processing alone, yet the image processing algorithms depend on the image acquisition method. We wrote a book that covers both, so that students can learn from one source. You can check out a sample chapter of the book at reedwith.us, and you can buy the book on Amazon by clicking on the image below.


I also changed jobs. I started working as a Senior Engineer at Elekta in Sunnyvale, CA. I will be focusing mostly on x-ray and CT during my tenure at Elekta.

Friday, February 17, 2012

Undergraduate education in image processing

SURVEY URL:  http://goo.gl/ORDDz

Image acquisition and processing have become standard methods for qualifying and quantifying experimental measurements in various Science, Technology, Engineering, and Mathematics (STEM) disciplines. Discoveries in the medical sciences have been made possible by advances in diagnostic imaging such as x-ray-based computed tomography (CT) and magnetic resonance imaging (MRI). Biological and cellular functions have been revealed with new imaging techniques in light-based microscopy. Advancements in materials science have been aided by electron microscopy analysis of nanoparticles. All these examples, and many more, require knowledge of both the physical methods of obtaining images and the analytical processing methods needed to understand the science behind the images.

Imaging technology continues to advance, with new modalities and methods available to students and researchers in STEM disciplines. Thus, a course in image acquisition and processing would have broad appeal across the STEM disciplines and would be useful for transforming undergraduate and graduate curricula to better prepare students for their future.

Image analysis is an extraordinarily practical technique that need not be limited to highly analytical individuals with a math or engineering background. Since researchers in biology, medicine, and chemistry, along with students and scientists from mathematics, physics, and various engineering fields, use these techniques regularly, there is a need for a course that provides a gradual introduction to both acquisition and processing.

Such a course will prepare students in common image acquisition techniques such as CT, MRI, light microscopy, and electron microscopy. It will also introduce practical aspects of image acquisition such as noise and resolution. As part of the curriculum, students will program image processing routines in Python and will be introduced to Python modules such as NumPy and SciPy. They will learn the various image processing operations, including segmentation, morphological operations, measurements, and visualization.
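As a small taste of what such an exercise might look like, here is a minimal sketch (the image and threshold are made up for illustration) that segments bright objects with NumPy and SciPy and then measures them:

import numpy as np
from scipy import ndimage

# A made-up 8-bit grayscale "image": dark background with two bright blobs
image = np.zeros((64, 64), dtype=np.uint8)
image[10:20, 10:20] = 200
image[40:55, 30:50] = 180

# Segmentation: a simple global threshold
binary = image > 128

# Morphological operation: clean up the mask with a binary opening
cleaned = ndimage.binary_opening(binary)

# Measurement: label the connected objects and measure their areas
labels, nobjects = ndimage.label(cleaned)
areas = ndimage.sum(cleaned, labels, index=range(1, nobjects + 1))
print(nobjects, areas)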

We wanted to validate our understanding of this need with data collected by surveying people interested in image processing, as well as people who wish they had had such a course during their senior year of undergraduate study or in graduate school. We created a survey to obtain your feedback. It will take only a minute of your time. We request that you fill in as much information as you can. Please forward the URL or this blog post to your friends as well.


Sunday, February 12, 2012

Python modules for scientific image processing



Recently my friend Nick Labello, working at the University of Chicago, performed a large-scale image processing job on a large dataset. It took one week to process data acquired over 12 hours. The program was written in Matlab and was run on a desktop. If the processing time needs to be reduced, the best course of action is to parallelize the program and run it on multiple cores or nodes. Matlab parallelization can be expensive, as it is closed-source commercial software. Python, on the other hand, is free and open-source and hence can be parallelized across a large number of cores. Python also has parallelization modules like mpi4py that ease the task (a minimal sketch is shown after the list below). With a clear choice of programming language, Nick evaluated the various Python modules, and the chosen module was used to rewrite the Matlab program. Specifically, he reviewed the following:

  1. Numpy
  2. Scipy
  3. Pymorph
  4. Mahotas
  5. Scikits-image
  6. Python Imaging Library (PIL)
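For a sense of what the mpi4py route looks like, here is a minimal sketch of a data-parallel layout; the file names and the process_image function are placeholders, not the actual analysis code:

# Distribute a list of image files across MPI ranks.
from mpi4py import MPI

def process_image(filename):
    # placeholder for the per-image work (filtering, segmentation, measurements)
    return filename

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

all_files = ['frame_%04d.tif' % i for i in range(1000)]

# Each rank takes every size-th file, starting at its own rank
my_files = all_files[rank::size]
my_results = [process_image(f) for f in my_files]

# Collect the per-rank results on rank 0
results = comm.gather(my_results, root=0)
if rank == 0:
    print('processed %d images' % sum(len(r) for r in results))

Such a script would be launched with, for example, mpiexec -n 64 python script.py, with one copy running per core.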
Nick's views on the various modules, along with some of my own, are given below.

Numpy

It doesn't actually give you any image processing capabilities by itself, but all of the image processing libraries rely on Numpy arrays to store the image data. As a happy result, it is trivial to bounce between all of the different image processing libraries in the same code, because they all read and write the same datatype.
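For example, an array thresholded with Mahotas can be handed straight to scipy.ndimage, because both work on plain Numpy arrays (a small made-up illustration, not from Nick's code):

import numpy as np
import mahotas
from scipy import ndimage

# One Numpy array is the common currency between the libraries
image = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Threshold with Mahotas, then label the result with scipy.ndimage --
# no conversion step is needed in between
threshold = mahotas.otsu(image)
binary = image > threshold
labels, nobjects = ndimage.label(binary)
print(nobjects)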

Scipy (ndimage)

It provides stable, solid, easy-to-use basic functionality (erosion, dilation, etc.). It is missing a lot of the more advanced functions you might find in Matlab, such as functions to find endpoints, perimeter pixels, etc.; these must be pieced together from elementary operations. The documentation for ndimage is NOT very good for new users. I had a hard time with it at first.
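For instance, there is no direct equivalent of Matlab's bwperim, but perimeter pixels can be pieced together from an erosion and a logical operation (a minimal sketch using scipy.ndimage):

import numpy as np
from scipy import ndimage

def perimeter_pixels(binary_image):
    # Pixels that are on but have at least one off neighbour
    eroded = ndimage.binary_erosion(binary_image)
    return binary_image & ~eroded

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
print(perimeter_pixels(mask).astype(int))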

PyMorph

It has lots of image processing functions that do not rely on anything but Numpy; it does not depend on Scipy and is a pure-Python library. A second interesting thing about PyMorph is that it has quite a lot of functionality compared to the other libraries. Unfortunately, since it is written in pure Python, it is hundreds of times slower than Scipy and the other libraries. This will become an issue for advanced image processing users.


Mahotas  

It provides only the most basic functions, but it is blazing fast; in my experience, twice as fast as the equivalent functions in Scipy.
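A quick way to check that claim on your own data is to time the same operation in both libraries; this is a rough sketch, and the actual speedup will depend on the image and the build:

import timeit
import numpy as np
from scipy import ndimage
import mahotas

image = np.random.rand(2048, 2048) > 0.5

t_scipy = timeit.timeit(lambda: ndimage.binary_erosion(image), number=10)
t_mahotas = timeit.timeit(lambda: mahotas.erode(image), number=10)
print('scipy.ndimage: %.3f s   mahotas: %.3f s' % (t_scipy, t_mahotas))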

Scikits-Image  

It picks up where Scipy leaves off and offers quite a few functions not found in Scipy.
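Skeletonization is one example of a function scipy.ndimage does not offer but Scikits-image does (a minimal sketch; the package is imported as skimage):

import numpy as np
from skimage.morphology import skeletonize

# A thick bar; its skeleton is a one-pixel-wide line
bar = np.zeros((20, 60), dtype=bool)
bar[5:15, 5:55] = True

skeleton = skeletonize(bar)
print('%d pixels reduced to %d' % (bar.sum(), skeleton.sum()))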

Python Imaging Library (PIL)  

It is free to use but is not open-source. It has very few image processing algorithms and is not necessarily useful for scientific imaging.
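In practice, PIL is still handy for reading and converting images, with the actual processing handed off to Numpy and friends (a sketch; 'photo.jpg' is a placeholder file name):

import numpy as np
from PIL import Image

# PIL handles the file I/O and colour conversion...
img = Image.open('photo.jpg').convert('L')   # load and convert to grayscale

# ...and Numpy takes over for the actual processing
data = np.asarray(img, dtype=np.uint8)
print(data.shape, data.mean())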


Enthought  

It is __not__ an image processing library; it is a Python distribution ready for scientific programming. It has over 100 scientific packages pre-installed and is compiled against fast MKL/BLAS. My tests ran 2-3% faster in Enthought Python than in regular Python; your mileage may vary. The big advantage, though, is that it has everything I need except Mahotas and PyMorph. Also, Mahotas installs easily on Enthought, whereas it was difficult on the original Python installation due to various dependencies. Enthought is free for students and employees of universities and colleges; others need to pay for the service.

Ravi's view: Personally, I do not have much of a problem installing Python modules into the regular Python interpreter. The only time it becomes difficult is when installing scientific packages like Scipy that have numerous compiled dependencies. Ready-made binary packages do eliminate the installation steps but are not necessarily well tuned for optimal performance. Enthought is a great alternative that is just as easy to install and yet optimized for performance.

Friday, October 22, 2010

Calculating the number of repeating objects in an image

Recently I was asked to help solve a problem: determining the number of seats in an airline seating layout. An example of such a layout is shown below (click to view the bigger image).



I decided to use the method I knew best: cross correlation. The idea is to cross-correlate a template of the seat (i.e., an image of a single seat) with every pixel in the airline layout image. The template (with the coordinate origin at its center) is moved to a particular pixel in the image, the cross correlation coefficient is calculated, and the value is used as the intensity of a new image. This is repeated by moving the template to every pixel in the airline layout image. The pixels at which the template matches the airline layout image perfectly will have a correlation coefficient close to 1.


Above: a hard-to-see template of the seat.

The result of the cross correlation is shown below (click to view the bigger image).

The bright spots in this image are the points with the highest correlation. It then becomes a simple process of segmenting the high-intensity pixels.

To perform these operations, my natural choice was Matlab and its Image Processing Toolbox.

% Read the airline layout image and convert it to grayscale
im = imread('airline_seating.jpg');
im = rgb2gray(im);

% Read the seat template and convert it to grayscale
im_template = imread('template.jpg');
im_template = rgb2gray(im_template);

% Normalized cross correlation of the template with the layout
C = normxcorr2(im_template, im);

% Keep only the pixels with a correlation coefficient above 0.7
C1 = C > 0.7;

% Count the connected regions; each region corresponds to one seat
stat = regionprops(C1);
noofseats = size(stat, 1);
disp(['Number of seats = ', num2str(noofseats)]);


The first few lines read the airline layout image and the template image and convert them to grayscale. To obtain the correlation image, I did not have to write my own correlation function; Matlab already provides one, normxcorr2, which takes the template and the airline layout matrices. Once the correlation image is obtained, we segment it on the logic that any pixel with a value greater than 0.7 corresponds to the center of a seat. Since the center of each seat did not segment as a single pixel, I could not simply count the number of pixels as the number of seats. Instead, I calculated the number of regions using regionprops and stored the result as a structure array; the number of elements in the structure array is the number of seats.
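For readers who prefer Python, roughly the same workflow can be written with the modules reviewed in the earlier post. This is a sketch along the same lines, not the code I actually ran, and it assumes Scikits-image with its match_template function as the stand-in for normxcorr2:

import numpy as np
from scipy import ndimage
from skimage import io
from skimage.color import rgb2gray
from skimage.feature import match_template

# Read the airline layout and the seat template, and convert both to grayscale
layout = rgb2gray(io.imread('airline_seating.jpg'))
template = rgb2gray(io.imread('template.jpg'))

# Normalized cross correlation, analogous to Matlab's normxcorr2;
# pad_input=True keeps the output the same size as the layout image
corr = match_template(layout, template, pad_input=True)

# Segment the high-correlation pixels and count the connected regions
labels, noofseats = ndimage.label(corr > 0.7)
print('Number of seats = %d' % noofseats)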