
The book “Augmented Vision Perception in Infrared: Algorithms and Applied Systems”, edited by Riad Ibrahim Hammoud with contributions from a number of excellent researchers in the fields of Infrared Thermography, Intensified Imagery, Thermal Imagery, Infrared Imagery and Hyperspectral Imagery, introduces itself as “a comprehensive review of recent deployment of infrared sensors in modern applications of computer vision, along with in-depth description of the world’s best machine vision algorithms and intelligent analytics”.

Indeed, the book covers a wide range of machine perception applications in the intensified, near-infrared, thermal-infrared, laser, polarimetric, and hyperspectral bands. Nevertheless, the book is not devoted only to infrared technologies, as its title suggests. It would be better described by the title that the same editor and Professor James W. Davis gave to their co-edited Special Issue of Computer Vision and Image Understanding, “Advances in Vision Algorithms and Systems beyond the Visible Spectrum”, which appeared in 2007. In fact, many of the chapters deal with machine vision beyond the visible. Perception is also called “augmented” in a double sense: some chapters emphasize the merging of visual and non-visual imaging, while others describe approaches for fusing multiple visible and thermal sensors. For these reasons, the book offers an outstanding opportunity to look into current trends in computer vision beyond the visible spectrum.

There are some very instructive chapters that could be taught in an advanced graduate course. While some chapters are accessible to readers of all levels, many others are too demanding for a beginner, and some present very specific and complex methodologies in detail. A significant amount of prior knowledge is therefore necessary for a student to understand most of the chapters.

The book contains eighteen chapters organized into seven parts. A summary of each chapter follows.

Chapter 1: Infrared Thermography for Land Mine Detection.

This chapter proposes a landmine detection approach based on a thermal-infrared camera. It takes into account the physical characteristics of the soil and the geometric and thermal properties of the mines (depth, height), and establishes a method to classify detected objects based on the thermal and geometric properties of the detected anomalies. The approach consists of three stages: data acquisition, preprocessing, and anomaly detection followed by classification of the detected anomalies. The main emphasis is on detecting and classifying buried objects in terms of their geometric and thermal properties.
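
The chapter's exact pipeline is more elaborate than can be summarized here, but as a rough illustration (not taken from the book) the following Python sketch shows a generic thermal anomaly-detection step of the kind such systems build on: pixels whose temperature deviates strongly from locally estimated background statistics are flagged.

```python
# Minimal sketch of a generic thermal anomaly-detection step (not the authors'
# exact pipeline): flag pixels whose temperature deviates strongly from the
# local background statistics estimated with a sliding window.
import numpy as np
from scipy.ndimage import uniform_filter

def thermal_anomaly_map(thermal, window=31, k=3.0):
    """Boolean map of pixels more than k local std-devs from the local mean."""
    t = thermal.astype(np.float64)
    local_mean = uniform_filter(t, size=window)
    local_sq_mean = uniform_filter(t * t, size=window)
    local_var = np.maximum(local_sq_mean - local_mean**2, 1e-12)
    z = np.abs(t - local_mean) / np.sqrt(local_var)
    return z > k

# Example: a hypothetical warm buried-object signature on a cooler background.
frame = 20.0 + 0.5 * np.random.randn(128, 128)
frame[60:68, 60:68] += 4.0
mask = thermal_anomaly_map(frame)
print(mask.sum(), "anomalous pixels")
```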

 

Chapter 2: Passive Polarimetric Information Processing for Target Classification

This chapter gives a brief overview of electromagnetic waves, polarization, and refraction, as well as the sensor placement geometry required to measure the angle of incidence successfully. Two methods for exploiting the information contained in passive polarimetric imagery are shown. The first extracts 3D information and indices of refraction from a scene by means of a pair of passive polarimetric imaging sensors; the second extracts attributes that remain invariant under different polarization transformations.
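
For readers unfamiliar with polarimetric imagery, the sketch below shows the standard linear-polarization quantities (Stokes parameters, degree and angle of linear polarization) that passive polarimetric processing typically starts from; it is a generic illustration, not the chapter's specific method.

```python
# Minimal sketch of standard linear-polarization quantities computed from
# intensity images taken through a polarizer at 0/45/90/135 degrees.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Stokes parameters S0, S1, S2 from intensities at four polarizer angles."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aop(s0, s1, s2):
    """Degree of linear polarization and angle of polarization (radians)."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)
    return dolp, aop
```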

 

Chapter 3: Vehicle Classification in Infrared Video Using the Sequential Probability Ratio Test

This chapter presents a single-look vehicle classification system for infrared video. The approach assumes an existing detection algorithm that provides regions of interest (chips). The classifier takes a sequence of chips and extracts a signature from each chip based on a histogram of gradient orientations selected from a set of regions that cover the detected object. Signatures are matched against learned templates using multinomial pattern matching, and the resulting scores are fused by means of the sequential probability ratio test.
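
As a rough illustration (not the chapter's multinomial pattern matcher), the sketch below shows how a sequential probability ratio test accumulates per-chip log-likelihood ratios until one of Wald's decision thresholds is crossed.

```python
# Minimal SPRT sketch: per-chip log-likelihood ratios are assumed to be given
# by some matching stage; the test accumulates them until a threshold is hit.
import math

def sprt(log_likelihood_ratios, alpha=0.01, beta=0.01):
    """Accumulate evidence; alpha/beta are the desired error probabilities."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (target class)
    lower = math.log(beta / (1 - alpha))   # accept H0 (other class)
    s = 0.0
    for llr in log_likelihood_ratios:
        s += llr
        if s >= upper:
            return "H1", s
        if s <= lower:
            return "H0", s
    return "undecided", s

print(sprt([0.8, 1.1, 0.9, 1.3]))   # hypothetical per-chip evidence for H1
```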

 

Chapter 4: Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging

This chapter introduces an approach for noncontact, nonintrusive measurement of the arterial pulse. A thermal IR camera captures the heat pattern of superficial arteries, and a blood vessel model is proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis is applied to extract the pulse waveform. This requires a multiscale image decomposition to identify the subbands in which the pulse propagation is most pronounced and the noisy heat patterns are minimal.
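
As an illustration of the multiresolution idea (the chapter's wavelet choice and subband-selection criterion are not reproduced here), the sketch below decomposes a simulated skin-temperature signal into wavelet subbands so that the one carrying the cardiac pulse can be identified by its energy.

```python
# Minimal sketch (assuming PyWavelets) of a multiresolution decomposition of a
# simulated skin-temperature time series; frame rate, wavelet, and levels are
# illustrative assumptions.
import numpy as np
import pywt

fs = 30.0                                   # hypothetical camera frame rate (Hz)
t = np.arange(0, 20, 1.0 / fs)
signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.random.randn(t.size)  # ~72 bpm pulse + noise

coeffs = pywt.wavedec(signal, 'db4', level=5)

# Reconstruct each detail subband separately and report its energy.
for idx in range(1, len(coeffs)):
    kept = [np.zeros_like(c) for c in coeffs]
    kept[idx] = coeffs[idx]
    subband = pywt.waverec(kept, 'db4')[: signal.size]
    print(f"detail level {len(coeffs) - idx}: energy {np.sum(subband**2):.4f}")
```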

 

Chapter 5: Coalitional Tracker for Deception Detection in Thermal Imagery

This chapter proposes a tracking method based on a network of independent particle filter trackers whose interactions are modeled using coalitional game theory. The tracker is able to monitor the motion of the target’s surface even in the presence of deformation or partial occlusion, and can work on both infrared and visual video without explicit modeling. The trackers are viewed as players in a cooperative game whose objective is to increase their influence by forming coalitions with others. The winning coalition is used to compute the state vector of the target and to propagate its influence over the entire tracking network.

 

Chapter 6: Thermal Infrared Imaging in Early Breast Cancer Detection

This chapter surveys recent achievements in the pathophysiology-based understanding of IR images. The problems with current techniques, such as mammography, magnetic resonance imaging, and computed tomography, are pointed out. IR techniques have been applied to breast cancer detection by searching for asymmetric hot spots and vascularity in IR images of the breasts; this phenomenon is due to the heat emanating from the high metabolic rate of cancer cells compared to normal cells.

 

Chapter 7: Hyperspectral Image Analysis for Skin Tumor Detection

This chapter presents hyperspectral fluorescence imaging for noninvasive detection of skin tumors. Hyperspectral imaging sensors collect two-dimensional image data in a number of narrow, adjacent spectral bands. Hyperspectral fluorescence signals are measured using a laser excitation source, and an acousto-optic filter is used to capture images of individual spectral bands. Support vector machines with polynomial kernel functions provide the decision boundaries that separate malignant tumor from normal tissue.
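
As a rough illustration of the final classification stage only (features and hyperparameters here are invented, not the chapter's), the sketch below trains a polynomial-kernel SVM on synthetic per-pixel spectra.

```python
# Minimal sketch (assuming scikit-learn) of a polynomial-kernel SVM separating
# simulated tumor spectra from normal-tissue spectra.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 40
normal = rng.normal(0.0, 1.0, size=(200, n_bands))
tumor = rng.normal(0.6, 1.0, size=(200, n_bands))   # hypothetical spectral shift
X = np.vstack([normal, tumor])
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="poly", degree=3, C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```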

 

Chapter 8: Spectral Screened Orthogonal Subspace Projection for Target Detection in Hyperspectral Imagery

Spectral screening is defined as reducing the hyperspectral data to a representative subset of spectra. The spectrum selection step computes the distance between each candidate spectrum and those already present in the subset. Two algorithms are presented: Max SS reduces the overlap among the similarity sets of the spectra in the subset, and Min SS tries to identify spectra that are as close as possible to the already selected ones while still remaining dissimilar. These algorithms are adapted for target detection by choosing the target signature as the initial spectrum in the subset. Several classification procedures (the OSP approach, KOSP, SA, and SID) are then applied, resulting in classification images for the targets.
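
For reference, the classical orthogonal subspace projection detector that the chapter builds on can be sketched as follows; the spectral-screening step that selects the background spectra is assumed to have been done already, and the data here are synthetic.

```python
# Minimal sketch of the classical OSP detector: project the pixel spectrum onto
# the orthogonal complement of the background subspace, then match the target.
import numpy as np

def osp_detector(x, target, background):
    """OSP score for pixel spectrum x, given a target signature and a matrix
    whose columns are background spectra."""
    U = background
    p_perp = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)  # orthogonal-complement projector
    return float(target @ p_perp @ x)

rng = np.random.default_rng(1)
bands = 50
background = rng.normal(size=(bands, 5))
target = rng.normal(size=bands)
pixel = 0.3 * target + background @ rng.normal(size=5)   # synthetic mixed pixel
print(osp_detector(pixel, target, background))
```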

 

Chapter 9: Face Recognition in Low-Light Environments Using Fusion of Thermal Infrared and Intensified Imagery

This chapter presents a slightly modified version of the CSU Face Identification Evaluation System. The aim is to test the effect of illumination level on face recognition when using intensified near-infrared (I2) imagery in conjunction with thermal infrared imagery. The results show that performance with I2 imagery was much better than with its visible counterpart at the lowest light levels, while at the brightest levels, after some preprocessing, the difference in performance between visible and I2 imagery became quite small.

 

Chapter 10: Facial Expression Recognition in Nonvisual Imagery

This chapter presents two different approaches to an artificial vision system for facial expression recognition (FER) using imagery beyond the visible spectrum. The first works on thermal imagery and combines automatic feature localization based on interest-point clustering with a PCA-based classification approach; its stages are face localization, facial feature estimation, eigenimage analysis, and SVM classification. The second uses an evolutionary learning algorithm that searches for an optimal set of ROIs and a set of texture features (such as entropy, contrast, homogeneity, and correlation).

 

Chapter 11: Runway Positioning and Moving Object Detection Prior to Landing

This chapter describes an enhanced vision system (EVS) approach for accurately positioning a runway and detecting moving objects on it from an onboard infrared sensor, based on a two-step process. First, the sensor image is analyzed to identify and segment the runway coordinates. These estimates are used to locate the runway structure and to detect moving obstacles: fitting models match a runway template (obtained from synthetic images) to the detected edges, and the predicted coordinates and detected edges are correlated to determine the location of the actual runway and to dynamically stabilize the image sequence for the obstacle detection step. Second, the stabilized sequence is normalized to compensate for global intensity variations, and a background model capturing the appearance of the runway is built; moving objects are identified by comparing the image sequence with this background model.

 

Chapter 12: Moving Object Localization in Thermal Imagery by Forward-Backward Motion History Images

This chapter proposes a motion history image (MHI)-based method able to determine the location and shape of moving objects. The approach consists of three main modules: preprocessing, where the previous frame is stabilized into the coordinate system of the current frame; MHI generation, where a motion image is computed by frame differencing, the forward MHI is computed as a function of the previous forward MHI and the computed difference, and the backward MHI is computed analogously; and object localization, where the forward MHI is combined with the backward MHI to determine the moving objects in the current image.
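
To make the MHI recursion concrete, the sketch below runs the standard motion history update forward and backward over a frame buffer; stabilization and the chapter's exact combination rule are omitted, and the combination shown is only one simple possibility.

```python
# Minimal sketch of the standard MHI recursion applied forward and backward;
# thresholds and the combination rule are illustrative assumptions.
import numpy as np

def mhi_sequence(frames, tau=10, thresh=15):
    """Run the MHI recursion over `frames` in the order given."""
    mhi = np.zeros(frames[0].shape, dtype=np.float64)
    prev = frames[0].astype(np.float64)
    for f in frames[1:]:
        cur = f.astype(np.float64)
        moving = np.abs(cur - prev) > thresh      # frame-difference motion image
        mhi = np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))
        prev = cur
    return mhi

def forward_backward_mhi(frames, tau=10, thresh=15):
    forward = mhi_sequence(frames, tau, thresh)
    backward = mhi_sequence(frames[::-1], tau, thresh)
    return np.minimum(forward, backward)  # keep motion supported in both directions
```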

 

Chapter 13: Feature-Level Fusion for Object Segmentation Using Mutual Information

This chapter treats fusion as a feature selection problem solved with a selection criterion based on mutual information. The method starts by detecting object features in one sensor (IR) and uses information from the other sensor (visible) to improve the quality of the original detection. For this, a feature representation based on contour fragments is defined to capture object shape. Fusion is approached as a variation of the mutual-information feature selection problem: mutual information is computed between features extracted from both sensors, and a heuristic selection scheme identifies the set of contour features with the highest mutual information. Fusion is then performed using the features with the highest relevance, i.e., those providing both redundant and complementary information.

 

Chapter 14: Registering Multimodal Imagery with Occluding Objects Using Mutual Information: Application to Stereo Tracking of Humans

This chapter introduces an approach to registering multimodal imagery that is able to handle occluding objects at different disparities in the scene. A disparity voting (DV) technique that accumulates disparity values from sliding correspondence windows gives reliable and robust registration results for initial segmentations. Mutual information, computed from the entropies of the two images and their joint entropy, is used to perform image fusion.
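
The entropy formulation mentioned above, MI(A, B) = H(A) + H(B) - H(A, B), can be sketched as follows for two aligned image windows; the disparity-voting machinery itself is not reproduced, and the histogram estimator is a generic choice.

```python
# Minimal sketch of mutual information from marginal and joint entropies,
# estimated with gray-level histograms of two aligned image windows.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1)
    pb = pab.sum(axis=0)
    return entropy(pa) + entropy(pb) - entropy(pab.ravel())
```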

 

Chapter 15: Thermal-Visible Video Fusion for Moving Target Tracking and Pedestrian Motion Analysis and Classification

This chapter presents a system for pedestrian surveillance. The approach integrates a tracker with spatial-temporal motion information by fusing color and IR videos. The system first builds a background model from a multimodal distribution of colors and temperatures.  A particle filter scheme is constructed to maximize the probability of the scene model. Observation likelihoods of moving objects account for their three-dimensional locations with respect to the camera and occlusions by other objects or obstacles. A classifier based on periodic gait analysis is used to detect a symmetrical pattern in human gait (to differentiate humans from other moving objects).

 

Chapter 16: Multi Stereo-Based Pedestrian Detection by Daylight and Far-Infrared Cameras

This chapter presents a tetravision system for pedestrian detection using a far-infrared stereo pair and a visible-camera stereo pair. Different approaches to pedestrian detection in the two image domains are shown: warm-area detection, vertical-edge detection, and an approach based on the simultaneous computation of disparity space images in the two domains. These detection methods output a list of bounding boxes that enclose potential pedestrians. Then, under the assumption that the human shape is mostly symmetrical, a symmetry-based process is used to refine and further filter the ROIs. Finally, a number of validators evaluate the presence of a human inside each bounding box by searching for characteristics such as head position.
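
As a toy illustration of the symmetry assumption only (the chapter's actual symmetry operator is not reproduced), the sketch below scores how well an ROI matches its own horizontal mirror.

```python
# Minimal sketch of a vertical-symmetry score for refining pedestrian ROIs.
import numpy as np

def vertical_symmetry(roi):
    """Return a score in [0, 1]; 1 means the ROI equals its horizontal mirror."""
    roi = roi.astype(np.float64)
    mirrored = roi[:, ::-1]
    diff = np.abs(roi - mirrored).mean()
    rng = roi.max() - roi.min()
    return 1.0 - diff / rng if rng > 0 else 1.0
```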

 

Chapter 17: Real-Time Detection and Tracking of Multiple People in Laser Scan Frames

This chapter presents a tracker that detects and tracks multiple people in a crowded, open area in real time. Raw data measuring the two legs of each person are obtained with multiple laser scanners, and a stable feature is extracted using the accumulated distribution of successive laser frames. A probabilistic tracking model with a sequential inference process based on Bayes’ rule is described (independent tracking with Kalman filters and joint tracking with RBMC-DAF). The chapter also presents an approach for fusing laser and visual information to deal with broken trajectories, in which eleven control points are set to associate laser readings with image information.
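
For the independent-tracking case, a constant-velocity Kalman filter over positions in the scan plane can be sketched as follows; the scan period and noise covariances are illustrative assumptions, and the joint RBMC-DAF tracker and laser feature extraction are not reproduced.

```python
# Minimal sketch of a constant-velocity Kalman filter for one tracked person.
import numpy as np

dt = 0.1                                   # hypothetical scan period (s)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # the laser measures position only
Q = 0.01 * np.eye(4)                       # assumed process noise
R = 0.05 * np.eye(2)                       # assumed measurement noise

def kalman_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the laser measurement z = [x, y].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```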

 

Chapter 18: On Boosted and Adaptive Particle Filters for Affine-Invariant Target Tracking in Infrared Imagery

This chapter generalizes the usual white-noise-acceleration target model by introducing an affine transformation to model changes in target aspect. This transformation is parameterized by scalar variables and obeys a first-order Markov chain. Two mechanisms improve the particle filter. The first is a boosting step, in which a local detector, defined from the most recent tracker output, contributes additional high-quality boosting particles. The second is an adaptation step in which the system model adjusts itself to enhance tracking performance.
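
A bare-bones illustration of the first-order Markov model over affine parameters is given below; the particle weighting, the boosted proposal, and the adaptation step are not reproduced, and the parameterization and noise levels are assumptions.

```python
# Minimal sketch of propagating per-particle affine parameters as a
# first-order Markov (random-walk) chain inside a particle filter.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 100
# Each particle: [tx, ty, scale_x, scale_y, rotation, shear]
particles = np.tile(np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0]), (n_particles, 1))
noise_std = np.array([2.0, 2.0, 0.02, 0.02, 0.01, 0.01])  # hypothetical values

def propagate(particles):
    """One Markov step: each affine parameter is perturbed independently."""
    return particles + rng.normal(0.0, noise_std, size=particles.shape)

particles = propagate(particles)
```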

 


BOOKS

 

Augmented Vision Perception in Infrared:  Algorithms and Applied Systems

 

Edited by Riad Ibrahim Hammoud

Springer, Advances in Pattern Recognition Series, 2009

 

Reviewed by

Antonio Fernández Caballero (Spain)
