Thursday, September 30, 2010
background subtraction (problems)
Several challenges must be addressed by a good background removal algorithm to correctly detect moving objects. It should handle the relocation of background objects, non-stationary background objects (e.g. waving trees), and image changes due to camera motion, which is common in outdoor applications (e.g. wind load). A background removal system should adapt to illumination changes, whether gradual (time of day) or sudden (a light switch), and whether global or local, such as shadows and interreflections. If a foreground object has characteristics similar to the background, it becomes difficult to distinguish between them (camouflage). A foreground object that becomes motionless cannot be distinguished from a background object that moves and then becomes motionless (sleeping person). A common problem faced in the background initialization phase is the presence of foreground objects during the training period, which occlude the actual background; often it is impossible to clear an area to get an unobstructed view of the background, which places serious limitations on systems meant for high-traffic areas. Some of these problems can be handled by very computationally expensive methods, but many applications require a short processing time.
For a few of these problems, see the following posts: POST1 - POST2 - POST3
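To make these trade-offs concrete, here is a minimal sketch (mine, not from the posts above) of a simple running-average background model in MATLAB; frames, alpha, and T are illustrative assumptions:

% frames is assumed to be a cell array of grayscale images of class double
alpha = 0.05;    % adaptation rate: higher tracks illumination changes faster
T = 0.1;         % threshold on |frame - background| for the foreground mask
bg = frames{1};  % initialize the background from the first frame
for k = 2:numel(frames)
    fg = abs(frames{k} - bg) > T;         % binary foreground mask
    bg = (1-alpha)*bg + alpha*frames{k};  % adapt to gradual changes (time of day)
end
% note: an object that stops moving is slowly absorbed into the background,
% which is exactly the "sleeping person" problem described above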
Image Filtering (Gaussian)
An example of a 1D filter:
1 2 3 4 3 2 1
An example of a 2D filter:
1 1 1 1 1
1 2 2 2 1
1 2 3 2 1
1 2 2 2 1
1 1 1 1 1
In Matlab:
Here n represents the number of samples along one dimension: for a 3x3 2D filter n=3, and for the 1D filter in my example above n=7.
% 2D Gaussian Filter (sigma is the standard deviation)
x = -1/2:1/(n-1):1/2;    % n sample positions spanning [-1/2, 1/2]
[Y,X] = meshgrid(x,x);
f = exp( -(X.^2+Y.^2)/(2*sigma^2) );
f = f / sum(f(:));       % normalize so the weights sum to 1

% 1D Gaussian Filter
x = -1/2:1/(n-1):1/2;
f = exp( -(x.^2)/(2*sigma^2) );
f = f / sum(f);          % normalize so the weights sum to 1
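To actually smooth an image with the 2D filter, plain conv2 is enough. A minimal sketch (cameraman.tif is a sample image that ships with the Image Processing Toolbox; the values of n and sigma are illustrative):

n = 9; sigma = 0.5;                       % illustrative filter parameters
x = -1/2:1/(n-1):1/2;
[Y,X] = meshgrid(x,x);
f = exp( -(X.^2+Y.^2)/(2*sigma^2) );
f = f / sum(f(:));
img = im2double(imread('cameraman.tif'));
smoothed = conv2(img, f, 'same');         % blurred image, same size as input

Since the Gaussian is separable, convolving with the 1D kernel down the columns and then along the rows, conv2(f1, f1, img, 'same') where f1 is the 1D filter, gives the same result at lower cost.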
Monday, September 27, 2010
Object detection problem
Object detection is a necessary step for any tracking algorithm, because the algorithm needs to identify the moving object. The objective of object detection is to find the position of the object and the region it occupies in the first frame.
This task is extremely difficult due to the complex shapes of moving targets. To simplify the problem, a variety of constraints are imposed to help find the object in the image plane, such as the number of objects to be tracked, the types of objects, etc. Currently, three techniques are popular in object tracking: image segmentation, background subtraction, and supervised learning algorithms.
PROBLEMS
Background subtraction is used with fixed cameras. For a moving camera we can use SIFT to stabilize the frames and then perform background subtraction; however, we do not get 100% stability with this method.
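As a rough illustration of that stabilization idea (a sketch under assumptions, not the paper's method): match SIFT keypoints between a reference frame and the current frame, fit an affine transform, and warp the current frame onto the reference. This assumes the sift() and matchFeatures() helpers used elsewhere on this blog; cp2tform and imtransform are from the Image Processing Toolbox.

% descriptors and keypoint locations for the reference and current frames
[jnk desRef locsRef] = sift('frame1.pgm');
[jnk desCur locsCur] = sift('frame2.pgm');
m = matchFeatures(desRef, desCur);  % m(i) > 0 indexes the match of keypoint i
idx = find(m > 0);
% Lowe's demo code stores keypoints as [row column scale orientation]
fixedPts  = locsRef(idx, [2 1]);    % (x,y) points in the reference frame
movingPts = locsCur(m(idx), [2 1]); % (x,y) points in the current frame
tform = cp2tform(movingPts, fixedPts, 'affine');       % needs >= 3 matches
stabilized = imtransform(imread('frame2.pgm'), tform); % warp onto the reference

With real data the matches contain outliers, so some robust fitting (e.g. RANSAC) would be needed before cp2tform; that is one reason the stabilization is never 100%.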
The image segmentation methods are able to partition the image into perceptually similar regions, but the criteria for a good partition and the efficiency are two problems that the image segmentation algorithms need to address.
The main drawback of supervised learning methods is that they usually require a large collection of samples from each object class, and the samples must be manually labeled.
SOURCE: Tracking of Multiple Objects under Partial Occlusion - Bing Han, Christopher Paulson, Taoran Lu, Dapeng Wu, Jian Li - 2010
Saturday, September 25, 2010
SIFT tracking set1
Video: Result of identification for tracking
The objective is to find an object in the foreground, place a rectangle around it, and find SIFT keypoints inside that rectangle; in the following frames, only those keypoints from the rectangle are matched against the image.
Images and details based on my understanding of:
Lowe, David G. “Distinctive Image Features from Scale-Invariant Keypoints”. International Journal of Computer Vision, 60, 2 (2004)
The STEPS:
- go through the whole video and find SIFT keypoints in each frame
- from an initial frame, select a moving object or foreground and place a rectangle around it
- get all the SIFT keypoints in that rectangle
- match the points from that rectangle to all the other points found in step 1
- if a match is found, place a rectangle in the succeeding frames
problem: in some frames the keypoints don't match, and that is why I lose track of the vehicle.
Either there aren't enough matches, or the keypoints are different from those in the first frame,
since, again, I only consider the keypoints found in the first frame and compare them to each new frame.
TO DO: compare points from the first frame to the next; if the object is found, get the new SIFT points of that object from the new frame and then compare those to the succeeding frame.
objective: track using SIFT points without needing to preselect the cars
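A minimal sketch of that to-do item, refreshing the keypoint model every frame; it reuses the sift() and matchFeatures() helpers from the code below, and firstFrameDes (the descriptors from the initial rectangle) is a hypothetical starting point:

des = firstFrameDes;  % descriptors from the rectangle in the initial frame
for ii = 2:length(fileNames)
    img = imread([imgDir '\' fileNames(ii).name]);
    imwrite(img, 'tmp.pgm', 'pgm');  % (convert to grayscale first if RGB)
    [jnk frameDes frameLocs] = sift('tmp.pgm');
    match = matchFeatures(des, frameDes);
    matched = match(match > 0);
    if ~isempty(matched)
        % object found: replace the model with this frame's keypoints so
        % the next comparison uses up-to-date descriptors
        des = frameDes(matched, :);
    end
end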
code:
img = imread([imgDir '\' fileNames(3).name]);
[r c bands] = size(img);
h=figure;
imshow(img);
hold on;
% ...
title('Click the top left corner of an object to track...');
[x1 y1] = ginput(1);
plot(x1, y1, 'rx');
title(sprintf('Click the bottom right corner (Right click indicates last object)'));
[x2 y2 button] = ginput(1);
objectCount = objectCount+1;
rectangle('Position', [x1 y1 x2-x1 y2-y1], 'LineWidth', 4, 'EdgeColor', colors(mod(objectCount, size(colors,1))+1, :));
% ...
% Find the sift features for the object
imwrite(trackingObjects{objectCount}.img, 'tmp.pgm', 'pgm');
[jnk des locs] = sift('tmp.pgm');
trackingObjects{objectCount}.des = des;
% ...
% Read in the next image
img = imread([imgDir '\' fileNames(ii).name]);
% Show the image
figure(h)
cla;
imshow(img); hold on;
% Look for features in the image for each object we are tracking
% ...
% Match the features for this object
clear match;
match = matchFeatures(trackingObjects{jj}.des, des);
clear newLocs
newLocs = locs(match(match>0), :);
% Only proceed if we found at least one match
if ~isempty(newLocs)
disp('match found');
% ...
Tuesday, September 21, 2010
SIFT by Lowe using ImageJ (PARTIAL)
From within ImageJ, open any image.
From the ImageJ menu select Plugins -> Macros -> Run... and pick this macro.
The results show the first two steps of SIFT; the third step, however, takes a lot of time to process, because we go through every pixel and that is done for all four octaves.
For a short overview of SIFT please see my post HERE
//*************************************************
// SIFT Step 1: Constructing a scale space
//*************************************************
//Lowe suggests pre-blurring the image with a sigma of 0.5 and doubling its dimensions
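// that pre-processing is not done in this macro; a sketch of it, using the
// same commands the macro already uses, would be:
//   run("Gaussian Blur...", "slice radius=0.5");
//   run("Size...", "width="+2*getWidth()+" height="+2*getHeight());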
width=getWidth(); height=getHeight();
//------------------------------------------------------------------------
// convert to 8-bit gray scale and create scale space for the first octave
//------------------------------------------------------------------------
run("Duplicate...", "title=[Copy of original]");
Oid = getImageID;
run("8-bit");
run("Duplicate...", "title=[Octave 1 blur1]");
P1id1 = getImageID;
run("Gaussian Blur...", "slice radius=2"); //blur 1
run("Duplicate...", "title=[Octave 1 blur 2]");
P1id2 = getImageID;
run("Gaussian Blur...", "slice radius=4"); //blur 2
run("Duplicate...", "title=[Octave 1 blur 3]");
P1id3 = getImageID;
run("Gaussian Blur...", "slice radius=8"); //blur 3
run("Duplicate...", "title=[Octave 1 blur 4]");
P1id4 = getImageID;
run("Gaussian Blur...", "slice radius=16"); //blur 4
run("Duplicate...", "title=[Octave 1 blur 5]");
P1id5 = getImageID;
run("Gaussian Blur...", "slice radius=16"); //blur 5
showMessage("DONE", "First OCTAVE");
//----------------------------------------------------------------------------
// Perform image resize on original and create second octave
//----------------------------------------------------------------------------
selectImage(Oid);
run("Size...", "width="+width/2+" height="+ height/2);
run("Duplicate...", "title=[Octave 2 blur1]"); P2id1 = getImageID; run("Gaussian Blur...", "slice radius=2"); //blur 1
run("Duplicate...", "title=[Octave 2 blur 2]"); P2id2 = getImageID; run("Gaussian Blur...", "slice radius=4"); //blur 2
run("Duplicate...", "title=[Octave 2 blur 3]"); P2id3 = getImageID; run("Gaussian Blur...", "slice radius=8"); //blur 3
run("Duplicate...", "title=[Octave 2 blur 4]"); P2id4 = getImageID; run("Gaussian Blur...", "slice radius=16"); //blur 4
run("Duplicate...", "title=[Octave 2 blur 5]"); P2id5 = getImageID; run("Gaussian Blur...", "slice radius=16"); //blur 5
showMessage("DONE", "Second OCTAVE");
//----------------------------------------------------------------------------
// Perform image resize on original and create Third octave
//----------------------------------------------------------------------------
selectImage(Oid);
run("Size...", "width="+width/4+" height="+ height/4);
run("Duplicate...", "title=[Octave 3 blur1]"); P3id1 = getImageID; run("Gaussian Blur...", "slice radius=2"); //blur 1
run("Duplicate...", "title=[Octave 3 blur 2]"); P3id2 = getImageID; run("Gaussian Blur...", "slice radius=4"); //blur 2
run("Duplicate...", "title=[Octave 3 blur 3]"); P3id3 = getImageID; run("Gaussian Blur...", "slice radius=8"); //blur 3
run("Duplicate...", "title=[Octave 3 blur 4]"); P3id4 = getImageID; run("Gaussian Blur...", "slice radius=16"); //blur 4
run("Duplicate...", "title=[Octave 3 blur 5]"); P3id5 = getImageID; run("Gaussian Blur...", "slice radius=16"); //blur 5
showMessage("DONE", "Third OCTAVE");
//------------------------------------------------------------------------------
// Perform image resize on original and create fourth octave
//------------------------------------------------------------------------------
selectImage(Oid);
run("Size...", "width="+width/8+" height="+ height/8);
run("Duplicate...", "title=[Octave 4 blur1]"); P4id1 = getImageID; run("Gaussian Blur...", "slice radius=2"); //blur 1
run("Duplicate...", "title=[Octave 4 blur 2]"); P4id2 = getImageID; run("Gaussian Blur...", "slice radius=4"); //blur 2
run("Duplicate...", "title=[Octave 4 blur 3]"); P4id3 = getImageID; run("Gaussian Blur...", "slice radius=8"); //blur 3
run("Duplicate...", "title=[Octave 4 blur 4]"); P4id4 = getImageID; run("Gaussian Blur...", "slice radius=16"); //blur 4
run("Duplicate...", "title=[Octave 4 blur 5]"); P4id5 = getImageID; run("Gaussian Blur...", "slice radius=16"); //blur 5
showMessage("DONE", "Forth OCTAVE");
//**********************************************************
// SIFT Step 2: Laplacian of Gaussian Approximation
//**********************************************************
// input: 5 blurred images per octave
// output: 4 difference-of-Gaussian (DoG) images per octave
imageCalculator("Subtract create", P1id1, P1id2); Q1I1 = getImageID; rename("Result Subtract Octave 1");
imageCalculator("Subtract create", P1id2, P1id3); Q1I2 = getImageID; rename("Result Subtract Octave 1");
imageCalculator("Subtract create", P1id3, P1id4); Q1I3 = getImageID; rename("Result Subtract Octave 1");
imageCalculator("Subtract create", P1id4, P1id5); Q1I4 = getImageID; rename("Result Subtract Octave 1");
imageCalculator("Subtract create", P2id1, P2id2); Q2I1 = getImageID; rename("Result Subtract Octave 2");
imageCalculator("Subtract create", P2id2, P2id3); Q2I2 = getImageID; rename("Result Subtract Octave 2");
imageCalculator("Subtract create", P2id3, P2id4); Q2I3 = getImageID; rename("Result Subtract Octave 2");
imageCalculator("Subtract create", P2id4, P2id5); Q2I4 = getImageID; rename("Result Subtract Octave 2");
imageCalculator("Subtract create", P3id1, P3id2); Q3I1 = getImageID; rename("Result Subtract Octave 3");
imageCalculator("Subtract create", P3id2, P3id3); Q3I2 = getImageID; rename("Result Subtract Octave 3");
imageCalculator("Subtract create", P3id3, P3id4); Q3I3 = getImageID; rename("Result Subtract Octave 3");
imageCalculator("Subtract create", P3id4, P3id5); Q3I4 = getImageID; rename("Result Subtract Octave 3");
imageCalculator("Subtract create", P4id1, P4id2); Q4I1 = getImageID; rename("Result Subtract Octave 4");
imageCalculator("Subtract create", P4id2, P4id3); Q4I2 = getImageID; rename("Result Subtract Octave 4");
imageCalculator("Subtract create", P4id3, P4id4); Q4I3 = getImageID; rename("Result Subtract Octave 4");
imageCalculator("Subtract create", P4id4, P4id5); Q4I4 = getImageID; rename("Result Subtract Octave 4");
//************************************************************
// SIFT Step 3: Finding key points
//************************************************************
//need to generate 2 extrema images per octave
//maxima and minima almost never lie exactly on a pixel; since the true extremum lies between pixels, we locate it mathematically using a Taylor expansion
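// From Lowe (2004): around a sampled extremum the DoG function D is
// approximated by the Taylor expansion
//   D(x) = D + (dD/dx)^T x + 1/2 x^T (d^2D/dx^2) x,  with x = (x, y, sigma)^T,
// and setting the derivative to zero gives the sub-pixel offset
//   xhat = -(d^2D/dx^2)^(-1) (dD/dx).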
//--------------------------------------------------------------------------------
// first octave
//--------------------------------------------------------------------------------
/*
// Count local maxima by comparing each pixel of a middle DoG image with its
// 8 neighbors in the same image and the 9 corresponding pixels in the scale
// above and the scale below (26 neighbors in total). An analogous test with
// >= instead of <= would count the minima.
function countMaxima(below, middle, above) {
    selectImage(middle);
    w = getWidth(); h = getHeight();
    ids = newArray(below, middle, above);
    n = 0; // number of points
    for (xi = 1; xi < w-1; xi++)
    {
        for (yi = 1; yi < h-1; yi++)
        {
            selectImage(middle);
            currentPixel = getPixel(xi, yi);
            isMax = true;
            for (i = 0; i < 3 && isMax; i++)
            {
                selectImage(ids[i]);
                for (dx = -1; dx <= 1 && isMax; dx++)
                {
                    for (dy = -1; dy <= 1 && isMax; dy++)
                    {
                        // skip the center pixel itself in the middle image
                        if ((ids[i] != middle || dx != 0 || dy != 0) &&
                            currentPixel <= getPixel(xi+dx, yi+dy))
                            isMax = false;
                    }
                }
            }
            if (isMax) n++;
        }
    }
    return n;
}

num = countMaxima(Q1I1, Q1I2, Q1I3); // maxima of the second DoG image
print("first part= " + num);
num += countMaxima(Q1I2, Q1I3, Q1I4); // maxima of the third DoG image
print("second part= " + num);
*/
//+++++++++++++++++++++++++++++++++++++++++++
// process takes too long... alternative MY IDEA
//+++++++++++++++++++++++++++++++++++++++++++
Randomly select points and compare each one to its 26 neighbors; if it is a maximum or a minimum, consider that point to be on a blob, based on:
T. Lindeberg. Scale-Space Theory in Computer Vision.
Kluwer Academic Publishers, Norwell, MA, USA, 1994.
Quote: "local extrema points are located in a blob region"
So instead of going through each pixel of the image at multiple scales: select a specific number of random points and compare each with its 26 neighbors. If enough of the points are maxima or minima, continue to the next step; if not, draw another set of random points, and repeat until the required number of maximum and minimum points is found.
CODE TO FOLLOW
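Until that code is ready, here is a rough sketch of the sampling loop in macro form; isExtremum(x, y) is a hypothetical helper standing in for the 26-neighbor test above, and nWanted, batchSize, and maxTries are illustrative parameters:

// sample random pixel locations instead of scanning every pixel
nWanted = 50;     // how many extrema we need before moving on
batchSize = 500;  // random samples per attempt
maxTries = 20;    // give up after this many batches
selectImage(Q1I2);
w = getWidth(); h = getHeight();
found = 0; t = 0;
while (found < nWanted && t < maxTries) {
    found = 0;
    for (i = 0; i < batchSize; i++) {
        x = 1 + floor(random()*(w-2));  // stay inside the image border
        y = 1 + floor(random()*(h-2));
        if (isExtremum(x, y)) found++;  // hypothetical 26-neighbor test
    }
    t++;
}
if (found >= nWanted) print("enough extrema found: " + found);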
Friday, September 3, 2010
Angled stereo vision
However, we have two cameras that are angled, in contrast to a standard stereo vision rig where both cameras are parallel and mounted on a line.