Frame differencing, also known as temporal difference, uses the video frame at time t-1 as the background model for the frame at time t. This technique is sensitive to noise and
variations in illumination, and does not consider local consistency
properties of the change mask.
This method also fails to segment non-background objects once they stop moving: because the background model is simply the previous frame, an object that stays still for more than one frame period (1/fps) immediately becomes part of the background. In addition, since only a single previous frame is used, frame differencing may not identify the interior pixels of a large, uniformly colored moving object; the intensity of those interior pixels barely changes between frames, so they are interpreted as background and only the object's leading and trailing edges are detected. This is commonly known as the aperture problem.
This method does have two major advantages. One obvious advantage is its modest computational load. Another is that the background model is highly adaptive: since the background is based solely on the previous frame, it adapts to changes in the background faster than any other method (within one frame period, 1/fps, to be precise). As we'll see later on, the frame-difference method also subtracts out extraneous background noise (such as waving trees) much better than the more complex approximate median and Mixture of Gaussians methods.
A challenge with this method is determining the threshold value.
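If a fixed value such as 40 proves brittle, one option is to pick the threshold automatically from the difference image, for example with Otsu's method. A minimal sketch (assuming the Image Processing Toolbox; fr_diff is the absolute difference image computed in the code below):

    level   = graythresh(uint8(fr_diff));  % Otsu's method on the uint8 difference image
    thresh  = 255 * level;                 % graythresh returns a normalized level in [0,1]
    fg_mask = fr_diff > thresh;            % logical foreground mask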
The video is the result of stabilization using SIFT features.
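For reference, stabilization of that kind can be sketched roughly as follows. This is not the exact pipeline used to produce the video; it assumes a recent MATLAB with the Computer Vision Toolbox (detectSIFTFeatures, estimateGeometricTransform2D), and ref_frame / cur_frame are placeholder names for the reference frame and the frame to be aligned:

    % Rough sketch: align cur_frame to ref_frame using matched SIFT features.
    ref_gray = rgb2gray(ref_frame);
    cur_gray = rgb2gray(cur_frame);
    pts_ref = detectSIFTFeatures(ref_gray);
    pts_cur = detectSIFTFeatures(cur_gray);
    [f_ref, v_ref] = extractFeatures(ref_gray, pts_ref);
    [f_cur, v_cur] = extractFeatures(cur_gray, pts_cur);
    pairs = matchFeatures(f_cur, f_ref);
    tform = estimateGeometricTransform2D(v_cur(pairs(:,1)), v_ref(pairs(:,2)), 'projective');
    stabilized = imwarp(cur_frame, tform, 'OutputView', imref2d(size(ref_gray)));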
Code:
clear all; close all; clc;

source = aviread('stabilized');
thresh = 40;

bg    = source(1).cdata;          % read in 1st frame as background frame
bg_bw = rgb2gray(bg);             % convert background to grayscale

% ----------------------- set frame size variables -----------------------
fr_size = size(bg);
width   = fr_size(2);
height  = fr_size(1);
fg      = zeros(height, width);

% --------------------- process frames -----------------------------------
for i = 2:length(source)
    fr      = source(i).cdata;                      % read in frame
    fr_bw   = rgb2gray(fr);                         % convert frame to grayscale
    fr_diff = abs(double(fr_bw) - double(bg_bw));   % difference from previous frame

    for j = 1:width
        for k = 1:height
            if (fr_diff(k,j) > thresh)              % pixel changed enough: foreground
                fg(k,j) = fr_bw(k,j);
            else
                fg(k,j) = 0;
            end
        end
    end

    bg_bw = fr_bw;                                  % current frame becomes the next background

    figure(1), subplot(3,1,1), imshow(fr)
    subplot(3,1,2), imshow(fr_bw)
    subplot(3,1,3), imshow(uint8(fg))
end
Thank you
It's a very good example.
Hi,
I used the above code for background subtraction, but when I read my video using aviread it gives me this error:
??? Error using ==> aviread at 76
Unable to locate decompressor to decompress video stream
So I used mmreader instead of aviread, but it still gives me the error below:
??? Error using ==> mmreader.subsref at 76
There is no 'cdata' property in the 'mmreader' class.
Error in ==> backgd at 5
bg = source(1).cdata; % read in 1st frame as background frame
Can you please let me know what part of the code I should change if I use mmreader? Please help.
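One possible adaptation (a sketch only, assuming the stabilized video is available as a file such as 'stabilized.avi') is to read frames with VideoReader, which superseded both aviread and mmreader:

    vid   = VideoReader('stabilized.avi');
    bg    = readFrame(vid);          % first frame as background (replaces source(1).cdata)
    bg_bw = rgb2gray(bg);
    while hasFrame(vid)
        fr = readFrame(vid);         % replaces source(i).cdata
        % ... the rest of the loop body stays the same ...
    end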
Hi, can you give me the name of the paper that you used to write this code, please?
Thanks
Please, can you give me the video that you used in this code?
Thanks
The video is copyright of the Ateneo Innovation Center.
What is i = 2:length(source)?
clear all;
close all;
%warning('OFF','all')
clc;

thresh = 0.05;  % originally it was 40, but it works for me with a value of about 0.005
filename = 'video2.wmv';
hvfr  = vision.VideoFileReader(filename, 'ImageColorSpace', 'RGB');
image = step(hvfr);

% ----------------------- frame size variables -----------------------
bg    = image;              % read in 1st frame as background frame
bg_bw = rgb2gray(bg);       % convert background to grayscale

% ----------------------- set frame size variables -----------------------
fr_size = size(bg);
width   = fr_size(2);
height  = fr_size(1);
fg      = zeros(height, width);

% --------------------- process frames -----------------------------------
while ~isDone(hvfr)
    fr      = step(hvfr);
    fr_bw   = rgb2gray(fr);     % convert frame to grayscale
    fr_diff = abs(double(fr_bw) - double(bg_bw));

    for j = 1:width
        for k = 1:height
            if (fr_diff(k,j) > thresh)
                fg(k,j) = 255;
            else
                fg(k,j) = 0;
            end
        end
    end

    bg_bw = fr_bw;

    figure(1), subplot(3,1,1), imshow(fr)
    subplot(3,1,2), imshow(fr_bw)
    subplot(3,1,3), imshow(uint8(fg))
end
Please, can you give me the MATLAB code for the block-based background subtraction method?
Hi, how can I get this to give me a video as the result?
Thank you. It's really helpful.
Can you post the background subtraction method MATLAB code?
Do you have code for keyframe extraction from videos?
ReplyDeletehow can i calculate true positive,false negative to evaluate performance of background subtraction using frame difference ?
How can I extract the moving object frames from the video?
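If the goal is to crop the detected moving objects out of each frame, one rough sketch is to clean the foreground mask and take bounding boxes with regionprops (the minimum blob size of 50 pixels is an arbitrary choice):

    mask  = bwareaopen(fg > 0, 50);             % drop tiny noise blobs
    stats = regionprops(mask, 'BoundingBox');   % one box per connected component
    for n = 1:numel(stats)
        obj = imcrop(fr, stats(n).BoundingBox); % crop the object from the color frame
        figure(2), imshow(obj)
    end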
I propose replacing the per-pixel loop with the following vectorized code:
    fr_logical = (fr_diff > thresh);
    fg = fr_bw .* uint8(fr_logical);