Tuesday, December 11, 2012

Ball Tracking / Detection using OpenCV

   Ball detection is pretty easy with OpenCV. So to start with, let's describe the steps we will go through.

                       LINK TO THE CODE




1. Load an image / start a video capture.




2. Convert the image from RGB space to HSV space. HSV (hue, saturation, value) space gives us better results when doing color-based segmentation.

3. Separate the image into its 3 component images (i.e. H, S and V, each of which is a one-dimensional intensity image).
H component

S component

V component


4. Apply a condition on the intensity values in each image to get a binary image.
  For example, let's say we take the H intensity image and our ball is red. In this image we will find that the pixel values where the ball is present lie in a specific range, so we define a condition for every pixel: if (pixel > threshold_min && pixel < threshold_max) the output pixel is 1, else it is 0.

NOTE:
FOR THE PURPOSE OF CALIBRATION THERE ARE 2 SLIDERS ON EACH COMPONENT IMAGE TO SET THE LOWER AND UPPER LIMITS OF THE PIXEL VALUES.

H component after condition


We do this for all components, i.e. for S and V as well.


S component after condition

V component after condition

5. Now we have three binary images (black and white only), in which the region of the ball is 1's. The pixels whose intensity values fall outside the threshold range are zero.


6. We then combine the three binary images (i.e. we AND them). Only the pixels that are white in all three images will be white in the output of this step. There may still be some regions of 1's, but with smaller areas and random shapes.

Combined image

7. Now we use the Hough transform on the output of the last operation to find the regions which are circular in shape.

8. Then we draw a marker on the detected circles and display the center and radius of each circle.





Thursday, February 2, 2012

Setting up OpenCV on Dev C++



This is really simple if you know what to do.

  1. Download OpenCV: install it; note the path where it is installed, e.g. C:\Opencv2.x
  2. Download Dev C++: install it.

Once done with both, open Dev C++. Go to TOOLS - COMPILER OPTIONS. Add a new compiler (click on the plus sign).
Name it OpenCV.

Tick "Add the following commands when calling the compiler" and add these lines:
-L"C:\OpenCV\lib" -lcxcore210 -lcv210 -lcvaux210 -lhighgui210 -lml210

While doing so, change the lib path (C:\OpenCV\lib) according to the path where you installed OpenCV. Go to the lib folder in the OpenCV directory and check the names of the files linked above, e.g. cxcore210; if the file has a different name, replace it accordingly.


Tick "Add the following commands to the linker command line" and add these lines:
-lcxcore210 -lcv210 -lcvaux210 -lhighgui210 -lml210


Now go to Directories

First, in Binaries, add the path to the OpenCV bin folder:
C:\OpenCV\bin
(change it according to your bin path)

Then go to Libraries and add the path to the OpenCV lib folder:
C:\OpenCV\lib
(change it according to your lib path)


Then go to C Includes and add the path to the OpenCV include folder:
C:\OpenCV\include
(change it according to your include path)

Then go to C++ Includes and add the path to the OpenCV include folder:
C:\OpenCV\include
(change it according to your include path)


Now go to Environment Variables, edit the Path variable, add the OpenCV bin folder to it and save.
Again, the bin path should match your OpenCV install directory; change it accordingly.




Click OK and you are done.
Go to the samples and run them.
If you get errors, make sure you have selected OpenCV as the compiler:

Project -Project options - Compiler

cheers

Hand gesture recognition using OpenCV


Hi! In this post I will describe the code for hand gesture recognition using OpenCV. The code is written in C on Dev C++; for installing the necessary libraries on Dev C++ you can check my previous post. So basically, to start with, I had to extract the hand region, which can be done in many ways, e.g.:
1) segment the hand region using RGB values, i.e. the R, G, B values of the hand will differ from the background,
OR
2) use edge detection,
OR
3) background subtraction.

     I have used a background subtraction model. OpenCV provides us with different background subtraction models; I chose the codebook model (for no specific reason). What it does is calibrate for some time, to be exact for some number of frames, during which, for every image it acquires, it calculates the average and deviation of each pixel and designates boxes accordingly. For more information please refer to a book.

     So at this stage we have removed the background and only our hand remains in the foreground. For those who are new to CV, it is like a black-and-white image with only the hand as white.

  
   In the next part what we intend to do is recognise the gesture. Here we use the convex hull to find the fingertips. The convex hull is basically the smallest convex set enclosing the hand region.


     The red line bounding the hand is the convex hull. Basically it is a convex set; that means if we take any two points inside the red region and join them to form a line, then the line lies entirely inside the set.



     The yellow dot is a defect point, and there will be many such defect points, i.e. every valley has a defect point. Now, depending upon the number of defect points, we can calculate the number of fingers unfolded.



summary :-
  • The hand region has been extracted using background subtraction with the codebook method.
  • For tip points I have used cvConvexHull2 and for depth points cvConvexityDefects.
The main code for extracting the contour and detecting the convexity points is in the function
void detect(IplImage* img_8uc1,IplImage* img_8uc3);

Place the camera in front of a steady background; run the code and wait for some time. Once the calibration has been done, you will see the connected-component image showing some disturbance. Bring your hand into the camera's view. Enjoy.

VIDEOS:-




CODES:-

Link 1 : Convex Hull2 usage

Link 2 : Hand gesture recognition

                    FOR OPENCV 2.4


Background subtraction has been done using the codebook method.
My code is written over the basic codebook example available in the OpenCV samples, so everything that I have added is contained in a new function named detect().

void detect(IplImage* img_8uc1, IplImage* img_8uc3)
{
    // img_8uc1 is the binary image with the hand as white;
    // img_8uc3 is the original color image we draw on

    CvMemStorage* storage = cvCreateMemStorage();
    CvSeq* first_contour = NULL;
    CvSeq* maxitem = NULL;
    double area = 0, areamax = 0;
    int maxn = 0;

    // find the white objects in the image and return their boundaries
    int Nc = cvFindContours(
        img_8uc1,
        storage,
        &first_contour,
        sizeof(CvContour),
        CV_RETR_LIST        // try all four retrieval modes and see what happens
    );

    int n = 0;
    //printf("Total Contours Detected: %d\n", Nc);

    // find the contour with the maximum area
    if (Nc > 0)
    {
        for (CvSeq* c = first_contour; c != NULL; c = c->h_next)
        {
            area = cvContourArea(c, CV_WHOLE_SEQ);
            if (area > areamax)
            {
                areamax = area;
                maxitem = c;
                maxn = n;
            }
            n++;
        }

        CvMemStorage* storage3 = cvCreateMemStorage(0);

        // only proceed if the area is above a threshold, then find the convex hull
        if (areamax > 5000)
        {
            maxitem = cvApproxPoly(maxitem, sizeof(CvContour), storage3,
                                   CV_POLY_APPROX_DP, 10, 1);
            CvPoint pt0;
            CvMemStorage* storage1 = cvCreateMemStorage(0);
            CvMemStorage* storage2 = cvCreateMemStorage(0);
            CvSeq* ptseq = cvCreateSeq(CV_SEQ_KIND_GENERIC | CV_32SC2,
                                       sizeof(CvContour), sizeof(CvPoint), storage1);
            CvSeq* hull;
            CvSeq* defects;

            // copy the approximated contour into a point sequence
            for (int i = 0; i < maxitem->total; i++)
            {
                CvPoint* p = CV_GET_SEQ_ELEM(CvPoint, maxitem, i);
                pt0.x = p->x;
                pt0.y = p->y;
                cvSeqPush(ptseq, &pt0);
            }

            hull = cvConvexHull2(ptseq, 0, CV_CLOCKWISE, 0);
            int hullcount = hull->total;
            defects = cvConvexityDefects(ptseq, hull, storage2);

            CvConvexityDefect* defectArray;
            int j = 0;
            // this cycle marks all convexity defects of the current contour
            for (; defects; defects = defects->h_next)
            {
                int nomdef = defects->total;    // number of defects
                if (nomdef == 0)
                    continue;

                // copy the defect sequence into an array
                defectArray = (CvConvexityDefect*)malloc(sizeof(CvConvexityDefect) * nomdef);
                cvCvtSeqToArray(defects, defectArray, CV_WHOLE_SEQ);

                // draw marks for all defects
                for (int i = 0; i < nomdef; i++)
                {
                    printf("defect depth for defect %d: %f\n", i, defectArray[i].depth);
                    cvLine(img_8uc3, *(defectArray[i].start), *(defectArray[i].depth_point),
                           CV_RGB(255, 255, 0), 1, CV_AA, 0);
                    cvCircle(img_8uc3, *(defectArray[i].depth_point), 5, CV_RGB(0, 0, 164), 2, 8, 0);
                    cvCircle(img_8uc3, *(defectArray[i].start), 5, CV_RGB(0, 0, 164), 2, 8, 0);
                    cvLine(img_8uc3, *(defectArray[i].depth_point), *(defectArray[i].end),
                           CV_RGB(255, 255, 0), 1, CV_AA, 0);
                }

                // display the finger count (number of defects minus one)
                char txt[] = "0";
                txt[0] = '0' + nomdef - 1;
                CvFont font;
                cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0, 0, 5, CV_AA);
                cvPutText(img_8uc3, txt, cvPoint(50, 50), &font, cvScalar(0, 0, 255, 0));
                j++;

                free(defectArray);
            }

            cvReleaseMemStorage(&storage1);
            cvReleaseMemStorage(&storage2);
        }

        cvReleaseMemStorage(&storage3);
    }

    // release the contour storage on every call, not only when a hand was found
    cvReleaseMemStorage(&storage);
}


thank you!! :)