Technical Blog

Follow a region of an image frame by frame

My objective is to detect a moving object with a static camera, and then to track it while the camera moves. In this post I will present how I implemented the second part.
My first algorithm detects the moving object and draws a square around it.

In this example, the object is a ball. It is quite straightforward to detect, but it could be something more complex like a cat. Therefore, I needed an algorithm that could handle any kind of object.

Once I’ve detected a moving object, I can go to the second step: tracking it.

Tracking the object

We detected an object in frame N. We now have to find it again in the following frames: N + 1, N + 2, etc., as long as the object stays in the field of view.

We cannot reuse the previous algorithm: detecting the moving object relied on a video surveillance technique that only works with a static camera.

It turns out that the camshift algorithm is the one we need. It is a modified meanshift algorithm that can handle scaling of the object, so it still works if the altitude of the camera changes. Given a region in one frame, it finds where that region is in the next frame.

The process to use camshift is the following:

  • Initialize the tracking with the previously detected object in frame N
  • Call the tracker with the new frame. It will detect the object and update the position of the last detected zone. Do it again with each new frame.

We run the algorithm on the second frame and display the result with an ellipse:

The ellipse should be a circle, but what matters is that the center of the shape is (almost) the center of the ball. A few parameters of the camshift algorithm can be tuned to improve this.

The algorithm works in real time:


The result is convincing; however, camshift has some drawbacks:

  • The algorithm uses the hue of the image, so it doesn’t work when the object to track is close to white or black.
  • It requires that the object does not move too much between two frames.

For example, in this image the object or the camera moved too much. The difference in position is too high, so the algorithm failed to find the object.

Some code

I use OpenCV’s cvCamShift function, but it requires quite a bit of code around it to work. Therefore, I decided to use a wrapper. I found more or less the same code in a lot of different places, but I settled on Billy Lamberta’s C wrapper. I used its camshifting.[ch] files to build my own program.

#include <stdlib.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include "camshifting.h" // Billy Lamberta's wrapper

// Returns a CvRect containing the object to track.
// Here the value is hardcoded, but this is where the function
// that detects the moving object should go.
static CvRect *get_object_rect(void)
{
  CvRect *object_rect = malloc(sizeof (CvRect));
  *object_rect = cvRect(235, 40, 50, 50);
  return object_rect;
}

int main(void)
{
  enum { nbImages = 6 };
  const char *files[nbImages] = {"1.jpeg", "2.jpeg", "3.jpeg",
                                 "4.jpeg", "5.jpeg", "6.jpeg"};
  IplImage *in[nbImages];

  for (int i = 0; i < nbImages; ++i)
    in[i] = cvLoadImage(files[i], CV_LOAD_IMAGE_COLOR);

  cvNamedWindow("example", CV_WINDOW_AUTOSIZE);

  CvRect *object_rect = get_object_rect();
  /* Use this to check that the rectangle is correct:
  cvRectangle(in[0],
              cvPoint(object_rect->x, object_rect->y),
              cvPoint(object_rect->x + object_rect->width,
                      object_rect->y + object_rect->height),
              cvScalar(255, 0, 0, 1), 1, 8, 0);
  */
  cvShowImage("example", in[0]); // Display the initial image
  cvWaitKey(0);

  TrackedObj *tracked_obj = create_tracked_object(in[0], object_rect);
  CvBox2D object_box; // area to draw around the object

  // Object tracking with camshift
  for (int i = 1; i < nbImages; ++i)
  {
    IplImage *image = in[i];
    // Track the object in the new frame
    object_box = camshift_track_face(image, tracked_obj);

    // Outline the object with an ellipse
    cvEllipseBox(image, object_box, CV_RGB(0, 0, 255), 3, CV_AA, 0);
    cvShowImage("example", image);
    cvWaitKey(0);
  }

  // Free memory
  destroy_tracked_object(tracked_obj);
  for (int i = 0; i < nbImages; ++i)
    cvReleaseImage(&in[i]);
  free(object_rect);
  cvDestroyWindow("example");

  return 0;
}