My Vo Duc and Andreas Zell

Real Time Face Tracking and Pose Estimation Using an Adaptive Correlation Filter for Human-Robot Interaction

European Conference on Mobile Robots (ECMR 2013) (Oral), Barcelona, Catalonia, Spain, 2013


Abstract

In this paper, we present a real-time algorithm for mobile robots to track human faces and estimate face poses accurately, even when people move freely, are far from the camera, or pass through changing illumination in uncontrolled environments. We combine an adaptive correlation filter with a Viola-Jones object detector to track the face as well as facial features, including the two external eye corners and the nose. These facial features provide geometric cues to estimate the face pose robustly. In our method, depth information from a Microsoft Kinect camera is used to estimate the face size and to improve the performance of facial feature tracking. Our method is shown to be robust and fast in uncontrolled environments.
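

Illustrative Code Sketch

The following is a minimal sketch of the general detect-then-track scheme described in the abstract, not the authors' implementation. It pairs OpenCV's Haar-cascade (Viola-Jones) face detector with OpenCV's MOSSE tracker as a stand-in for the adaptive correlation filter, and includes an illustrative pinhole-camera relation for the Kinect depth cue; the camera index, focal length, and physical face width are assumptions, and the paper's facial-feature tracking and pose estimation steps are omitted.

import cv2


def create_correlation_tracker():
    # OpenCV's MOSSE tracker is an adaptive correlation filter; its factory
    # function lives in different namespaces depending on the OpenCV build
    # (requires the opencv-contrib-python package).
    if hasattr(cv2, "legacy") and hasattr(cv2.legacy, "TrackerMOSSE_create"):
        return cv2.legacy.TrackerMOSSE_create()
    return cv2.TrackerMOSSE_create()


def expected_face_width_px(depth_m, focal_px=525.0, face_width_m=0.16):
    # Pinhole-camera relation: expected face width in pixels at a given
    # Kinect depth reading. Focal length and physical face width are
    # illustrative assumptions, not values taken from the paper.
    return focal_px * face_width_m / depth_m


# Viola-Jones face detector (Haar cascade shipped with OpenCV).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # any RGB stream; the paper uses a Kinect
tracker, tracking = None, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if not tracking:
        # (Re-)detect the face with Viola-Jones whenever no track is active.
        faces = face_cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
        if len(faces) > 0:
            x, y, w, h = faces[0]
            tracker = create_correlation_tracker()
            tracker.init(frame, (int(x), int(y), int(w), int(h)))
            tracking = True
    else:
        # Update the adaptive correlation filter on the new frame.
        tracking, box = tracker.update(frame)
        if tracking:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()

In the paper, the depth-derived face size additionally constrains the detection and tracking scale; here it appears only as the unused helper expected_face_width_px to illustrate the relation.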


Downloads and Links

[pdf] http://www.cogsys.cs.uni-tuebingen.de/publikationen/2013/voduc2013.pdf


BibTeX

@inproceedings{voduc2013,
  author = {My Vo Duc and Andreas Zell},
  title = {Real Time Face Tracking and Pose Estimation Using an Adaptive Correlation
	Filter for Human-Robot Interaction},
  booktitle = {European Conference on Mobile Robots (ECMR 2013) (Oral)},
  year = {2013},
  address = {Barcelona, Catalonia, Spain},
  month = sep,
  abstract = {In this paper, we present a real-time algorithm for mobile robots
	to track human faces and estimate face poses accurately, even when
	people move freely, are far from the camera, or pass through changing
	illumination in uncontrolled environments. We combine an adaptive
	correlation filter with a Viola-Jones object detector to track the
	face as well as facial features, including the two external eye
	corners and the nose. These facial features provide geometric cues
	to estimate the face pose robustly. In our method, depth information
	from a Microsoft Kinect camera is used to estimate the face size and
	to improve the performance of facial feature tracking. Our method is
	shown to be robust and fast in uncontrolled environments.},
  url = {http://www.cogsys.cs.uni-tuebingen.de/publikationen/2013/voduc2013.pdf}
}