I'm planning to implement an application with augmented reality features. For one of the features I need egomotion estimation. Only the camera is moving, in a space with fixed objects (nothing, or only small parts of the scene, will be moving, so they can be ignored).

So I searched and read a lot and stumbled upon OpenCV. Wikipedia explicitly states that it can be used for egomotion estimation, but I cannot find any documentation about it.

  1. Do I need to implement the egomotion algorithm myself using OpenCV's object-detection methods? (I think it would be very complex, because objects will move at different speeds depending on their distance from the camera, and I also need to account for rotations.)
  2. If so, where should I start? Is there a good code example for a Kanade–Lucas–Tomasi feature tracker with support for scaling and rotation?

P.S.: I also know about marker-based frameworks like Vuforia, but using a marker is something I would like to avoid, as it restricts the possible viewpoints.

Update 2013-01-08: I learned that Egomotion Estimation is better known as Visual Odometry. So I updated the title.


Solution

You can find a good implementation of monocular visual odometry based on optical flow here.

It's written with Emgu CV (a C# wrapper for OpenCV), but you should have no trouble converting it back to pure OpenCV.
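The core of such a pipeline is compact enough to sketch directly. The following is a minimal two-frame version in Python with OpenCV, not the linked project's code: it tracks corners between frames with pyramidal Lucas-Kanade optical flow and recovers the relative camera pose from the essential matrix. The intrinsic matrix `K` below is a placeholder; in practice you would obtain your own from `cv2.calibrateCamera()`.

```python
import numpy as np
import cv2

# Placeholder intrinsics -- replace with values from your own calibration.
K = np.array([[718.856,   0.0, 607.193],
              [  0.0, 718.856, 185.216],
              [  0.0,   0.0,     1.0]])

def relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and unit-scale translation t between two frames."""
    # Detect corners in the previous frame and track them into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=2000,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]

    # The essential matrix encodes the relative motion; RANSAC rejects
    # outliers, including small independently moving objects in the scene.
    E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
    return R, t  # t is only known up to scale with a monocular camera
```

To turn this into full odometry, chain the per-frame poses (roughly `R_total = R_total @ R` and `t_total += R_total @ t`). Note the scale ambiguity: a single camera cannot recover the absolute length of the translation, so you need an external cue (known object size, IMU, etc.) to fix it.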

Other tips

Egomotion estimation (or visual odometry) is usually based on optical flow, and OpenCV has motion analysis and object tracking functions for computing optical flow (in conjunction with a feature detector such as cvGoodFeaturesToTrack()).
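To make that concrete, here is a small Lucas-Kanade tracking loop in Python; the modern equivalents of the old C functions are `cv2.goodFeaturesToTrack()` and `cv2.calcOpticalFlowPyrLK()`. It assumes a webcam at index 0 and that the first frame contains trackable corners, and the parameter values are just reasonable starting points. The image pyramid gives the tracker some tolerance to scale change, and lost tracks are simply dropped and re-detected:

```python
import cv2

def detect(gray):
    # Shi-Tomasi corners, the KLT tracker's usual feature detector.
    return cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = detect(prev_gray)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal LK: 3 pyramid levels handle larger inter-frame motion.
    nxt, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    if len(pts) < 100:  # re-detect once too many tracks are lost
        pts = detect(prev_gray)
```

Each iteration yields matched point pairs between consecutive frames, which is exactly the input the essential-matrix step in the accepted answer needs.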

This example might also be of use.

Not a complete solution, but might at least get you going in the right direction.
