Question

I'm looking at outdoor positioning for a little virtual reality POC I'm doing. I want to control the movement (not the rotation, which is done with the IMU) of a game character with the GPS and IMU sensors.

I need to fuse the GPS and accelerometer sensors to get as little latency and error as possible. Is there any such fusion algorithm around, or do I have to invent it from scratch?

The code will be used in this open-source project: https://github.com/AndersMalmgren/FreePIE

Edit: This article suggests a Kalman filter: http://www.codeproject.com/Articles/326657/KalmanDemo But people here on SO suggest that the error of the accelerometer is too great and that it will not work.

Solution

You certainly won't have to "invent" it from scratch - GPS/INS fusion is a topic well covered in the literature, and there are several well-known textbooks on the subject.

As others have pointed out (e.g. Kalman Filter for Android), there are also implementations of Kalman filters in Java / for Android.

The problem with Kalman filtering in your specific case is that you need to fulfill several requirements for implementing a Kalman filter*. Theoretically, you need to make sure that the noise (unmodelled measurement errors) is white and uncorrelated over time. The problem is that you don't get the raw GPS measurements from the internal receiver (whose errors could approximately be considered white), but rather an already filtered solution (which definitely exhibits time correlations).

Another problem is that a Kalman filter needs some tuning, i.e. you have to set parameters such as the measurement noise. These parameters depend on the quality of the sensors used, so they differ between devices, which might degrade your estimation quality on some of them.
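To make the tuning aspect concrete, here is a minimal plain-Java sketch of the two knobs involved for a simple 1-D constant-velocity filter: the process noise covariance Q and the measurement noise covariance R. The class name and all numeric values are illustrative assumptions, not figures from any real device:

```java
// Minimal sketch (plain Java, no external libraries) of the tuning knobs of a
// 1-D constant-velocity Kalman filter. The numeric values are illustrative
// guesses, not measured figures - on a real device they have to be tuned.
public class KalmanTuning {

    // Process noise: how much we allow the velocity to change between updates.
    // Larger values make the filter trust the motion model less.
    static final double PROCESS_NOISE_ACCEL = 0.5;   // m/s^2, assumed

    // Measurement noise: standard deviation of a single GPS position fix.
    // Differs between devices/receivers, which is why it needs per-device tuning.
    static final double GPS_POSITION_SIGMA = 5.0;    // metres, assumed

    public static void main(String[] args) {
        double dt = 1.0; // seconds between GPS fixes

        // Process noise covariance Q for the state [position, velocity]
        // (discrete white-noise-acceleration model)
        double q = PROCESS_NOISE_ACCEL * PROCESS_NOISE_ACCEL;
        double[][] Q = {
            { 0.25 * dt * dt * dt * dt * q, 0.5 * dt * dt * dt * q },
            { 0.5  * dt * dt * dt * q,      dt * dt * q            }
        };

        // Measurement noise covariance R (scalar here: we only measure position)
        double R = GPS_POSITION_SIGMA * GPS_POSITION_SIGMA;

        System.out.println("Q[0][0]=" + Q[0][0] + "  R=" + R);
    }
}
```

The point is that GPS_POSITION_SIGMA and PROCESS_NOISE_ACCEL are exactly the kind of device-dependent parameters mentioned above; a value that works well with one phone's receiver may be far off on another.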

That being said, there might be several solutions:

  • Use the velocity to extrapolate the position in between position updates. (I haven't done this on Android, but this question might help to get the velocity.) If the user's velocity doesn't change too frequently (compared to the position update frequency), this should work quite well in most cases (see the first sketch after this list).

  • Implement a full Kalman filter: Combining absolute position measurements with accelerometers is pretty common, as noted in the literature mentioned above, even with cheap MEMS-grade inertial sensors. In order to reduce the errors induced by the accelerometers, estimate those errors in the Kalman filter's state vector. Usually, a Kalman filter estimates position, velocity, attitude and accelerometer/gyro biases in one filter. You can drop the attitude and gyros if you want to assume that these are known well enough. Even though your sensors may exhibit many more error sources, estimating the biases is often good enough for bridging the gaps between position updates.

    Implementing a full Kalman filter could also mean accounting for the time correlations of your measurements, e.g. with techniques such as the Schmidt-Kalman filter (see the literature mentioned above). It might also mean using adaptive Kalman filtering to estimate some of the filter parameters, in order to account for different sensors in different devices. Note, however, that such things require some experience in the field of navigation: the implementation is usually easy - just a few lines of matrix operations - but the tuning can be time-consuming. That doesn't mean you shouldn't try it, though!

  • Only use the bias estimation of the above filter: With the accelerometer bias estimated, you can improve the first method (extrapolating the position with the velocity) by also extrapolating the velocity with the accelerometer measurements (see the second sketch below).
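As a concrete illustration of the first option, here is a minimal plain-Java sketch of dead-reckoning the position between GPS fixes from the last reported speed and bearing. The class and method names, the local east/north metric frame and the units are all assumptions made for the example:

```java
// Minimal sketch of option 1: dead-reckon the position between GPS fixes using
// the last reported speed and bearing. Names and units are assumptions;
// positions are kept in a local metric frame (east/north metres), not lat/lon.
public class VelocityExtrapolator {
    private double east, north;        // last known position (m)
    private double velEast, velNorth;  // last known velocity (m/s)
    private long lastFixMillis;        // timestamp of that fix

    /** Call whenever a new GPS fix arrives (bearing measured from north, in radians). */
    public void onGpsFix(double east, double north,
                         double speed, double bearingRad, long timeMillis) {
        this.east = east;
        this.north = north;
        this.velEast = speed * Math.sin(bearingRad);
        this.velNorth = speed * Math.cos(bearingRad);
        this.lastFixMillis = timeMillis;
    }

    /** Call every frame: returns an extrapolated [east, north] position. */
    public double[] currentPosition(long nowMillis) {
        double dt = (nowMillis - lastFixMillis) / 1000.0;
        return new double[] { east + velEast * dt, north + velNorth * dt };
    }
}
```

You would feed onGpsFix() from your location callback and query currentPosition() every frame; as noted above, this works well as long as the velocity changes slowly compared to the GPS update rate.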

*from a theoretical standpoint - you can always ignore theory and just try. Sometimes it will still work :-)
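For the third option, here is a minimal 1-D plain-Java sketch of how the bias estimate could be used: between GPS fixes, the velocity is propagated with the bias-corrected accelerometer reading and the position with that velocity. The names, the 1-D state and the assumption that the accelerations are already rotated into the navigation frame with gravity removed are all illustrative simplifications:

```java
// Minimal sketch of option 3: between GPS fixes, propagate the velocity with the
// (bias-corrected) accelerometer and the position with that velocity. The bias is
// assumed to come from a Kalman filter that estimates it at each GPS update;
// accelerations must already be rotated into the navigation frame (e.g. using
// the IMU attitude) and have gravity removed.
public class BiasCorrectedDeadReckoning {
    private double pos, vel;   // 1-D example: position (m) and velocity (m/s)
    private double accBias;    // current accelerometer bias estimate (m/s^2)

    /** Update position/velocity/bias from the Kalman filter at each GPS fix. */
    public void onFilterUpdate(double pos, double vel, double accBias) {
        this.pos = pos;
        this.vel = vel;
        this.accBias = accBias;
    }

    /** Call at the accelerometer rate (dt in seconds, acc in m/s^2). */
    public void onAccelerometerSample(double acc, double dt) {
        double correctedAcc = acc - accBias;    // remove the estimated bias
        pos += vel * dt + 0.5 * correctedAcc * dt * dt;
        vel += correctedAcc * dt;
    }

    public double position() { return pos; }
}
```

In a real implementation you would do this per horizontal axis and let the Kalman filter overwrite position, velocity and bias at every GPS update.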

OTHER TIPS

I know this is a bit late, but I have an open-source project with Kalman filtering and Rauch-Tung-Striebel smoothing (backward Kalman) for Java. If your process model and/or measurement model is nonlinear, there is also support for extended and unscented filtering and smoothing.

https://github.com/karnstrand/Kalman4J

Good luck, Anders! :-)

Licensed under: CC-BY-SA with attribution