What is the real world accuracy of phone accelerometers when used for positioning?

Solution 1:

If you integrate the accelerometer values twice you get position, but the error is horrible. It is useless in practice.
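To make the drift concrete, here is a minimal sketch (hypothetical numbers, plain NumPy) of what double integration does to a small constant accelerometer bias; the position error grows quadratically with time:

```python
import numpy as np

# Hypothetical setup: the phone is actually stationary, but the accelerometer
# reports a small constant bias of 0.05 m/s^2 (well within typical sensor error).
dt = 0.01                      # 100 Hz sampling
t = np.arange(0.0, 60.0, dt)   # one minute of samples
accel = np.full_like(t, 0.05)  # measured acceleration (pure bias, true value is 0)

# First integration: acceleration -> velocity
velocity = np.cumsum(accel) * dt
# Second integration: velocity -> position
position = np.cumsum(velocity) * dt

# The error grows as 0.5 * bias * t^2, so after 60 s a 0.05 m/s^2 bias
# has already turned into roughly 90 m of apparent displacement.
print(f"Apparent displacement after 60 s: {position[-1]:.1f} m")
```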

Here is an explanation of why (Google Tech Talk), at 23:20.

I answered a similar question.

Solution 2:

I don't know if this thread is still open or whether you are still attempting this approach, but I can at least give some input on this, considering I tried the same thing.

As Ali said... it's horrible! The smallest measurement error in the accelerometer turns out to be ridiculous after double integration, and due to the constant increase and decrease in acceleration while walking (with each footstep, in fact), this error quickly accumulates over time.

Sorry for the bad news. I also didn't want to believe it until trying it myself... filtering out unwanted measurements doesn't work either.

I have another possibly plausible approach if you're interested in proceeding with your project (the approach I followed for my computer engineering thesis)... through image processing!

You basically follow the theory behind optical mice: optical flow, or as some call it, ego-motion. I implemented the image processing algorithms in Android's NDK, and even used OpenCV through the NDK to simplify the algorithms. You convert the images to grayscale (compensating for different light intensities), then apply thresholding and image enhancement (to compensate for images getting blurred while walking), then corner detection (to increase the accuracy of the final estimates), and finally template matching, which does the actual comparison between image frames and estimates the displacement in pixels.
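As a rough illustration (this is not the thesis code; choices such as cv2.goodFeaturesToTrack for the corner step and histogram equalisation for the enhancement step are my assumptions), a per-frame pixel-displacement estimate could look like this:

```python
import cv2

def estimate_displacement(prev_frame, curr_frame, patch_size=64):
    """Estimate the pixel displacement between two frames, optical-mouse style."""
    # Grayscale conversion to compensate for colour / light intensity differences.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Mild enhancement to fight motion blur (histogram equalisation as an example).
    prev_gray = cv2.equalizeHist(prev_gray)
    curr_gray = cv2.equalizeHist(curr_gray)

    # Corner detection: pick one strong corner to centre the template on.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return None
    x, y = corners[0].ravel().astype(int)

    # Cut a template around the corner, clamped to the image borders.
    half = patch_size // 2
    x0, y0 = max(x - half, 0), max(y - half, 0)
    template = prev_gray[y0:y0 + patch_size, x0:x0 + patch_size]

    # Template matching: find where that patch moved to in the next frame.
    result = cv2.matchTemplate(curr_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)

    # Displacement in pixels between the patch's old and new positions.
    dx, dy = max_loc[0] - x0, max_loc[1] - y0
    return dx, dy
```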

You then go through trial and error to estimate how many pixels correspond to a given distance, and multiply by that value to convert pixel displacement into actual displacement. This only works up to a certain movement speed, though; the real problem is that the camera images still get too blurred for accurate comparisons while walking. This can be improved by adjusting the camera shutter speed or ISO (I'm still playing around with this).
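The pixel-to-distance calibration can then be as simple as one empirically found scale factor; the numbers below are made up for illustration:

```python
# Hypothetical calibration: walk a measured 5 m straight line and sum the
# per-frame pixel displacements reported by the pipeline above.
KNOWN_DISTANCE_M = 5.0    # ground-truth length of the calibration walk
TOTAL_PIXELS = 1830.0     # summed pixel displacement over that walk (made up)
METRES_PER_PIXEL = KNOWN_DISTANCE_M / TOTAL_PIXELS

def pixels_to_metres(pixel_displacement):
    """Convert a pixel displacement into metres using the calibration factor."""
    return pixel_displacement * METRES_PER_PIXEL
```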

So I hope this helps... otherwise, google for ego-motion for real-time applications. Eventually you'll find the right material and figure out the gibberish I just explained to you. Enjoy :)

Solution 3:

The optical approach is good, but note that OpenCV provides a few feature transforms; you then do feature matching (OpenCV provides this as well).
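For example, a minimal OpenCV sketch (ORB is just one of the feature transforms it ships with, picked here as an assumption) that detects and matches features between two grayscale frames:

```python
import cv2

def match_features(img1, img2, max_matches=50):
    """Detect ORB features in two grayscale frames and match them."""
    orb = cv2.ORB_create(nfeatures=1000)

    # Detect keypoints and compute binary descriptors for both frames.
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matcher suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Keep only the strongest matches for the motion estimate.
    return kp1, kp2, matches[:max_matches]
```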

Without a second point of reference (two cameras) you can't directly reconstruct where you are because of the depth ambiguity. At best you can estimate a depth per point, assume a motion, score that assumption over a few frames, and re-guess each depth and the motion until it makes sense. That isn't hard to code, but it isn't stable; small motions of things in the scene throw it off. I tried :)
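To make the depth problem concrete: with one camera you can only recover the motion up to an unknown scale. A hedged sketch using OpenCV's essential-matrix routines (the intrinsics matrix K below is a made-up placeholder):

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate rotation and translation direction between two views.

    pts1, pts2: matched pixel coordinates (N x 2 float arrays) from two frames.
    K: 3x3 camera intrinsics matrix.
    Note: the translation comes back as a unit vector -- its true length
    (the scale) cannot be recovered from a single moving camera.
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t is only a direction, not metres

# Made-up intrinsics for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```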

With a second camera, though, it's not that hard at all. But cell phones don't have one.