The robot is really cool, I would like to have one just for random shenanigans! I guess showing the 3D model would suffice for understanding what the user is doing, but is it easier to understand when actually observing the phone being manipulated in space by the robot?
Great question! We tested how people interpreted the motions with both, and the hardware version was a bit easier to understand, though the difference wasn't statistically significant. The paper has a figure with the comparison results.
Cool stuff! What's next for your project?
Is it just a side project, or do you want to get more involved in it and, for example, try to sell it as a product to companies?
Thanks! While building this, we came up with a lot of ideas for improving the build, and we'll try to make those changes. But more exciting for us: we've used some of the techniques here to build an augmented reality system that runs on smartphones. I'll try to post a video in the next couple of weeks.
I was going to write a sarcastic comment about how great this is for me as a user, but I can’t do it.
How is this not a total violation of users’ expectation of privacy? Why doesn’t the page even mention the ethics of subjecting people to total surveillance to sell them more crap?
The current alternative is video recordings, which are more invasive. This is mentioned on the website, and described in detail in the paper linked there.
One thing people don't realize is that a lot of web analytics software already does "session recording": it can play back every wiggle of your mouse, every scroll, every keystroke, etc.
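For the curious, here's a minimal sketch of what that kind of capture looks like in the browser. The event names are the standard DOM ones; the `log` buffer and the `/analytics/session` endpoint are made up for illustration:

    // Minimal session-recording sketch: buffer timestamped DOM events for replay.
    type Sample = { t: number; kind: string; data: Record<string, number | string> };

    const log: Sample[] = [];
    const record = (kind: string, data: Sample["data"]) =>
      log.push({ t: performance.now(), kind, data });

    // Every wiggle of the mouse, every scroll, every keystroke.
    window.addEventListener("mousemove", (e) => record("mouse", { x: e.clientX, y: e.clientY }));
    window.addEventListener("scroll", () => record("scroll", { y: window.scrollY }));
    window.addEventListener("keydown", (e) => record("key", { key: e.key }));

    // Periodically ship the buffer to the analytics backend (hypothetical URL).
    setInterval(() => {
      if (log.length === 0) return;
      navigator.sendBeacon("/analytics/session", JSON.stringify(log.splice(0)));
    }, 5000);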
Most of them don't record orientation events, though. Those events are very noisy, vary wildly between devices, and produce an endless stream of data, and it's difficult (if possible at all) to derive any useful insight from them.
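To illustrate the firehose: a page can subscribe to the standard `deviceorientation` event and receive alpha/beta/gamma angles at up to the sensor rate. A rough sketch (the 100 ms throttle is an arbitrary choice, and on iOS Safari you'd additionally need DeviceOrientationEvent.requestPermission()):

    // Device orientation arrives as a rapid stream of Euler angles (degrees).
    // Raw samples are noisy and device-dependent, hence hard to use directly.
    let last = 0;
    window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
      const now = performance.now();
      if (now - last < 100) return; // crude throttle: at most ~10 samples per second
      last = now;
      console.log(
        `alpha=${e.alpha?.toFixed(1)} beta=${e.beta?.toFixed(1)} gamma=${e.gamma?.toFixed(1)}`
      );
    });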