How to learn mouse movement?
I've been attempting to develop a means of synthesizing human-like mouse movement in an application of mine for the past few weeks. At the start I used simple techniques like polynomial and spline interpolation, but even with a little noise added, the result still failed to appear sufficiently human-like.
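For concreteness, here's a minimal sketch of the kind of spline-plus-jitter approach I mean (the knot count and noise magnitude are arbitrary placeholders, not my actual parameters):

```python
import numpy as np
from scipy import interpolate

def spline_path(a, b, n_knots=4, noise=8.0, n_points=100):
    """Cubic spline from a to b through a few randomly jittered
    interior control points; the kind of path that still tends
    to look robotic in practice."""
    xs = np.linspace(a[0], b[0], n_knots + 2)
    ys = np.linspace(a[1], b[1], n_knots + 2)
    xs[1:-1] += np.random.normal(0, noise, n_knots)  # jitter interior knots
    ys[1:-1] += np.random.normal(0, noise, n_knots)
    tck, _ = interpolate.splprep([xs, ys], k=3, s=0)  # interpolating spline
    return np.array(interpolate.splev(np.linspace(0, 1, n_points), tck)).T
```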
In an effort to remedy this, I've been researching ways of applying machine learning algorithms to real human mouse movement biometrics, so the program can synthesize mouse movements by learning from recorded real human ones. Users would compile a profile of recorded movements that would train the program for synthesis purposes.
I've been searching for a few weeks and have read several articles on the application of inverse biometrics to generating mouse dynamics, such as Inverse Biometrics for Mouse Dynamics; however, they tend to focus on generating realistic timing from randomly-generated dynamics, while I was hoping to generate a path from a specific point A to a specific point B. Plus, I still actually need to come up with a path, not just a few dynamics measured from one.
Does anyone have a few pointers to help a noob?
Currently, testing is done by recording movements and having me and several other developers watch the playback. Ideally the movement would be able to fool both an automatic biometric classifier and a real, live, breathing Homo sapiens.
Fitts's law gives a very good estimate of the time needed to position the mouse pointer. In the derivation section there is a simple explanation; I think you could use it as one of the basic building blocks of your app. Start with a big movement, put some inaccuracy in both the direction and the length of the movement, then do a smaller correction movement, and so on...
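A minimal sketch of that idea in Python (the Fitts constants a and b and the error magnitudes are made-up placeholders you would fit to real recordings):

```python
import math
import random

def fitts_time(distance, width, a=0.1, b=0.15):
    # Shannon formulation of Fitts's law: time grows with the log of
    # distance over target width; a and b are device/user constants.
    return a + b * math.log2(distance / width + 1)

def submovements(start, target, target_width=10.0, error=0.15):
    """Generate imperfect sub-movements toward the target: each jump
    aims for the full remaining distance but is perturbed in length
    and direction, so smaller and smaller corrections follow until
    the pointer lands inside the target region."""
    x, y = start
    tx, ty = target
    points = [(x, y)]
    while math.hypot(tx - x, ty - y) > target_width / 2:
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        length = dist * random.gauss(1.0, error)               # over/undershoot
        angle = math.atan2(dy, dx) + random.gauss(0.0, error)  # direction drift
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        points.append((x, y))
    points.append((tx, ty))
    return points
```

Each sub-movement's duration could then be taken from fitts_time for its own distance, which naturally produces the fast-approach-then-slow-correction rhythm of real pointing.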
First, I guess you would record human mouse movements from A to B, because otherwise trying to synthesize a model for such movement does not seem possible to me.
Second, how about measuring the deviations from the "direct" path, maybe in relation to time? I actually suspect that movements look different for different angles, path lengths, etc., but maybe you can try a normalized model first, one that you just stretch (in space and time) and rotate as needed.
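A sketch of that normalization, assuming each recorded path is a list of (x, y) samples:

```python
import math

def normalize(path):
    """Map a recorded path so it starts at (0, 0) and ends at (1, 0);
    assumes start != end."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    cos_a, sin_a = dx / length, dy / length
    # translate the start to the origin, rotate the end onto the
    # positive x-axis, and scale the whole path to unit length
    return [(((x - x0) * cos_a + (y - y0) * sin_a) / length,
             ((y - y0) * cos_a - (x - x0) * sin_a) / length)
            for x, y in path]

def retarget(unit_path, a, b):
    # inverse transform: stretch and rotate a normalized path so it
    # runs from point a to point b
    dx, dy = b[0] - a[0], b[1] - a[1]
    return [(a[0] + x * dx - y * dy, a[1] + x * dy + y * dx)
            for x, y in unit_path]
```

Timestamps could be normalized the same way, scaled to a 0..1 duration and stretched back out on synthesis.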
Third, the learning. The easiest thing would be to just keep a collection of real moves (in the form discussed above) and sample from that collection; evaluate how that looks. If you really want a probabilistic model, then you have to evaluate what kinds of models fit: is it enough to blur the direct path with Gaussian noise whose parameters you learn from your training set? Or some (sine-)wavy deviation? Or separate models for "getting near the button" and "final corrections"? Fitts's law might be useful for evaluation.
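As a sketch of both options (the sampling one reuses the retarget helper from the previous snippet; sigma is a placeholder you would estimate from your training set):

```python
import math
import random

def synthesize_by_sampling(unit_paths, a, b):
    # easiest learner: pick a whole recorded, normalized path at
    # random and map it onto the requested A-to-B segment
    return retarget(random.choice(unit_paths), a, b)

def synthesize_gaussian(a, b, n=50, sigma=0.02):
    # simple probabilistic model: perturb the direct path with
    # Gaussian noise perpendicular to it; in practice sigma would be
    # learned from the deviations measured in your training set
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length  # unit normal to the segment
    points = []
    for i in range(n + 1):
        t = i / n
        # taper the noise toward the endpoints so the path still
        # starts exactly at a and ends exactly at b
        d = random.gauss(0.0, sigma * length) * math.sin(math.pi * t)
        points.append((a[0] + t * dx + d * nx, a[1] + t * dy + d * ny))
    return points
```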
This question reminded me of a website I knew about years ago, so I visited it and found this in-depth discussion on the topic.
The timing is so similar as to make me think this question is related in some way. In fact, someone in that thread linked to the same article you did. If it's not related, well, it's still a lot of people discussing exactly what you're thinking about.