This definitely isn’t an area where we have as much experience as others (such as Karen Adolph, John Franchak, Chen Yu, and Roy Hessels), but there are a few things I wanted to share about our head-mounted eyetracker, which we’ve had for a few years now (more on infant eyetracking data quality in general).
It’s really tricky to use, because the beam that sits in front of the eye is, by definition, highly visible. Before you can record, you have to calibrate the eyetracker by getting the participant to look at five set places. What happened time and again for us was that you’d finish calibrating and then, just as you were ready to start recording, the infant would notice the beam and reach out to touch it, so you’d have to begin again.
Head-mounted eyetracking in screen-based vs. natural experiments
Another point is that you calibrate the head-mounted eyetracker to a single 2D plane, and accuracy degrades as the participant moves towards or away from that plane, so it works best with experiments that all take place on one plane.
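A rough way to see why depth matters is parallax: the eye and the scene camera sit a few centimetres apart, so a gaze mapping calibrated for one viewing distance mis-places gaze at other distances. This is a minimal back-of-the-envelope sketch, not our actual calibration procedure; the 3 cm offset and the small-angle formula are illustrative assumptions.

```python
import math

def parallax_error_deg(offset_m, calib_dist_m, target_dist_m):
    """Approximate angular gaze error (degrees) from a depth mismatch.

    Small-angle model (an assumption, not a vendor formula): a mapping
    calibrated for a plane at calib_dist_m mis-places gaze on a target
    at target_dist_m by roughly offset * (1/target - 1/calib) radians,
    where offset is the eye-to-scene-camera baseline.
    """
    err_rad = offset_m * abs(1.0 / target_dist_m - 1.0 / calib_dist_m)
    return math.degrees(err_rad)

# Illustrative numbers: 3 cm eye/camera offset, calibrated at 60 cm,
# infant leans in to 30 cm.
print(round(parallax_error_deg(0.03, 0.60, 0.30), 1))  # prints 2.9
```

Under these assumed numbers the error is nearly 3 degrees of visual angle, which dwarfs the sub-degree accuracy typically quoted for a well-calibrated tracker, hence the advice to keep everything on one plane.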
One other fundamental difference between a screen-based eyetracker and these head-mounted ones is that the screen-based eyetracker can automatically analyse where participants are looking on each frame (because what each child sees is identical). With a head-mounted eyetracker, what each child sees is different, so it can’t easily be processed automatically: you get a video with cross-hairs showing where they were looking, which you then generally have to code by hand.
Another disappointment was that the fixation detection algorithms (more on fixation durations) I’d written for screen-based eyetrackers were no use at all for a head-mounted eyetracker, because they can’t cope with moments when the child is looking at a fixed position in space while their head is moving: the gaze point stays still in the world but drifts continuously across the scene-camera image.
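To make the failure concrete, here is a minimal dispersion-based (I-DT style) detector of the general kind I mean, assuming gaze samples arrive as (x, y) pixel coordinates in the scene-camera frame. This is a toy sketch for illustration, not my actual screen-based code; the thresholds are made up.

```python
def dispersion(window):
    """Spread of a window of (x, y) samples: x-range plus y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion, min_samples):
    """Naive dispersion-threshold detector.

    Slides a window over the samples; whenever an initial window of
    min_samples stays within max_dispersion, grows it as far as it can
    and records (start, end) index pairs as fixations.
    """
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if dispersion(samples[start:end]) <= max_dispersion:
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

# A child steadily fixating a toy with a still head: gaze is stable in
# image coordinates, so the detector finds one long fixation.
steady = [(100, 100)] * 10
print(detect_fixations(steady, max_dispersion=20, min_samples=4))

# The same steady fixation while the head pans: the gaze cross-hairs
# drift ~15 px per sample across the scene image, dispersion never
# stays under threshold, and the detector reports no fixation at all.
panning = [(100 + 15 * i, 100) for i in range(10)]
print(detect_fixations(panning, max_dispersion=20, min_samples=4))
```

The second case is exactly the head-mounted problem: the gaze is genuinely a fixation in world coordinates, but in image coordinates it looks like continuous movement, so any detector that thresholds dispersion (or velocity) in the camera frame breaks it apart or misses it entirely.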