
Infants predicting own and others’ actions: the neurocognitive development of action prediction

Final Report Summary - PREDICTING INFANTS (Infants predicting own and others’ actions: the neurocognitive development of action prediction)

The project “Predicting Infants” aimed at increasing our understanding of 1) how infants predict other people’s actions, and 2) how they predict the consequences of their own actions. A number of studies were conducted to fulfill these aims.
Three studies were conducted investigating the prediction of others’ actions. In the first study, 14-month-old infants were presented with short video clips of a point-light display of a person walking towards an object and then reaching for and grasping it. While they observed the videos, their gaze was tracked using 2D (screen-based) eye-tracking. Half of the videos were presented upside-down, a manipulation called inversion, which is thought to disrupt the processing of biological motion. The results showed that when the actions were presented upright, but not when they were inverted, infants looked at the target object before the action was completed, and hence showed signs of predicting the observed action performed by another person. Predictions only emerged in the second half of the experiment, suggesting that some familiarization with point-light displays is needed before infants start predicting the actions. A follow-up experiment with infants of the same age confirmed that configural information is necessary for predictions to occur: without lines connecting the dots of the point-light displays, gaze was reactive rather than predictive.
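The gaze-prediction measure used here can be illustrated with a minimal sketch (Python; the function names, variables and the coordinate flip below are assumptions for illustration only, not the project’s analysis code): a trial counts as predictive when gaze reaches the goal area before the observed hand does, and the inverted condition simply flips the point-light coordinates vertically.

    def classify_trial(gaze_arrival_ms, hand_arrival_ms):
        """Label a trial by when gaze reaches the goal area relative to the hand."""
        if gaze_arrival_ms is None:
            return "no_look"  # gaze never entered the goal area of interest
        return "predictive" if gaze_arrival_ms < hand_arrival_ms else "reactive"

    def invert_point_light(frames, screen_height):
        """Flip point-light coordinates vertically to create the inverted condition."""
        return [[(x, screen_height - y) for (x, y) in frame] for frame in frames]

    # Example: gaze reached the target 300 ms before the hand did
    print(classify_trial(gaze_arrival_ms=1200, hand_arrival_ms=1500))  # -> "predictive"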
The second study formed a bridge between screen-based and live eye-tracking. Most prior work on action prediction in infants has been performed in screen-based settings in which the head of the other person, the actor, was not visible in the scene. This matters because not only hand movements but also head movements provide cues relevant for predicting the goal of an observed action, as head movements typically accompany goal-directed actions such as reaching movements. Screen-based studies may therefore underestimate the prediction abilities that infants actually have in real-world settings. To test the relevance of head-movement cues for action prediction, 13-month-old infants observed an actor building a tower of rings while their gaze was recorded with 2D eye-tracking. In half of the cases the actor’s head was visible; in the other half the head was occluded. The results show that infants use hand movements as their primary cue for predicting the goal of the observed actions. Infants did look at the actor’s head when it was visible, but if anything, this led to slower rather than quicker predictions.
The third study was a live eye-tracking interaction study in which 6-month-old infants observed an experimenter shifting his gaze from a central position to an object positioned to his left or right, and subsequently grasping that object. In all cases two objects were present, so only the experimenter’s gaze shift was informative about which object he would subsequently grasp. The experimenter would either first look the infant in the eyes and then turn, or first look at the edge of the table and then turn. After a sequence of gazing at and picking up objects, the experimenter offered the infant a small toy. Infants were not found to follow the experimenter’s gaze, in line with studies in the field that report no gaze following at this young age, but in contrast to others that do find gaze following. Potentially, more salient objects are needed to attract infants’ attention, as goal salience is known to affect infants’ action prediction. Infants who had received direct eye contact showed a tendency to be slower to accept the toy offered by the experimenter than infants who had not had eye contact with the experimenter during the object pick-up events. Plausibly, infants who had been looked at directly felt somewhat shy and were therefore slower to react.
Three other studies were conducted to address the second aim, concerning infants’ predictions of their own actions.
In the first study within this research line, 6-month-old infants were seated on a table surrounded by motion-tracking cameras that recorded the motion of the infants’ hands. The experimenter repeatedly stuck a small soft ball to one of the participating infant’s legs using double-sided tape. In half of the cases, a foam-filled bib was used to prevent the infants from seeing the ball, so the infants either did or did not have visual access to the ball’s location. A soft band was tied around one of the infant’s upper legs, diminishing tactile access to the ball’s location when the ball was placed on the band instead of on the bare (other) upper leg. The question was whether infants would rely solely on vision to initiate reaches, or on tactile information as well. The logic was that when predicting others’ actions, only visual and not tactile information about the action is available, which might create a difference between how one predicts one’s own actions and those performed by others. The results show that infants are capable of relying on both visual and tactile information when initiating a reach, but that having access to tactile information about the goal location on top of visual information does not improve the probability of reaching. This might suggest that predictions of own and others’ actions need not be very dissimilar, as infants rely on the same sensory streams.
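To illustrate the design logic (vision crossed with touch) and the dependent measure, the sketch below tallies reach probability per sensory condition; the trial records and field names are invented for the example and do not reflect the project’s data or analysis code.

    # Hypothetical trial records: was the ball visible, was tactile information available, did the infant reach?
    trials = [
        {"vision": True, "touch": True, "reached": True},
        {"vision": True, "touch": False, "reached": True},
        {"vision": False, "touch": True, "reached": True},
        {"vision": False, "touch": True, "reached": False},
    ]

    def reach_probability(trials, vision, touch):
        """Proportion of trials in a given sensory condition on which a reach was initiated."""
        subset = [t for t in trials if t["vision"] == vision and t["touch"] == touch]
        return sum(t["reached"] for t in subset) / len(subset) if subset else float("nan")

    for vision in (True, False):
        for touch in (True, False):
            print(vision, touch, reach_probability(trials, vision, touch))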
The second study in this research line included 8- to 9-month-old infants who were placed in a bicycle seat that could rotate clockwise and counterclockwise in an alternating fashion, creating a sinusoidal and hence predictable velocity profile. Participants experienced three situations: 1) the seat rotated, 2) the cylindrical walls around them rotated, 3) seat and walls rotated in synchrony. The infants wore a hat with three reflective markers attached, so that the movements of the head could be registered by motion-tracking. The results show that infants make use of vision for postural control, but that they also use position-sense information, for instance from the vestibular system, proprioception and tactile sensors. Moreover, infants were found to stabilize their head best when both streams of information (vision and position sense) were available. The head-movement data show no systematic predictive or reactive responses to the different sensory situations. However, EMG data from muscles on the infants’ backs were also collected (data analyses in progress), which may shed more light on the predictive or reactive nature of postural control in the early development of independent sitting.
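The “sinusoidal and hence predictable velocity profile” mentioned above can be written as a simple function of time; the sketch below uses illustrative amplitude and period values, not the settings of the study.

    import math

    AMPLITUDE_DEG_PER_S = 20.0  # peak angular velocity (illustrative value)
    PERIOD_S = 8.0              # one clockwise/counterclockwise cycle (illustrative value)

    def angular_velocity(t_s):
        """Angular velocity at time t: smooth and alternating, and therefore predictable."""
        return AMPLITUDE_DEG_PER_S * math.sin(2 * math.pi * t_s / PERIOD_S)

    # Sample the profile once per second over one full cycle
    print([round(angular_velocity(t), 1) for t in range(int(PERIOD_S) + 1)])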
The third study in this line was a follow-up of the second study. Potentially, infants in the second study had used an arbitrary spot on the inside of the cylinder as a visual landmark, which might have turned a task intended as a postural control task into a visual task, namely tracking a landmark in the scene. To investigate whether such a hypothesis could explain the results of the second study, a new group of 8- to 9-month-old infants was invited to the lab to sit in the bicycle seat. Here, only two situations were explored: 1) the seat rotated, or 2) the walls rotated. A small music mobile was mounted on the inside of the cylinder, in front of the infants. The results show, first of all, no differences from the prior experiment, indicating that infants displayed comparable behaviors in the presence and absence of a clear visual target. Moreover, the results show that the infants’ heading direction was on average 38 degrees off target. Consequently, it seems unlikely that infants use a visual target in this set-up to stabilize themselves. Rather, the results provide an indication that infants’ postural control, when relying on vision, depends on optic flow.
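The heading-direction measure can be thought of as the angle between the head’s facing vector (derived from the hat markers) and the vector from the head to the target. The sketch below uses hypothetical 2D coordinates chosen only to echo the roughly 38-degree average offset reported above; it is not the study’s processing pipeline.

    import math

    def heading_offset_deg(head_direction, head_to_target):
        """Angle in degrees between the head's facing vector and the head-to-target vector."""
        dot = sum(a * b for a, b in zip(head_direction, head_to_target))
        norm = math.hypot(*head_direction) * math.hypot(*head_to_target)
        return math.degrees(math.acos(dot / norm))

    # Hypothetical example: head faces straight ahead, target sits off to one side
    print(round(heading_offset_deg((0.0, 1.0), (0.616, 0.788)), 1))  # -> 38.0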
The project furthermore aimed to train the fellow in new (transferable) skills, preparing her for the next step in her career. Through the above-mentioned studies, the fellow has learnt to acquire EMG data, motion-tracking data and live eye-tracking data from two persons simultaneously, and to analyze these signals in combination. The fellow has gained team-building skills by organizing a group retreat, and management skills by taking part in the management team of the Uppsala Child and Babylab. Furthermore, she has trained her grant-writing skills by writing two grant proposals, one of which is pending and the other awarded, allowing her to work in the UK at the University of Oxford as a postdoctoral (visiting) researcher for two years. The fellow can be contacted via: janny.stapel@psy.ox.ac.uk.