Language in our hand: The role of modality in shaping spatial language development in deaf and hearing children

Final Report Summary - LANGUAGE IN OUR HAND (Language in our hand: The role of modality in shaping spatial language development in deaf and hearing children)

In everyday life, we constantly experience (i.e. perceive and apprehend) and communicate about events that are visual and spatial in nature. Entities can be located or moving in relation to each other (e.g. a fork to the left of a plate, a man running to his car). Our language ability allows us to express these relations. However, the ways languages encode spatial relations differ from one another, and differ radically between signed and spoken languages. Given this diversity, a major scientific question in research on language acquisition has been to what extent it follows a universal trajectory, based on an innate design for language and a universal conceptual development, and to what extent it is shaped by specific properties (modality, language type) of the language being learned. To answer this question, this ERC project compared the acquisition of a spoken language with that of a language that uses a visuo-spatial format employing the hands and body, namely a signed language. Signed languages provide a unique window onto this fundamental question.

In this project we focused on Turkish Sign Language (TSL), a relatively understudied sign language, as acquired by deaf children learning it from their deaf parents, a population from which no such data had been collected before. We compared the development of TSL to that of age-matched Turkish-speaking children and adults, to see whether learning a visual-spatial language such as TSL hinders, speeds up, or does not affect development at all compared to a spoken language like Turkish, which is typologically different from the mostly studied Western languages. In addition, we compared sign language development not only to the spoken language of hearing Turkish children and adults but also to their gestures, that is, to the multimodal, co-expressive utterances that hearing speakers produce.

To answer these novel questions we focused on the development of spatial language in expressions of both static and dynamic relations (e.g. the box is on the table; the girl pushes the toy under the chair; the dog jumps on the bed). Spatial language is especially suitable for the question we are interested in because in signed languages the mapping of spatial relations onto linguistic structures is more analogue (i.e. iconic) to what is depicted than in spoken languages, and the resulting structures are radically different.

We collected data from three age groups (4-6 years; 7-9 years; and adults, both parents and non-parents) in both language groups, with 10 subjects in each cell. Previous research has claimed that spatial language poses difficulties for signing children due to its morphological complexity. However, one of our findings is that in encoding locative relations (i.e. in, on, under), deaf children seem to follow a pattern similar to that of hearing children. We even found that in encoding viewpoint relations (i.e. left-right and front-back), deaf children are ahead of speaking children, taking advantage of the affordances of the modality. For motion event expressions (e.g. the girl walked to the car), we found that signing children were adult-like in their use of simultaneous constructions at the earliest ages we tested, and showed no sign of difficulty in the acquisition process. With regard to expressing Manner of motion (e.g. walk), signing children were again found to be at an advantage, since Turkish-speaking children and adults, due to the typology of the language, do not express Manner frequently either in their speech or in their gestures.

Thus spatial language does not seem to pose difficulties for signing children, contrary to the claims of previous research. The developmental patterns are therefore partly universal and partly guided by the iconic mapping of form to concept (e.g. in viewpoint-coding terms, where the LEFT sign is a tap on the left arm), as well as by the typology of the language. The results have implications not only for the field of sign language research but also for the study of the development of spatial language and cognition, and for cross-linguistic typology. They show that both universal principles and language-specific and iconic principles (the latter specific to the visual modality) guide the learning of spatial language.