STL Seminar – SILEXA – 16 November 2007
" Geste, parole, espace : l'expression des relations locatives statiques dans le discours verbal".
(NB malgré le résumé en anglais, la présentation sera faite en français)
In recent years, studies have investigated how speakers use speech and gesture when communicating information of a spatial nature. These studies have looked, for example, at how speakers communicate spatial instructions to a third party (Emmorey & Casey 2001) and at the ways in which speakers of different languages describe dynamic motion events (McNeill 2000, Kita & Özyürek 2003). However, the fundamental question of how speakers use speech and gesture to express static locative relationships (for example, "the car is in the street") has been largely neglected. This gap in the literature is particularly surprising, given that the need to express such information recurs constantly in everyday life. The present paper investigates this question by presenting data collected in
Drawing on filmed examples from these data, I argue that speakers divide the task of expressing salient locative information between the verbal and gestural channels of communication. This twin approach to the expression of locative information entails that the granularity (understood here as "specificity" or level of detail, following Narasimhan & Cablitz 2002) of a locative utterance should be understood as a function of both speech and gesture.

For example, the study highlights that speakers frequently employ the prepositional expression "next to" to encode the location of one entity (the "figure", following Talmy 2000) in relation to another entity (the "ground", Talmy 2000) on the lateral axis. However, "next to" does not specify whether the figure is actually to the right or to the left of the ground entity: speakers commonly express this information in gesture only. The use of gesture to provide such detail means that the granularity of the locative expression is not merely a function of prepositional locative semantics; rather, it is a function of both the prepositional unit and the gesturally presented information.
Emmorey, K. & Casey, S. (2001). "Gesture, thought and spatial language". Gesture, 1.1.
Kita, S. & Özyürek, A. (2003). "What does cross-linguistic variation in semantic coordination of speech and gesture reveal?: Evidence for an interface representation of spatial thinking and speaking". Journal of Memory and Language, 48, 16-32.
McNeill, D. (2000). "Analogic/Analytic representations and cross-linguistic differences in thinking for speaking". Cognitive Linguistics, 11.1/2, 43-60.
Narasimhan, B. & Cablitz, G. (2002). "Granularity in the crosslinguistic encoding of motion and location". 3rd Annual Workshop on Language and Space.
Talmy, L. (2000). Toward a Cognitive Semantics. Cambridge, MA: MIT Press.