Experiment project proposals

Christophe Pallier

1  Reaction times and levels of representation

Article: Posner MI, Mitchell RF. Chronometric analysis of classification. Psychol Rev. 1967 Sep;74(5):392-409.

The stimuli are pairs of letters (for example 'Aa', 'AB', 'CC', ...). Participants must say whether the two letters are the same or different, ignoring the upper-case/lower-case distinction. The main result is that decision times are longer for pairs of the 'Aa' type than for pairs of the 'AA' type (same case).

Goal: adapt this experiment to the auditory modality by replacing the letters with syllables recorded by two different voices. Measure the time needed to decide that two syllables are identical, comparing the situation where they are spoken by the same voice with the one where they are spoken by different voices. Then compare this with the time needed to match a written syllable against a spoken syllable.

Preparation: easy / Programming: fairly easy / Data analysis: easy
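A minimal sketch of one trial of the auditory same/different task, assuming PsychoPy is used for stimulus delivery; the sound-file names and response keys below are placeholders:

    from psychopy import core, event, sound, visual

    win = visual.Window(size=(800, 600), units='pix', color='grey')
    fixation = visual.TextStim(win, text='+')
    clock = core.Clock()

    def run_trial(file1, file2):
        """Play two syllables; collect a same/different keypress ('s'/'d') and its RT."""
        fixation.draw()
        win.flip()
        core.wait(0.5)
        s1, s2 = sound.Sound(file1), sound.Sound(file2)
        s1.play()
        core.wait(s1.getDuration() + 0.5)    # inter-stimulus interval
        clock.reset()                        # RT measured from onset of the 2nd syllable
        s2.play()
        key, rt = event.waitKeys(keyList=['s', 'd'], timeStamped=clock)[0]
        return key, rt

    # Same syllable spoken by two different voices (placeholder file names)
    print(run_trial('ba_voice1.wav', 'ba_voice2.wav'))

For the written-vs-spoken condition, the first sound would simply be replaced by a TextStim displaying the syllable.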

2  Variations on the McGurk effect

H. McGurk & J. MacDonald. Hearing lips and seeing voices, Nature 264, 746-748 (1976). [pdf]

See the demo McGurk_large.mov (eyes open, then eyes closed).

Goal: reproduce the effect and study one of the following factors (a minimal presentation sketch is given at the end of this section):

Note: these factors have in fact already been examined:

Walker S, Bruce V, O'Malley C. Facial identity and facial speech processing: familiar faces and voices in the McGurk effect. Percept Psychophys. 1995 Nov;57(8):1124-33.

An experiment was conducted to investigate the claims made by Bruce and Young (1986) for the independence of facial identity and facial speech processing. A well-reported phenomenon in audio-visual speech perception, the McGurk effect (McGurk & MacDonald, 1976), in which synchronous but conflicting auditory and visual phonetic information is presented to subjects, was utilized as a dynamic facial speech processing task. An element of facial identity processing was introduced into this task by manipulating the faces used for the creation of the McGurk-effect stimuli such that (1) they were familiar to some subjects and unfamiliar to others, and (2) the faces and voices used were either congruent (from the same person) or incongruent (from different people). A comparison was made between the different subject groups in their susceptibility to the McGurk illusion, and the results show that when the faces and voices are incongruent, subjects who are familiar with the faces are less susceptible to McGurk effects than those who are unfamiliar with the faces. The results suggest that facial identity and facial speech processing are not entirely independent, and these findings are discussed in relation to Bruce and Young's (1986) functional model of face recognition.

MacDonald J, Andersen S, Bachmann T. Hearing by eye: how much spatial degradation can be tolerated? Perception. 2000;29(10):1155-68.

In the McGurk effect (McGurk and MacDonald, 1976 Nature 264 746-748), illusory auditory perception is produced if the visual information from lip movements is discrepant from the auditory information from the voice. A study is reported of the tolerance of the effect to varying levels of spatial degradation (videotaped images of a speaker's face were quantised by a mosaic transform). The illusory effect systematically decreased with an increase in the coarseness of the spatial quantisation. However, even with the coarsest level (11.2 pixels/face) the illusion did not completely disappear. In addition, those participants who did not experience the illusion nevertheless showed the effects of auditory-visual interaction in their clarity ratings of the auditory stimulus. It is concluded that auditory-visual interaction in visible speech perception is based on relatively coarse-spatial-scale information.

Alsius A, Navarra J, Campbell R, Soto-Faraco S. Audiovisual integration of speech falters under high attention demands. Curr Biol. 2005 May 10;15(9):839-43. [pdf]

One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

Navarra J, Vatakis A, Zampini M, Soto-Faraco S, Humphreys W, Spence C. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Brain Res Cogn Brain Res. 2005 Oct;25(2):499-507. Epub 2005 Aug 31. [pdf]

We examined whether monitoring asynchronous audiovisual speech induces a general temporal recalibration of auditory and visual sensory processing. Participants monitored a videotape featuring a speaker pronouncing a list of words (Experiments 1 and 3) or a hand playing a musical pattern on a piano (Experiment 2). The auditory and visual channels were either presented in synchrony, or else asynchronously (with the visual signal leading the auditory signal by 300 ms; Experiments 1 and 2). While performing the monitoring task, participants were asked to judge the temporal order of pairs of auditory (white noise bursts) and visual stimuli (flashes) that were presented at varying stimulus onset asynchronies (SOAs) during the session. The results showed that, while monitoring desynchronized speech or music, participants required a longer interval between the auditory and visual stimuli in order to perceive their temporal order correctly, suggesting a widening of the temporal window for audiovisual integration. The fact that no such recalibration occurred when we used a longer asynchrony (1000 ms) that exceeded the temporal window for audiovisual integration (Experiment 3) supports this conclusion.
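A minimal presentation sketch, assuming PsychoPy's MovieStim and a pre-dubbed clip; the file name, syllables, and response keys are hypothetical. Creating the dubbed video itself (e.g., audio /ba/ over a face articulating /ga/) is done beforehand with video-editing software:

    from psychopy import core, event, visual
    from psychopy.constants import FINISHED

    win = visual.Window(size=(800, 600), units='pix', color='black')
    # Hypothetical clip: audio /ba/ dubbed onto a face articulating /ga/
    mov = visual.MovieStim(win, 'audio_ba_visual_ga.avi', size=(640, 480))

    mov.play()
    while mov.status != FINISHED:    # draw each video frame until the clip ends
        mov.draw()
        win.flip()

    prompt = visual.TextStim(win, text='What did you hear?  b = ba, d = da, g = ga')
    prompt.draw()
    win.flip()
    answer = event.waitKeys(keyList=['b', 'd', 'g'])[0]
    print('Reported syllable:', answer)
    core.quit()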

3  Integration of audiovisual information

Fujisaki W, Shimojo S, Kashino M, Nishida S. Recalibration of audiovisual simultaneity. Nat Neurosci. 2004;7:773-8. [pdf]

To perceive the auditory and visual aspects of a physical event as occurring simultaneously, the brain must adjust for differences between the two modalities in both physical transmission time and sensory processing time. One possible strategy to overcome this difficulty is to adaptively recalibrate the simultaneity point from daily experience of audiovisual events. Here we report that after exposure to a fixed audiovisual time lag for several minutes, human participants showed shifts in their subjective simultaneity responses toward that particular lag. This 'lag adaptation' also altered the temporal tuning of an auditory-induced visual illusion, suggesting that adaptation occurred via changes in sensory processing, rather than as a result of a cognitive shift while making task responses. Our findings suggest that the brain attempts to adjust subjective simultaneity across different modalities by detecting and reducing time lags between inputs that likely arise from the same physical events.

Goal: reproduce the effect.

Preparation: easy. Programming: fairly easy (verifying the timing may be a difficulty). Analysis: easy.
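A minimal sketch of one simultaneity-judgment trial, assuming PsychoPy; the tone parameters, flash size, and response keys are placeholders. With this naive scheme the nominal SOA ignores the screen refresh and the sound-card latency, so the actual audio-visual asynchrony should be verified externally (e.g., photodiode and microphone on an oscilloscope), which is the timing check mentioned above:

    from psychopy import core, event, sound, visual

    win = visual.Window(size=(800, 600), units='pix', color='black')
    flash = visual.Rect(win, width=60, height=60, fillColor='white')
    beep = sound.Sound(value=800, secs=0.01)    # 800 Hz, 10 ms tone

    def sj_trial(soa):
        """Tone and flash separated by `soa` seconds (negative = tone first).
        Returns 'y' (perceived as simultaneous) or 'n'."""
        if soa < 0:
            beep.play()
            core.wait(-soa)
        flash.draw()
        win.flip()                 # flash onset
        win.flip()                 # flash lasts one refresh (~17 ms at 60 Hz)
        if soa > 0:
            core.wait(soa)         # approximate: ignores the refresh above
            beep.play()
        return event.waitKeys(keyList=['y', 'n'])[0]

    # An adaptation phase would repeatedly present a fixed lag (a few hundred
    # milliseconds) before blocks of test trials at varying SOAs.
    print(sj_trial(0.1))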

4  Competition between words

Spivey MJ, Grosjean M, Knoblich G. Continuous attraction toward phonological competitors. Proc Natl Acad Sci U S A. 2005 Jul 19;102(29):10393-8. [pdf]

The authors study the mouse-movement trajectories produced when participants have to click on the picture whose name they hear. On each trial there are two pictures. On some trials the two words are similar (for example: plante/plume); on others they are completely different. The trajectories differ between the two cases.

Goal: reproduce Spivey et al.'s result and add a condition in which the words are similar at their endings.

Preparation: medium (selecting pictures and recording words)

Programming: fairly complex (recording the mouse movements)

Analysis: fairly complex (normalizing the trajectories).
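A minimal sketch, assuming PsychoPy and NumPy, of the two tricky parts flagged above: sampling the mouse position on every frame until the target is clicked, and resampling each trajectory onto a fixed number of time steps so trajectories can be averaged (Spivey et al. normalized to 101 steps). Picture loading, audio playback, and the start button are omitted; positions and sizes are placeholders.

    import numpy as np
    from psychopy import core, event, visual

    win = visual.Window(size=(1024, 768), units='pix', color='grey')
    mouse = event.Mouse(win=win)

    def record_trajectory(target_pos=(-350, 250), distractor_pos=(350, 250)):
        """Sample (time, x, y) once per frame until the target rectangle is clicked."""
        target = visual.Rect(win, width=150, height=150, pos=target_pos, lineColor='white')
        distractor = visual.Rect(win, width=150, height=150, pos=distractor_pos, lineColor='white')
        clock = core.Clock()
        samples = []
        while not mouse.isPressedIn(target):
            target.draw()
            distractor.draw()
            win.flip()
            x, y = mouse.getPos()
            samples.append((clock.getTime(), x, y))
        return np.array(samples)

    def normalize(samples, n_points=101):
        """Resample a (t, x, y) trajectory onto n_points equally spaced time steps."""
        t, x, y = samples[:, 0], samples[:, 1], samples[:, 2]
        t_new = np.linspace(t[0], t[-1], n_points)
        return np.column_stack([np.interp(t_new, t, x), np.interp(t_new, t, y)])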

Abstract: Certain models of spoken-language processing, like those for many other perceptual and cognitive processes, posit continuous uptake of sensory input and dynamic competition between simultaneously active representations. Here, we provide compelling evidence for this continuity assumption by using a continuous response, hand movements, to track the temporal dynamics of lexical activations during real-time spoken-word recognition in a visual context. By recording the streaming x, y coordinates of continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phonetic input and competition between partially active lexical representations are revealed in the shape of the movement trajectories. This hand-movement paradigm allows one to project the internal processing of spoken-word recognition onto a two-dimensional layout of continuous motor output, providing a concrete visualization of the attractor dynamics involved in language processing.



