
Implementation and conduction of an experiment on bidirectional audiovisual interaction

Project description

Introduction

The research area of multisensory integration studies the combined processing of signals supplied by our different sensory organs, which allows us to form a coherent picture of our internal and external environments. While each sense has been studied individually in great detail, the existence of crossmodal biases, where perception in one sense is influenced by information from a different sense, suggests that interaction between modalities is an important aspect of the formation of conscious perception. Two prominent examples of crossmodal bias are the sound-induced flash illusion (Shams, Kamitani, & Shimojo, 2000), where the number of perceived light flashes is influenced by the number of concurrently presented auditory beeps, and the ventriloquist illusion (Choe, Welch, Gilford, & Juola, 1975), where the localization of sounds is influenced by visual information.

These phenomena illustrate that the direction of influence depends on different dimensions of the sensory input: temporal accuracy is higher for the auditory modality, leading to dominance of auditory over visual information in the sound-induced flash illusion, whereas spatial accuracy is higher for the visual modality, leading to the reverse influence in the ventriloquist illusion. Another factor that bears on the direction of crossmodal interaction is the reliability of the respective unisensory signals: when the variance of one signal is high, its weight in the integration is reduced accordingly, leading to dominance of the other signal (Alais & Burr, 2004b).
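
To make this reliability weighting concrete, the short Python sketch below implements the standard maximum-likelihood cue-combination rule, in which each unisensory estimate is weighted by its inverse variance. The numerical values are hypothetical and serve only to illustrate how degrading one signal shifts the combined estimate toward the other; they are not taken from any of the cited studies.

def integrate(est_v, sigma_v, est_a, sigma_a):
    # Weight of the visual estimate: its reliability (1 / variance)
    # relative to the summed reliability of both signals.
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    w_a = 1 - w_v
    combined = w_v * est_v + w_a * est_a
    combined_sigma = (1 / (1 / sigma_v**2 + 1 / sigma_a**2)) ** 0.5
    return combined, combined_sigma

# A precise visual estimate (sigma_v = 1) dominates a noisy auditory one
# (sigma_a = 5), as in classic ventriloquism ...
print(integrate(est_v=2.0, sigma_v=1.0, est_a=10.0, sigma_a=5.0))
# ... whereas a blurred visual signal (sigma_v = 8) reverses the bias.
print(integrate(est_v=2.0, sigma_v=8.0, est_a=10.0, sigma_a=5.0))

Under this rule the combined estimate is also less variable than either input alone, which is the usual signature of statistically optimal integration.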

Another example of crossmodal influence in perception is motion judgment: objects that cross each other's paths can be perceived ambiguously, as either continuing in their respective directions or as colliding and bouncing off each other. The perception of bouncing is promoted by the introduction of a sound at the moment of coincidence (Sekuler, Sekuler, & Lau, 1997). Furthermore, motion perception becomes more accurate when visual motion is accompanied by congruent auditory signals (Gleiss & Kayser, 2014; Wuerger, Hofbauer, & Meyer, 2003).

The aim of this project is to establish an experimental paradigm in which movement judgments of audiovisual stimuli are biased bidirectionally, depending on a parametric variation of the unisensory reliabilities. This paradigm would constitute a dynamic extension of classic ventriloquism, in which participants judge not the location but the movement direction of the stimuli. Similar experiments have been performed before, albeit with a focus on modality-independent facilitation of correct responses (Alais & Burr, 2004a; Wuerger et al., 2003) or on unidirectional facilitation (Meyer & Wuerger, 2001), and without public access to the underlying datasets. The paradigm developed in this project could be further used in electroencephalography (or other neuroimaging) studies to investigate the mechanisms of long-range information transfer in the human brain, which is the more general aim of my research.

Methods, with application of open science principles

The experimental paradigm will be implemented using the open-source Psychophysics Toolbox (Brainard, 1997) running on GNU Octave (Eaton, Bateman, Hauberg, & Wehbring, 2015), an open-source software package for scientific computing. The experimental code will be version controlled and shared on GitHub or a similar service. Visual stimuli will consist of random dot kinematograms with varying motion coherence; auditory stimuli will consist of white noise convolved with generic head-related transfer functions to change their apparent source location, overlaid with varying levels of static white noise. Participants will be asked to indicate the perceived auditory and visual movement with a button press. The main working hypothesis is that the amount of crossmodal bias on motion judgments depends on the relative reliability of the two unisensory input signals. The result of the analysis will be a quantification of crossmodal bias and the identification of stimulus combinations that produce bidirectional influence.

The experimental results will be shared on Zenodo or a similar service in a non-proprietary format. The data analysis will be conducted using open-source software (Octave, Python, or R as appropriate), and the analysis code will likewise be version controlled in a public repository. The manuscript will be submitted to an open-access journal, and preprints will be made available on PsyArXiv. During the course of the project, documentation and notes on progress and emerging problems will be published, following the principles of open notebook science.
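
As an illustration of the stimulus reliability manipulations described above, the sketch below shows in Python (the experiment itself will be implemented in Octave with the Psychophysics Toolbox) how visual reliability can be controlled via the motion coherence of the random dot kinematogram, and auditory reliability via the level of static noise overlaid on the spatialized signal. All function names and parameter values are hypothetical placeholders, not part of the final implementation.

import numpy as np

rng = np.random.default_rng(seed=0)

def rdk_directions(n_dots, coherence, signal_direction):
    # A proportion `coherence` of the dots moves in the signal direction
    # (radians); the remaining dots receive random directions, so visual
    # reliability drops as coherence decreases.
    n_signal = int(round(coherence * n_dots))
    directions = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)
    directions[:n_signal] = signal_direction
    rng.shuffle(directions)
    return directions

def degrade_audio(signal, snr_db):
    # Overlay static white noise at a given signal-to-noise ratio, lowering
    # the reliability of the (HRTF-convolved) auditory motion signal.
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example condition: weak visual motion (20 % coherence, rightward) paired
# with a relatively clean auditory stream (1 s of white noise at 44.1 kHz,
# standing in for the HRTF-convolved signal).
visual_dirs = rdk_directions(n_dots=200, coherence=0.2, signal_direction=0.0)
carrier = rng.normal(0.0, 1.0, size=44100)
noisy_audio = degrade_audio(carrier, snr_db=3.0)

Crossing such coherence levels with different signal-to-noise ratios would yield the stimulus combinations whose crossmodal bias is to be quantified in the analysis.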

Although the project does not directly deal with knowledge access, there are several benefits to opening this research: sharing the experimental design will allow other researchers to adapt it to investigate related questions, and sharing the data will enable other researchers to apply advanced analysis techniques such as Bayesian modeling (e.g. Körding et al., 2007) that are beyond the immediate scope of this project. Furthermore, the publication of data and methods will facilitate their use as an educational resource.

References

Alais, D., & Burr, D. (2004a). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19(2), 185–194. https://doi.org/10.1016/j.cogbrainres.2003.11.011

Alais, D., & Burr, D. (2004b). The Ventriloquist Effect Results from Near-Optimal Bimodal Integration. Current Biology, 14(3), 257–262. https://doi.org/10.1016/j.cub.2004.01.029

Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10(4), 433–436.

Choe, C. S., Welch, R. B., Gilford, R. M., & Juola, J. F. (1975). The “ventriloquist effect”: Visual dominance or response bias? Perception & Psychophysics, 18(1), 55–60.

Eaton, J. W., Bateman, D., Hauberg, S., & Wehbring, R. (2015). GNU Octave version 4.0.0 manual: a high-level interactive language for numerical computations. Retrieved from http://www.gnu.org/software/octave/doc/interpreter

Gleiss, S., & Kayser, C. (2014). Oscillatory mechanisms underlying the enhancement of visual motion perception by multisensory congruency. Neuropsychologia, 53, 84–93. https://doi.org/10.1016/j.neuropsychologia.2013.11.005

Körding, K. P., Beierholm, U., Ma, W. J., Quartz, S., Tenenbaum, J. B., & Shams, L. (2007). Causal Inference in Multisensory Perception. PLoS ONE, 2(9), e943. https://doi.org/10.1371/journal.pone.0000943

Meyer, G. F., & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557–2560.

Sekuler, R., Sekuler, A. B., & Lau, R. (1997). Sound alters visual motion perception. Nature, 385(6614), 308. https://doi.org/10.1038/385308a0

Shams, L., Kamitani, Y., & Shimojo, S. (2000). What you see is what you hear. Nature, 408(6814), 788. https://doi.org/10.1038/35048669

Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and visual motion signals at threshold. Perception & Psychophysics, 65(8), 1188–1196. https://doi.org/10.3758/BF03194844