The McGurk effect shows how hearing and vision work together in speech perception. Named after Harry McGurk (23 February 1936 – 17 April 1998), who discovered it, the effect shows that people hear speech with their ears but also use other senses to help interpret what they hear. The McGurk effect happens when a person watches a video of someone saying /ga/ while the sound recording says /ba/. A third sound is then heard: /da/.
The McGurk effect is robust: that is, it still works even if a person knows about it. This is different from some optical illusions, which stop working once a person knows how they are done.
Overview
The McGurk effect shows that speech perception does not depend only on auditory information. Visual information, in the form of lip reading, is also taken into account and combined with what is heard to produce the final perceived sound. This becomes particularly interesting when the sound of one syllable, paired with the lip movements of another, combines to produce the perception of a third, different syllable.
Explanation
When humans perceive speech, they take in not only auditory information but also visual information, in the form of lip movements, facial expressions, and other bodily cues. Usually these two sources of information agree, so the brain simply combines them into one unified percept. When the McGurk effect is tested, the information coming from the ears and the eyes differs, and the brain tries to make sense of the contradictory stimuli, which results in a fusion of both. In humans, information received from the eyes tends to dominate other sensory modalities, including hearing, so when /ba/ is heard and /ga/ is seen, the sound perceived is /da/. This fused percept is what happens when the brain tries to reconcile the two different sets of information.
Similar work by others
Around the same time that Harry McGurk and John MacDonald discovered what is now known as the McGurk effect, another British researcher, Barbara Dodd, stumbled upon the same phenomenon. She found a similar audio-visual effect: the visual cue 'hole' paired with the audio cue 'tough' produced the perception of 'towel'. These discoveries in audio-visual speech perception changed the way scientists and researchers view how different senses interact in the brain.
Infants
Infants also show signs of the McGurk effect. Because infants cannot yet talk, researchers cannot ask them what they hear, but by measuring variables such as their attention to audiovisual stimuli, effects similar to the McGurk effect can be seen. Very soon after birth, sometimes within minutes, infants can imitate adult facial movements, an important first step in audiovisual speech perception. A few weeks after birth comes the ability to recognize lip movements and speech sounds. Evidence of the McGurk effect does not appear until about 4 months of age, and it becomes much stronger at around 5 months. To test the effect, infants are first habituated to a stimulus; when the stimulus is changed, the infant shows an effect similar to the McGurk effect. As infants grow older and continue to develop, the McGurk effect becomes more prominent, because visual cues begin to override purely auditory information in audiovisual speech perception.
Effect in other languages and cultures
Although the McGurk effect has mainly been studied in English because it was discovered in English-speaking countries, research has now spread to countries with other languages. In particular, comparisons between English and Japanese have been prominent. Research has shown that the McGurk effect is much stronger in English listeners than in Japanese listeners. One strong hypothesis points to differences in how each culture behaves and interacts: Japanese culture is notable for politeness and for avoiding direct eye or face contact during conversation.
The phenomenon has also been studied in French Canadian children and adults, with similar findings. Compared with adults, though, children are less susceptible to the McGurk effect, because their speech perception is dominated by auditory information. This is shown by children scoring lower than adults on lip-reading tasks. Nevertheless, the McGurk effect was present in children in certain contexts, but the results were much more variable than in adults.
Broader impact on society
Although the McGurk effect may seem important only to psychological researchers and scientists, the phenomenon also applies to everyday audio-visual speech perception. A 2005 study by Wareham and Wright suggests that the McGurk effect can influence how everyday speech is perceived. This is especially important for witness testimony, where a witness's observations and accounts are usually expected to be accurate. Witness testimony must therefore be interpreted with the understanding that the witness may be unaware of their own inaccurate perception.