94 Katz, W. F., & Mehta, S. (2015). Visual feedback of tongue movement for novel speech sound learning. Frontiers in Human Neuroscience, 9, 612.
95 Kawase, T., Sakamoto, S., Hori, Y., et al. (2009). Bimodal audio–visual training enhances auditory adaptation process. NeuroReport, 20, 1231–1234.
96 Kim, J., & Davis, C. (2004). Investigating the audio–visual speech detection advantage. Speech Communication, 44(1), 19–30.
97 Lachs, L., & Pisoni, D. B. (2004). Specification of cross‐modal source information in isolated kinematic displays of speech. Journal of the Acoustical Society of America, 116(1), 507–518.
98 Lander, K., & Davies, R. (2008). Does face familiarity influence speechreadability? Quarterly Journal of Experimental Psychology, 61, 961–967.
99 Lidestam, B., Moradi, S., Pettersson, R., & Ricklefs, T. (2014). Audiovisual training is better than auditory‐only training for auditory‐only speech‐in‐noise identification. Journal of the Acoustical Society of America, 136(2), EL142–EL147.
100 Ma, W. J., Zhou, X., Ross, L. A., et al. (2009). Lip‐reading aids word recognition most in moderate noise: A Bayesian explanation using high‐dimensional feature space. PLOS ONE, 4(3), e4638.
101 Magnotti, J. F., & Beauchamp, M. S. (2017). A causal inference model explains perception of the McGurk effect and other incongruent audiovisual speech. PLOS Computational Biology, 13(2), e1005229.
102 Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Lawrence Erlbaum.
103 Massaro, D. W., Cohen, M. M., Gesi, A., et al. (1993). Bimodal speech perception: An examination across languages. Journal of Phonetics, 21, 445–478.
104 Massaro, D. W., & Ferguson, E. L. (1993). Cognitive style and perception: The relationship between category width and speech perception, categorization, and discrimination. American Journal of Psychology, 106(1), 25–49.
105 Massaro, D. W., Thompson, L. A., Barron, B., & Laren, E. (1986). Developmental changes in visual and auditory contributions to speech perception. Journal of Experimental Child Psychology, 41, 93–113.
106 Matchin, W., Groulx, K., & Hickok, G. (2014). Audiovisual speech integration does not rely on the motor system: Evidence from articulatory suppression, the McGurk effect, and fMRI. Journal of Cognitive Neuroscience, 26(3), 606–620.
107 McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
108 Ménard, L., Cathiard, M. A., Troille, E., & Giroux, M. (2015). Effects of congenital visual deprivation on the auditory perception of anticipatory labial coarticulation. Folia Phoniatrica et Logopaedica, 67(2), 83–89.
109 Ménard, L., Dupont, S., Baum, S. R., & Aubin, J. (2009). Production and perception of French vowels by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 126(3), 1406–1414.
110 Ménard, L., Leclerc, A., & Tiede, M. (2014). Articulatory and acoustic correlates of contrastive focus in congenitally blind adults and sighted adults. Journal of Speech, Language, and Hearing Research, 57(3), 793–804.
111 Ménard, L., Toupin, C., Baum, S. R., et al. (2013). Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 134(4), 2975–2987.
112 Miller, B. T., & D’Esposito, M. (2005). Searching for “the top” in top‐down control. Neuron, 48(4), 535–538.
113 Miller, R., Sanchez, K., & Rosenblum, L. (2010). Alignment to visual speech information. Attention, Perception, & Psychophysics, 72(6), 1614–1625.
114 Mitterer, H., & Reinisch, E. (2017). Visual speech influences speech perception immediately but not automatically. Attention, Perception, & Psychophysics, 79(2), 660–678.
115 Moradi, S., Lidestam, B., Ng, E. H. N., et al. (2019). Perceptual doping: An audiovisual facilitation effect on auditory speech processing, from phonetic feature extraction to sentence identification in noise. Ear and Hearing, 40(2), 312–327.
116 Munhall, K. G., Ten Hove, M. W., Brammer, M., & Paré, M. (2009). Audiovisual integration of speech in a bistable illusion. Current Biology, 19(9), 735–739.
117 Munhall, K. G., & Vatikiotis‐Bateson, E. (2004). Spatial and temporal constraints on audiovisual speech perception. In G. A. Calvert, C. Spence, & B. E. Stein (Eds), Handbook of multisensory processes (pp. 177–188). Cambridge, MA: MIT Press.
118 Münte, T. F., Stadler, J., Tempelmann, C., & Szycik, G. R. (2012). Examining the McGurk illusion using high‐field 7 Tesla functional MRI. Frontiers in Human Neuroscience, 6, 95.
119 Musacchia, G., Sams, M., Nicol, T., & Kraus, N. (2006). Seeing speech affects acoustic information processing in the human brainstem. Experimental Brain Research, 168(1–2), 1–10.
120 Namasivayam, A. K., Wong, W. Y. S., Sharma, D., & van Lieshout, P. (2015). Visual speech gestures modulate efferent auditory system. Journal of Integrative Neuroscience, 14(1), 73–83.
121 Nath, A. R., & Beauchamp, M. S. (2012). A neural basis for interindividual differences in the McGurk effect: A multisensory speech illusion. NeuroImage, 59(1), 781–787.
122 Navarra, J., & Soto‐Faraco, S. (2007). Hearing lips in a second language: Visual articulatory information enables the perception of second language sounds. Psychological Research, 71, 4–12.
123 Nishitani, N., & Hari, R. (2002). Viewing lip forms: Cortical dynamics. Neuron, 36(6), 1211–1220.
124 Nygaard, L. C. (2005). The integration of linguistic and non‐linguistic properties of speech. In D. Pisoni & R. Remez (Eds), Handbook of speech perception (pp. 390–414). Oxford: Blackwell.
125 Olson, I. R., Gatenby, J., & Gore, J. C. (2002). A comparison of bound and unbound audio–visual information processing in the human cerebral cortex. Cognitive Brain Research, 14, 129–138.
126 Ostrand, R., Blumstein, S. E., Ferreira, V. S., & Morgan, J. L. (2016). What you see isn’t always what you get: Auditory word signals trump consciously perceived words in lexical access. Cognition, 151, 96–107.
127 Palmer, T. D., & Ramsey, A. K. (2012). The function of consciousness in multisensory integration. Cognition, 125(3), 353–364.
128 Papale, P., Chiesi, L., Rampinini, A. C., et al. (2016). When neuroscience “touches” architecture: From hapticity to a supramodal functioning of the human brain. Frontiers in Psychology, 7, 866.
129 Pardo, J. S. (2006). On phonetic convergence during conversational interaction. Journal of the Acoustical Society of America, 119(4), 2382–2393.
130 Pardo, J. S., Gibbons, R., Suppes, A., & Krauss, R. M. (2012). Phonetic convergence in college roommates. Journal of Phonetics, 40(1), 190–197.
131 Pardo, J. S., Jordan, K., Mallari, R., et al. (2013). Phonetic convergence in shadowed speech: The relation between acoustic and perceptual measures. Journal of Memory and Language, 69(3), 183–195.
132 Pardo, J. S., Urmanche, A., Wilman, S., & Wiener, J. (2017). Phonetic convergence across multiple measures and model talkers. Attention, Perception, & Psychophysics, 79(2), 637–659.
133 Pascual‐Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.
134 Paulesu, E., Perani, D., Blasi, V., et al. (2003). A functional‐anatomical model for lipreading. Journal of Neurophysiology, 90(3), 2005–2013.
135 Pekkola, J., Ojanen, V., Autti, T., et al. (2005). Primary auditory cortex activation by visual speech: An fMRI study at 3 T. NeuroReport, 16(2), 125–128.
136 Pilling, M., & Thomas, S. (2011). Audiovisual cues and perceptual learning of spectrally distorted speech. Language and Speech, 54(4), 487–497.