52 Eberhardt, S. P., Auer, E. T., & Bernstein, L. E. (2014). Multisensory training can promote or impede visual perceptual learning of speech stimuli: Visual‐tactile vs. visual‐auditory training. Frontiers in Human Neuroscience, 8, 1–23.
53 Eskelund, K., MacDonald, E. N., & Andersen, T. S. (2015). Face configuration affects speech perception: Evidence from a McGurk mismatch negativity study. Neuropsychologia, 66, 48–54.
54 Eskelund, K., Tuomainen, J., & Andersen, T. S. (2011). Multistage audiovisual integration of speech: Dissociating identification and detection. Experimental Brain Research, 208(3), 447–457.
55 Fingelkurts, A. A., Fingelkurts, A. A., Krause, C. M., et al. (2003). Cortical operational synchrony during audio–visual speech integration. Brain and Language, 85(2), 297–312.
56 Fowler, C. A. (1986). An event approach to the study of speech perception from a direct‐realist perspective. Journal of Phonetics, 14, 3–28.
57 Fowler, C. A. (2004). Speech as a supramodal or amodal phenomenon. In G. Calvert, C. Spence, & B. E. Stein (Eds), Handbook of multisensory processes (pp. 189–201). Cambridge, MA: MIT Press.
58 Fowler, C. A. (2010). Embodied, embedded language use. Ecological Psychology, 22(4), 286–303.
59 Fowler, C. A., Brown, J. M., & Mann, V. A. (2000). Contrast effects do not underlie effects of preceding liquids on stop‐consonant identification by humans. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 877–888.
60 Fowler, C. A., & Dekle, D. J. (1991). Listening with eye and hand: Cross‐modal contributions to speech perception. Journal of Experimental Psychology: Human Perception and Performance, 17(3), 816–828.
61 Fuster‐Duran, A. (1996). Perception of conflicting audio‐visual speech: An examination across Spanish and German. In D. G. Stork & M. E. Hennecke (Eds), Speechreading by humans and machines (pp. 135–143). Berlin: Springer.
62 Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance, 6(1), 110–125.
63 Gentilucci, M., & Cattaneo, L. (2005). Automatic audiovisual integration in speech perception. Experimental Brain Research, 167(1), 66–75.
64 Ghazanfar, A. A., Maier, J. X., Hoffman, K. L., & Logothetis, N. K. (2005). Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience, 25(20), 5004–5012.
65 Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
66 Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
67 Gick, B., & Derrick, D. (2009). Aero‐tactile integration in speech perception. Nature, 462(7272), 502–504.
68 Gick, B., Jóhannsdóttir, K. M., Gibraiel, D., & Mühlbauer, J. (2008). Tactile enhancement of auditory and visual speech perception in untrained perceivers. Journal of the Acoustical Society of America, 123(4), EL72–EL76.
69 Gordon, P. C. (1997). Coherence masking protection in speech sounds: The role of formant synchrony. Perception & Psychophysics, 59, 232–242.
70 Grant, K. W. (2001). The effect of speechreading on masked detection thresholds for filtered speech. Journal of the Acoustical Society of America, 109(5), 2272–2275.
71 Grant, K. W., & Seitz, P. F. (1998). Measures of auditory‐visual integration in nonsense syllables and sentences. Journal of the Acoustical Society of America, 104, 2438–2450.
72 Grant, K. W., & Seitz, P. F. (2000). The use of visible speech cues for improving auditory detection of spoken sentences. Journal of the Acoustical Society of America, 108(3), 1197–1208.
73 Green, K. P., & Gerdeman, A. (1995). Cross‐modal discrepancies in coarticulation and the integration of speech information: The McGurk effect with mismatched vowels. Journal of Experimental Psychology: Human Perception and Performance, 21, 1409–1426.
74 Green, K. P., & Kuhl, P. K. (1989). The role of visual information in the processing of place and manner features in speech perception. Perception & Psychophysics, 45(1), 34–42.
75 Green, K. P., & Kuhl, P. K. (1991). Integral processing of visual place and auditory voicing information during phonetic perception. Journal of Experimental Psychology: Human Perception and Performance, 17, 278–288.
76 Green, K. P., Kuhl, P. K., Meltzoff, A. N., & Stevens, E. B. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics, 50(6), 524–536.
77 Green, K. P., & Miller, J. L. (1985). On the role of visual rate information in phonetic perception. Perception & Psychophysics, 38(3), 269–276.
78 Green, K. P., & Norrix, L. W. (2001). Perception of /r/ and /l/ in a stop cluster: Evidence of cross‐modal context effects. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 166–177.
79 Hall, D. A., Fussell, C., & Summerfield, A. Q. (2005). Reading fluent speech from talking faces: Typical brain networks and individual differences. Journal of Cognitive Neuroscience, 17(6), 939–953.
80 Han, Y., Goudbeek, M., Mos, M., & Swerts, M. (2018). Effects of modality and speaking style on Mandarin tone identification by non‐native listeners. Phonetica, 76(4), 263–286.
81 Hardison, D. M. (2005). Variability in bimodal spoken language processing by native and nonnative speakers of English: A closer look at effects of speech style. Speech Communication, 46, 73–93.
82 Hazan, V., Sennema, A., Iba, M., & Faulkner, A. (2005). Effect of audiovisual perceptual training on the perception and production of consonants by Japanese learners of English. Speech Communication, 47(3), 360–378.
83 Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. (2009). Time course of early audiovisual interactions during speech and nonspeech central auditory processing: A magnetoencephalography study. Journal of Cognitive Neuroscience, 21(2), 259–274.
84 Hessler, D., Jonkers, R., Stowe, L., & Bastiaanse, R. (2013). The whole is more than the sum of its parts: Audiovisual processing of phonemes investigated with ERPs. Brain and Language, 124, 213–224.
85 Hickok, G. (2009). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience, 21(7), 1229–1243.
86 Irwin, J., & DiBlasi, L. (2017). Audiovisual speech perception: A new approach and implications for clinical populations. Language and Linguistics Compass, 11(3), 77–91.
87 Irwin, J. R., Frost, S. J., Mencl, W. E., et al. (2011). Functional activation for imitation of seen and heard speech. Journal of Neurolinguistics, 24(6), 611–618.
88 Ito, T., Tiede, M., & Ostry, D. J. (2009). Somatosensory function in speech perception. Proceedings of the National Academy of Sciences of the United States of America, 106(4), 1245–1248.
89 Jerger, S., Damian, M. F., Tye‐Murray, N., & Abdi, H. (2014). Children use visual speech to compensate for non‐intact auditory speech. Journal of Experimental Child Psychology, 126, 295–312.
90 Jerger, S., Damian, M. F., Tye‐Murray, N., & Abdi, H. (2017). Children perceive speech onsets by ear and eye. Journal of Child Language, 44(1), 185–215.
91 Jesse, A., & Bartoli, M. (2018). Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures. Cognition, 176, 195–208.
92 Jiang, J., Alwan, A., Keating, P., et al. (2002). On the relationship between facial movements, tongue movements, and speech acoustics. EURASIP Journal on Applied Signal Processing, 11, 1174–1178.
93 Jiang, J., Auer, E. T., Alwan, A., et al. (2007). Similarity structure in visual speech perception and optical phonetic signals. Perception & Psychophysics, 69(7), 1070–1083.