137 Plass, J., Guzman‐Martinez, E., Ortega, L., et al. (2014). Lip reading without awareness. Psychological Science, 25(9), 1835–1837.
138 Reich, L., Maidenbaum, S., & Amedi, A. (2012). The brain as a flexible task machine: Implications for visual rehabilitation using noninvasive vs. invasive approaches. Current Opinion in Neurology, 25(1), 86–95.
139 Reisberg, D., McLean, J., & Goldfield, A. (1987). Easy to hear but hard to understand: A lip‐reading advantage with intact auditory stimuli. In B. Dodd & R. Campbell (Eds), Hearing by eye: The psychology of lip‐reading (pp. 97–113). Hillsdale, NJ: Lawrence Erlbaum.
140 Remez, R. E., Beltrone, L. H., & Willimetz, A. A. (2017). Effects of intrinsic temporal distortion on the multimodal perceptual organization of speech. Paper presented at the 58th Annual Meeting of the Psychonomic Society, Vancouver, British Columbia, November.
141 Remez, R. E., Fellowes, J. M., & Rubin, P. E. (1997). Talker identification based on phonetic information. Journal of Experimental Psychology: Human Perception and Performance, 23(3), 651–666.
142 Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77.
143 Riedel, P., Ragert, P., Schelinski, S., et al. (2015). Visual face‐movement sensitive cortex is relevant for auditory‐only speech recognition. Cortex, 68, 86–99.
144 Rosen, S. M., Fourcin, A. J., & Moore, B. C. (1981). Voice pitch as an aid to lipreading. Nature, 291(5811), 150–152.
145 Rosenblum, L. D. (2005). Primacy of multimodal speech perception. In D. Pisoni & R. Remez (Eds), Handbook of speech perception (pp. 51–78). Oxford: Blackwell.
146 Rosenblum, L. D. (2008). Speech perception as a multimodal phenomenon. Current Directions in Psychological Science, 17(6), 405–409.
147 Rosenblum, L. D. (2013). A confederacy of senses. Scientific American, 308, 72–75.
148 Rosenblum, L. D. (2019). Audiovisual speech perception and the McGurk effect. In Oxford research encyclopedia of linguistics. https://oxfordre.com/linguistics/view/10.1093/acrefore/9780199384655.001.0001/acrefore‐9780199384655‐e‐420?rskey=L7JvON&result=1
149 Rosenblum, L. D., Dias, J. W., & Dorsi, J. (2017). The supramodal brain: Implications for auditory perception. Journal of Cognitive Psychology, 29(1), 65–87.
150 Rosenblum, L. D., Dorsi, J., & Dias, J. W. (2016). The impact and status of Carol Fowler’s supramodal theory of multisensory speech perception. Ecological Psychology, 28, 262–294.
151 Rosenblum, L. D., Miller, R. M., & Sanchez, K. (2007). Lip‐read me now, hear me better later: Cross‐modal transfer of talker‐familiarity effects. Psychological Science, 18(5), 392–396.
152 Rosenblum, L. D., & Saldaña, H. M. (1992). Discrimination tests of visually‐influenced syllables. Perception & Psychophysics, 52(4), 461–473.
153 Rosenblum, L. D., & Saldaña, H. M. (1996). An audiovisual test of kinematic primitives for visual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 22(2), 318–331.
154 Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59(3), 347–357.
155 Rosenblum, L. D., Yakel, D. A., Baseer, N., et al. (2002). Visual speech information for face recognition. Perception & Psychophysics, 64(2), 220–229.
156 Rosenblum, L. D., Yakel, D. A., & Green, K. G. (2000). Face and mouth inversion effects on visual and audiovisual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 806–819.
157 Sams, M., Manninen, P., Surakka, V., et al. (1998). McGurk effect in Finnish syllables, isolated words, and words in sentences: Effects of word meaning and sentence context. Speech Communication, 26(1–2), 75–87.
158 Sanchez, K., Dias, J. W., & Rosenblum, L. D. (2013). Experience with a talker can transfer across modalities to facilitate lipreading. Attention, Perception & Psychophysics, 75, 1359–1365.
159 Sanchez, K., Miller, R. M., & Rosenblum, L. D. (2010). Visual influences on alignment to voice onset time. Journal of Speech, Language, and Hearing Research, 53, 262–272.
160 Santi, A., Servos, P., Vatikiotis‐Bateson, E., et al. (2003). Perceiving biological motion: Dissociating visible speech from walking. Journal of Cognitive Neuroscience, 15(6), 800–809.
161 Sato, M., Buccino, G., Gentilucci, M., & Cattaneo, L. (2010). On the tip of the tongue: Modulation of the primary motor cortex during audiovisual speech perception. Speech Communication, 52(6), 533–541.
162 Sato, M., Cavé, C., Ménard, L., & Brasseur, A. (2010). Auditory‐tactile speech perception in congenitally blind and sighted adults. Neuropsychologia, 48(12), 3683–3686.
163 Schall, S., & von Kriegstein, K. (2014). Functional connectivity between face‐movement and speech‐intelligibility areas during auditory‐only speech perception. PLOS ONE, 9(1), 1–11.
164 Schelinski, S., Riedel, P., & von Kriegstein, K. (2014). Visual abilities are important for auditory‐only speech recognition: Evidence from autism spectrum disorder. Neuropsychologia, 65, 1–11.
165 Schwartz, J. L., Berthommier, F., & Savariaux, C. (2004). Seeing to hear better: Evidence for early audio‐visual interactions in speech identification. Cognition, 93(2), B69–78.
166 Schweinberger, S. R., & Soukup, G. R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception and Performance, 24, 1748–1765.
167 Sekiyama, K., & Tohkura, Y. (1991). McGurk effect in non‐English listeners: Few visual effects for Japanese subjects hearing Japanese syllables of high auditory intelligibility. Journal of the Acoustical Society of America, 90(4), 1797–1805.
168 Sekiyama, K., & Tohkura, Y. (1993). Inter‐language differences in the influence of visual cues in speech perception. Journal of Phonetics, 21(4), 427–444.
169 Shahin, A. J., Backer, K. C., Rosenblum, L. D., & Kerlin, J. R. (2018). Neural mechanisms underlying cross‐modal phonetic encoding. Journal of Neuroscience, 38(7), 1835–1849.
170 Shams, L., Iwaki, S., Chawla, A., & Bhattacharya, J. (2005). Early modulation of visual cortex by sound: An MEG study. Neuroscience Letters, 378(2), 76–81.
171 Shams, L., Wozny, D. R., Kim, R., & Seitz, A. (2011). Influences of multisensory experience on subsequent unisensory processing. Frontiers in Psychology, 2, 264.
172 Sheffert, S. M., Pisoni, D. B., Fellowes, J. M., & Remez, R. E. (2002). Learning to recognize talkers from natural, sinewave, and reversed speech samples. Journal of Experimental Psychology: Human Perception and Performance, 28(6), 1447–1469.
173 Simmons, D. C., Dias, J. W., Dorsi, J., & Rosenblum, L. D. (2015). Crossmodal transfer of talker learning. Poster presented at the 169th meeting of the Acoustical Society of America, Pittsburgh, Pennsylvania, May.
174 Skipper, J. I., Nusbaum, H. C., & Small, S. L. (2005). Listening to talking faces: Motor cortical activation during speech perception. NeuroImage, 25(1), 76–89.
175 Skipper, J. I., van Wassenhove, V., Nusbaum, H. C., & Small, S. L. (2007). Hearing lips and seeing voices: How cortical areas supporting speech production mediate audiovisual speech perception. Cerebral Cortex, 17(10), 2387–2399.
176 Soto‐Faraco, S., & Alsius, A. (2007). Conscious access to the unisensory components of a crossmodal illusion. NeuroReport, 18, 347–350.
177 Soto‐Faraco, S., & Alsius, A. (2009). Deconstructing the McGurk–MacDonald illusion. Journal of Experimental Psychology: Human Perception and Performance, 35, 580–587.