Of course, what your friend might do under these conditions is try to guess what you are talking about and ask for clarification (Do you think I’m coming/dying/graduating on Wednesday?), but these kinds of conscious inferences are different from the automatic process of accessing the mental lexicon.
Now consider how a parallel-interactive perspective approaches the same problem. According to this view, all sorts of information are simultaneously used to access the lexicon, regardless of where in the processing system the information comes from. If, as in (191), the phonological information for accessing coming is not available, an interactive system can have recourse to information from another source so that lexical processing does not break down because of an inadequate input signal. Suppose, for example, you were talking about your friend’s visit to Britain before you produced (190), and that only the exact date still had to be fixed.
Then, he or she might understand (191) as (190), despite the degenerate signal, by having access to information from the surrounding context.
202
words
A very large number of experimental studies have attempted to differentiate
between the two approaches and to argue for the appropriateness of one or the other. Many of these studies involve complex experimental designs, the details of which we cannot go into here due to space constraints. We can, however, offer a brief overview of two types of experiment which, intriguingly, lead to opposing conclusions.
Consider firstly, then, the sentence in (192):
(192) The young woman had always wanted to work in a bank
Of course, bank is ambiguous in English, with the senses ‘financial institution’
and ‘side of a river’. From a parallel-interactive perspective, when listeners to
(192) hear bank, they take advantage of all the information available to them, including the contextual information supplied by their general knowledge of the world and earlier parts of the sentence. Since this information is incompatible with the ‘side of a river’ sense of bank, this possibility will not be considered and only the ‘financial institution’ sense will be accessed. The serial-autonomous view, on the other hand, sees lexical access as entirely driven by phonology and so maintains that both senses will be accessed – the phonology does not differentiate them. Now, suppose that immediately following the aural presentation of (192), subjects are presented with a visual word/non-word decision task, i.e. on a screen in front of them appears an English word, say garden, or a non-word sequence, say brogit. Their task is to respond as quickly as possible, by pressing one of two buttons, to indicate whether the visual item is a word or not.
In order to convey the major finding of this type of experiment, we need to make one further assumption explicit. This is that words are organised in the mind so that semantically related words (in the sense of section 12) are ‘close’ to each other. More technically, if you hear a token of dog, some (mental) activation spreads to semantically associated items such as cat or animal or bark, and we say that these latter items are primed. When an item is primed, we would expect it to be more readily available for lexical access than when it is not. We return to our experimental study.
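Before returning to it, the idea of spreading activation can be made concrete with a minimal sketch. This is our own illustration, not a model from the experimental literature: the toy network, the activation value of 1.0 and the spread factor of 0.5 are arbitrary assumptions chosen purely to show how hearing one word leaves its semantic associates partially activated, i.e. primed.

```python
# Toy semantic network: each word lists its semantically associated items.
# (Illustrative only; real lexical networks are vastly larger and weighted.)
SEMANTIC_NETWORK = {
    "dog": ["cat", "animal", "bark"],
}

def prime(heard_word, activation=1.0, spread=0.5):
    """Return activation levels after a token of `heard_word` is heard.

    The heard word is fully activated; a fraction of that activation
    spreads to its semantic associates, which are thereby primed.
    """
    levels = {heard_word: activation}
    for neighbour in SEMANTIC_NETWORK.get(heard_word, []):
        levels[neighbour] = activation * spread   # primed, but not fully active
    return levels

levels = prime("dog")
# "cat" receives partial activation; an unrelated word such as "table"
# receives none, so "cat" should be the quicker lexical decision.
```

On this picture, a primed item starts a lexical decision with a head start in activation, which is why it is recognised as a word more quickly than an unprimed one.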
A parallel-interactive approach will maintain that for subjects who have just heard (192), only the ‘financial institution’ sense of bank will be active and only lexemes semantically related to bank in this sense, e.g. money, cheque, will be primed. For the serial-autonomous theorist, however, both senses of bank are activated, so additional items such as river and tow-path will also be primed. The following experimental conditions are the crucial ones, where the capitalised words are the items presented visually for a word/non-word decision:
(193) a. The young woman had always wanted to work in a bank. MONEY
      b. The young woman had always wanted to work in a bank. RIVER
      c. The small yellow car was found outside the village. MONEY
      d. The small yellow car was found outside the village. RIVER
Here, (193c) and (193d) are intended to provide neutral contexts; neither money nor river is primed in these contexts, so decisions that the visually presented items are words provide a measure of how long this process takes when these items
are unprimed. For both the serial-autonomous and parallel-interactive accounts, (193a) provides a primed context for the recognition of money as a word. Both approaches predict that subjects’ responses to (193a) should be faster than their responses to (193c). For (193b), however, the two approaches make different predictions; this is a primed context only from the serial-autonomous perspective.
Thus, this approach predicts that subjects’ responses to (193b) will be significantly faster than their responses to (193d); the parallel-interactive approach predicts no significant difference in these cases. Results supporting the serial-autonomous position have appeared in the psycholinguistics literature, thereby suggesting that the perceptual mechanisms are ‘stupid’ in the sense that they do not utilise all available information. Lest we lose sight of it in the dispute between serial-autonomous and parallel-interactive accounts, we should also note that any priming effects depending on semantic similarity provide experimental support for the view of the structured lexicon we developed in section 12, namely that the mental lexicon is not just a list of items but rather a structured set over which a notion of psychological ‘distance’ can be defined, with semantic similarity contributing to this measure of distance.
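The contrasting predictions for (193a–d) can be summarised in a short sketch. This is our own schematic rendering of the logic, not code from any study; the associate sets and function names are invented for illustration.

```python
# Probe words semantically associated with each sense of ambiguous "bank".
ASSOCIATES = {
    "financial": {"money", "cheque"},
    "river": {"river", "tow-path"},
}

def primed_probes(context_sense, model):
    """Probes predicted to be primed after hearing 'bank' in a given context."""
    if context_sense is None:                  # neutral contexts, as in (193c, d)
        return set()
    if model == "serial-autonomous":           # phonology activates both senses
        return ASSOCIATES["financial"] | ASSOCIATES["river"]
    return ASSOCIATES[context_sense]           # context pre-selects a single sense

def predicts_faster(model, context_sense, probe):
    """Does `model` predict a faster word decision for `probe` than in (193c, d)?"""
    return probe.lower() in primed_probes(context_sense, model)

# (193a): both models predict MONEY is recognised faster than in (193c).
# (193b): only the serial-autonomous model predicts RIVER is faster than in (193d).
```

The crucial divergence is thus (193b): a significant speed-up for RIVER supports the serial-autonomous account, while its absence supports the parallel-interactive one.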
Alongside studies which support the serial-autonomous view, the psycholinguistics literature contains many reports of experiments which favour the parallel-interactive position. Again, we offer just a brief outline of the thinking behind one of them.
Suppose that experimental subjects are instructed to respond as quickly as
possible, by pressing a button, to an occurrence of a designated word, say party.
They can be presented with tokens of party in a variety of contexts, illustrated in (194):
(194) a. John and Mary shared a birthday last week when their party …
      b. The giraffe walked rapidly into the bedroom where its party …
      c. Ghost although out yesterday the runs street which my party …
These contexts represent three distinct categories. In (194a) we have an example which is syntactically and semantically well formed. The example in (194b)
is syntactically well formed but semantically odd, given our knowledge of
the world, and (194c) is just a random list of words exhibiting neither semantic nor syntactic structure. Again, we note that the serial-autonomous view regards word recognition as phonologically driven, so this approach ought to predict no differences in recognition times for party in these examples. By contrast, the parallel-interactive account expects that subjects will be able to take account of syntactic information in (194b) and of syntactic and semantic information in (194a); this should enable subjects to produce enhanced recognition times in these two conditions when compared with (194c). Using this technique, the parallel-interactive view has been supported, with recognition times being fastest for the condition in (194a), slowest for (194c) and of intermediate speed for (194b).
We conclude this brief discussion with some general remarks. Parallel-interactive models of lexical processing are highly efficient in that they almost
always compute an output, even in cases such as (191) in which crucial information is not available via phonological recognition. Thus, they lead us to expect that words can be recognised in an appropriate context, even in circumstances where there are no phonological or orthographic cues at all. Serial-autonomous models cannot account for such context effects, except by suggesting that a