14 Lexical processing and the mental lexicon
An adult native speaker of English with a normal speech rate produces more than 150 words per minute – on average, more than one word every half second.
Indeed, under time pressure, for example, when you are calling your friend in New Zealand from a public telephone in Britain or the United States, a native speaker can produce one word every 200 ms, which is less than a quarter of a second, and your friend can still understand what you are saying. The lexicon of an average native speaker of English contains about 30,000 words. This means that in fluent speech you have to choose continuously from these 30,000 alternatives, not just once, but two to five times per second, and there is no clear limit on how long you can indulge in this process. Furthermore, your friend is recognising your words at the same rate at the other end of the telephone line. If you wanted to, and had enough money, you could make the telephone companies happy by talking to your New Zealand friend for hours, with a decision rate of one word every 200–400 ms.
Remarkably, despite the high speed of lexical processing, errors in the production and comprehension of words are very rare. In one corpus of 200,000 words – getting on for twice the length of this book – researchers found only 86 lexical errors, i.e. fewer than 1 in every 2,000 words. Thus, lexical processing is not only fast but also very accurate: decisions are made at very high rates even though there are many alternatives to choose from.
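For readers who want to see the arithmetic behind these figures spelled out, the short Python sketch below derives them (the numbers are those quoted above; the variable names are our own):

```python
# Decision rates implied by the figures quoted above.
words_per_minute = 150                 # normal fluent speech rate
print(60_000 / words_per_minute)       # 400.0 ms per word, i.e. 2.5 words per second

fast_ms_per_word = 200                 # rate under time pressure
print(1_000 / fast_ms_per_word)        # 5.0 words per second

lexicon_size = 30_000                  # alternatives available at each choice point

errors, corpus_size = 86, 200_000      # lexical errors in the corpus cited above
print(corpus_size / errors)            # ~2325.6: fewer than 1 error per 2,000 words
```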
In this section, we will discuss the sorts of processes that are involved in our production and comprehension of words. We will structure our account around
two general questions. These will enable us to raise some of the major issues surrounding the processing of words in contemporary psycholinguistics.
Serial-autonomous versus parallel-interactive processing models
In the light of the figures mentioned above, we can begin by intuitively
considering what might be involved in recognising or producing a common word such as dog. It ought to be self-evident that these processes can be broken down into a number of sub-processes. Thus, focusing on recognition for the sake of concreteness, in order to recognise that a sequence of sounds impinging on your aural receptors constitutes a token of dog, it is necessary for you to recognise that the sequence contains an initial /d/, etc. Failure to do this, say by ‘recognising’ an
initial /b/, would result in an obvious misperception, and, under normal conditions, these are uncommon. Obviously, by complicating the word in question, we could offer similar observations for the perception of suprasegmental features such as stress (it is important to your interlocutors that when you say TORment, a noun with stress on the initial syllable, they do not ‘perceive’ torMENT, a verb with stress on the final syllable). It is incontestable that sound properties are generally important in spoken word recognition. It is also easy to see that information about the category to which a word belongs is important: if you are going to understand a simple sentence such as (189), then you had better categorise the token of dogs in that sentence as a verb and not as a noun:
(189)
A problem with speech perception dogs me wherever I go
Additionally, it is easy to agree that the morphological properties of words must be recognised: I bother Bill and Bill bothers me are interpreted quite differently, and these different interpretations are due to the choice between nominative I and accusative me and the related choice between bother and bothers. Finally, you can make the various decisions we are sketching here, but your decisiveness is unlikely to do you much good unless you also come to a view on what a
specific occurrence of dog or bother means. Recognising words in the sense
introduced above involves understanding them, and this presupposes semantic
choices.
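To summarise the kinds of decision just listed, the sketch below gathers them into a single illustrative record. This is a hypothetical notation of our own invention, not a model of the actual mental lexicon: it simply displays the phonological, morphological, categorial and semantic information that recognition must deliver, using the two competing analyses of dogs.

```python
from dataclasses import dataclass

@dataclass
class LexicalAnalysis:
    """Illustrative bundle of the decisions a recogniser must deliver."""
    phonemes: list    # segmental analysis of the sound sequence
    stem: str         # morphological stem
    suffix: str       # inflectional suffix ('' if none)
    category: str     # syntactic category of the whole form
    features: dict    # morphosyntactic properties
    gloss: str        # informal indication of the semantic choice

# The analysis of 'dogs' needed to understand (189): a verb, not a noun.
dogs_as_verb = LexicalAnalysis(
    phonemes=["d", "ɒ", "g", "z"],
    stem="dog",
    suffix="-s",
    category="V",
    features={"person": 3, "number": "sg", "tense": "present"},
    gloss="pursue persistently",
)

# The competing analysis that the context of (189) must rule out.
dogs_as_noun = LexicalAnalysis(
    phonemes=["d", "ɒ", "g", "z"],
    stem="dog",
    suffix="-s",
    category="N",
    features={"number": "pl"},
    gloss="canine animals",
)
```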
Now, there are at least two ways in which we can conceptualise these various decisions being made. The first, which gives rise to serial-autonomous accounts of processing, maintains that these decisions are taken in sequence, with all decisions of a certain type being taken before decisions of the next type.
Furthermore, information which may be available on the basis of later decisions cannot inform earlier decisions. The alternative parallel-interactive approach takes the opposite perspective: in principle, information relevant to any decision is available at any point in processing, and there is no place for a strictly ordered set of sub-processes. We shall now try to be a little more specific.
Serial-autonomous models of lexical processing involve a series of steps in
which information is passed from one component of the mental lexicon to the
next. One characteristic property of serial-autonomous models is that each stage in the processing of a word is carried out by a specialised module which accepts input only from the previous module and provides output only to the next one.
Thus, crudely, we might suppose that word recognition begins with a module
which recognises a sequence of sounds, and this module presents its output to an independent module which assigns a morphological analysis to this sequence of sounds. At this point, if a token of (189) is being listened to, the word dogs may be analysed as either the verb stem dog plus the third person singular present suffix -s or as the noun stem dog plus the plural suffix -s. Of course, ultimately, only the first of these analyses is correct, but from the serial-autonomous perspective, the syntactic, semantic and contextual information that will force the listener to this decision is not available at this stage in the perceptual process. To use a
notion introduced by Jerry Fodor, each specialised module is informationally encapsulated and can take account only of the information supplied by modules which operate earlier in the perceptual process. By contrast, supporters of parallel-interactive models claim that language perception (and production) involves
the activation of some or all sources of relevant information at the same time.
According to this view, then, the morphological analysis of dogs as the noun stem dog plus the plural suffix -s will not be produced in the course of perceiving a token of (189). This is because enough syntactic, semantic and contextual information is already available from earlier parts of the utterance to rule out the possibility of this analysis. We can try to sharpen up the difference between these two approaches by considering another (plausible) situation.
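Before we do so, the architectural contrast can be caricatured in code. The sketch below is purely schematic and rests on assumptions of our own (the module names, the hand-coded ‘expected category’): what matters is that the serial morphological module may consult only the phonological module's output, while the interactive one may also consult context.

```python
# A deliberately crude caricature of the two architectures. Each 'module'
# is a function; the key difference is what information each may consult.

def phonological_module(signal):
    """Maps the acoustic signal to a phonemic form; sees nothing else."""
    return signal["phonemes"]

def morphological_module_serial(phonemes):
    """Serial-autonomous: consults only the previous module's output,
    so both analyses of 'dogs' are still live at this stage."""
    if phonemes == ["d", "ɒ", "g", "z"]:
        return [("dog", "V", "-s"), ("dog", "N", "-s")]
    return []

def morphological_module_interactive(phonemes, context):
    """Parallel-interactive: syntactic and semantic context is available
    at the same time, so the noun analysis is never produced in (189)."""
    return [a for a in morphological_module_serial(phonemes)
            if a[1] == context["expected_category"]]

signal = {"phonemes": ["d", "ɒ", "g", "z"]}
context = {"expected_category": "V"}   # 'A problem ... dogs me' needs a verb here

print(morphological_module_serial(phonological_module(signal)))
# -> both analyses; disambiguation must wait for later modules
print(morphological_module_interactive(phonological_module(signal), context))
# -> only the verbal analysis survives
```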
Suppose that the telephone companies are experiencing a technical problem, so that the line to your friend in New Zealand is occasionally interrupted by a crackling noise for about a quarter of a second. This occurs while you are saying (190); as a consequence, your friend hears (191):
(190)
I thought you were coming on Wednesday
(191)
I thought you were (krrrrk) on Wednesday
As your friend listens to (191), we can ask whether any lexical recognition is going on during the crackle. According to the serial-autonomous view, the answer would be a definite ‘no’, while parallel-interactive models would answer with an equally clear ‘yes’. In a serial model, there is only one way to get access to a word form such as coming and that is through its phonological form (if we were concerned with written word recognition, we would again maintain that there is only one route to recognition, but in this case this would be via an orthographic analysis).
Since a phonological analysis is unavailable to your friend in (191), modules which would subsequently analyse coming as come + ing, assign appropriate morphosyntactic properties to these morphemes and then associate meanings with them cannot operate. Generalising, we can say there is no lexical access at this point.
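The two answers to our question about (191) can be rendered in the same schematic style. Again, this is an illustrative sketch under invented assumptions, not a worked-out model: when the phonological route is blocked, the serial recogniser returns nothing, whereas the interactive recogniser can still activate contextually expected candidates such as coming.

```python
def lookup_by_form(phonemes):
    """Stand-in for form-based lexical access; the details are irrelevant."""
    return ["coming"] if phonemes else []

def recognise_serial(signal):
    """Serial-autonomous: the phonological form is the only route in."""
    phonemes = signal["phonemes"]            # None during the crackle
    if phonemes is None:
        return []                            # no lexical access at this point
    return lookup_by_form(phonemes)

def recognise_interactive(signal, context):
    """Parallel-interactive: contextual expectations can activate
    candidates even while the phonological analysis is unavailable."""
    return recognise_serial(signal) or context["expected_words"]

crackle = {"phonemes": None}                 # (krrrrk) masks the word entirely
context = {"expected_words": ["coming", "arriving", "staying"]}
# words that plausibly fit the frame 'I thought you were ___ on Wednesday'

print(recognise_serial(crackle))               # [] -- the definite 'no'
print(recognise_interactive(crackle, context)) # candidates -- the equally clear 'yes'
```

Of course, genuine parallel-interactive models posit graded activation rather than an all-or-nothing hand-over of this kind, but the caricature captures the architectural point: whether contextual information can reach the lexical level at all while the signal is masked.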