A third approach to measuring the linguistic resources of communication is through developmental scores obtained from tasks designed with developmental proficiency levels in mind (Purpura, 2004; Ellis, 2005), the levels being derived from SLA research on orders and sequences in interlanguage development. For example, to investigate the potential of assessing knowledge of grammatical form by means of developmental levels for a computer‐delivered ESL test of productive grammatical ability, Chapelle, Chung, Hegelheimer, Pendar, and Xu (2010) first identified a corpus of morphosyntactic (simple past) and syntactic (SVO word order) forms, along with their associated meanings (past achievement or accomplishment), as they corresponded to multiple stages of interlanguage development. They then used this information to design highly constrained LP tasks (e.g., respond by using one or more of the following words) and somewhat constrained EP tasks (e.g., begin your response with the following words). Responses were scored on a three‐point scale ranging from no evidence (0) through partial evidence (1) to full evidence (2) of knowledge of the form. Results showed that the items fell into three distinct developmental levels. In addition, the developmental scores correlated moderately with test takers' placement test writing scores and with their TOEFL iBT (Internet‐based Test of English as a Foreign Language) scores. While this study was well designed and the results remarkable, it focused only on knowledge of semantico‐grammatical forms and meanings, with no explicit measurement of how these forms served as a resource for communicating other meanings.
In another study, Chang (2004) used developmental proficiency levels to design items aimed at measuring the difficulty order of English relative clauses in relation to the form, meaning, and pragmatic use dimensions. He found that when form alone was considered, the difficulty order of the relative clauses in his test generally matched that predicted by Keenan and Comrie's (1977) noun phrase accessibility hierarchy (NPAH), with some deviations. When both the form and the meaning of the relative clauses were measured, the results strongly supported the NPAH. However, when pragmatic use was considered, the difficulty order in Chang's test differed from that proposed in the NPAH. Interestingly, these results held whether developmental scores or accuracy scores were used. This study underscores the importance of moving beyond forms to a consideration of how semantico‐grammatical forms serve as a resource for communicating not only literal meanings but also pragmatic meanings. Both studies also highlight the need to continue work with developmental scores.
In examining how the linguistic resources of L2 proficiency have been conceptualized and assessed in L2 assessment and SLA, this entry has argued that semantico‐grammatical knowledge defined solely in terms of forms and meanings offers only a partial understanding of those resources. A fuller definition of the linguistic resources of communication needs to include the form‐meaning mappings that are intrinsically related to the conveyance of propositional and pragmatic meanings. The entry has also argued that in the assessment of communicative effectiveness, the focus might be better placed on the use of grammatical ability for meaning conveyance rather than solely on the accuracy of forms. A number of assessment tasks have been used to elicit performance relevant to the linguistic resources, but test developers need to consider how grammar tasks and scoring procedures allow for the capture of information about the resources of communication at varied grain sizes. Research attempting to include the form, meaning, and form‐meaning mapping aspects of linguistic resources in assessments is ongoing. In the meantime, test score users should critically examine the meaning of scores obtained on assessments claimed to assess grammar.
SEE ALSO: Construction Grammar; Systemic Functional Linguistics; Task‐Based Language Assessment
References

Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford, England: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford, England: Oxford University Press.
Bachman, L. F., & Palmer, A. S. (2010). Language assessment in practice. Oxford, England: Oxford University Press.
Bonk, B., & Oh, S. (2019). What aspects of speech contribute to the perceived intelligibility of L2 speakers? Paper presented at the 2019 Language Testing Research Colloquium, Atlanta, GA.
Bernstein, J., Van Moere, A., & Cheng, J. (2010). Validating automated speaking tests. Language Testing, 27(3), 355–77.
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1, 1–47.
Celce‐Murcia, M., & Larsen‐Freeman, D. (1999). The grammar book: An ESL/EFL teacher's course (2nd ed.). Boston, MA: Heinle & Heinle.
Chang, J. (2004). Examining models of second language knowledge with specific reference to relative clauses: A model‐comparison approach (Unpublished doctoral dissertation). Teachers College, Columbia University, New York, NY.
Chapelle, C. A. (2008). Utilizing technology in language assessment. In N. H. Hornberger (Ed.), Encyclopedia of language and education (2nd ed.). Heidelberg, Germany: Springer.
Chapelle, C. A., Chung, Y. R., Hegelheimer, V., Pendar, N., & Xu, J. (2010). Towards a computer‐delivered test of productive grammatical ability. Language Testing, 27(4), 443–69.
Cumming, A., Kantor, R., Baba, K., Eouanzoui, K., Erdosy, U., & James, M. (2006). Analysis of discourse features and verification of scoring levels for independent and integrated prototype writing tasks for new TOEFL (TOEFL Monograph No. MS‐30). Princeton, NJ: Educational Testing Service.
Ellis, R. (2005). Measuring the implicit and explicit knowledge of a second language: A psychometric study. Studies in Second Language Acquisition, 27, 141–72.
Ellis, R., & Barkhuizen, G. (2005). Analysing learner language. Oxford, England: Oxford University Press.
Enright, M. K., & Quinlan, T. (2010). Complementing human judgment of essays written by English language learners with e‐rater scoring. Language Testing, 27(3), 317–34.
Grabowski, K. (2009). Investigating the construct validity of a test designed to measure grammatical and pragmatic knowledge in the context of speaking (Unpublished doctoral dissertation). Teachers College, Columbia University, New York, NY.
Keenan, E., & Comrie, B. (1977). Noun phrase accessibility and universal grammar. Linguistic Inquiry, 8, 63–99.
Kim, H. J. (2009). Investigating the effects of context and task type on second language speaking ability (Unpublished doctoral dissertation). Teachers College, Columbia University, New York, NY.
Knoch, U., Macqueen, S., & O'Hagan, S. (2014). An investigation of the effect of task type on the discourse produced by students at various score levels in the TOEFL iBT® writing test (ETS Research Report No. RR‐14‐43). Princeton, NJ: Educational Testing Service.
Lado, R. (1961). Language testing. New York, NY: McGraw‐Hill.
Liao, Y.‐F. A. (2009). Construct validation study of the GEPT reading and listening sections: Re‐examining the models of L2 reading and listening abilities and their relations to lexico‐grammatical knowledge (Unpublished doctoral dissertation). Teachers College, Columbia University, New York, NY.
Purpura, J. E. (2004). Assessing grammar. Cambridge, England: Cambridge University Press.
Purpura, J. E. (2016). Assessing meaning. In E. Shohamy & L. Or (Eds.), Encyclopedia of language and education. Vol. 7: Language testing and assessment (3rd ed.). New York, NY: Springer International Publishing. doi: 10.1007/978‐3‐319‐02326‐7_1‐1