Integrated tasks have been categorized into at least three types: text or content‐responsible, stimulus‐related, and thematically linked. The most common type is text or content‐responsible (Leki & Carson, 1997; Knoch & Sitajalabhorn, 2013), which requires the test taker to write or speak about the content of texts. Examples might include an oral summarization task or a persuasive essay that requires test takers to use ideas from source texts for supporting evidence. The prompt below illustrates such a content‐responsible task.
Directions: Read the following passage that gives an opinion about technology. Then explain the writer's main point and why you agree or disagree with it.
Technology has not fulfilled the promises made when computers were first adapted for personal use. They have made life more complicated and increased the amount of work we do, rather than simplifying life or minimizing workloads. In one day, I must respond to at least 50 emails. Sometimes these emails are things that would not even warrant a phone call. Instead of meeting face‐to‐face, my meetings are online involving video cameras, microphones, etc. Time is spent in setting up the technology rather than in making important decisions. I am really starting to wonder if there is any advantage to having modern technology. Where are the benefits?
These tasks are common in assessment for academic purposes as well as in content‐based instruction where language is used as a medium for learning, for example in language‐immersion programs or sheltered instruction. Such assessment requires that test takers comprehend the reading and incorporate it into their response. If the answer could be completed without reading the text, then it is considered a stimulus‐based test task.
A stimulus task asks test takers to read or listen to a text but does not require that the content be reflected in their performance. The input texts or visuals, such as lists, charts, or graphs, serve as idea generators or background information (Leki & Carson, 1997). The example below presents a stimulus task given at the end of an ESL course.
Directions: Read the following pieces of advice about successful study skills.
General suggestions for study skills
Write down due dates for all assignments on a calendar.
Determine weekly goals for studying and focus on meeting them.
Study in a room where you have complete silence and can concentrate.
At the end of each study session, plan what to do for your next session.
Reward yourself for working hard.
Imagine you are writing to a classmate who will take this class in the future. What other study skills will you suggest that are specific to this class? What helped you succeed? Provide three suggestions with some explanation and an example for each. Write approximately 600–700 words on this topic; write in essay style, rather than simply listing points. Your response will be evaluated on the following: organization, development, clarity, and grammatical accuracy.
These two task types, content‐responsible and stimulus‐based, differ in what they require test takers to do and how the resulting performance is rated. A stimulus‐based task integrates skills in the process only; for example, a test taker may need to comprehend the source texts and plan a response on the topic, but does not have to integrate the texts in the product of the assessment. In content‐responsible integrated tasks, both the process and the products require skill integration, and, therefore, the rating rubric should include criteria for assessing the test takers' use of more than one skill. The third test type considered is integrated assessment, which includes several test sections that are thematically linked (Esmaeili, 2002). For example, a section assessing reading comprehension would include a text that is also the topic for a subsequent writing prompt.
In addition to these three types of integration, other tasks that may be considered integrated are being used to assess language as well. Some are new, while others are not but are being viewed in a new light. For example, story‐completion writing tasks have been used in language acquisition research for some time; however, scholars are looking into the potential for these tasks to elicit integrated reading‐into‐writing performances (Wang & Qi, 2013). Another familiar task, short‐answer questions in reading tests, can be reconsidered as assessing both reading and writing (Weigle, Yang, & Montee, 2013). Although the writing in these tasks is much shorter, they can afford a means to assess integration for lower‐proficiency students who may not be able to produce a written essay. Another way to expand the variety of integrated‐skills assessment is to reverse the direction of the skills in the task. For example, asking writers to free‐write on a topic before reading texts that delve into that topic can activate background knowledge to support comprehension (Plakans et al., 2018). There is great potential for continued innovation or reframing of language tasks to elicit skills integration.
Research in the Assessment of Integrated Skills
Researchers have attempted to understand integrated tasks by comparing test takers' performances on them with their performances on tasks requiring only one skill. Research comparing independent and integrated writing task performance has found that overall scores show similarities (Brown, Hilgers, & Marsella, 1991) and are positively correlated (Sawaki et al., 2013; Zhu et al., 2016). Yet closer investigation of discourse features has revealed some differences, in such features as grammatical accuracy, development, and rhetorical stance (Cumming et al., 2005). For example, in studying the prototype TOEFL iBT task, Cumming et al. (2005) found that integrated task responses were shorter, but used longer words and more variety in words when compared to independent writing tasks. The independent writing responses were scored higher in certain rhetorical features, such as the quality of propositions, claims, and warrants.
Studies investigating the test‐taking process across task types have found evidence that some test takers follow a similar approach for both independent and integrated tasks, while others treat integrated tasks as requiring synthesis and integration strategies, such as scanning the text for ideas to include in their essay (Plakans, 2009; Barkaoui, 2015). However, the study of test‐taking processes on two different integrated tasks also revealed differences across tasks: Ascención (2005) found that read‐and‐respond writing tasks required more planning and monitoring than a read‐and‐summarize task.
Researchers have also attempted to reveal effects of proficiency and task familiarity on integrated task performance. Not surprisingly, higher‐proficiency writers produce longer responses to integrated writing tasks than lower‐proficiency writers (Cumming et al., 2005; Gebril & Plakans, 2013). Expected differences in grammatical accuracy also occur across levels of proficiency, as do differences in organization and in the integration of source texts (Cumming et al., 2005; Gebril & Plakans, 2013; Plakans & Gebril, 2017). Research results also suggest that prior experience with integrated tasks, educational level, first‐language writing experience, and interest in writing may affect performance (Ascención, 2005; Wolfersberger, 2013). The role of texts is also important for developing integrated assessment, as well as for understanding scores. In a study of different source texts used in summary writing, Li (2014) found that test takers performed better when summarizing expository texts, despite their opinion that narrative texts were easier to summarize.