Real-world use of English involves speakers and listeners from various linguistic backgrounds whose primary goal is mutual comprehensibility; indeed, the majority of conversations in English do not involve speakers from the Inner Circle (Graddol, 2006; Kirkpatrick, 2007). Yet, rather than focusing on comprehensibility, many tests continue to measure spoken performance against an idealised, native-speaker norm. This weakens the validity of these tests as measures of authentic spoken communicative competence in a global lingua franca context, and leads either to a narrowing of the construct of ELF or to the inclusion of construct-irrelevant factors.
Validating a test of English as a tool for global communication includes demonstrating the link between the construct (real-world communicative ability in a particular context) and the test tasks and rating criteria (McNamara, 2006); evidence supporting the interpretation of a test score must be presented as part of the overall validity argument. First, this paper argues that the context of English use to which many high-stakes test-takers aspire – that of English for Academic Purposes (EAP) – is frequently an ELF context; second, Toulmin’s (2003) argument schema is used to explore what evidence is required to support warrants and claims that a test provides a valid representation of a test-taker’s ability to use ELF. The framework is first presented as it relates to the validation of language tests in general, and the model is then applied to two tests of spoken English by way of illustration. Although examples are included, the main aim is to provide a theoretical justification for a focus on comprehensibility and for the inclusion of linguistic variation in the assessment of ELF, and to present a validation framework that can be applied by test developers and test users.