College foreign language (FL) education, and higher education more generally, is witnessing intensifying demands for assessment practices that support student learning, both in response to public accountability concerns and in recognition of the critical role that assessment should play within learner-centered education. Unfortunately, generic recommendations for the development and use of assessments, not to mention off-the-shelf language tests, often prove inappropriate for meeting specific local purposes, especially where FL practitioners innovate in curriculum and instruction to better address the unique needs of their learners. Likewise, test validation practices inherited from psychometric traditions frequently fail to provide information that is useful to actual test users for understanding and improving their assessment practices. In this presentation, I argue that: (a) valid assessment in FL education can only follow from a clear, initial specification of intended test use by the actual test users; and (b) validation should subscribe to evaluative (rather than psychometric) models that produce evidence directly relevant to the targeted concerns of specific constituents.
By way of example, I chart the application of these ideas to the development, use, and validity evaluation of several curriculum-based assessment practices within the unique programmatic context of the Georgetown University German Department. In particular, I show how validity evaluation efforts were designed to investigate priority concerns about the use of curriculum-based tests, and how the resulting information was applied to the improvement of assessment practices.
Posted October 19, 2003