Automated assessment and feedback for EFL texts
Ted Briscoe
ALTA Institute, Computer Laboratory
University of Cambridge
In the EFL (English as a Foreign Language) context, the task of automated assessment (AA) of free text focuses on analysing and assessing the quality and variety of writing competence. AA systems extract textual features from a script in order to assign it a score. The earliest systems used superficial features, such as word and sentence length, as proxies for examiners’ judgements. More recent systems have used sophisticated automated text processing techniques to measure grammaticality, textual coherence, pre-specified error types, and so forth.
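The following is a minimal sketch, in Python, of the classic feature-based pipeline described above: extract shallow features of the kind used by early systems and combine them linearly into a score. The feature set and weights here are illustrative assumptions, not those of any deployed system; in practice the weights would be learned from examiner-graded scripts.

```python
import re
from statistics import mean

def shallow_features(text: str) -> dict[str, float]:
    """Superficial proxies of the kind used by early AA systems."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "num_words": float(len(words)),
        "mean_word_len": mean(len(w) for w in words) if words else 0.0,
        "mean_sent_len": len(words) / len(sentences) if sentences else 0.0,
        "type_token_ratio": len({w.lower() for w in words}) / len(words) if words else 0.0,
    }

def score(features: dict[str, float], weights: dict[str, float], bias: float = 0.0) -> float:
    """Linear combination of features; weights would be fitted to graded scripts."""
    return bias + sum(weights.get(name, 0.0) * value for name, value in features.items())

if __name__ == "__main__":
    essay = "The quick brown fox jumps over the lazy dog. It was a sunny day."
    feats = shallow_features(essay)
    # Hypothetical weights, for illustration only.
    w = {"mean_word_len": 1.5, "mean_sent_len": 0.4, "type_token_ratio": 2.0}
    print(feats, round(score(feats, w), 2))
```

More recent systems replace these shallow proxies with features derived from parsing and discourse analysis, but the extract-then-score structure is the same.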
Deployment of AA systems offers a number of advantages, such as reduced workload and greater accuracy, especially when applied to large-scale assessments. In some cases, implementations also provide feedback on the writer’s abilities, facilitating self-assessment and self-tutoring.
I will describe and motivate our approach to grading written text, which exploits recent advances in computational linguistics and machine learning, and then report experimental results suggesting that our AA system’s performance is essentially indistinguishable from that of a human examiner when applied to text similar to that seen during training. I will then demonstrate how AA can be applied at the script, sentence and subsentence levels, along with error detection and feedback, facilitating automated tutoring. Finally, I will report evaluations with a representative sample of B1/B2 CEFR-level students who have used the tutoring system as a cloud-hosted web service.
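To make the sentence-level error detection and feedback step above concrete, here is a toy illustration: each sentence of a script is checked and paired with any detected error messages. The hand-written regular-expression rules are stand-ins for the trained error-detection components the talk describes, not the actual method.

```python
import re

# Hypothetical rule-based detectors standing in for trained components.
ERROR_RULES = [
    (re.compile(r"\ba ([aeiou])", re.I), "article: use 'an' before a vowel sound"),
    (re.compile(r"\b(\w+) \1\b", re.I), "repeated word"),
]

def sentence_feedback(text: str) -> list[tuple[str, list[str]]]:
    """Return each sentence paired with any detected error messages."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    report = []
    for sent in sentences:
        messages = [msg for pattern, msg in ERROR_RULES if pattern.search(sent)]
        report.append((sent, messages))
    return report

if __name__ == "__main__":
    sample = "She has a apple. The the dog barked."
    for sent, msgs in sentence_feedback(sample):
        print(sent, "->", msgs or "no issues flagged")
```

Localising assessment below the script level in this way is what allows feedback to point a learner at the specific sentence or phrase that needs revision, rather than returning only a single overall grade.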