Automated answer scoring for engineering’s open-ended questions
Document Type
Article
Publication Date
2019
Department/School
Engineering Technology
Publication Title
International Journal of Research in Education Methodology
Abstract
Audience Response Systems (ARS), such as "clickers," have proven effective in engaging students and enhancing their learning. Beyond closed-ended questions, an ARS can help instructors pose open-ended questions. Such questions are not scored automatically, which is why Automated Text Scoring (ATS) is widely used. This paper presents findings from the development of an intelligent Automated Text Scoring system, iATS, which provides instantaneous scoring of students' responses to STEM-related factual questions. iATS is integrated with an Audience Response System (ARS), known as iRes, which captures students' responses in a traditional classroom environment using smartphones. Research was conducted to code and test three Natural Language Processing (NLP) text-similarity methods. The code was developed in PHP and Python environments. Experiments compared cosine similarity, Jaccard index, and corpus-based and knowledge-based measure (CKM) scores against the instructor's manual grades. The results suggest that cosine similarity and the Jaccard index underestimate scores, with errors of 22% and 26%, respectively. CKM has a lower error (18%) but overestimates the score. It is concluded that the code needs to be modified with a corpus developed within the knowledge domain, and that a new regression model should be created to improve the accuracy of automatic scoring.
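To illustrate two of the similarity measures named in the abstract, the sketch below computes token-based cosine similarity and the Jaccard index between a model answer and a student answer. This is a minimal illustration in Python (one of the paper's stated implementation environments), not the paper's actual iATS code; the example sentences and the simple whitespace tokenizer are assumptions.

```python
# Illustrative sketch, NOT the iATS implementation: token-based cosine
# similarity and Jaccard index between a model answer and a student answer.
from collections import Counter
from math import sqrt

def tokenize(text):
    # Simple lowercase whitespace tokenization; a real scorer would also
    # remove stop words and apply stemming or lemmatization.
    return text.lower().split()

def cosine_similarity(a, b):
    """Cosine of the angle between bag-of-words count vectors."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def jaccard_index(a, b):
    """Size of token-set intersection divided by size of token-set union."""
    sa, sb = set(tokenize(a)), set(tokenize(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical factual question: "Define stress."
model = "stress is force per unit area"
student = "stress equals the force divided by area"
print(round(cosine_similarity(model, student), 2))  # → 0.46
print(round(jaccard_index(model, student), 2))      # → 0.3
```

Both measures score only lexical overlap, which helps explain the underestimation the paper reports: a correct answer phrased with different vocabulary ("force divided by area" vs. "force per unit area") receives a low score despite being semantically right. Corpus-based and knowledge-based measures attempt to capture such synonymy.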
Recommended Citation
Ahmed, M. S. (2019). Automated answer scoring for engineering’s open-ended questions. International Journal of Research in Education Methodology, 10, 3398–3406. https://doi.org/10.24297/ijrem.v10i0.8495