Assessing open-ended questions through a modified ontology approach (2011)
This talk will present the design of a software tool aimed at the semi-automatic assessment of open-ended questions in e-learning environments. A literature review of existing systems that rely on Semantic Web technologies, such as the Web Ontology Language (OWL), to represent domain-specific knowledge will be presented. The talk concludes with an overview of the semantic similarity measurement techniques found in the literature.
Assessment by closed questions in e-learning courses is easily achieved by both humans and computers, but it encourages only rote learning by students. This corresponds to the lowest level of Bloom's taxonomy, namely knowledge. The highest level of this taxonomy, evaluation, requires more complex reasoning and can only be assessed through open questions. Grading these open questions is a tedious task for human teachers; our approach therefore aims to reduce the teachers' workload by providing a semi-automatic and objective evaluation of students' answers.
Our software tool seeks to evaluate students' answers to open questions in e-learning course exams by comparing them to a correct answer given by the course's teacher. Using Natural Language Processing techniques, both the students' and the teacher's answers are semi-automatically annotated according to a reference domain-specific ontology. Following Semantic Web practice, the ontology is represented in OWL and can be acquired from different sources on the Web. Once semantically annotated, the students' and teacher's answers are compared using established semantic similarity measures and graded automatically according to parameters predefined by the teacher. Since our system is targeted at undergraduate computer science courses, the novelty of our approach resides in the encoding of procedural information in the ontology to accurately represent algorithmic knowledge.
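As a rough illustration of the comparison step only (not the actual system), the following Python sketch stands a toy is-a hierarchy in for the OWL domain ontology, computes a Wu-Palmer-style similarity between concepts, and grades a student's answer by matching its annotated concepts against those of the teacher's reference answer. All concept names, the scoring rule, and the pass mark below are illustrative assumptions.

# Toy is-a hierarchy standing in for the domain ontology: child -> parent (root has parent None).
PARENT = {
    "Thing": None,
    "Algorithm": "Thing",
    "SortingAlgorithm": "Algorithm",
    "QuickSort": "SortingAlgorithm",
    "MergeSort": "SortingAlgorithm",
    "DataStructure": "Thing",
    "Array": "DataStructure",
}

def path_to_root(concept):
    """Return the list of ancestors from the concept up to the root."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = PARENT[concept]
    return path

def depth(concept):
    return len(path_to_root(concept))

def lcs(c1, c2):
    """Least common subsumer: deepest ancestor shared by the two concepts."""
    ancestors1 = set(path_to_root(c1))
    for ancestor in path_to_root(c2):   # walks upward, so the first hit is the deepest
        if ancestor in ancestors1:
            return ancestor
    return "Thing"

def wu_palmer(c1, c2):
    """Wu-Palmer similarity: 2 * depth(lcs) / (depth(c1) + depth(c2))."""
    return 2.0 * depth(lcs(c1, c2)) / (depth(c1) + depth(c2))

def grade(student_concepts, teacher_concepts, pass_mark=0.5):
    """Score each reference concept by its best match in the student's answer,
    then average; the pass mark is one of the parameters a teacher could predefine."""
    scores = [max(wu_palmer(t, s) for s in student_concepts)
              for t in teacher_concepts]
    score = sum(scores) / len(scores)
    return score, score >= pass_mark

if __name__ == "__main__":
    teacher = ["QuickSort", "Array"]    # concepts annotated in the reference answer
    student = ["MergeSort", "Array"]    # concepts annotated in the student's answer
    print(grade(student, teacher))      # (0.875, True)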
This tool would be helpful in an e-learning setting where a more thorough evaluation of the students' understanding of the material is needed, or where the time teachers spend grading papers needs to be reduced.
Dr. Chadia Moghrabi is professor and head of the Computer Science Department at the Université de Moncton. She is also the research leader for the Moncton site of the Institute on Culture, Multimedia, Technology and Cognition (CMTC) and was the technical director of Arts-Netlantic CMTC; both projects were carried out in collaboration with, and under the lead of, Professor Cohen from UPEI. She obtained her Master's and Doctorate degrees from the University of Paris. She worked in Paris both in industry and as a lecturer at the Université Pierre et Marie Curie. Her research interests cover Artificial Intelligence, Intelligent Teaching Systems, and Natural Language Processing.
Eric Snow is currently pursuing a Master's degree in Computer Science at the Université de Moncton, under the supervision of Professor Chadia Moghrabi, working on the automatic assessment of open-ended questions in e-learning. For 10 years, he was a Web programmer at the Centre d'études acadiennes Anselme-Chiasson (CEAAC) of the Université de Moncton. Since 2007, he has been in charge of creating a database of archival resources and digital files for the CEAAC.