A rule-based decision support system for detecting, reporting, and substantiating contract cheating within assignments in computing courses in UK Higher Education

  • Ajit, S. (Author)
  • Maikkara, A. (Author)

Activity: Academic Talks or Presentations › Conference Presentation

Description

Contract cheating is a global challenge to Higher Education and has increased with the onset of COVID-19 (Erguvan, 2021; Hill et al., 2021). It is an extremely serious issue in computing courses, particularly in relation to programming (Lancaster et al., 2019; Luxton-Reilly et al., 2018). Contract cheating can also be framed more broadly as assignment outsourcing (Awdry, 2021), since the cheating need not rest on an explicit contract: it may involve having part or all of an assignment completed by family or friends. Contract cheating brings serious disrepute to universities and devalues higher education qualifications. It can have major consequences for public health and safety when students enter professions on the strength of outsourced work (Dawson et al., 2020), and it is unfair to students who do not cheat and have worked hard to earn their degrees. In the UK, the Quality Assurance Agency (QAA) has stressed that contract cheating is a far more serious matter than plagiarism because it involves a student's deliberate, intentional decision to engage a third party to complete work (QAA, 2017). The QAA has also acknowledged that “… if a student is determined to find a way to use an essay mill, they will do so. Therefore, the greater deterrent will lie in detection of their use – detection is now the priority.” Assessment design can help reduce cheating, but no assessment should be considered cheat-proof.
Detection of contract cheating is time-consuming, onerous, and difficult. A considerable amount of work has been done on software tools and methodologies to aid detection, and findings suggest that software may be an effective component of a university's efforts to detect contract cheating (Dawson et al., 2020). Examples include Turnitin’s Authorship Investigate tool, stylometry (Ison, 2020), keystroke dynamics (Byun et al., 2020), and intelligent tools for scalable student-tutor discussions (Renzella et al., 2020). It must be noted, however, that none of these tools or techniques can on its own accurately detect or substantiate contract cheating. Human judgement is still required: careful review of the evidence, together with other information such as the student’s viva/interview performance and academic engagement, to determine on the balance of probabilities whether contract cheating has occurred. Hence, our research project hypothesized that an intelligent decision support system (or expert system) corroborating evidence from different tools and sources could improve the efficiency of detecting, reporting, and substantiating contract cheating.
The research explored the possibility of using a rule-based expert system with a forward-chaining inference algorithm to support the decision making of markers and academic integrity officers. A pilot study was conducted within a UK university, where the process of detecting, reporting, and substantiating contract cheating involved several stages (a sketch of the forward-chaining mechanism follows the stage descriptions below):
(1) The marker finds cues and suspects contract cheating. They may then invite the student for a viva/interview to gather further evidence. On suspicion of contract cheating, the marker fills in a standard referral form provided by the university, states a summary of the reason(s) for referral, and sends the form to the administration team together with all the evidence.
(2) The administration team sends the form to one of the Academic Integrity Officers (AIOs) appointed by the university. These officers are usually academics who are trained and given responsibility for investigating academic misconduct cases. The AIO reviews the submission (referral form and evidence) and invites the student for an interview. Following the interview and further investigation, the AIO decides whether academic misconduct has occurred, what penalty applies, and whether the case needs to be referred to a panel for further investigation, stating the reasons for the decision in the form.
(3) The panel makes the final decision based on the facts of the case and the evidence provided.
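To make the inference step concrete, the following is a minimal forward-chaining sketch in Python. The fact and rule names are illustrative assumptions only; the pilot system's actual rule base is not reproduced here.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        antecedents: frozenset  # facts that must all hold for the rule to fire
        consequent: str         # fact asserted when the rule fires

    def forward_chain(facts: set, rules: list) -> set:
        """Naive fixed-point loop: keep firing rules whose antecedents
        are satisfied until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if rule.antecedents <= derived and rule.consequent not in derived:
                    derived.add(rule.consequent)
                    changed = True
        return derived

    # Hypothetical rules; real cues would come from markers and analytics.
    rules = [
        Rule(frozenset({"low_turnitin_similarity", "low_vle_engagement"}),
             "authorship_doubt"),
        Rule(frozenset({"authorship_doubt", "unexplained_grade_spike"}),
             "recommend_viva"),
    ]
    facts = {"low_turnitin_similarity", "low_vle_engagement",
             "unexplained_grade_spike"}
    derived = forward_chain(facts, rules)
    print("recommend_viva" in derived)  # True: both rules fire in sequence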
This referral process has been known to be time-consuming, particularly during COVID-19, with a high volume of cases and limited resources. Students could face significant delays in receiving case outcomes, and the workload involved in detecting and reporting cases could deter markers from doing so. There is also a need to ensure the consistency and accuracy of the decisions taken. To this end, our research project designed a rule-based expert system to support the decision making of both markers and academic integrity officers.
The expert system uses facts and rules to support the marker in detecting contract cheating. It generates an academic integrity score for each case and flags to the marker whether the student should be invited for a recorded viva-voce. For large class sizes it is time-consuming, tedious, and laborious for markers to conduct a viva-voce with every student; the system alleviates this by shortlisting students for viva-voce. The integrity score is calculated by acquiring data from the marker (e.g., irregularities in the assignment, such as the references or methodology used) and combining it with other signals such as learning analytics (engagement), Turnitin (unusually low similarity), assessment weighting, and grade history. A dashboard shows all the parameters contributing to the integrity score. Following the viva-voce, the marker enters viva notes into the system and decides whether to refer the student for suspected contract cheating; the system assists by autocompleting the referral form. The system further supports AIOs with a dashboard that integrates data on the student's performance and record in other modules and assignments.
The algorithm of the proposed system was tested on a small sample of marked assignments from previous years. A preliminary evaluation of the prototype design was conducted by interviewing a lecturer and an AIO; the interviews were structured and comprised largely closed-ended questions, including Likert-scale items. The feedback received was encouraging, and both agreed that such a system would improve the efficiency of detecting, reporting, and substantiating contract cheating. Work is currently underway to fully implement the system and evaluate it with a larger sample of assignments.
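As an illustration of how the parameters above might be combined, the sketch below aggregates normalized signals into a single score with a viva-voce threshold. The weights, signal names, and cut-off are placeholder assumptions for illustration; the values actually used in the pilot are not published here.

    # Placeholder weights over the parameters named in the text.
    WEIGHTS = {
        "marker_irregularities": 0.30,  # e.g., odd references or methodology
        "low_engagement":        0.20,  # learning-analytics signal
        "low_similarity":        0.15,  # unusually low Turnitin similarity
        "assessment_weighting":  0.15,  # high-stakes assessments weigh more
        "grade_history_anomaly": 0.20,  # grade out of line with past record
    }
    VIVA_THRESHOLD = 0.5  # placeholder cut-off for shortlisting a viva-voce

    def integrity_score(signals: dict) -> float:
        """Combine normalized signals (each in [0, 1]) into one score."""
        return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

    case = {
        "marker_irregularities": 1.0,
        "low_engagement": 0.8,
        "low_similarity": 0.0,
        "assessment_weighting": 0.5,
        "grade_history_anomaly": 1.0,
    }
    score = integrity_score(case)
    # Prints the score and whether the case is shortlisted for a viva-voce.
    print(f"score={score:.2f}, invite_for_viva={score >= VIVA_THRESHOLD}")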


References
Awdry, R. (2021). Assignment outsourcing: moving beyond contract cheating. Assessment & Evaluation in Higher Education, 46(2), 220–235. https://doi.org/10.1080/02602938.2020.1765311
Byun, J., Park, J., & Oh, A. (2020). Detecting Contract Cheaters in Online Programming Classes with Keystroke Dynamics. Proceedings of the Seventh ACM Conference on Learning @ Scale, 273–276. https://doi.org/10.1145/3386527.3406726
Dawson, P., Sutherland-Smith, W., & Ricksen, M. (2020). Can software improve marker accuracy at detecting contract cheating? A pilot study of the Turnitin authorship investigate alpha. Assessment & Evaluation in Higher Education, 45(4), 473–482. https://doi.org/10.1080/02602938.2019.1662884
Erguvan, I. D. (2021). The rise of contract cheating during the COVID-19 pandemic: a qualitative study through the eyes of academics in Kuwait. Language Testing in Asia, 11(1), 34. https://doi.org/10.1186/s40468-021-00149-y
Hill, G., Mason, J., & Dunn, A. (2021). Contract cheating: an increasing challenge for global academic community arising from COVID-19. Research and Practice in Technology Enhanced Learning, 16(1), 24. https://doi.org/10.1186/s41039-021-00166-8
Ison, D. (2020). Detection of Online Contract Cheating Through Stylometry: A Pilot Study. Online Learning, 24(2), 142–165. https://doi.org/10.24059/olj.v24i2.2096
Lancaster, T., Robins, A. V., & Fincher, S. A. (2019). Assessment and Plagiarism. In A. V. Robins & S. A. Fincher (Eds.), The Cambridge Handbook of Computing Education Research (pp. 414–444). Cambridge University Press. https://doi.org/10.1017/9781108654555.015
Luxton-Reilly, A., Simon, Albluwi, I., Becker, B. A., Giannakos, M., Kumar, A. N., Ott, L., Paterson, J., Scott, M. J., Sheard, J., & Szabo, C. (2018). Introductory Programming: A Systematic Literature Review. Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, 55–106. https://doi.org/10.1145/3293881.3295779
Quality Assurance Agency. (2017). Contracting to cheat in higher education: How to address contract cheating, the use of third-party services and essay mills (2nd ed.). https://www.qaa.ac.uk/docs/qaa/quality-code/contracting-to-cheat-in-higher-education.pdf
Renzella, J., Cain, A., & Schneider, J.-G. (2020). An Intelligent Tool for Combatting Contract Cheating Behaviour by Facilitating Scalable Student-Tutor Discussions. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Companion Proceedings (pp. 298–299). Association for Computing Machinery. https://doi.org/10.1145/3377812.3390795
Period: 6 May 2022
Event title: European Conference on Academic Integrity and Plagiarism 2022
Event type: Conference
Location: Porto, Portugal
Degree of Recognition: International

Keywords

  • contract cheating
  • academic integrity
  • decision support system
  • academic misconduct