Automated proctoring

Author: Xabier Lareo

Online proctoring refers to the remote monitoring and supervision of individuals during an exam. Since the COVID-19 pandemic, the use of online proctoring has skyrocketed and extended from the purely educational sphere to the professional sphere (e.g. certification exams).

Online proctoring can be conducted in several ways: live, where human reviewers monitor test-takers in real time; automated, where AI-powered software detects and flags suspicious behaviour; or hybrid, where events signalled as suspicious by the automated proctoring system are reviewed by humans. In hybrid proctoring systems, human review can take place in real time or afterwards, by analysing a record of the activity.

The growth of online proctoring services is limited by the number of human reviewers and their ability to sustain attention over long periods. To address this issue and ensure scalability, online proctoring is increasingly relying on automated detection methods to reduce the workload and costs associated with human proctors.

The main objectives of automated or hybrid proctoring systems are to authenticate users, limit users’ computer capabilities, analyse users’ behaviour and generate a report indicating which events require human reviewers’ attention.

To prevent impersonation, i.e. the possibility that another individual sits an exam instead of the expected individual, some automated proctoring systems gather various forms of identity verification data. This often includes biometric data, such as facial scans for initial identity verification and continuous liveness detection[1] throughout the exam. Some systems may also use less common biometric inputs like keystroke patterns or mouse movements.
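To illustrate how keystroke-pattern verification can work in principle, the sketch below compares two typing sessions by their digraph flight times (the gap between consecutive key presses), a common keystroke-dynamics feature. This is a generic, hypothetical example, not any vendor's actual method: the feature choice, the 40 ms threshold and all function names are assumptions.

```python
# Hypothetical keystroke-dynamics sketch: compare an enrolled typing profile
# with a live session using mean digraph flight times. All thresholds are
# illustrative assumptions.
from statistics import mean

def flight_times(events):
    """events: list of (key, press_time_ms) tuples in chronological order.
    Returns {(key1, key2): mean gap in ms} for consecutive key pairs."""
    gaps = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        gaps.setdefault((k1, k2), []).append(t2 - t1)
    return {pair: mean(ts) for pair, ts in gaps.items()}

def similarity_score(enrolled, session):
    """Mean absolute difference (ms) over digraphs seen in both profiles.
    Lower means the two sessions look more alike."""
    shared = enrolled.keys() & session.keys()
    if not shared:
        return float("inf")  # nothing to compare
    return mean(abs(enrolled[p] - session[p]) for p in shared)

enrolled = flight_times([("t", 0), ("h", 90), ("e", 175)])
live = flight_times([("t", 0), ("h", 95), ("e", 185)])
score = similarity_score(enrolled, live)
flagged = score > 40  # purely illustrative threshold in ms
```

A real system would use many more samples and a statistically calibrated threshold; the point here is only that the "biometric" being compared is timing data, not the content of what is typed.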

Some of these tools leverage users’ webcams and microphones to record audio and video feeds of the entire proctored session. To detect hidden phones, notes on walls, extra monitors or the presence of other individuals in the room, some proctoring tools require environmental scans using the webcam, or even dual-camera setups.

Other tools monitor screen activity and system level data, including browser history (during the exam session), tab switching, attempts to copy-paste content and usage of unauthorised applications. Some systems record the entire screen activity for comprehensive review.
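Much of this screen-side monitoring reduces to rule-based flagging over an event log. The sketch below shows the general idea under assumed event names (`tab_hidden`, `paste`) and an assumed tab-switch threshold; it does not reproduce any real product's API.

```python
# Hypothetical rule-based flagging over a browser event log, of the kind a
# proctoring tool might build from tab-visibility and clipboard events.
# Event names and thresholds are assumptions for this example.
SUSPICIOUS_EVENTS = {"tab_hidden", "paste", "copy"}

def flag_events(log, max_tab_switches=2):
    """log: list of (timestamp_s, event_name) tuples, chronological.
    Returns (timestamp, reason) pairs for human review."""
    flags = []
    tab_switches = 0
    for ts, name in log:
        if name == "tab_hidden":
            tab_switches += 1
            if tab_switches > max_tab_switches:
                flags.append((ts, "excessive tab switching"))
        elif name in SUSPICIOUS_EVENTS:
            flags.append((ts, f"clipboard event: {name}"))
    return flags

log = [(10, "keydown"), (42, "tab_hidden"), (80, "paste"),
       (120, "tab_hidden"), (300, "tab_hidden")]
print(flag_events(log))
# flags the paste at t=80 and the third tab switch at t=300
```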

These tools use AI components to implement functionalities such as:

  • Identity and liveness verification to prevent impersonation;
  • Behavioural monitoring and analytics (checking gaze direction, head movements and overall conduct during the examination) to detect suspicious activity;
  • Audio analysis to check for unusual sounds, conversations or the presence of multiple voices;
  • Object detection to identify unauthorised items such as mobile phones or notes.
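The behavioural-monitoring step typically sits downstream of a vision model: per-frame classifications are aggregated into review flags. The sketch below shows one simple way to do that, flagging sustained off-screen gaze while ignoring brief glances. The frame rate, labels and 3-second threshold are assumptions for illustration only.

```python
# Minimal sketch: turn per-frame gaze labels (as an upstream vision model
# might emit) into review flags. Sustained off-screen gaze is flagged;
# brief glances are not. Threshold and frame rate are assumptions.
def flag_gaze(frames, fps=2, min_off_seconds=3):
    """frames: list of 'on'/'off' labels sampled at `fps` frames per second.
    Returns (start_s, end_s) intervals of sustained off-screen gaze."""
    flags, run_start = [], None
    for i, label in enumerate(frames + ["on"]):  # sentinel closes a final run
        if label == "off" and run_start is None:
            run_start = i
        elif label == "on" and run_start is not None:
            if (i - run_start) / fps >= min_off_seconds:
                flags.append((run_start / fps, i / fps))
            run_start = None
    return flags

frames = ["on"] * 4 + ["off"] * 8 + ["on"] * 2 + ["off"] * 2
print(flag_gaze(frames))  # one flag for the 4-second off-screen run
```

Note that even this toy example embeds a contestable design choice: the threshold decides whose glances count as "suspicious", which is exactly where the false-positive concerns discussed below arise.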

Proctoring tools use AI components to expand their automated capabilities and, providers claim, to use human proctors’ time and skills more efficiently. However, AI components have limited contextual understanding and lack human proctors’ intuition and capacity to interpret the nuances of human behaviour. This limitation is most likely why automated proctoring tools are allegedly prone to generating false positives.

Trend developments

In recent years, most proctoring tools have incorporated AI-powered capacities. However, AI is expected to play an even more prominent role in future proctoring tools, evolving from simply detecting suspicious behaviour in one exam to using predictive analytics to identify patterns across multiple exams.

Another trend in proctoring tools is the use, on top of the computer’s webcam, of a second camera (often a mobile device) for a more comprehensive room scan.

An additional ongoing trend is tighter integration of proctoring tools with learning management systems and educational platforms, which will improve scalability and user experience (e.g. authenticating users only once).

Despite the increasing use of automated proctoring, there is a clear pushback from some users against AI-powered automation of proctoring tools. This pushback has led some proctoring service providers to stop offering fully automated reports and go back to hybrid proctoring.

According to Research and Markets, the global market for Online Exam Proctoring, valued at USD 941.3 Million in 2024, is projected to reach USD 2.1 Billion by 2030, growing at a CAGR of 14.7% from 2024 to 2030.[2] 
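The cited figures are internally consistent: compounding the 2024 valuation at the stated growth rate over six years lands close to the 2030 projection.

```python
# Sanity-check the cited market figures: USD 941.3 million in 2024,
# growing at a 14.7% CAGR over the six years to 2030.
start = 941.3e6          # USD, 2024 valuation
cagr = 0.147             # compound annual growth rate
years = 2030 - 2024
projected = start * (1 + cagr) ** years
print(f"{projected / 1e9:.2f} B")  # ≈ 2.14 billion, consistent with ~USD 2.1B
```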

Potential impact on individuals

The use of automated proctoring tools in online assessments raises significant concerns. Despite claims that AI features perform better or more fairly than human proctors, there is a lack of transparency about how these models are trained, how well they perform and how explainable their decisions are. This makes it difficult for students to challenge decisions, undermining fairness and accountability.

Automated proctoring tools have faced allegations of bias, particularly when verifying identities or checking liveness for individuals from minority demographic groups. Automated proctoring software can also be unfair to students with conditions like attention deficit disorder, Tourette's syndrome, autism or dyslexia[4]. Due to atypical movements or their reliance on assistive technologies, these students may be mistakenly identified as cheating. Such errors can have serious consequences, unfairly excluding users from assessments or subjecting them to considerable stress.

Typically, users access proctoring tools from their homes, often from private areas like bedrooms or shared spaces like living rooms, where they may be with family members or roommates. Consequently, the use of proctoring tools can be highly intrusive and could allow inferring personal details, including socio-economic conditions or even sexual orientation (e.g. from posters in the room). The use of a second camera can further increase the intrusiveness of proctoring, while the reliance on user consent as a lawful basis for the processing of personal data is problematic due to the inherent power imbalance between educational organisations and their students.

Although all proctoring tools share the same goal of enforcing pre-established rules, they differ significantly in their approaches and the types of personal data they collect. This variation raises a concern: there is no clear agreement on what personal data is truly necessary to achieve this goal. In other words, proctoring tools do not uniformly apply the principle of data minimisation, which requires that only the minimum amount of personal data necessary for a specific purpose should be collected. This inconsistency suggests that some tools may be collecting more personal data than is strictly necessary, highlighting the need for clearer guidelines and standards.

As automation increases, so does the volume of data collected by proctoring tools. This raises concerns about data breaches, which have already occurred. In 2020, two different tools were hacked. In one of the breaches, the usernames, unencrypted passwords, legal names and full residential addresses of more than 440,000 users were leaked. In the other breach, the leaked data included facial recognition data, contact information, names, emails and videos.[3]

While automated proctoring tools might foster the right to education by enabling remote evaluation of students, their use also carries significant risks that need to be managed. These risks include potential biases and errors in their AI-powered components, particularly for students with special needs or from minority groups, which can lead to false accusations of cheating. Additionally, the collection of personal data, including video and audio feeds from private spaces, raises concerns about privacy and data protection. Furthermore, the power imbalance between the controller (educational institution) and the data subject (student) can render user consent invalid, as students may feel pressured to agree to the use of these tools.

Suggestions for further reading
  • Castets-Renard, C., & Robichaud-Durand, S. (2023). Logiciels de surveillance d’examens en ligne en temps de pandémie: à la recherche d’une minimisation des risques d’atteinte à la vie privée des étudiants. Revue générale de droit, 53(1), 207-245.
  • Burgess, B., Ginsberg, A., Felten, E. W., & Cohney, S. (2022). Watching the watchers: bias and vulnerability in remote proctoring software. In 31st USENIX security symposium (USENIX security 22) (pp. 571-588).
  • Slusky, L. (2020). Cybersecurity of online proctoring systems. Journal of International Technology and Information Management, 29(1), 56-83. 

[1] Liveness detection is the technology used to verify that the person taking a test or exam is a live human being, rather than a pre-recorded video or an impersonator, by analysing various biometric and behavioural indicators.

[3] Online exam tool ProctorU admits breach after hackers leak its database, https://hackread.com/online-exam-tool-proctoru-breach-database-leak/

[4] Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people, May 24, 2022, Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, Andrew Crawford