GoodFit


3 min read · By Rohit Venugopal

Proctoring best practices in the ChatGPT era

Why old-school webcam proctoring misses most modern cheating - and what to do instead.

Last updated: April 2026

What has changed

Proctoring used to mean "is the webcam on and is anyone else in the room." That was adequate in 2019. It is not adequate in 2026.

The modern cheating pattern is: a candidate has a second device or another browser tab running ChatGPT, copies the question, generates an answer, and reads it back. The webcam looks fine the entire time. Webcam-only proctoring catches nothing.

A less obvious pattern is real-time coaching: a friend or paid helper listens via a second device and whispers answers. The candidate is technically alone in the room. Eye-tracking and webcam stills do not flag it. You need audio analysis and behavioral signals to catch this.

The signals that matter now

Browser-side signals tell you what the candidate's device is doing, and they are what catch the modern pattern. A good proctoring system tracks 12 types of violations, grouped into three categories: browser behavior, visual integrity, and audio integrity. The most important include:

  • Tab and window focus loss - flags anyone switching to ChatGPT mid-answer
  • Copy-paste detection in both directions
  • Unusual keyboard or input-device activity
  • Screen-sharing detection - catches candidates streaming to a helper
  • Face detection - missing candidate, multiple faces, face change
  • Lip-sync analysis - flags speech that does not match the candidate's lip movements, such as a played-back AI voice
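To make the browser-behavior signals concrete, here is a minimal sketch of how a proctoring client could capture them. The `visibilitychange`, `copy`, and `paste` events are real browser events; the `FlagLog` shape, flag names, and `attach` helper are assumptions made for this sketch, not any specific product's API.

```typescript
// Timestamped flag capture for a proctoring client (illustrative sketch).
type FlagType = "tab_blur" | "copy" | "paste";

interface Flag {
  type: FlagType;
  at: number; // ms since assessment start, so a reviewer can jump to the recording
}

class FlagLog {
  private flags: Flag[] = [];
  constructor(private startedAt: number) {}

  // Record an event as an offset into the session recording.
  record(type: FlagType, now: number): Flag {
    const flag = { type, at: now - this.startedAt };
    this.flags.push(flag);
    return flag;
  }

  all(): Flag[] {
    return [...this.flags];
  }
}

// Browser wiring; accessed via globalThis so the pure logic stays testable outside a browser.
function attach(log: FlagLog): void {
  const doc = (globalThis as any).document;
  if (!doc) return; // not running in a browser
  doc.addEventListener("visibilitychange", () => {
    // Fires when the candidate switches to another tab or window.
    if (doc.visibilityState === "hidden") log.record("tab_blur", Date.now());
  });
  doc.addEventListener("copy", () => log.record("copy", Date.now()));
  doc.addEventListener("paste", () => log.record("paste", Date.now()));
}
```

Storing each flag as an offset into the recording, rather than a wall-clock time, is what lets a reviewer jump straight to the flagged moment later.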

Flag does not mean fail

A red flag is a prompt for review, not an automatic rejection. Good candidates trigger false positives constantly: they have a noisy room, they clicked away to check a spec, their cat walked across the keyboard. The job of proctoring is to surface moments for human review efficiently.

Every flag should be timestamped and linked to the recording so a reviewer can jump to the moment in seconds. A 30-minute assessment with 4 flags should take 3-4 minutes to review, not 30.

Set up your proctoring with severity levels. A single tab switch is low severity - maybe they checked the time. Three tab switches during one coding question is high severity. Let your reviewers focus on the high-severity clusters instead of wading through every minor alert.
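The severity rule above - one tab switch is low, a cluster during one question is high - can be sketched as a small triage step. The clustering window and the three-flag threshold here are illustrative assumptions, not fixed recommendations.

```typescript
// Group flag timestamps into clusters and escalate dense clusters for review
// (illustrative sketch; tune windowMs and the threshold to your assessment length).
type Severity = "low" | "high";

interface ReviewItem {
  times: number[]; // flag timestamps (ms) belonging to one cluster
  severity: Severity;
}

function triage(times: number[], windowMs: number): ReviewItem[] {
  const sorted = [...times].sort((a, b) => a - b);
  const clusters: number[][] = [];
  for (const t of sorted) {
    const last = clusters[clusters.length - 1];
    // A flag joins the previous cluster if it is within windowMs of its last flag.
    if (last && t - last[last.length - 1] <= windowMs) last.push(t);
    else clusters.push([t]);
  }
  // A single flag stays low severity; three or more in one window is high.
  return clusters.map((c) => ({
    times: c,
    severity: c.length >= 3 ? "high" : "low",
  }));
}
```

A reviewer would then start with the `high` items, which is exactly the "focus on clusters, not every alert" behavior described above.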

What to tell candidates

Be explicit before the assessment: what is monitored, what a flag means, what happens if one fires. Candidates who understand the rules perform better and feel respected. Hiding monitoring feels adversarial and cuts completion rates.

A simple one-paragraph opener works: "This assessment records your screen and webcam. Our system flags unusual activity like new tabs or copy-paste for a human to review. If your cat walks across the keyboard, that is fine. Do not use ChatGPT or a second device."

Transparency also protects you legally. In many regions, monitoring without disclosure creates compliance risk. A clear, upfront statement eliminates ambiguity and builds trust with candidates who have nothing to hide.

What not to proctor

Casual take-home assignments, portfolio reviews, and reference checks should not be proctored. Save proctoring for timed, in-platform assessments where integrity actually matters.

Also: do not proctor interviews with humans. The signal is in the conversation itself. Proctoring belongs on coding tests, timed skill assessments, and standardized cognitive tests - not on panel interviews.

Over-proctoring creates a hostile candidate experience. If every step of your hiring process feels like a surveillance exercise, strong candidates will drop out. They have options. Use proctoring surgically - on the steps where cheating would actually change the outcome.

Ready to try this with your next open role?

Start with 20 free credits. Run a real AI interview before you commit to anything.

Start free with 20 credits
