GoodFit

Coding Assessments

Real coding challenges, not textbook puzzles

15 languages, real runtimes. Public and hidden test cases. See every run attempt, not just the final submission. Desktop-only with proctoring built in.

solution.py
def two_sum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

Test Cases

Test 1

Public · 230ms

Test 2

Public · 180ms

Test 3

Hidden · Failed

Test 4

Hidden · Running
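As an illustration of how public and hidden cases exercise a solution like the one above (the inputs here are examples, not GoodFit's actual test suite):

```python
def two_sum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

# Public case: input and expected output are visible to the candidate
assert two_sum([2, 7, 11, 15], 9) == [0, 1]

# Hidden-style edge cases: empty input, negatives, duplicates
assert two_sum([], 5) == []
assert two_sum([-3, 4, 1], -2) == [0, 2]  # -3 + 1 == -2
assert two_sum([3, 3], 6) == [0, 1]
```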

Trusted by fast-growing companies

Astuto · The Sleep Company · Hudle

IDE

15 languages, real runtimes

Python, JavaScript, TypeScript, Java, Go, C++, C, C#, Ruby, PHP, Kotlin, Bash, Assembly, MySQL, and more. These run on the same runtimes your engineers use - not a stripped-down sandbox subset. Candidates code in what they actually know, not in a "close enough" language.

  • Syntax highlighting, bracket matching, proper autocomplete
  • Install common packages per language (within resource limits)
  • Run output + stderr in real time - not a black-box "submit and wait"
  • Same IDE UX across all languages so candidates are not fighting the tool
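The "run output + stderr in real time" bullet can be sketched with a minimal runner. This is a hypothetical illustration, not GoodFit's actual sandbox: it executes a candidate's script and captures stdout and stderr separately, with a time limit.

```python
import subprocess
import sys

def run_solution(path: str, stdin_data: str = "", timeout: int = 10):
    """Run a candidate's script; return (exit code, stdout, stderr)."""
    try:
        proc = subprocess.run(
            [sys.executable, path],
            input=stdin_data,
            capture_output=True,  # keep stdout and stderr as separate streams
            text=True,
            timeout=timeout,
        )
        return proc.returncode, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        return None, "", "time limit exceeded"
```

Keeping stderr separate is what lets the candidate see a traceback next to their output instead of a black-box "submit and wait".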

Test cases

Public visible, hidden enforced

Public tests tell candidates what the function should do. Hidden tests catch the edge cases you actually care about: empty inputs, off-by-ones, 10⁹ elements, negative numbers. Copy-paste solutions consistently fail hidden cases - we score on both, weighted as you choose.

  • Public tests with visible input / expected output
  • Hidden tests with weight and time limits per case
  • Per-case scoring: partial credit on partial pass
  • Edge-case test packs per question (empty, large, negative, off-by-one)
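As a sketch of how weighted per-case scoring with partial credit can work (illustrative only; the data shape is an assumption, not GoodFit's API):

```python
def score_submission(results):
    """results: list of (passed, weight) pairs, one per test case.
    Returns a 0-100 score; partial passes earn partial credit."""
    total = sum(weight for _, weight in results)
    if total == 0:
        return 0.0
    earned = sum(weight for passed, weight in results if passed)
    return round(100 * earned / total, 1)

# Two public cases (weight 1 each) pass, one hidden case (weight 3)
# fails, one hidden case (weight 2) passes -> 4 of 7 weight earned.
assert score_submission([(True, 1), (True, 1), (False, 3), (True, 2)]) == 57.1
```

Weighting hidden cases more heavily is what makes a copy-pasted surface solution score poorly even when every public case passes.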

Attempt history

See how they think, not just what they submit

Every 'Run' is logged. You see the broken first attempt, the off-by-one debug, the refactor. Candidates who guess-and-submit look different from candidates who read the problem, plan, and iterate. The attempt history shows you which one you're hiring.

  • Full run history with code snapshots at each Run
  • Pacing and timing analysis to see how candidates work
  • Highlight moments of pasted-in code (proctoring integration)
  • Compare final submission to intermediate attempts
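A minimal data model for the run history described above (illustrative; field names are assumptions, not the actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RunAttempt:
    code_snapshot: str   # full editor contents at the moment of Run
    passed: int          # test cases passed on this run
    total: int
    timestamp: datetime

@dataclass
class AttemptHistory:
    attempts: list = field(default_factory=list)

    def record(self, code: str, passed: int, total: int):
        self.attempts.append(
            RunAttempt(code, passed, total, datetime.now(timezone.utc)))

    def progression(self):
        """Pass counts across runs; a climbing curve shows iteration."""
        return [a.passed for a in self.attempts]

history = AttemptHistory()
history.record("def two_sum(...): ...", 0, 4)  # broken first attempt
history.record("def two_sum(...): ...", 2, 4)  # public cases pass
history.record("def two_sum(...): ...", 4, 4)  # edge cases fixed
assert history.progression() == [0, 2, 4]
```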

Timing

Per-question timers to set the right bar

A Senior Backend system-design question and a junior FizzBuzz screen should not share a timer. Set per-question limits so your rubric matches your role. Overall time budgets still work for comparability across candidates.

  • Per-question timers with soft-warning and hard-stop
  • Overall assessment timers for comparability
  • Role-specific defaults included in the library
  • Timer events on the attempt history for review
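The soft-warning / hard-stop behavior can be sketched like this (a hypothetical model, not the real implementation; the thresholds are examples):

```python
import time

class QuestionTimer:
    """Per-question timer with a soft warning and a hard stop."""
    def __init__(self, limit_s: float, warn_at_s: float):
        self.limit_s = limit_s
        self.warn_at_s = warn_at_s
        self.start = time.monotonic()

    def status(self):
        elapsed = time.monotonic() - self.start
        if elapsed >= self.limit_s:
            return "hard_stop"     # e.g. auto-submit current code
        if elapsed >= self.warn_at_s:
            return "soft_warning"  # e.g. show a banner, keep editing
        return "running"

# 30-minute question with a warning at the 25-minute mark
timer = QuestionTimer(limit_s=1800, warn_at_s=1500)
assert timer.status() == "running"
```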

Challenge types

Standard problems, web apps, and database queries

Not every coding challenge is a terminal input/output problem. GoodFit supports three challenge types: standard input/output problems for algorithms, web-based challenges where candidates build a working page or API, and database challenges where candidates write real queries. Each type runs in its own sandbox with appropriate tooling.

  • Standard input/output for algorithm and data-structure problems
  • Web challenges for frontend and full-stack roles
  • Database challenges with real query execution for data roles
  • Desktop-only experience - mobile is blocked to maintain integrity
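A hypothetical assessment configuration showing how the three challenge types might sit side by side; the keys, values, and question titles are illustrative, not GoodFit's real schema:

```python
# Illustrative only: one assessment mixing the three challenge types.
assessment = {
    "questions": [
        {"type": "stdio",    "title": "Two Sum",
         "language": "any",  "time_limit_min": 20},
        {"type": "web",      "title": "Build a paginated user list",
         "stack": ["typescript", "react"], "time_limit_min": 45},
        {"type": "database", "title": "Top customers by revenue",
         "engine": "mysql",  "time_limit_min": 15},
    ],
    "desktop_only": True,  # mobile blocked to maintain integrity
}

assert {q["type"] for q in assessment["questions"]} == {"stdio", "web", "database"}
```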

Customer story · Xcelore

Xcelore runs role-specific coding assessments for every engineering hire - and gets a ranked, comparable shortlist instead of a slush pile.

Before GoodFit, the process was entirely manual. Recruiters reviewed every resume, then invited shortlisted candidates to full-day in-person rounds. There was no early filtering layer, so a lot of time was spent before anyone knew whether a candidate could actually code.
Sakshi Srivastava
Campus Hiring Recruiter, Xcelore
Read the full story

6 weeks

Hiring time saved

2,600+

AI interviews conducted

What you get

15

languages with real runtimes

Public + Hidden

test case system

Full

attempt history per candidate

Per-question

time limits

FAQ

Questions hiring teams ask about Coding Assessments

Which languages are supported?
Python, JavaScript, TypeScript, Java, Go, C++, C, C#, Ruby, PHP, Kotlin, Bash, Assembly, MySQL, and more. Let us know if you need something else - we add languages regularly.
Can candidates use their own IDE?
No, and that is intentional. The in-browser IDE is how we capture copy-paste events, run attempts, and proctoring signals. It behaves like a familiar code editor, so candidates adapt in seconds.
How do you prevent LeetCode-style memorization?
Question packs include variations and private questions you can author per role. The library tracks question exposure so high-traffic questions get rotated. Hidden test cases catch candidates who paste a surface solution.
Can I see a sample report?
Yes. Book a demo or burn a few free credits on a practice assessment of your own. The report includes scorecard, test results, attempt history, and proctoring flags (if enabled).

Get started for free

Start hiring smarter today

Every account comes with 20 free credits. No credit card, no lock-in, no surprises.

Start free with 20 credits