Why engineers refuse coding tests for interviews
The idea behind a coding test is very simple: to filter out candidates who do not have the technical chops for the role early on in the process, before the hiring manager and candidate both waste their time with an in-person interview.
But most engineers today frown at the idea of completing a coding test, and over 50% outright refuse to do status quo assessments (based on our research with 100+ companies in SEA).
Here are three of the most common reasons why engineers hate status quo coding tests:
- They test for algorithmic skill rather than ability to write code
Companies need assessment scores that clearly separate candidates, and the easiest way to create that spread is to use trick questions. To do well on these tests, candidates need to spend weeks practising writing code for a list of trick questions, and only a fraction of developers ever manage to.
As an interviewer, it is very easy to forget how stressful the interview setting is for the interviewee. Having to write executable code, with the timer ticking, for a niche algorithm you studied at school (and only if you were a CS major) and have never actually used in your work as an engineer can be extremely intimidating.
While it's great if someone's good at algorithms (a skill that can be improved with practice), it is not a strong indicator of how good an engineer someone is or how well they will do in the role. Only a small fraction of tech roles require strong algorithmic ability. This way of measuring developer skills also has an inherent bias against more experienced developers, who are further removed from their university coursework.
As you can imagine, no great developer is excited about the prospect of taking a test in the first place. Add to that the fact that the questions are irrelevant, and you get 50% of candidates who outright refuse to take these tests.
- They're too big an ask
Asking candidates to spend more than an hour on a coding test before you've committed any time to them is unfair.
A three-hour coding test defeats the purpose of automation, because, while the hiring manager has nothing to lose, the candidate now needs to spend more time on it than they would for a video or in-person interview.
The longer your assessment, the lower the test-taking rate will be.
- It's harder to code in an unfamiliar environment
Most developers have a preference for an IDE (integrated development environment) that they've customized to help them write code seamlessly. A test environment is unfamiliar, and therefore harder for a software engineer to function in optimally. This is especially true when the test goes beyond writing a programming language in a simple code editor and also assesses front-end or back-end framework skills.
Developers often challenge the validity of coding tests/assessments because of these and other reasons – understandably so.
So should we skip coding tests altogether?
That is not an option. Anyone involved in tech hiring knows that plenty of applicants aren't really qualified for the role, so some kind of litmus test that candidates must pass before being invited to an interview is a necessity.
Can't we use a resume screen instead?
Software engineers don’t tend to be good at selling themselves, and great candidates often massively undersell themselves on paper. At best, a resume screen helps you eliminate some candidates who are very clearly not qualified for the role and sort resumes by priority. Beyond that, a resume filter has an inherent bias toward candidates with good credentials (education and work history). Good programmers can come from anywhere, and using keyword matching means you're probably missing out on a lot of great candidates. But you can’t just start interviewing everyone who applies.
So you need an assessment solution. How do you evaluate whether the one you're using is a good one?
Here are the top things you want to check for. You're in good hands if:
- Your test-taking rate > 70%
- The average time to complete the assessment is between 45 and 75 minutes
- When you ask candidates for their feedback during in-person interviews, they have good things to say about their experience.
- Hiring managers are happy with the quality of candidates getting forwarded to the in-person rounds
If your current solution does not satisfy these criteria, you might be missing out on strong candidates for your team. As software engineers and hiring managers, my co-founder, Siddhartha Gunti, and I have used most of the status quo solutions ourselves and found the results unsatisfactory. Here's what we've been working on for the past couple of years, and where we've seen early success.
At our company Adaface, we're building a way for companies to automate the first round of tech interviews with a conversational AI, Ada.
It’s not just a coding test in chatbot format. Here's what's different:
- Shorter assessments (45–60 mins) so engineers can finish quickly and still showcase their expertise
- Custom assessments tailored to the requirements of the role (no trick questions)
- Questions at the simpler end of the spectrum (it is a screening interview) with a generous time allowance (3x the time it takes our own team to write code for them)
- Extremely granular scoring that eliminates false positives and false negatives
- Friendly candidate experience (hints for each question, friendly messaging and chatbot; average candidate NPS is 4.4/5)
When asked about what convinced them that Adaface was the right choice, this is what Brandon Lee, Head of People & Culture at Love, Bonito, had to say:
“Externally, we were receiving positive feedback on the ease of use and convenience of Adaface from candidates. Internally, having tested it with our team members, we were also amazed at the validity and rigor of the assignments.”