By James Galvin, CEO and Founder of vsource.io
The Flawed Interview Process
I still remember my first proper job interview. A starry-eyed computer science graduate, I donned my best Dunne’s Stores shirt and my trusty faux suede blazer and dragged myself away from Baldur’s Gate for the afternoon.
The HR Manager gave me two tests:
First: the programming test, almost entirely specific to ASP.NET, with which I had no experience. I handed in a blank booklet. No problem, I was told, because this was a graduate job opportunity which did not require prior experience. Invisible high five.
Second: the brainteasers. Puzzles, logic tests, analytical reasoning. Thirty questions and one hour. I was in my element. I got the maximum score and in record time. Actual high five.
After a token meeting with one of the directors, where we talked about how many pizzas get delivered in Cork City every week, I was offered the job. I walked out of there with a big smile on my face, and with a very flawed understanding of how an interview process should work.
Two years later, when I founded my own tech company, I put a similar flawed system in place for hiring my dev team.
I made an exam with a mixture of programming questions and logic puzzles. The Castle Puzzle, in the title image, is one that most people failed to solve. I remembered the names of everybody who got that right. Did it have any correlation to a successful employee in the long term? None that I’m aware of.
No Place for Gimmicks
I recently heard about a new HR technology startup whose product has candidates play a game for 8 minutes. Over 400 measurements per second are taken, which apparently can determine your career fit. According to Lensa, I’m an inventor, like Nikola Tesla. I’m chuffed. But I’m very skeptical about companies using tools like this to help with their hiring.
We have seen these kinds of HR games before, evaluating obscure metrics like Memory Span, Risk Learning, Altruism Preference. Without a team of neuroscientists interpreting the results specifically for your company, tools like these will merely introduce dozens of new assumptions and hypotheses and make spaghetti of your hiring variables.
The simpler and more unambiguous your interview evaluation metrics are, the better. And without a clear and consistent framework for decision making, your interviewers will soon be tangled up in cognitive bias.
I grew up with Lewis Carroll and Raymond Smullyan. Labyrinth guards, knights and knaves, identical twins, prisoners with hats. Over the years I’ve had a propensity to hire chess players or young people who distinguish themselves in mathematics or logic. I wanted to hire very smart people and, naively, this was my proxy for measuring intelligence.
Can you make this assumption: a great chess player has what it takes to be a great software developer? Probably not. A game of chess is a completely different stage. A very specific set of rules with no grey areas, no buggy compilers, no need for teamwork, no distractions, no irrational end-users. If you want to make an assumption like this, you should be prepared to back it up with real evidence. Try holding a chess tournament in your company and see how the rankings compare to your performance reviews. My guess is that there will be no obvious correlation.
The main reasons people are good at chess? They have an aptitude for chess, or they have a lot of experience playing chess. A high score on an IQ test? It shows an aptitude for IQ tests.
If you want to test somebody’s ability to become a great software developer, then you should measure their ability to develop software.
The Growing Need to Avoid False Negatives
LinkedIn is bubbling with articles about the best ways of hiring the stand-out candidate. However, in this competitive climate, we need to focus more on ways of hiring the best candidate who doesn’t stand out.
If you’re dealing with a very high candidate-to-hire ratio, like most top tech companies these days, you can’t afford to miss out on the “diamonds in the rough”. High potential applicants who undersell themselves in the interview process. You must be keenly aware of why your candidates are being eliminated.
The Insightful Interviewer Trap
Edward de Bono wrote about “the intelligence trap”:
“A highly intelligent person may have a certain view on a subject and use his or her thinking just to support that view. This is done with arguments that make a great deal of sense. The more able a thinker is to support a point of view, the less inclined is that thinker to explore other points of view. Since the original point of view may be based on prejudice or habit, this failure to explore the subject is bad thinking.”
Experienced interviewers face a similar threat: being a “good judge of character”. The tendency to rely too much on instinct, using heuristics and being quick to pigeonhole the applicant. In general, people are overconfident in their ability to judge others.
If you’re trying to reduce false negatives, interviewers must be especially wary of the “horns effect” and of anchoring on factors that are vague or unrelated to the job.
A dead fish handshake could be a valid reason for rejection if you’re looking for a sales person, but what if you’re hiring a database administrator? In order to avoid the interviewer’s ego or personal biases having too much influence, recruiters need very clear guidelines about what should be judged, and what should be disregarded.
Reconstructing the Interview Process
A good decision is one that is made with a good process.
Start with as few evaluation metrics as possible. Take a close look at your hiring process. Then strip it back to the basics. Get rid of all the pointless metrics that you can’t clearly justify. Scrap the gimmicks, brainteasers, personality tests, IQ tests. You can try adding some of these back later, one at a time.
Re-examine your candidate search criteria and cast a wider net. Question whether you really need to target candidates from the most prestigious universities or the most hallowed tech companies.
Try to validate every hypothesis. Do GPA scores make a difference in the long run? Do you really need native English speakers? You can use your performance evaluation data to try to answer questions like these. The metrics that you use in the interview process should also be mirrored in the performance review process.
Reconsider the way you gauge “cultural fit”
Maybe I’m getting old. I groan every time I read a job description that references beer pong skills or light sabres. I hope by now people no longer decide to join a company based on the availability of craft beer or the presence of a docile dachshund in the office. And I hope that interviewers no longer evaluate candidates based on “favourite Firefly character” or “ability to quote the Princess Bride”.
What is the best way to test for cultural fit? Start by finding the most foolproof, unambiguous way of scoring a candidate against each of your company’s core values. Your recruitment team can maintain a wiki to keep track of indicators that point to the presence or absence of these values.
If a couple of years from now you realise that your hiring assumptions were completely wrong, at least you can pray for randomness to save you. In the meantime, learn how to up your interview game in 4 easy steps.
Oh, and if you’re wondering about how to get across the moat, the secret is in the corner. Have fun storming the castle!