AI and the Hiring Process

Are games and algorithms the future of the interview process?

Matt Clibanoff

Sep 07, 2018

Even for qualified candidates, the interview process can be a nerve-wracking experience, one that forces the interviewee to answer a series of exacting questions that have no real relevance to her ability to perform the job. From the employer’s side, things aren’t much easier.

HR reps and middle managers alike often find themselves with employees who look good on paper and talk a big game during their interviews, but don’t deliver once they’ve been hired. On top of this, there’s nothing really stopping a potential employee from flat-out lying during the hiring process. If an interviewee gets caught in a lie, she won’t get hired, but she didn’t have a job to begin with, so she’s no worse for wear. To mitigate these and the myriad other difficulties associated with the hiring process, employers have started using (in a somewhat ironic twist) artificial intelligence to aid with recruiting.

Outside of the difficulties discussed above, one of the primary motivators for companies’ move toward automated recruiting is money. It can cost nearly a quarter of a million dollars to recruit, hire, and onboard a new employee, and when someone turns out to be a dud, the effects can reverberate throughout the entire company. That said, it’s not as if corporations have HAL from 2001: A Space Odyssey handpicking the optimal candidate, not yet at least. Different AI developers offer different things. For example, x.ai specializes in scheduling interviews, Filtered automatically generates coding challenges for aspiring programmers looking for work, and Clearfit has a tool that can rank potential candidates.

These programs, however useful, only free employers from the low-level clerical work of hiring. The bulk of the sorting and selecting of candidates still falls squarely on the shoulders of the hiring manager. Enter Pymetrics, a company built on the idea of reinventing the way we conduct interviews and hire new employees. Pymetrics’ AI uses a series of algorithms and cognitive-science-based games to help pair employees and companies. The latter, though, is what differentiates Pymetrics from the competition.

The idea is simple: when an employer gives an applicant a test or asks her a series of questions, the applicant answers in a way that comports with what she thinks the interviewer wants to hear. With an objective-based game where a clear goal is outlined, a candidate has a much harder time masking her methodology. The games Pymetrics develops reportedly measure 90 “cognitive, social and personality traits” and are used to gather data on a company’s top performers. Once enough data is collected, Pymetrics can create a composite of the ideal employee. Every applicant is then measured against this composite, giving employers an objective look at who is best for the job.
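To make the matching step concrete, here is a minimal sketch of how an applicant might be scored against a composite profile. Pymetrics hasn’t published its model, so the trait names, the 0-to-1 scoring, and the use of cosine similarity below are all illustrative assumptions, not its actual method.

```python
import math

# Hypothetical trait scores (0.0-1.0), averaged from a company's top performers.
# Trait names and values are invented for illustration; the real model is proprietary.
composite = {"attention": 0.85, "memory": 0.78, "planning": 0.64, "risk_tolerance": 0.72}

def match_score(applicant: dict, profile: dict) -> float:
    """Cosine similarity between an applicant's trait vector and the composite."""
    traits = sorted(profile)
    a = [applicant.get(t, 0.0) for t in traits]
    b = [profile[t] for t in traits]
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

applicant = {"attention": 0.90, "memory": 0.60, "planning": 0.70, "risk_tolerance": 0.55}
print(f"match score: {match_score(applicant, composite):.2f}")  # ~0.99 for this applicant
```

In practice an employer would rank every applicant by a score like this one; the point is only that the comparison becomes mechanical once the composite exists.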

The use of games is far from a passing trend, however, and is not unique to Pymetrics. A recent Deloitte report revealed that nearly 30% of business leaders use games and simulations in their hiring process. Unfortunately for companies hoping AI and algorithmic programs are the cure-all for their hiring woes, the same report concluded that over 70% of those business leaders found cognitive games to be a “weak” indicator of employee success. Still, to throw another wrench into the works, there is a significant amount of evidence to support the idea that algorithms do outperform humans when it comes to hiring ideal candidates. In reality, though, humans and AI systems are simply better at different things. A person, with only so much time in the day, can’t accurately or quickly read through thousands of resumes and cover letters. But while algorithms are good at narrowing down selections and rejecting clear wrong fits, they aren’t particularly well suited to sussing out passion or work ethic. There’s also another, less obvious issue attached to AI hiring.

AI-driven and algorithmic hiring is supposedly unbiased, leaving no room for pettiness, racism, or sexism to factor into the selection process. That said, today’s leaders in AI technology are far from working out all the kinks. A crime-predicting algorithm in Florida recently labeled black people as potential criminals twice as often as it did white people. Nor is it an unrealistic jump in logic to suggest that an algorithm could see a demographic inconsistency, such as the best salesman at a particular firm happening to be male, and conclude that it should rank female job applicants lower. Pymetrics, in particular, claims that its algorithms are rigorously designed to avoid this type of bias, but the issue calls into question not only AI’s efficacy but its ethics as well. According to Sandra Wachter, a researcher at the Alan Turing Institute and the University of Oxford, “Algorithms force us to look into a mirror on society as it is,” and relying too heavily on data can make it seem as though our cultural biases are just inscrutable facts. This is what Arvind Narayanan, a professor of computer science at Princeton, calls the “accuracy fetish,” a fetish that’s all too prevalent in Silicon Valley, where AI is consistently touted as objective.
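One common, if crude, way to audit for this kind of bias is to compare selection rates across demographic groups, as in the “four-fifths rule” used in US employment law. The sketch below is purely illustrative; it is not drawn from Pymetrics’ auditing process, and the numbers are invented.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced."""
    return selected / applicants if applicants else 0.0

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate.
    Under the four-fifths rule, a ratio below 0.8 flags potential adverse impact."""
    return group_rate / reference_rate if reference_rate else 0.0

# Invented numbers: 30 of 100 male applicants advanced vs. 18 of 100 female applicants.
ratio = impact_ratio(selection_rate(18, 100), selection_rate(30, 100))
print(f"impact ratio: {ratio:.2f}")  # 0.60, well below the 0.8 threshold
```

A check like this catches disparate outcomes but says nothing about why they occur, which is precisely Wachter’s point about the data mirroring society as it is.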

In a lot of ways, it’s hard to argue against algorithmic hiring procedures. They save both time and money, and they have been proven to work in several cases. The danger is not that this technology will supplant HR reps; people will continue to be a part of the interview process, if only because liking the people you work with is one of the most important facets of productivity. Algorithms only become a problem when they’re treated as infallible oracles, capable of answering questions inaccessible to the human mind, rather than as pieces of machinery. It’s important to remember that the algorithm is a tool, an electric drill to the interview process’s hand crank. AI isn’t meant to replace human judgement, but to narrow the gap between rote tasks and decisions that require that judgement. In this metaphor, people aren’t the hand crank or the electric drill; we’re the screw.

That said, it’s human nature to appeal to authority, and the question at the heart of the Luddite’s fear is whether we can demystify this technology enough to keep trusting our guts over an algorithm’s calculations. Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.” We’ll find out soon enough whether he was right.
