Don’t Let Artificial Intelligence Pick Your Employees

A Stanford GSB scholar shares why algorithms aren’t sophisticated enough to make these strategic decisions ... yet.
Hiring should be a strategic decision, says one Stanford GSB professor, and AI isn’t ready for that role. | Reuters/Brian Snyder

In 2014, Amazon launched a new recruitment algorithm to help it find the best job candidates. A year into the experiment, the company saw that the tool was biased against women and quietly shut the program down. When Reuters broke the story last October, John Jersin, the product leader for LinkedIn Talent Solutions, offered his thoughts on the general landscape of algorithmic hiring: “I certainly would not trust any AI system today to make a hiring decision on its own,” he said. “The technology is just not ready yet.”

Implicit in his comment is the notion that, someday, these systems will be ready. But work by Adina Sterling, an assistant professor of organizational behavior at Stanford Graduate School of Business, questions this optimism, linking it to a deep — and deeply problematic — misconception of hiring’s strategic role.

In a new paper coauthored with Daniel W. Elfenbein of Washington University in St. Louis and published in Strategy Science, Sterling articulates how smart hiring is inextricable from long-term corporate strategy; she also explains why delegating the responsibility of hiring to machines, at least in the near future, is likely to undermine its strategic potential.

“With technology increasingly stepping into this role, we’re at a moment in which these questions of higher-level strategy ought to be of great importance,” she says.

From Monster.com to Algorithmic Hiring

The use of machines in hiring became widespread roughly a quarter century ago, when career platforms like Monster.com emerged on the web. These websites allowed companies to speed up the pace and expand the scale of recruitment. Job posts that once might have fielded 20 applicants were suddenly flooded with 200 in a matter of minutes. “The vast majority of the work since then has been focused on this sourcing side,” Sterling says. “It’s been about filling up the pool with applicants that you think you need and separating them from the ones you don’t.”

Today, between 65% and 70% of all job applications are touched first by a machine. After an initial culling, the best-fit candidates are, in most cases, handed off to a human. Yet even as computers get better at parsing information, and as our online footprints make ever more of it available, the relatively strict filtration most sourcing algorithms apply creates a problem. “It’s becoming much harder to find unusual talent, given these candidates don’t fit squarely into one category,” Sterling says. Consider somebody with a background in improv comedy applying for a sales job: A hiring manager might recognize potential; a computer likely wouldn’t.
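The filtering problem Sterling describes can be made concrete. The sketch below is purely illustrative, not any vendor’s actual system: a naive keyword screener of the kind used on the sourcing side, which passes résumés that match a job category and discards everything else, including the improv candidate a human might flag.

```python
# Hypothetical keyword screener for a sales role. The keyword list and
# threshold are invented for illustration; real sourcing tools are more
# elaborate but can share the same categorical blind spot.
SALES_KEYWORDS = {"sales", "quota", "crm", "pipeline"}

def keyword_screen(resume_text: str, required_hits: int = 2) -> bool:
    """Pass a résumé only if it mentions enough sales keywords."""
    text = resume_text.lower()
    hits = sum(1 for kw in SALES_KEYWORDS if kw in text)
    return hits >= required_hits

conventional = "Five years in sales, exceeded quota, managed the CRM pipeline."
improv = "Eight years of improv comedy: reading a room, thinking on my feet."

print(keyword_screen(conventional))  # True  -- squarely in the category
print(keyword_screen(improv))        # False -- the unusual talent is culled
```

The screener never sees the improv candidate’s potential because that potential isn’t encoded in the category it was programmed to match, which is the nuance Sterling argues gets lost.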

While algorithms are efficient, they often lose nuance that may have been present 10 years ago when AI was not as prevalent.
Adina Sterling

At the same time, the use of algorithms and artificial intelligence is slowly expanding its purview from sourcing into actual selection. While not yet widespread, the development of this practice at scale is on the horizon. “And yet handing sourcing and selection off to machines is a fundamental problem if we start to understand hiring as truly strategic,” Sterling says. “There is still a lot that human beings do that has to do with strategy, and scholars need to think through what strategic managers should be doing in the context of hiring.”

Hiring as Strategy

To claim hiring as strategic, Sterling and Elfenbein first define what they mean by this term. A decision is strategic, they say, when it meets four criteria:

  1. Irreversibility: Once a decision is made, the stakeholders are committed to each other, the choices available to competitors are irrevocably changed, and some options are necessarily foreclosed while others are opened.
  2. Interdependency: The benefit of one decision is contingent on its alignment or complementarity with other decisions made elsewhere in the company. Interdependency therefore places a premium on knowledge sharing and coordination among decision makers.
  3. Competition: Competitors’ responses influence the value of different alternatives. Decision makers must thus put themselves in the shoes of these competitors and forecast how they might react to each alternative across many imagined futures.
  4. Learning under uncertainty: Companies account for the uncertainty inherent in hiring; they hire not just to get a specific job done, but also with an eye to learning new knowledge and getting connected to new people, communities, customers, and suppliers.

After conducting a series of interviews with hiring managers, AI and machine learning experts, and personnel-technology start-up executives, Sterling and Elfenbein argue that each of these conditions applies to hiring, and so it ought to be considered strategic. “And things that you can program or automate — in other words, what machines are already doing in hiring — are not the strategic aspects of hiring. The strategic aspects encompass attending to those four elements,” Sterling says.

Hiring strategically thus means thinking not just about any one applicant’s skills — the “best athlete” approach — but about how to holistically and consistently source talent and integrate it into the organization. It also means sticking with this human capital strategy over the long term, since these are commitments that cannot be reversed without cost.

In short, hiring is not a one-off search for the most suitable candidate for a specific job, but a process akin to filling a roster. Hiring demands a global view of the company and its direction within a shifting market. Computers do not possess such a view.

The Role of AI

This is not to say that hiring algorithms should be discarded. But their role within any HR department, says Sterling, needs to be squared with the strategic centrality of deciding whom a company employs. This means two things.

First, hiring managers need to take a careful look at what’s under the hood of the algorithms they have in place. “There are a lot of filters that any given applicant pool has gone through that managers may have no idea about,” she says. “And while algorithms are efficient, they often lose nuance that may have been present 10 years ago when AI was not as prevalent.” Consider, again, the sales candidate who excels at improv.

Second, after determining whether these filters need to be adjusted, managers should think about the concrete value of their hiring algorithm. Yes, it sorts résumés quickly. But what else does it do? And what does it not do? They should then fit these insights into broader consideration of the company and its strategic trajectory. How might a blind algorithm support or hinder this movement?

“I would love managers to have the sense that they’re held accountable for what algorithms are doing,” Sterling says. “They should, at the very least, understand that this is a responsibility of theirs.”
