AI in HR: Are We Sacrificing Human Connection for Efficiency?
In an attempt to cut operational costs and hack productivity, HR professionals have embraced the recent AI craze.
Using AI for HR purposes is actually nothing new. Applicant Tracking Systems (ATS) have long used AI to parse resumes automatically and handle complex applicant and job-market data.
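To make the idea concrete, here is a minimal sketch of the kind of keyword screening early ATS tools perform. The keywords, resume text, and scoring function are hypothetical examples for illustration, not any real product's logic:

```python
def keyword_score(resume_text: str, keywords: list[str]) -> float:
    """Return the fraction of required keywords found in a resume."""
    text = resume_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

# Hypothetical job requirements and candidate resume.
keywords = ["python", "sql", "recruiting"]
resume = "Experienced in recruiting, skilled in Python and SQL reporting."

print(keyword_score(resume, keywords))  # -> 1.0 (all keywords present)
```

A resume that phrases the same experience differently (say, "talent acquisition" instead of "recruiting") would score lower, which is exactly the brittleness that newer AI tools claim to move beyond.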
However, in a recent turn of events, artificial intelligence has become significantly more powerful and invasive, going far beyond mere keyword screening of resumes. AI aficionados now claim that these tools can completely revolutionize HR departments and, with minimal human interference, select perfectly capable, diverse candidates.
Others are not as enthusiastic, as numerous controversies related to using AI in the workplace emerge daily, from art-theft debates to the mishandling of sensitive data.
AI poses a unique risk for HR departments: losing human connection to employees and applicants in favor of the endless pursuit of maximum efficiency. Below, we’ll elaborate on the impact AI technology has on human-facing processes.
What Are The Issues With AI In HR?
Humans of HR still need to put in some work: as it turns out, artificially intelligent tools aren’t the sharpest tools in the shed.
There are several main issues to consider if you intend to use AI tech in HR:
- Most people aren’t comfortable being interviewed by an AI and find it dehumanizing and awkward.
- AIs often misinterpret subtle gestures and draw wrong conclusions.
- AIs may remove surface-level bias, but they remain as biased as their human counterparts; after all, humans designed them, and the data fed into an AI reflects existing gender and racial bias.
- AI use may compromise sensitive data, including biometric and classified/confidential information.
Let’s review these claims one by one.
People dislike being interviewed by an AI
Leaving a good first impression in a job interview is an emotionally taxing feat in itself. And just when we thought we had it figured out, now we have to impress… robots?
Automated video interviews replace human interviewers and use AI to thoroughly analyze applicants’ facial expressions, gestures, voice, and language. In a nutshell, candidates go through a depersonalized interview process that isn’t really an interview: since there’s no recruiter on the other side, they talk to the screen as instructed by a pre-programmed AI.
Research from the University of Sussex suggests that this kind of interviewing disorients candidates and makes them act more rigidly than they usually would. In addition, there is no real-time feedback or cues from a human interviewer that would let candidates gauge how they’re doing.
You smile — you lose
People who’ve experienced AI job interviews don’t trust these programs. Although the tools are said to be objective and praised as such, candidates notice errors in judgment and advise each other, for example, “not to lift one side of their mouth, which could be interpreted by the AI as smirking.”
If a mere smirk can cost an applicant a job, these concerns seem legitimate. Businesses, however, claim that AIs are just one of the tools used to evaluate job candidates and don’t have the final say.
AI tools in recruitment have even been compared to “modern phrenology,” a pseudoscience that infamously links skull shape to certain behaviors and mental traits.
Not only does AI not remove the bias — it may perpetuate it
Research by Eleanor Drage and Kerry Mackereth suggests that AI use may inadvertently help perpetuate more subtle, deeply ingrained stereotypes.
As Dr. Kerry Mackereth explained to BBC News, “These tools can’t be trained only to identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race.”
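The mechanism behind this argument can be shown with a toy sketch. The records, features, and numbers below are entirely made up for illustration: even when the gender column is dropped before training, a correlated proxy feature (here, a hobby) carries the historical bias straight into the model’s scores.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (hobby, gender, hired).
# In this made-up data, hobby is strongly correlated with gender.
records = [
    ("football", "M", 1), ("football", "M", 1), ("football", "M", 1),
    ("netball",  "F", 0), ("netball",  "F", 0), ("netball",  "F", 1),
]

# "Debiased" training: drop the gender column, keep only the hobby feature.
hired, total = defaultdict(int), defaultdict(int)
for hobby, _gender, outcome in records:
    hired[hobby] += outcome
    total[hobby] += 1

# Hire rate the model learns per hobby -- the proxy reproduces the old bias.
rates = {h: hired[h] / total[h] for h in total}
print(rates)  # football ~1.0, netball ~0.33
```

Stripping the protected attribute changed nothing here, because the proxy feature encodes the same information, which is exactly the point Drage and Mackereth make about "job-related" attributes.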
More than a human connection is at stake: biometric information and sensitive data may easily be compromised
Earlier this year, Samsung banned its employees from using ChatGPT on company-owned devices after staff members uploaded sensitive code snippets. At the same time, the company is developing internal AI tools. The conclusion: while useful in many ways, public AI tools aren’t to be trusted with sensitive data.
Biometric information is at risk, too: it can easily be harvested from video interviews and internal applicant files. HireVue, an online video interview platform, is currently facing a class action lawsuit for failing to meet the requirements of the Illinois Biometric Information Privacy Act (BIPA): “providing statutory disclosures and obtaining informed consent.”
Source: classaction.org
When candidates applied for jobs that required using HireVue, they weren’t informed that their biometric information would be collected and used during the hiring process.
The Bottom Line
AI can help immensely with automating day-to-day HR tasks where immediate human attention isn’t necessary. It is already used in ATS, where it speeds up the hiring process by selecting the CVs of fitting candidates, contacting them, and setting up online interviews.
However, recruiters and HR departments should tread carefully when using AI. The technology isn’t in its infancy anymore, but its recent advancement raises important moral and safety questions.
On top of tending to alienate people further in the remote-work era, AI tools are reported to be less unbiased than their creators claim, and to rely on outdated practices when deciding who moves on to the next round of interviews. Protecting biometric and sensitive data is another concern, since it can be misused in more ways than experts can currently anticipate.
AI-based tools still aren’t good enough to replace savvy hiring staff, eliminate racial and gender bias, or recognize the best-fitting candidate. We still need educated, experienced HR professionals to stand behind these tools and use them to their advantage, without compromising genuine human interaction.
Anja Milovanovic
A journalist turned content writer, Anja uses her investigative skills to produce high-quality SaaS, Marketing, and HR content.