About a year ago, I was asked at a recruitment event whether artificial intelligence (AI) could really remove human bias from the recruitment industry. My short answer remains the same: yes. The long answer? AI can only remove bias if bias is not already woven into its programming.
Like a bug hiding in a piece of code, AI can suffer from human error before it ever reads its first resume. For the sake of this article, though, I will assume the framework of this imaginary AI is error-free. In that perfect scenario, my immediate reaction would be that AI could definitely remove human bias from recruitment. But alas, I feel that hasty reaction is wrong.
An AI is not programmed with the negative social norms a human may subscribe to, and ideally this would mean it holds no inherent bias (racial, gender, age, etc.) when choosing a candidate. Unlike with a human hiring manager, candidates need not worry about the social prejudices an AI might hold against them.
However, what AI does is learn: it can learn trends, statistics, and patterns in order to maximize its efficiency. It can correlate everything from a name, to a school, to a company, a background, hobbies, even tone of voice and diction, building up a profile that lets it differentiate candidates from one another. In turn, an AI can develop its own learned bias towards those it believes will be successful or unsuccessful.
In some cases, an AI's inputs are generated by the system itself: not just what humans explicitly tell it to do, but what it figures out on its own – hence the term artificial intelligence. Imagine the system realizes that, based on historic hiring trends for the exact role it is filling, anyone named Lauren is more likely to succeed than someone named Claire. Would it be right for Lauren to land an interview over Claire, just because of a name?
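To make this concrete, here is a minimal sketch of how a naive scoring model could absorb exactly this kind of name-based correlation. The historical records, the names, and the scoring rule are all made up for illustration; real applicant-screening systems are far more complex, but the underlying failure mode is the same:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (candidate name, was the hire successful?)
history = [
    ("Lauren", True), ("Lauren", True), ("Lauren", True), ("Lauren", False),
    ("Claire", True), ("Claire", False), ("Claire", False), ("Claire", False),
]

# "Training": tally the historical success rate for each name.
successes, totals = defaultdict(int), defaultdict(int)
for name, succeeded in history:
    totals[name] += 1
    successes[name] += succeeded  # True counts as 1, False as 0

def score(name):
    """Naive learned score: the success rate of past hires sharing this name."""
    return successes[name] / totals[name] if totals[name] else 0.5

# The model now ranks Lauren above Claire on nothing but a name.
print(score("Lauren"))  # 0.75
print(score("Claire"))  # 0.25
```

No one programmed "prefer Laurens" into this model; the preference fell out of the data it was shown, which is precisely how such a bias could slip past a well-intentioned developer.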
This example might seem farfetched, but the truth is we aren't far from this reality. As promising as artificial intelligence is, it is also potentially dangerous. Implementing even a basic form of this technology in place of human beings is a decision that requires a great deal of thought.
In short, bias is part of trend analysis: a healthy amount of it helps us recognize patterns when evaluating candidates. Some biases are wrong, some go unspoken, and some are simply statistical facts. But in today's world of equal opportunity (rightfully so), no amount of supporting data should allow you to prejudge an individual. Try teaching that to a machine, however.
So let's take a step back and ask ourselves: what if AI were asked to analyze only skills, experience, and education, and nothing else? Could it be unbiased in its hiring efforts? Possibly.
Even under these parameters, an AI system might recommend candidates from Yale over candidates from the University of Texas. Why? Because the historical data it learned from favors Yale graduates over University of Texas graduates. The cold data may be there, but it's clear that bias can still exist in the most formulaic of AI.
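The school example can be sketched the same way. Even with the feature set restricted to education, a "neutral" field like school name becomes a proxy for whatever shaped the historical outcomes. The placement rates and candidates below are invented for illustration:

```python
# Hypothetical placement rates learned from past hires, keyed by school.
# "School" looks like a neutral education feature, but ranking on it
# simply replays whatever bias shaped the historical data.
placement_rate = {"Yale": 0.80, "University of Texas": 0.60}

def rank(candidates):
    """Rank candidates by the learned placement rate of their school."""
    return sorted(candidates,
                  key=lambda c: placement_rate[c["school"]],
                  reverse=True)

candidates = [
    {"name": "A", "school": "University of Texas"},
    {"name": "B", "school": "Yale"},
]
print([c["name"] for c in rank(candidates)])  # ['B', 'A']
```

Nothing in this ranking ever compares the two candidates' actual skills; the school feature does all the work, which is the proxy-bias problem in miniature.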
The reality is that AI currently cannot read personality accurately enough to factor it into a recommendation. AI still struggles to pick candidates who fit a specific corporate culture; that determination is still best made by human beings.
So let's boil it right down: could AI be used as an unbiased resume and applicant shortlister? Again, possibly, but it is very difficult to remove bias entirely, even from machines. And if a new technology is not saving your business a considerable amount of resources, it's simply not worth using; implementing AI as little more than a resume database seems like a waste.
The reality is, I feel bias is to some extent a part of life. Am I more inclined to consider someone for a position who has a valuable form of experience over someone who doesn't? Yes, of course I am. Does that make me a bad person for not giving the inexperienced candidate a fair crack of the whip? I'm not so sure about that.
There are obvious biases none of us should bring to the hiring process. But when "bias" is used as a blanket term, where is the line? At what point does it become impossible to distinguish between candidates without applying some degree of bias?
My conclusion is simple: AI will bring previously unconsidered biases to the forefront. Some will be good, some will be bad, and we should think twice before we remove the human element from the recruitment industry. It's often our personalities and human character that give people a shot at a job when, statistically, they might never have been considered.