Until recently, Amazon had an AI-based tool to screen candidates that probably sounded really good on paper at first. In fact, it seemed like the Holy Grail: You give the tool 100 resumes, it screens for the top candidates and, wow, look at all the time we’ve saved! Right?
There was just one major problem: The tool was showing a bias against women. The system wasn’t screening candidates for software developer and other technical jobs in a gender-neutral way.
However, there’s actually an understandable reason for this: when the training data reflects prior biases, the machine will carry that trend through in its results. In this instance, what did the data tell us? That we don’t currently have enough women working as software developers in technology, which is itself a significant challenge to overcome.
Modern machine learning systems are designed to predict the best you can get out of today’s environment. They don’t consider where candidates come from. They have no inherent bias and are not “aware” of gender; they learn and screen based on the data they have been given.
To use a very simple example, suppose you’re picking colored pegs and the training data is 75% green pegs, with only so many white pegs and so many blue pegs. As the system learns the prevalence of each color, that’s how it’s going to pick. So if the data reflects the current workforce and the people who have already been hired, the system will be trained on that reality. The workforce is 90% men? Then the system will likely screen candidates roughly in line with that percentage.
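As a rough illustration of that mechanism, here is a minimal sketch of how a simple classifier trained on imbalanced historical hiring data tends to reproduce the imbalance. The data, the features and the 90/10 split below are entirely hypothetical; this is not Amazon’s system, just the general pattern.

```python
# Minimal sketch (hypothetical data): a classifier trained on biased
# historical hiring decisions tends to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical history: 90% of past applicants/hires are men.
is_male = rng.random(n) < 0.90
skill = rng.normal(size=n)

# Past hiring decisions correlated with gender (the historical bias),
# not just with skill.
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

# Train on features that encode gender (directly or via proxies).
X = np.column_stack([skill, is_male.astype(float)])
model = LogisticRegression().fit(X, hired)

# Two otherwise identical candidates: the model scores the man higher,
# simply because the training data did.
same_skill = 1.0
print(model.predict_proba([[same_skill, 1.0], [same_skill, 0.0]])[:, 1])
```

The model isn’t “choosing” to prefer men; it is faithfully reproducing the pattern it was shown.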
What matters is paying attention to the actual environment, where the bias is visible in the numbers.
If you look at what percentage of software developers in technology are women, you’ll probably find it’s about 10% or less. Well, guess what? That’s the data you train on. So with that training, if a machine were to screen a thousand resumes, by the time it’s all said and done it would essentially come back with a 90%-male-to-10%-female split, because that’s the reality it was learning from.
The valid question that people will then ask is: How can you get rid of that bias?
You can certainly attempt to reach a bias-free state in your system, but there will be a trade-off: What do you give up once you insist on additional criteria? Yes, you could weight one piece of data more heavily than another, and that may get you to a system with fewer biases, likely at the cost of how closely it fits the historical data.
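To make that trade-off concrete, here is one hypothetical mitigation, continuing the sketch above: leave the gender signal out of the features and reweight examples so the underrepresented group isn’t drowned out. None of this is Amazon’s method; it simply illustrates the kind of adjustment and its cost.

```python
# Minimal sketch of one mitigation (hypothetical, not Amazon's method):
# drop the gender feature and reweight examples so the underrepresented
# group counts more, then compare the fit to the old, biased decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)
n = 10_000

# Same hypothetical history as before: 90% men, hiring tilted toward them.
is_male = rng.random(n) < 0.90
skill = rng.normal(size=n)
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X_with_gender = np.column_stack([skill, is_male.astype(float)])
X_skill_only = skill.reshape(-1, 1)
weights = compute_sample_weight("balanced", is_male)  # up-weight the minority group

biased_model = LogisticRegression().fit(X_with_gender, hired)
fairer_model = LogisticRegression().fit(X_skill_only, hired, sample_weight=weights)

# The fairer model can no longer favor men over equally skilled women,
# but it agrees less well with the historical (biased) hiring labels.
print("fit to old decisions, with gender:", biased_model.score(X_with_gender, hired))
print("fit to old decisions, skill only: ", fairer_model.score(X_skill_only, hired))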
The real problem, however, has less to do with machines than with a bias that exists in the real world. Again, we have to assume that machine learning will reflect the reality it is trained on.
Consequently, we need to figure out how to recruit more women into technology pipelines such as computer science programs so they can flourish in rewarding careers within the field. Rather than asking what is wrong with a “biased machine,” we should be addressing the real problem at an earlier point in time, which will hopefully prove more effective.
Machine learning systems will only get “smarter” over time, and we should aim to identify the best screening tools possible. However, instead of pointing exclusively to flaws in our machine learning systems, perhaps we should also ask where the candidate pool of women in technology is failing to expand right now, and commit to addressing that. There may be no better way to attack the problem of minimizing bias head on.
If you’re a hiring authority who seeks the most unbiased, well-rounded perspective on your candidate pool, there’s one more asset you can turn to: Roy Talman & Associates. We bring a deep understanding of not only how well a particular candidate will fit a role’s challenges but also how well that individual may mesh with your team and culture – today and for the long haul. For the investment you’re making in a new team member, we may very well be your most valuable partner. Learn more through a conversation with us today.