The use of artificial intelligence (AI) in recruiting practices is on the rise. Recent surveys from IBM show that 19% of global enterprises already use AI for talent acquisition. Data from Gartner shows that 76% of recruiting professionals believe their company will fall behind if they don't implement AI soon and that as many as half plan to use it to write job descriptions, interview questions, and recruitment marketing materials.
While AI technology can make hiring processes more efficient, it can just as easily produce AI bias (results that reflect human biases), leading to inequalities in recruiting and talent acquisition.
All hiring professionals need to pay attention to AI bias, which can expose employers to legal liability, undermine diversity efforts, and cost them qualified candidates.
Though few federal laws currently address the issue of technology-related discrimination, employers may still be held responsible for the outcomes produced by AI technology under laws such as the Americans with Disabilities Act and Title VII of the Civil Rights Act of 1964.
The Equal Employment Opportunity Commission is monitoring algorithmic bias and has developed an Artificial Intelligence and Algorithmic Fairness Initiative to gather insight into these technologies' impacts and issue guidance on mitigating bias.
Until federal guidance arrives, several U.S. states and localities have enacted their own laws. For example, New York City's Automated Employment Decision Tool law requires employers that use such tools to commission independent bias audits assessing whether the technology adversely impacts protected groups and to publish a summary of the results on their websites.
Employers hiring candidates worldwide should also understand the global landscape of laws governing AI technology. For example, the EU's Artificial Intelligence Act classifies AI systems used in hiring as high-risk and bans practices such as emotion recognition in the workplace and the untargeted scraping of facial images to build facial recognition databases. Canada, China, and India have adopted similar laws or issued accountability guidance to employers.
AI bias can also undermine DEI efforts. If the technology is trained on data, or relies on an algorithm, that is biased against people from underrepresented groups, those candidates never reach the interview stage, and the result is a less diverse workplace.
A lack of diversity resulting from algorithmic bias can have far-reaching effects on employee satisfaction and the employer brand. If candidates believe that a company employs discriminatory hiring practices, it can result in a lawsuit and a loss of trust, causing candidates to avoid the company altogether.
Biased algorithms can also cause employers to miss out on top talent. If AI technology filters out qualified candidates based on an undetected bias, those candidates never enter the talent pool and may be hired elsewhere.
AI algorithms learn to make automated decisions from training data, and when that data is biased, the outcomes will be too. For example, when training facial or voice recognition software, it's essential to include people from all racial and ethnic backgrounds. Otherwise, the algorithm may inadvertently develop a bias against those absent from the dataset.
Additionally, algorithms are programmed by human developers. If those developers carry conscious or unconscious biases, those biases can surface in the results. Developers may inadvertently give unfair advantages to specific groups by weighting certain factors or criteria unfairly in the selection process.
In 2018, Amazon discontinued its AI-driven hiring technology after it was found to show bias against women. The algorithm was trained to select applicants based on resumes the company had received over the preceding decade. Because few of those applications came from women, the computer models began to prefer male candidates and penalized resumes containing the word "women's."
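As a purely illustrative sketch (not Amazon's actual system), the toy Python scorer below shows how a model trained on skewed historical outcomes can learn to penalize terms associated with an underrepresented group. All resumes, labels, and weights here are fabricated for demonstration:

```python
from collections import Counter

# Hypothetical historical data: resumes labeled by past hiring outcomes.
# The pool skews heavily male, so terms correlated with women appear
# mostly among the rejections.
hired = [
    "software engineer python java leadership",
    "software engineer java systems leadership",
    "engineer python systems architecture",
]
rejected = [
    "software engineer python women's chess club captain",
    "engineer java women's coding society lead",
]

def train_weights(hired, rejected):
    """Naive scorer: weight = (frequency among hires) - (frequency among rejections)."""
    h = Counter(w for doc in hired for w in doc.split())
    r = Counter(w for doc in rejected for w in doc.split())
    vocab = set(h) | set(r)
    return {w: h[w] / len(hired) - r[w] / len(rejected) for w in vocab}

def score(resume, weights):
    return sum(weights.get(w, 0.0) for w in resume.split())

weights = train_weights(hired, rejected)
a = score("software engineer python java", weights)
b = score("software engineer python java women's chess captain", weights)

# Identical qualifications, but the token "women's" drags the score down
# because it only ever appeared among historical rejections.
print(a > b)                   # True
print(weights["women's"] < 0)  # True
```

The model never sees gender directly; it simply learns that words correlated with the underrepresented group predicted rejection in the past, which is exactly how historical skew becomes algorithmic bias.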
Although this is only one example of AI bias's negative impact on hiring practices, it highlights the importance of employers doing the necessary work to ensure AI bias does not lead to unfair outcomes that harm candidates or the organization.
Studies show that 71% of U.S. candidates oppose the use of AI in final hiring decisions, and 41% oppose its use in reviewing job applications. Hiring professionals likewise express concerns about AI's ability to produce accurate results. However, practical strategies for mitigating known sources of AI bias can help close that trust gap.
One such strategy is ensuring human oversight of company processes involving AI algorithms. Doing so may allow hiring professionals to spot potential biases before they negatively impact candidates or the company.
Employers should ensure that any software vendor they work with is committed to auditing and improving data quality and diversity, performing regular algorithm audits, and testing for fairness. They may even implement fairness-aware algorithms specifically programmed to account for biases.
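One widely used fairness check such an audit might include is the EEOC's "four-fifths" rule of thumb: any group's selection rate should be at least 80% of the rate of the most-selected group. A minimal sketch of that check, using hypothetical screening numbers:

```python
def adverse_impact(selections):
    """Flag groups whose selection rate falls below 4/5 of the highest
    group's rate (the EEOC's 'four-fifths' rule of thumb).

    selections: {group: (selected_count, applicant_count)}
    """
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())
    return {
        g: {"rate": rate,
            "impact_ratio": rate / top,
            "flagged": rate / top < 0.8}
        for g, rate in rates.items()
    }

# Hypothetical outcomes from an AI resume filter: group_b passes the
# screen at half the rate of group_a, well below the 0.8 threshold.
report = adverse_impact({"group_a": (60, 100), "group_b": (30, 100)})
print(report["group_b"]["flagged"])  # True: 0.30 / 0.60 = 0.5 < 0.8
```

The four-fifths rule is a screening heuristic, not a legal safe harbor, so a flagged ratio should trigger deeper statistical analysis rather than serve as the final word.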
Talent acquisition professionals may also consider implementing blind recruitment policies, which require removing personal identifiers from resumes. If a tool has been trained with a bias toward certain groups, removing this information may mitigate that inherent bias and allow for a fairer hiring process.
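A blind-screening step can be as simple as stripping identifier fields before candidate records reach any automated scorer. A minimal sketch, with hypothetical field names:

```python
# Hypothetical candidate record schema; field names are illustrative.
PERSONAL_IDENTIFIERS = {"name", "email", "photo_url", "birth_year", "gender"}

def redact(candidate: dict) -> dict:
    """Drop direct personal identifiers before the record reaches
    any automated screening step."""
    return {k: v for k, v in candidate.items() if k not in PERSONAL_IDENTIFIERS}

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "birth_year": 1990,
    "skills": ["python", "sql"],
    "years_experience": 7,
}
print(redact(candidate))
# {'skills': ['python', 'sql'], 'years_experience': 7}
```

Note that redaction removes only direct identifiers; proxy fields such as graduation year or address can still correlate with protected characteristics, so blind screening complements, rather than replaces, regular bias audits.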
Hiring professionals should always encourage candidates to provide feedback about their experience. If an employer notices that candidates from specific groups are having issues with the process, it may be necessary to determine whether technology is to blame. This, in turn, can help counteract the effects of AI bias and improve the hiring process for all candidates.
Mitigating AI bias benefits both companies and candidates. Beyond lower legal and compliance risk, reducing bias enhances diversity and inclusion, which improves employee satisfaction and engagement and fosters psychological safety and trust. Companies can also see a stronger employer brand and reduced turnover.
Additionally, removing AI bias improves hiring outcomes and gives hiring professionals access to previously untapped talent pools. When automated systems don't filter out qualified applicants from underrepresented groups, there are more high-quality candidates to choose from.
Artificial intelligence enhances efficiency in the hiring process. However, talent acquisition professionals must be aware of potential bias and take steps to mitigate it to protect both candidates and the organization.
Doing so helps employers build trust with candidates and employees and ensures the organization reaps the benefits of a highly qualified and diverse workforce. Hiring professionals can rest assured that recognizing and thwarting biases can go a long way toward helping the organization fulfill its goals and objectives.
–
Looking to elevate your recruitment strategy with AI? Connect with our team to discover how our AI-powered solutions can transform your hiring process today!
© 2024 Recruitics • All Rights Reserved