Legal and Ethical Risks of Using AI in Hiring

Key Takeaways

  • Seventy-nine percent of organizations use automation or AI to enhance efficiency and accuracy in their hiring processes.
  • Potential bias, privacy concerns, and compliance issues with federal and state laws are some of the major risks of using AI in recruitment.
  • Organizations should regularly audit AI systems to mitigate risks, establish clear ethical guidelines, and balance AI and human judgment in hiring decisions.

Artificial intelligence (AI) has become integral to recruitment in recent years, helping organizations source, assess, and hire candidates more accurately and efficiently. However, to achieve the most favorable hiring outcomes, talent acquisition teams must be aware of the risks associated with AI, including data privacy, ethics, and legal compliance concerns. By understanding these risks and adopting effective mitigation strategies, organizations can hire talent faster and more cost-effectively.

 

How AI is Used in Hiring

Given the many advantages AI offers recruitment teams and applicants, it's not surprising that many organizations have begun using it. According to the Society for Human Resource Management (SHRM), 79% of organizations use automation or AI in recruitment.

Though many new AI capabilities remain in development, companies already use AI to perform the following activities:

  • Creating job descriptions that highlight position responsibilities and requirements effectively
  • Optimizing recruitment marketing activities, for example, via programmatic job advertising
  • Screening and sorting candidate applications and resumes
  • Answering candidate questions and providing helpful recruitment information via chatbots
  • Providing candidates with timely feedback and personalized updates

Tip: Chatbot technology isn't just for candidate-facing communication. It can also deliver real-time data insights to recruitment professionals. Recruitics' AI-powered chatbot analyst, Brion, integrates millions of internal and external market data points into a single conversational analytics solution for recruitment teams.

 


Critical Legal and Ethical Risks of Using AI in Recruitment

When using any new technology, it's essential to consider its context and potential impact on regulatory compliance and ethical behavior. Here are some potential risks of using AI that talent acquisition teams should know:

Potential for Bias

One of the most common criticisms of artificial intelligence is that it may contribute to bias in the hiring process. For example, when screening applicants, there is a concern that AI-powered filters could erroneously favor certain candidates based on gender, race, age, or another characteristic. In 2018, a global tech company shut down its AI recruitment tool because it demonstrated a preference for male candidates over female candidates.

Though an estimated 59% of recruiters agree that introducing AI into the recruitment process will remove unconscious bias, AI tools must be appropriately trained for this to occur. Humans train AI to perform specific functions, so any unconscious biases held by people could be woven into tools for sourcing, screening, and assessing applicants.

Privacy Concerns

Using AI tools to screen resumes and store Personally Identifiable Information (PII), such as names, email addresses, and employment histories, can be risky without sufficient data protection protocols in place. Hackers and identity thieves could steal and misuse this data, creating data privacy headaches for the organization and its candidates.

Another potential issue is the improper collection and use of candidate data. For instance, AI tools that "scrape" or pull information from candidate social media profiles might infer personal attributes like gender identity, sexual orientation, or political views. If these details influence hiring decisions, the organization may risk lawsuits, reputational damage, and potentially overlooking qualified candidates.

Federal and State Laws

As talent acquisition professionals are aware, several federal laws prevent discrimination in hiring practices, including the Americans with Disabilities Act (ADA) and the Equal Employment Opportunity (EEO) Act. With the rise of AI recruitment tools, these tools face increasing scrutiny for their potential to violate anti-discrimination laws. In 2022, the U.S. Justice Department and the Equal Employment Opportunity Commission issued a statement warning against using AI tools that could violate the ADA.

State and municipal laws also regulate AI in recruiting. For example, the Illinois Artificial Intelligence Video Interview Act requires employers to take specific steps before conducting AI-assisted video interviews. In New York City, employers must now conduct periodic audits of their AI hiring tools to ensure they're free of bias and discrimination.

 


 

Transparency with Candidates

A lack of transparency in how organizations use AI for hiring can make candidates more skeptical about its value. Gallup research found that 85% of Americans are concerned about the use of AI in hiring decisions, so organizations that aren't forthcoming risk hurting their ability to hire. If candidates are misinformed about how AI is used, or believe it makes the hiring process unfair, they may be less likely to trust the organization and less interested in pursuing job opportunities there.

Candidate Consent and Data-Sharing Permission

Just as candidates may be more trusting of an organization that is transparent about its use of AI, they may also appreciate having more choice in how AI is used in the recruitment process. Without it, candidates may resent AI tools used to determine their candidacy and be concerned about how their personal data is used. Some municipalities share this concern and have passed laws requiring candidate consent. For example, in Maryland, employers must get permission from candidates before using AI-powered facial recognition technology in job interviews.

Balancing Efficiency with Fairness

Though AI can automate several hiring activities, recruitment teams should ensure that the desire for efficiency doesn't overshadow the need for equity and fairness. For instance, over-reliance on AI for sourcing and screening could cause the organization to overlook diverse candidates and those with unique skills and experiences. It could also lead to qualified candidates being unfairly rejected during the interview and assessment stage.

Large language models (artificial intelligence designed to understand and generate human-like text by processing vast amounts of data) can "hallucinate," meaning they sometimes generate information that seems plausible but is incorrect or fabricated. In recruitment, this could lead to serious issues, such as inaccurately summarizing candidate qualifications, generating misleading job descriptions, or recommending unsuitable candidates. These hallucinations can also erode trust in AI-driven recruitment tools and result in poor hiring decisions.

 

Strategies for Responsible AI Use in Hiring

AI-powered facial recognition platforms, chatbots, and applicant screening tools may only be scratching the surface of the full capability of AI. To realize its many benefits while also managing the potential legal and ethical risks, talent acquisition teams can take the following actions:

Utilize Diverse and Unbiased Training Data

Although past applicants and hires can sometimes predict which future candidates will be a good fit, that's not always the case. Therefore, it's critical to incorporate diverse data inputs when training AI models for candidate sourcing, screening, and engagement. Doing so will help to prevent algorithmic bias and keep the organization from missing out on high-quality candidates. 
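
As an illustration, the snippet below is a minimal sketch (Python with pandas, using a made-up demographic column and data) of one simple pre-training check: reviewing whether any single group dominates the historical data before it is used to train a screening model. It is an assumption-laden example, not a substitute for a formal fairness review.

```python
# Minimal sketch: share of training examples per group, assuming the
# training set is a pandas DataFrame with a hypothetical demographic column.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Count and share of training examples for each group in group_col."""
    counts = df[group_col].value_counts(dropna=False)
    return pd.DataFrame({
        "count": counts,
        "share": (counts / counts.sum()).round(3),
    })

# Illustrative, made-up data only
training_data = pd.DataFrame({"gender": ["F", "M", "M", "M", "F", "M"]})
print(representation_report(training_data, "gender"))
```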

Conduct Regular Audits of AI Systems

One of the benefits of AI tools is that they can "learn" and improve over time. Regular audits provide an opportunity to review data inputs, correct bias, and align AI systems with organizational hiring and diversity objectives. The audit process may include a closer look at any of the following:

  • Over- or underrepresentation of specific groups or demographics in candidate sourcing and screening data (see the sketch after this list)
  • Erroneous collection of irrelevant candidate data and personal details
  • Compliance with evolving federal, state, and local laws
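
As one example of the first audit item, the sketch below (Python with pandas, using hypothetical column and group names) compares screening pass rates across groups and flags any group whose rate falls below four-fifths of the highest-passing group, a common rule of thumb in US adverse-impact analysis. Actual audit criteria and thresholds should be defined with legal and compliance guidance.

```python
# Minimal sketch: selection-rate comparison across groups, assuming screening
# outcomes live in a DataFrame with hypothetical "group" and 0/1 "passed" columns.
import pandas as pd

def selection_rate_audit(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.DataFrame:
    """Pass rate per group and its ratio to the best-performing group."""
    rates = df.groupby(group_col)[passed_col].mean()
    report = pd.DataFrame({
        "pass_rate": rates.round(3),
        "ratio_to_top_group": (rates / rates.max()).round(3),
    })
    # Flag groups falling below the four-fifths (0.8) guideline for review
    report["flag_for_review"] = report["ratio_to_top_group"] < 0.8
    return report

# Illustrative, made-up screening outcomes only
screened = pd.DataFrame({
    "age_band": ["under_40", "under_40", "40_plus", "40_plus", "40_plus", "under_40"],
    "passed_screen": [1, 1, 0, 1, 0, 1],
})
print(selection_rate_audit(screened, "age_band", "passed_screen"))
```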

Establish Ethical Guidelines

Defining how and when AI is used within the hiring process is critical to avoid misunderstandings and inconsistencies. Ethical guidelines should cover AI systems' specific functions and be fully accessible to recruitment team members. Guidelines and policies may include: 

  • Organizational goals related to the ethical use of AI in recruitment
  • How often the organization audits its AI systems
  • Steps for informing candidates about how AI is used in candidate screening and assessment 

Balance AI with Human Judgment and Expertise

To be most effective, AI should enhance human processes and decision-making, not replace them. When recruitment professionals understand their role relative to AI, organizations can keep AI tools from playing an outsized role in hiring decisions. Neither human judgment nor AI is flawless, but both can be more effective when they work together.

 


 

Innovative Solutions for the Future of AI-powered Recruitment

There's no doubt that AI has the potential to continue its transformative effect on the way organizations identify and assess prospective hires. With time, emerging technologies such as predictive analytics and AI tools that mimic human behavior will likely improve the recruitment function's efficiency and accuracy. Additionally, as organizations become more transparent in defining the role of AI in recruitment, job seekers may embrace AI with more trust and enthusiasm.

As AI evolves and organizations develop policies and processes to address its ethical and legal implications, there will likely be many new opportunities to identify and hire candidates who best align with the organization's culture and values. 


The team at Recruitics is ready to help you get started! Discover how Recruitics' AI-powered talent attraction and conversion solutions help organizations harness AI's full potential in each hiring process stage.
