How AI Laws are Impacting Hiring and Recruitment Practices

Key Takeaways:

  • Legislative Response to AI in Recruitment: State, federal, and international governments are enacting laws to regulate AI in hiring, such as New York's AEDTs Law, Illinois' AI Video Interview Act, and the EU AI Act. These regulations address bias, data privacy, and transparency.
  • Impact on Hiring Practices: AI regulations require hiring professionals to perform bias audits, inform candidates about AI tools, and obtain consent. Compliance with laws like the ADA, Title VII, and GDPR is essential to avoid legal risks and ensure ethical AI use.
  • Balancing Efficiency and Ethical Considerations: While AI can enhance recruitment efficiency, it's crucial to prioritize data privacy, security, and fairness. Hiring professionals must stay informed about regulatory changes and conduct regular bias audits to maintain trust and compliance.

Although artificial intelligence has existed since the mid-1950s, the technology has seen exponential growth and public awareness in the last few years, especially with the launch of accessible tools like ChatGPT. With that growth comes renewed focus on both the business benefits of AI and the dangers of using this type of software.

From bias to automation-induced job loss to data security and privacy issues, the emerging risks of AI concern many. Legislatures and regulatory bodies at every level are scrambling to understand and regulate the technology to protect society from its potential pitfalls. 

Discover what state, federal, and international governments are doing to this end, how this legislation affects hiring and recruitment practices, and what hiring professionals can do in response to tightening regulations.


The U.S. Landscape: State-Level Legislation

There has been a surge of AI-related bills in 2024, with dozens of states, Puerto Rico, the U.S. Virgin Islands, and Washington D.C. all moving toward regulating the technology. Accelerating legislation has the potential to challenge businesses operating across multiple states. 

Hiring professionals who want to use AI to compete for top talent but desire to remain in compliance need to pay attention to what's coming down the pipeline. Here is a brief overview of recent state-level laws and their impact on hiring.

New York's Automated Employment Decision Tools (AEDTs) Law

The New York City Council enacted Local Law 144, also called the NYC Bias Audit Law. This legislation requires employers using automated employment decision tools to conduct independent bias audits assessing whether those tools negatively impact people belonging to certain racial/ethnic or sex/gender categories.

Employers must also publish a summary of the audit results on their website and inform candidates and employees that the tool will be used to evaluate them. Although the law is designed to protect candidates, it places a significant administrative burden on employers, which must engage independent entities to conduct the audit.

Notification requirements may also affect the size or quality of the candidate pool, as candidates may have strong feelings about using such tools. Finally, employers need more guidance about the legal implications of the audit results, especially if a third party finds evidence of bias in hiring practices.
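At the core of a Local Law 144 bias audit is a selection-rate comparison: each category's selection rate is divided by the highest category's rate to produce an impact ratio. A minimal sketch of that calculation, using hypothetical group names and counts rather than real audit data:

```python
def impact_ratios(selected, applicants):
    """Selection rate per category divided by the highest selection rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative numbers only, not real hiring data
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 30}

ratios = impact_ratios(selected, applicants)
# group_a rate = 0.30, group_b rate = 0.20, so group_b's impact ratio
# is about 0.67 — below the oft-cited four-fifths (0.8) benchmark
print(ratios)
```

A low impact ratio does not by itself prove unlawful bias, but it is the kind of figure an independent auditor would flag and an employer would need to be prepared to explain.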

Illinois' Artificial Intelligence Video Interview Act

The Illinois Artificial Intelligence Video Interview Act took effect in 2020. The law places several restrictions and requirements on employers using AI tools to analyze video interviews.

Employers must inform applicants about the software, explain how it will be used, and obtain the applicant's consent. Employers can share the video only with those involved in the interview evaluation process, and they must ensure that all copies are destroyed within 30 days of an applicant's request to do so. 

While this law adds yet more administrative burden for hiring professionals, it also lacks clarity on what counts as artificial intelligence, what constitutes sufficient notice, and what the penalties are for violations.

Maryland's Facial Recognition Law (HB 1202)

Maryland's HB 1202 prohibits employers from using certain facial recognition technology during an interview unless the applicant consents by signing a waiver. The waiver must plainly state:

      • The applicant's name
      • The interview date
      • That the applicant consents to the use of facial recognition software
      • That the applicant has read the consent waiver

Fortunately for employers, this law doesn't include extensive administrative requirements, though employers may need to find practical ways to keep records of applicant consent. However, it is essential to know the candidate's sentiment regarding facial recognition software and be prepared to explain its benefits in the hiring process.
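One practical way to keep records of applicant consent is a simple structured record capturing the waiver fields listed above. The sketch below is a hypothetical example, not a legally vetted form; field names and values are illustrative:

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class FacialRecognitionWaiver:
    """Hypothetical record mirroring the waiver contents HB 1202 describes."""
    applicant_name: str
    interview_date: date
    consents_to_facial_recognition: bool
    has_read_waiver: bool

waiver = FacialRecognitionWaiver(
    applicant_name="Jane Doe",          # illustrative applicant
    interview_date=date(2024, 5, 1),    # illustrative date
    consents_to_facial_recognition=True,
    has_read_waiver=True,
)

# Serialize for record-keeping; dates become ISO strings via default=str
record = json.dumps(asdict(waiver), default=str)
print(record)
```

Whatever the storage format, the point is to retain a durable, retrievable record showing that each applicant signed the waiver before facial recognition was used.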

Federal Efforts and Anticipated Changes

In October 2023, President Joe Biden signed Executive Order 14110 to develop security standards for AI tools and protect Americans from potential negative impacts. This extensive executive order calls for the following actions:

      • Develop standards and rules for AI infrastructure as well as harm and risk identification practices
      • Promote AI research, innovation, competition, and collaboration
      • Understand and address the effects on labor and displacement
      • Apply existing laws to AI technology and address disparate impacts in criminal justice, housing, hiring, and federal benefits programs
      • Protect the rights and personal data of students, patients, and consumers across all sectors

To align with these priorities, employers must familiarize themselves with emerging AI standards and develop governance practices around them. Hiring professionals should also understand how AI impacts their data collection and workforce development practices and ensure that their actions comply with new laws.

General Laws Applying to AI in Employment

Even when employers do not intentionally discriminate against protected groups, using AI technology can result in violations for which they can be held responsible. For that reason, employers must address issues arising from AI use to stay in compliance with existing laws.

      • Americans With Disabilities Act (ADA): Use technology to evaluate skills instead of disabilities and ensure reasonable accommodations during hiring
      • Title VII of the Civil Rights Act of 1964: Determine whether selection procedures have a disparate impact based on race or other traits
      • The Age Discrimination in Employment Act (ADEA): Follow EEOC guidance on algorithms and avoid software that filters applicants based on birthdates
      • The California Consumer Privacy Act (CCPA): Notify consumers before using automated decision-making technology, allow opt-outs, and explain its effects
      • The General Data Protection Regulation (GDPR): Use solid data protection protocols, be transparent about data use, and maintain human oversight

Actions such as these will help hiring professionals stay aligned with regulations and avoid compliance risks.

International Perspectives: Key AI Laws

Several AI laws have also been introduced in countries outside the United States.

The EU AI Act 

The Artificial Intelligence Act bans specific AI applications, such as biometric categorization based on sensitive characteristics and the untargeted scraping of facial images for recognition databases. Because the act classifies AI systems used in employment as high-risk, it imposes strict obligations on those deploying them, including conducting risk assessments and ensuring human oversight.

Canada's Artificial Intelligence and Data Act (AIDA) 

Canada's new AI law imposes strict requirements for high-impact AI-driven systems to safeguard privacy and ethical behavior. These requirements include risk assessment and mitigation measures, consistent monitoring, and public disclosure. The law also introduces substantial fines as an enforcement mechanism.

China's Internet Information Service Algorithmic Recommendation Management

While this law doesn't directly apply to employers, it advocates for data privacy and safety through transparency and audit requirements for Internet recommendation algorithms. Recruitment marketing websites or other online properties that use this technology should consider conducting regular assessments for fairness and security. Employers operating in China should ensure alignment with these requirements.

India's AI Advisory by the Ministry of Electronic & Information Technology (MeitY)

This advisory requires AI developers to use inclusive algorithms to ensure their software doesn't promote bias and discrimination. While these mandates are aimed at developers, they also help protect job candidates and employees from AI-based discrimination in hiring and talent management; employers should still research a tool's compliance before selecting it.


Ensuring Compliance and Ethical AI Use

With so many regulations in place, it can be challenging for hiring professionals to select AI tools that align their hiring practices with the law in their jurisdiction. Fortunately, this vendor compliance checklist can help with the selection process, asking questions that cover the following:

      • Supplier compliance
      • Documentation
      • Sufficient addressing of bias and ethics
      • Cybersecurity risk management practices

Along with using this checklist, hiring professionals should stay current on federal, state, and international legislative changes. Staying informed and proactive helps hiring professionals avoid compliance risks and potential lawsuits and keep their employer brand intact.

In addition to monitoring the regulatory landscape, employers should conduct regular bias audits, inform applicants of the use of AI technology, obtain their consent, and work with AI vendors to ensure compliance and ethical practices.

Balancing Digital Transformation With Candidates' Best Interests

Hiring professionals need to know that it is okay — and even encouraged — to explore the ways in which AI technology can enhance hiring and workforce development practices. However, all must do so with the best interests of candidates and employees in mind.

While efficiency can and should be a goal, remember that data privacy, security, and fairness are equally important. When hiring professionals strike this balance, they can confidently update their hiring practices for modern times while maintaining trust and faith in their employer brand.

Stay ahead of AI regulations in recruitment and ensure your hiring practices are both compliant and ethical. Contact the Recruitics team to learn more about navigating the evolving landscape to keep your recruitment strategies current and trustworthy.

