Although artificial intelligence has existed since the mid-1950s, the technology has seen rapid growth and public awareness in the last few years, especially since the launch of accessible tools like ChatGPT. With that growth comes renewed focus on both the business benefits of AI and the dangers of using this type of software.
From bias to automation-induced job loss to data security and privacy issues, the emerging risks of AI concern many. Legislatures and regulatory bodies at every level are scrambling to understand and regulate the technology to protect society from its potential pitfalls.
Discover what state, federal, and international governments are doing to this end, how this legislation affects hiring and recruitment practices, and what hiring professionals can do in response to tightening regulations.
There has been a surge of AI-related bills in 2024, with dozens of states, Puerto Rico, the U.S. Virgin Islands, and Washington D.C. all moving toward regulating the technology. This accelerating patchwork of legislation can create real compliance challenges for businesses operating across multiple states.
Hiring professionals who want to use AI to compete for top talent but desire to remain in compliance need to pay attention to what's coming down the pipeline. Here is a brief overview of recent state-level laws and their impact on hiring.
The New York City Council's Local Law 144, also called the NYC Bias Audit Law, took effect in 2023. It requires employers that use automated employment decision tools to commission impartial, independent bias audits assessing whether those tools negatively impact people belonging to certain racial/ethnic or sex/gender groups.
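To make the audit requirement concrete, here is a minimal sketch, in Python with hypothetical data, of the kind of impact-ratio calculation such audits typically report: each group's selection rate is compared to that of the most-selected group, and ratios below the familiar four-fifths (0.8) benchmark are flagged for review. This is an illustration, not the law's mandated methodology.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Selection rate and impact ratio per group.

    outcomes: iterable of (group_label, was_selected) pairs drawn
    from an AI screening tool's decisions (hypothetical data).
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: (rates[g], rates[g] / top_rate) for g in rates}

# Hypothetical screening outcomes: 100 applicants per group
data = ([("Group A", True)] * 40 + [("Group A", False)] * 60
        + [("Group B", True)] * 25 + [("Group B", False)] * 75)

for group, (rate, ratio) in impact_ratios(data).items():
    flag = "  <- below the 0.8 four-fifths benchmark" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```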
Employers must also publish a summary of the audit results on their website and inform candidates and employees that the tool will be used to evaluate them. Although the law is designed to protect candidates, it places a significant administrative burden on employers, who must find independent entities to conduct the audit.
Notification requirements may also affect the size or quality of the candidate pool, as candidates may have strong feelings about being evaluated by such tools. Finally, employers need more guidance on the legal implications of audit results, especially if a third party finds evidence of bias in their hiring practices.
In 2020, the Illinois Artificial Intelligence Video Interview Act took effect. This law places several restrictions and requirements on employers using AI tools to analyze video interviews.
Employers must inform applicants about the software, explain how it will be used, and obtain the applicant's consent. Employers can share the video only with those involved in the interview evaluation process, and they must ensure that all copies are destroyed within 30 days of an applicant's request to do so.
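As a practical illustration, this brief sketch (Python, with hypothetical names and dates) shows one way an employer might track applicant consent and compute the 30-day destruction deadline once a deletion request arrives; it is a simplified example, not legal guidance.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

DESTRUCTION_WINDOW = timedelta(days=30)  # destroy all copies within 30 days of a request

@dataclass
class VideoInterviewRecord:
    applicant: str
    consent_obtained_on: date                    # written consent to AI analysis, required before use
    deletion_requested_on: Optional[date] = None

    def destruction_deadline(self) -> Optional[date]:
        """Latest date by which every copy of the video must be destroyed."""
        if self.deletion_requested_on is None:
            return None
        return self.deletion_requested_on + DESTRUCTION_WINDOW

# Hypothetical usage
record = VideoInterviewRecord("applicant-123", consent_obtained_on=date(2024, 5, 1))
record.deletion_requested_on = date(2024, 6, 10)
print("Destroy all copies by:", record.destruction_deadline())  # 2024-07-10
```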
While this law adds yet more administrative burden for hiring professionals, it also lacks clarity on what counts as artificial intelligence, what constitutes sufficient notice, and what the penalties are for violations.
Maryland's HB1202 prohibits employers from using certain facial recognition technology during an interview unless the applicant consents by signing a waiver. The waiver must plainly state the applicant's name, the date of the interview, that the applicant consents to the use of facial recognition during the interview, and that the applicant has read the waiver.
Fortunately for employers, this law doesn't include extensive administrative requirements, though employers may need practical ways to keep records of applicant consent. Still, it is essential to gauge candidate sentiment about facial recognition software and be prepared to explain its benefits in the hiring process.
In October 2023, President Joe Biden signed Executive Order 14110 to develop security standards for AI tools and protect Americans from potential negative impacts. Among other actions, this extensive executive order calls for new standards for AI safety and security, stronger privacy protections, and measures to advance equity and civil rights.
Even when employers do not intentionally discriminate against protected groups, using AI technology can result in violations for which they can be held responsible. For that reason, employers must address issues arising from AI use to remain in compliance with existing laws.
Several AI laws have also been introduced in countries outside the United States.
The European Union's Artificial Intelligence Act bans specific AI applications, such as biometric categorization based on sensitive characteristics and the untargeted scraping of images to build facial recognition databases. The act also classifies AI used in employment as high-risk, imposing stringent obligations on those who deploy it, including risk assessments and human oversight.
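One common way to operationalize a human-oversight requirement is a human-in-the-loop gate, in which an AI tool may score or shortlist candidates but can never finalize a rejection on its own. Below is a minimal sketch with a hypothetical threshold and function name:

```python
from typing import Optional

SHORTLIST_THRESHOLD = 0.7  # illustrative cutoff, not a regulatory standard

def screen_candidate(model_score: float, reviewer_decision: Optional[str] = None) -> str:
    """Human-in-the-loop gate: the model may flag candidates, but only a
    human reviewer can finalize an adverse decision (hypothetical policy)."""
    if model_score >= SHORTLIST_THRESHOLD:
        return "shortlisted for human review"
    if reviewer_decision is None:
        return "pending human review"  # the tool never auto-rejects
    return reviewer_decision           # e.g., "advanced" or "rejected", decided by a person

print(screen_candidate(0.82))              # shortlisted for human review
print(screen_candidate(0.40))              # pending human review
print(screen_candidate(0.40, "rejected"))  # rejected, by a human
```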
Canada's proposed Artificial Intelligence and Data Act (AIDA) would impose strict requirements on high-impact AI systems to safeguard privacy and ethical behavior. These requirements include risk assessment and mitigation measures, ongoing monitoring, and public disclosure. The law would also introduce substantial fines as an enforcement mechanism.
While China's regulations governing Internet recommendation algorithms don't directly apply to employers, they advocate for data privacy and safety through transparency and audit requirements. Recruitment marketing websites or other online properties that use this technology should consider conducting regular assessments for fairness and security, and employers operating in China should ensure alignment with these requirements.
India's government advisory on AI requires developers to use inclusive algorithms to ensure their software doesn't promote bias and discrimination. While these mandates are aimed at developers rather than employers, they help protect job candidates and employees from AI-based discrimination in hiring and talent management; employers should still research such tools carefully before selecting them.
With so many regulations in play, it can be challenging for hiring professionals to select AI tools that keep their hiring practices compliant in every jurisdiction where they operate. Fortunately, a vendor compliance checklist can help with the selection process by prompting targeted questions about each vendor's compliance practices.
Along with using such a checklist, hiring professionals should stay current on federal, state, and international legislative changes. Staying informed and proactive helps hiring professionals avoid compliance risks and potential lawsuits while keeping their employer brand intact.
In addition to monitoring the regulatory landscape, employers should conduct regular bias audits, inform applicants of the use of AI technology, obtain their consent, and work with AI vendors to ensure compliance and ethical practices.
Hiring professionals need to know that it is okay — and even encouraged — to explore the ways in which AI technology can enhance hiring and workforce development practices. However, all must do so with the best interests of candidates and employees in mind.
While efficiency can and should be a goal, remember that data privacy, security, and fairness are equally important. When hiring professionals strike this balance, they can confidently update their hiring practices for modern times while maintaining trust and faith in their employer brand.
—
Stay ahead of AI regulations in recruitment and ensure your hiring practices are both compliant and ethical. Contact the Recruitics team to learn more about navigating the evolving landscape to keep your recruitment strategies current and trustworthy.