As employers expand their use of AI in the hiring process, few states have regulations

As artificial intelligence begins to seep into aspects of everyday life and becomes more advanced, some state lawmakers feel a fresh urgency to create regulations governing its use in the hiring process.

According to IBM's 2022 Global AI Adoption Index, artificial intelligence, commonly known as AI, has been implemented by one-quarter of U.S. businesses, an adoption rate up more than 13% year-on-year, and many companies are starting to use the technology in recruitment.

State laws haven’t kept up. Only Illinois, Maryland, and New York require employers to ask for consent first if they use AI in some parts of the hiring process. Several states are considering similar laws.

“Legislators are critical, and as always, legislators are always late to the party,” said state Del. Mark Fisher, a Maryland Republican. Fisher sponsored the state law, which took effect in 2020, regulating the use of facial recognition programs in recruitment. It prohibits an employer from using certain facial recognition services, such as those that can match candidates’ faces against external databases, during a job interview unless the candidate consents.

“Technology innovates and then it always seems like a good idea until it isn’t,” Fisher said. “That’s when legislators step in and try to regulate things as best they can.”

AI developers want to innovate as quickly as possible, whether regulations exist or not, said Hayley Tsukayama, senior policy fellow at the Electronic Frontier Foundation, an internet civil rights group. Both developers and policymakers need to consider the consequences of their decisions, she said.

For policymakers to create effective regulations, Tsukayama said, developers need to be clear about what systems are being used and open about potential issues.

“It’s probably not exciting for people who want to move faster or for people who want to get these systems into their workplace now or already have them in their workplace,” she said. “But I think it’s really important for policymakers to talk to a lot of different people, especially the people who will be affected by this.”

Artificial Intelligence in Hiring

AI can support the recruitment process by conducting resume assessments, scheduling interviews with candidates and performing data analysis, according to Skillroads, a company that offers professional resume-writing services using artificial intelligence.

Some members of Congress are also trying to act. The proposed American Data Privacy and Protection Act would set rules for artificial intelligence, including assessments of AI’s risks and its general use, and would cover data collected during the hiring process. It was introduced last year by U.S. Rep. Frank Pallone Jr., a New Jersey Democrat who sits on the U.S. House Energy and Commerce Committee.

The Biden administration last year released the Blueprint for an AI Bill of Rights, a set of guidelines to inform organizations and individuals in the design, use and deployment of automated systems, according to the document.

Meanwhile, lawmakers in some states and localities have been working to create appropriate policies.

Maryland, Illinois and New York are the only places with laws explicitly targeting the use of AI in the hiring process, requiring companies to inform job seekers about its use at certain points and to ask for consent before taking further steps, according to data from Bryan Cave Leighton Paisner, a global law firm that advises clients on commercial litigation, finance, real estate and other matters.

According to The New York Times, California, New Jersey, New York and Vermont are also considering bills that would regulate the use of artificial intelligence in hiring systems.

Facial recognition technology is used by many federal agencies, including for cybersecurity and law enforcement, according to the U.S. Government Accountability Office. Some industries also use it.

Fisher said artificial intelligence could connect facial recognition programs to candidate databases in seconds, which was the reasoning behind his bill.

His goal, he said, was to create a narrow measure that could open the door to potential future AI-related legislation. The law, which took effect in 2020 without being signed by then-Governor Larry Hogan, a Republican, covers only the private sector, but Fisher said he would like to see it expanded to include public employers.

One way to avoid AI problems in the recruiting process is to ensure human involvement, from product design to regular monitoring of automated decisions.

Samantha Gordon, director of programs at TechEquity Collaborative, an advocacy organization for tech workers, said that in situations where machine learning or data collection is done without human involvement, there is a risk that the machine will be biased against certain groups.

For example, HireVue, a platform that helps employers collect video recordings of job applicant interviews and assessments, announced in 2021 that it was removing its facial analysis component after an internal review determined that the system was less related to job performance than other elements of its algorithmic assessments, according to a release from the company.

“I think this is something you don’t have to be a computer scientist to understand,” Gordon said. Speeding up the hiring process, she said, leaves room for error. That’s where Gordon said lawmakers will have to step in.

And on both sides of the aisle, Fisher said, lawmakers believe companies should be transparent about how they use the technology.

“I would like to think that, in general, people would like to see much more transparency and disclosure in the use of this technology,” Fisher said. “Who is using this technology? And why?”

Legislative challenges

Policymakers have little understanding of AI, especially as it affects civil rights, said Clarence Okoh, senior policy adviser at the Washington-based nonprofit Center for Law and Social Policy (CLASP) and a Social Science Research Council Just Tech Fellow.

As a result, he added, companies that use AI are often left to regulate themselves.

“Unfortunately, I think what’s happened is that a lot of AI developers and vendors have very effectively crowded out conversations with policymakers about how to govern AI and mitigate its social impacts,” Okoh said. “And so unfortunately there’s a lot of interest in developing self-regulatory schemes.”

Okoh said some self-regulatory practices include audits and compliance programs based on general guidelines such as the Blueprint for an AI Bill of Rights.

The results have sometimes raised concerns. Some organizations operating under their own guidelines used AI recruiting tools that showed bias.

In 2014, a group of developers at Amazon began building an experimental, automated program designed to sift through job applicants’ resumes to find the best talent, according to a Reuters investigation. By 2015, the company found that the system had effectively taught itself that male candidates were preferable.

People close to the project told Reuters that the experimental system was trained to filter candidates by observing patterns in resumes submitted to the company over a 10-year period — most of them from men. Amazon told Reuters that the tool “was never used by Amazon recruiters to evaluate candidates.”
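The dynamic Reuters describes can be sketched in a few lines of code: a scorer trained only on historical hiring outcomes reproduces whatever skew those outcomes contain. The toy Python example below uses entirely invented data and has no connection to Amazon's actual system; it simply shows how a word that correlates with the majority group in past hires ends up boosting scores, while an otherwise identical resume mentioning the minority group scores lower.

```python
from collections import Counter

# Hypothetical historical data: resumes of past hires, mostly from men.
hired_resumes = [
    "software engineer men's chess club",
    "software engineer men's soccer team",
    "data analyst men's chess club",
    "software engineer women's chess club",
]

def train_word_scores(resumes):
    """Score each word by the fraction of past hires whose resume used it."""
    counts = Counter(word for r in resumes for word in r.split())
    total = len(resumes)
    return {word: n / total for word, n in counts.items()}

def score_resume(resume, word_scores):
    """Average the learned per-word scores; unseen words get 0."""
    words = resume.split()
    return sum(word_scores.get(w, 0.0) for w in words) / len(words)

scores = train_word_scores(hired_resumes)

# Two resumes identical except for the gendered club name: because "men's"
# appeared more often among past hires, the "women's" resume scores lower,
# even though nothing about the candidates' qualifications differs.
male_score = score_resume("software engineer men's chess club", scores)
female_score = score_resume("software engineer women's chess club", scores)
```

No explicit gender field appears anywhere in this sketch; the bias enters purely through proxy words in the training data, which is why auditing the training set, not just the model's inputs, matters.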

Some companies, however, argue that AI is helpful and that they apply strict ethical rules to its use.

Helena Almeida, vice president of legal counsel at ADP, a human resources software company, said the company’s approach to using AI in its products is guided by the same ethical standards it applied before the technology emerged. Regardless of legal requirements, Almeida said, ADP sees it as its responsibility to go beyond the basics to ensure its products don’t discriminate.

AI and machine learning are used in several of ADP’s recruiting support services, and many current regulations already apply to the AI world, she said. ADP also offers clients some services that use facial recognition technology, according to its website. As the technology advances, ADP has adopted a set of rules governing its use of artificial intelligence, machine learning and other technologies.

“You can’t discriminate against a certain demographic without AI, and you can’t discriminate against them with AI,” Almeida said. “So that’s a critical part of our framework and how we look at bias in these tools.”
