By Anthony Kaylin, courtesy SBAM Approved Partner ASE
Artificial Intelligence, or AI, is becoming more common in a variety of HR practices, especially on the recruitment side. At the same time, a variety of government organizations have characterized it as a potential hindrance to applicants and employees.
The example governments cite most often is the Amazon experiment, in which Amazon tested AI for recruitment and found that the candidates it selected were primarily white males, because those who wrote the initial algorithm were white males and their biases carried into the programming. Because the AI ran in a continuous learning loop, the bias could not be removed, and Amazon ended the experiment.
Another example is Meta’s approach to job targeting. The company was sued for perpetuating age discrimination in the way job ads were distributed to potential applicants on Facebook.
The EEOC, in its recently released draft strategic enforcement plan for the years 2023 through 2027, specifically states that the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions “can intentionally exclude or adversely impact protected groups.” The EEOC has previously noted that facial recognition AI could negatively impact individuals with disabilities. Even chatbots were considered problematic.
Together with the U.S. Department of Justice, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative in 2021 and issued guidance on how employers can use AI without violating federal disability law. The guidance offers this example: “A job application requires a timed math test using a keyboard. Angela has severe arthritis and cannot type quickly. Typing quickly is not necessary for the job. Angela will fail the test if she takes it without a reasonable accommodation. The reasonable accommodation could be speaking the answers or having more time for the test.”
So, what does HR need to know about AI in its operations? A number of jurisdictions have, or will soon have, laws regulating AI. Those considering legislation include Alabama, Colorado, Hawaii, Massachusetts, Mississippi, Vermont, and Washington.
New York City’s law, effective in April, makes it unlawful for an employer or employment agency to use an automated employment decision tool (AEDT) to screen a candidate or employee within New York City unless certain bias audit and notice requirements are met. Illinois, Maryland, and several other jurisdictions already have laws in place regulating AI in the workplace in an effort to decrease hiring and promotion bias.
The California Civil Rights Department (formerly the Department of Fair Employment and Housing) has proposed regulations on AI that would make it unlawful for an employer or covered entity to use “automated decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee” on the basis of a protected characteristic, unless the “selection criteria” used “are shown to be job-related for the position in question and are consistent with business necessity.”
AI as a selection tool would fall under the Uniform Guidelines on Employee Selection Procedures, but the problem with AI is that it is difficult to validate: if the tool is a continuously learning model, validation would have to be continuous. Experts disagree as to the value of AI in selection processes, but it is becoming increasingly commonplace. A major takeaway for HR is the same as with test vendors who sell their wares claiming the tool is 100% valid or has been fully validated. Generally, a test is considered good if its validation falls in roughly the 20% to 30% range; no test can be 100% validated, and for AI specifically, it is unlikely it can ever be fully validated.
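To make that validation idea concrete, the sketch below is a hypothetical illustration only, using made-up numbers rather than any vendor's actual method: it shows the basic arithmetic behind a criterion validity check, correlating the scores a screening tool assigns to candidates with the job performance those candidates later demonstrate.

```python
# Hypothetical illustration only: made-up numbers, not any vendor's actual method.
# A criterion validity check correlates the scores a screening tool assigns to
# candidates with the job performance those candidates later demonstrate.
from statistics import correlation

# Screening scores the tool assigned to ten hired candidates (0-100 scale).
screening_scores = [72, 85, 60, 90, 55, 78, 66, 81, 70, 59]

# Supervisor performance ratings for the same ten people a year later (1-5 scale).
performance_ratings = [3.4, 3.0, 3.8, 3.6, 2.9, 4.1, 2.7, 3.5, 3.2, 3.7]

# The Pearson correlation between the two series is the validity coefficient.
# Coefficients of roughly 0.20 to 0.30 correspond to the "20% to 30%" range
# described above; a perfect 1.0 (100%) is not realistically attainable.
validity = correlation(screening_scores, performance_ratings)
print(f"Estimated validity coefficient: {validity:.2f}")
```

Because a continuously learning model changes over time, a check along these lines would, in principle, have to be repeated every time the model retrains, which is the crux of the validation problem described above.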
Therefore, HR should inventory tools that may be using AI in one form or another, such as applicant tracking systems that rank candidates, initial intakes through chatbots, or bots that create applicant and employee profiles. Work with legal counsel to ensure that the risks of using these tools are de minimis. Unfortunately, there is a growing trend of lawsuits against employers for discrimination arising from the use of these tools.