
Is HR Ready for AI?

May 30, 2024

Artificial intelligence (AI) is being embedded in HR tools without warning. From applicant tracking to learning management systems, AI is becoming a staple of the HR arsenal. This echoes the late 1990s and 2000s, when big data analysis was the "thing": new degree programs popped up, and statisticians and statistical knowledge became a sought-after, necessary requirement as internal databases grew overwhelming.

A study by The Hackett Group shows that HR organizations should explore generative AI as part of their HR digital transformation; although the technology is still maturing, generative AI offers huge promise, the researchers wrote. The research found, for instance, that generative AI could yield a 40% reduction in general and administrative costs and a 40% reduction in SG&A staff over the next five to seven years. According to the study, 41% of HR organizations have implemented pilots or small-scale generative AI deployments, and a 7% growth rate is expected in 2024—higher than for any other emerging technology. However, ASE's recent AI survey of Michigan employers showed only about 25% are using AI in HR.

In an extreme example of how AI is taking over organizations, last year IBM announced that it would stop hiring for about 7,800 positions that could be replaced by artificial intelligence systems over time. These positions included back-office HR positions. Jobs lost in these roles through attrition were unlikely to be backfilled. Note, though, that IBM is itself a leader in AI technology; Watson, particularly in healthcare diagnostics, is an example of that leadership.

What are some of the pitfalls of AI for HR? 
  • Any use of ChatGPT or its brethren needs to run against a closed dataset. Feeding private data into an open, public model would leave employers vulnerable and could hand hackers immediate access to data they would otherwise have to work hard to obtain.
  • The dataset that HR wants to use needs to be reviewed and tested for bias before implementation. A commonly cited example is Amazon's attempt to use AI for recruitment: its testing found that white males dominated its recommendations for technology positions. The training dataset consisted of resumes of current employees, who were primarily white males. Although highly discussed in the media, Amazon never used that algorithm and dataset.
  • Large language model (LLM) AI is severely limited in its scope of applicability. The EEOC, for example, has guidelines concerning the non-selection of qualified disabled applicants, who may not make conventional eye contact or have typical writing ability.
  • Relatedly, most LLM AI models are tuned to educated, standardized speech patterns. They generally do not pick up slang, non-native English speech, or local and neighborhood dialects, which could put minority candidates at a severe disadvantage.
  • The regulatory framework for AI is in its infancy, and the various jurisdictions could impose more restrictive or conflicting requirements.
  • HR needs to carefully monitor AI for statistically significant adverse impact against any employee population group. A vendor may state that its generative AI model is validated, but by its design and nature, generative AI cannot be validated: it is always learning and is therefore a moving target.
  • More and more compensation surveys are relying on AI to match jobs for survey reporting purposes. Job titles or job description wording may be the linchpin for combining various titles across companies for pay reporting, which raises questions about the accuracy of the mapping.
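One concrete way to screen for the adverse impact mentioned above is the EEOC's "four-fifths rule": if a group's selection rate is less than 80% of the highest group's rate, the outcome warrants closer review. The sketch below is illustrative only; the function names and the sample numbers are made up for demonstration, and a real audit would also involve proper statistical significance testing.

```python
# Minimal sketch of a four-fifths-rule screen for an AI selection tool.
# Sample numbers below are hypothetical, not from the article.

def selection_rate(selected, applicants):
    """Selection rate = number selected / number of applicants."""
    return selected / applicants

def four_fifths_check(rates):
    """Compare each group's selection rate to the highest group's rate.

    Returns {group: (impact_ratio, flagged)} where flagged is True when
    the ratio falls below the 0.8 (four-fifths) threshold.
    """
    highest = max(rates.values())
    return {
        group: (rate / highest, rate / highest < 0.8)
        for group, rate in rates.items()
    }

# Hypothetical screening outcomes from an AI resume-ranking tool
rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

for group, (ratio, flagged) in four_fifths_check(rates).items():
    print(group, round(ratio, 2), "REVIEW" if flagged else "ok")
```

Here group_b's impact ratio is 30/48 ≈ 0.62, below the 0.8 threshold, so it would be flagged for review. Because a generative model keeps changing, a check like this needs to be re-run continuously, not just at procurement time.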

Further pitfalls include the fact that vendors are unwilling to add indemnification clauses to contracts when their tools include AI. Nor will organizations ever see the algorithm itself, since vendors call it proprietary. All the risk falls on the organization, and specifically on HR.

Government agencies are getting involved, from the OFCCP to the EEOC to the Department of Labor. Ignorance is no excuse under any law or circumstance.

Finally, the long-term viability of HR is at risk. HR needs to test all tools before implementation, but with cost reductions and lower HR headcount, such testing may be outsourced to organizations that do not understand HR and compliance risks. Further, as IBM has shown, entry-level and other career-path jobs in HR could go away, leaving newly minted HR professionals on a path that has an abrupt end.

ASE Connect

ASE has curated an AI Toolkit to help you stay compliant in your use of AI in HR and to guide your employees on safe and secure AI usage. Resources include the 2024 AI Survey Report, a Sample AI Policy, an AI Prep Guide, an AI HR Gap Analysis Tool, and Articles & Blogs. Access it here.

 

By Anthony Kaylin, courtesy of SBAM-approved partner, ASE.  Source: HR Executive 3/25/24, ars technica 5/2/23
