By Amani Gharib, PhD, courtesy of SBAM-approved partner, ASE
Artificial intelligence (AI) has revolutionized the way Human Resources (HR) operates. As the HR landscape becomes more complex, HR practitioners need powerful tools to manage and support employees effectively. AI can offer HR practitioners an efficient and effective way to automate and streamline various HR processes. By leveraging AI, HR practitioners can reduce the time and resources required to manage HR processes while improving the employee experience.
Take a moment to digest the above. Is there anything that catches your attention?
The words in the opening passage were authored by the generative AI language model ChatGPT. All it took was a question carefully crafted by a human, a wait of two seconds, and voilà! This is an example of AI uncannily mimicking humans, wielding the powers of communication and text generation that typically require human intelligence. So what happens if the generated content is unverified or misinformed? What if the output is far from truth or reality?
What are AI and ChatGPT?
AI is more of a concept than a specific technology. It focuses on increasing the intelligence of computers so they can understand language, define problems, and make decisions through a set of algorithms. Generative AI is a type of AI that creates text, art, music, images, and other types of output, with many players in the space offering complex and multifaceted technologies and applications.
ChatGPT is an example of a generative AI language model developed by OpenAI. When a human inputs a prompt into ChatGPT, the AI language model draws on its data reserve, applies algorithms to identify patterns, and produces responses shaped by the training and evaluation performed by its designers. A ChatGPT user journey is shown below.
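For the technically curious, that journey can be sketched in a few lines of code. The example below is illustrative only: it assumes access to OpenAI's Python client library and an API key, and the model name and prompt are placeholders rather than recommendations.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A human crafts a prompt; the model returns a fluent, confident-sounding draft.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Draft a job description for a Project Manager role."}
    ],
)

# The generated text still needs human review before it is used anywhere.
print(response.choices[0].message.content)
```

The point is not the code itself, but how little human effort now stands between a question and a confident-sounding answer.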
So, let’s ground this in reality: What does HR need to validate before using ChatGPT?
Trends show that HR needs to improve its digital literacy, and because ChatGPT's ease of use gives it a low barrier to entry, HR must be intentional about why and where to leverage this AI language model. HR professionals can cautiously leverage ChatGPT in the following practices:
- Creating Job Descriptions: HR professionals can prompt ChatGPT to create job descriptions for specific roles, and the output will include required skills, experiences, responsibilities, etc. In the example of a Project Manager job description, ChatGPT may assume that a PMP Certification is required. Humans with discernment will point out that the certification may be costly and not necessarily required. Furthermore, some jobs share the same title, but requirements can vary between organizations. Therefore, HR professionals will need to proofread the output to ensure accuracy.
- Streamlining Onboarding Programs: ChatGPT can support HR teams in streamlining the onboarding process for new hires. However, ChatGPT generates a more traditional approach to onboarding, with insufficient focus on the organizational needs, socialization needs, and job clarity a new hire requires. HR professionals should therefore treat the output only as a complement to their own customized organizational onboarding programs.
- Developing Action Plans: HR professionals can ask the AI language model to produce engagement action plans for monitoring and evaluating employee satisfaction. While the output can appear adequate, human validation is needed to ensure the generated plans are customized, realistic, and aligned with the organization.
- Advancing Performance: ChatGPT can suggest ways a team member can improve their performance. The AI language model can provide recommendations on how to set goals and identify the competencies required for specific roles. While the generated recommendations may seem noteworthy, HR professionals should ask deeper questions to obtain better results and add a human touch to ensure the goals and competencies are accurate and aligned with the organization’s culture.
While ChatGPT can change the way HR operates, it is important that HR professionals assign accountability and create an ethical AI framework that reviews AI-generated content and evaluates the implications it can have for employees.
What are some of ChatGPT’s risks?
There is a plethora of information at our fingertips highlighting ChatGPT’s benefits and how this oracle of AI can be leveraged in HR. While ChatGPT has the potential to support HR practices, it cannot see past common wisdom. Organizations should therefore carefully consider its potential risks, limitations, and challenges, listed below:
- AI-Assisted Plagiarism: Written content in the public sphere that HR professionals leverage today appears to contain inaccurate “words of wisdom” generated by ChatGPT. Not only is this information misinformed, but some authors are circulating content without properly referencing or acknowledging their AI writing assistant. This raises questions about the ethical boundaries individuals cross when they release an article authored by generative AI and claim it as their own. HR professionals should therefore verify and validate the accuracy of what they are reading, even when it is said to be written by a human, as authenticity can no longer be guaranteed.
- Unverified Generated Output: Simply put, ChatGPT is no human – it has no principles, morals, or values. ChatGPT’s output may sound plausible yet be incorrect or invalid. Because ChatGPT was trained on data only up to a certain date and makes claims that are neither verified nor endorsed, its biggest issue is access to factual data. HR professionals must therefore verify the accuracy of ChatGPT’s output through critical thinking and analysis to avoid the trap of making misinformed decisions.
- Illegitimate Job Applications: ChatGPT is capable of crafting cover letters and resumes as well as generating convincing answers to interview questions. ChatGPT can also complete writing assignments for candidates, making it more difficult for HR professionals to assess an applicant. HR professionals will therefore need to change talent acquisition practices to limit the role of ChatGPT and evaluate a candidate and their skill sets rather than their ability to use generative AI.
- Biased and Discriminatory Content: ChatGPT has the potential to perpetuate bias and discrimination, depending on the data used to train it. The machine itself is not biased; it is the factors designers weigh and feed into the AI language model that can harbor bias. For example, if ChatGPT is asked how much an individual should be paid for a specific position, the generative AI application will look at historical salary patterns and may generate a recommendation in which one gender earns more than another – perpetuating biases around pay.
- Unauthorized Access to Data: ChatGPT can lead organizations into privacy breaches, depending on the data being used. If HR professionals input personally identifiable information (PII) into the generative AI language model to generate solutions, others may gain unauthorized access to that same private data. HR professionals therefore need to be wary of the legal and ethical implications around privacy when PII is involved.
- Absence of Human Connectedness: ChatGPT can provide informative responses and automate routine tasks; however, it lacks human empathy, emotional intelligence, and intuition. HR professionals should ensure that generative AI only complements their work, as machines cannot (yet) replace human judgement, interaction, and support.
- Limited Creativity and Innovation: ChatGPT is trained to generate responses based on what is already out there. While it can generate clever outputs, it does not possess the ability to discover an innovative solution that addresses organizational needs. This can therefore hinder an HR team’s ability to foster an environment of breakthrough thinking if team members are using ChatGPT more regularly.
Now what?
Generative AI is astonishingly fast, articulate, and effective. The “intelligent” generative AI, however, has potential risks, limitations, and challenges that need to be addressed through human connectedness, critical thinking, and a healthy dose of skepticism. The goal should be to ensure the output generated is accurate and reliable.
HR teams need to set clear strategies and processes to train team members on how to ask ChatGPT the right questions and how to interpret the responses critically. Ethical standards also need to be established in a way that respects privacy and ensures the generative AI does not perpetuate bias or discrimination. The key is to understand that while generative AI applications produce fast and articulate outputs, they are not infallible.
AI is here to stay. So ask yourself, do we ring the bell or sound the buzzer?
For more information and guidance on generative AI applications, McLean & Company is here to help. ASE members have access to the McLean & Company portal by logging into the ASE Member Community. Watch for upcoming research on generative AI and HR!
Oh, and this article was in fact written by a human committed to the tenets of authorship integrity and ethics.