Beware: The Risks of Managers Using AI for HR Questions - Essential Advice for HR Professionals

May 23, 2024

 Theresia Intag, Co-Founder of Tag4HR and Founder of IntagHire

In today’s dynamic workplace, AI tools like ChatGPT have become increasingly popular for streamlining tasks and boosting productivity. Many departmental managers are turning to AI to assist with a range of responsibilities, including answering HR-related questions. While this can offer real benefits, HR professionals must stay vigilant about the potential risks, particularly AI hallucinations. This blog post examines the concerns around managers using AI for HR questions and highlights how HR professionals can mitigate those risks.


The Hidden Dangers of Letting AI Handle HR Queries


Departmental managers often rely on ChatGPT to provide quick answers to common HR questions, hoping to save time and ensure consistent communication. For example, when employees inquire about company policies or benefits, managers might use ChatGPT to draft responses. While this can be efficient, it’s crucial for HR professionals to ensure that the information being disseminated is accurate. Unverified AI-generated responses can lead to misunderstandings and misinformation.

One significant risk of using AI in this context is the phenomenon of AI hallucinations, where the model confidently generates information that isn’t based on factual data. Consider a scenario where a manager asks ChatGPT about the latest labor laws in California. The AI might respond with, “As of 2024, California has mandated that all employers must provide a $500 monthly stipend for remote work expenses, and employees must work from home at least three days a week to qualify.” No such mandate exists. Inaccuracies like this can create confusion and lead to compliance issues if they aren’t corrected promptly.

Another example involves managers seeking data on employee preferences. If a manager asks ChatGPT what percentage of Gen Z employees prefer flexible work schedules over higher pay, the AI might claim, “According to a recent survey, 85% of Gen Z employees prioritize flexible work schedules over higher pay.” Without a specific, recent survey to back that claim, the figure is misleading. HR professionals need to ensure that any statistics provided by AI are verified against legitimate sources, including internal insights, to maintain trust and credibility within the organization.


Best Practices for HR Professionals


To safeguard against the risks of AI hallucinations, HR professionals can establish best practices for using AI tools.

  • First, HR professionals can ensure that all AI-generated information is cross-checked with independent, reliable sources before being shared or used for decision-making.
  • Second, HR professionals can encourage managers to use AI as a supplementary tool rather than a primary source of information. This approach combines the efficiency of AI with the accuracy of human expertise.
  • Finally, ongoing training for managers on the proper use of AI can help them better identify and correct potential inaccuracies. Consider drafting a company-wide AI policy to formalize these expectations.

In conclusion, while AI tools like ChatGPT offer substantial benefits for departmental managers, HR professionals must remain cautious and proactive in verifying AI-generated information. By understanding the potential for AI hallucinations and implementing robust verification processes, HR teams can prevent the spread of misinformation and ensure that AI enhances rather than hinders HR functions. Ultimately, the successful integration of AI into HR practices relies on a balanced approach that leverages both technological advancements and human oversight.