
Artificial Intelligence Comes to Work

Dec 05, 2023


David Schlottman, Partner – Jackson Walker LLP

It’s fun to watch old movies and see their predictions for the future.  According to the classic 1985 film Back to the Future, the year 2015 would bring us flying cars, self-tying shoes, and—best of all—floating hoverboards (!!!).  While those predictions were wrong (to my great personal regret), others hit closer to home. Take, for instance, another classic from 1984, The Terminator, in which an artificial-intelligence system called Skynet gains sentience, concludes that humanity is the world’s greatest threat, and then starts a nuclear war and enslaves the earth’s remaining population to alleviate its concerns.  Comforting, right?

While we all hope the “singularity”—i.e., the moment AI becomes uncontrollable—is nothing more than the alarmist musing of a vivid imagination, artificial intelligence has become much more tangible in just the past few months.  Its potential applications and implications for work are both exciting and, in some respects, concerning.  But whether the workplace is ready for AI or not, it’s here.  Some considerations are below. 

What is artificial intelligence?

Who better to answer this question than ChatGPT (a generative AI tool)?  I asked ChatGPT, “What is generative artificial intelligence?”  Here’s what I got:

Generative AI refers to a type of artificial intelligence that creates new content, such as images, text, or even music, rather than simply recognizing patterns in existing data.  It operates by learning patterns from a dataset and then generating new, similar content.  GPT-3, the model powering our conversation, is an example of generative AI for natural language processing.

Basically, generative AI takes in a lot of data, studies that data and recognizes patterns, and then makes predictions based on the input data set.  (And for the record, I also asked ChatGPT, “are you Skynet?”  It told me no, so at least we have that going for us.)

How can artificial intelligence be used at work?

The answer is quickly becoming limitless.  Developers are working rapidly to create AI tools to address many work tasks.  Here are some examples:

  • Recruiting and hiring – AI can identify and screen job candidates.
  • Productivity management – Employees can use AI tools to take meeting notes and help streamline task management and workflow.
  • Website design – AI can design a website.
  • Content generation – Employees can prompt AI tools to create presentations or reports, summarize content, generate videos or social media posts, and much more.

What are the risks?

The regulation of artificial intelligence is a new frontier.  The way old laws might apply to this new technology is unclear.  Fresh laws specifically regulating AI are also likely (and already exist in some places).  Because so much is unknown about the technology and the effects of its use, there is undoubtedly first-adopter risk associated with use of AI tools.  Despite the uncertainty, here are a few things to think about.

You are likely responsible for what AI does.  Because generative AI makes predictions from data sets created by humans, we’ve quickly discovered that AI can reproduce some of humanity’s less noble tendencies.  For instance, some AI-powered recruiting tools have been found to be biased against candidates within certain protected categories.  Companies have been sued for discrimination as a result.  Some jurisdictions (like Illinois and New York City) have already implemented laws regulating the use of AI in hiring, and more laws are likely.  The broader lesson here is that AI-powered decisions can result in legal liability just as human decisions can. 

AI is not always right. On a related note, AI gets things wrong and may even make things up.  The technology is new and imperfect.  The so-called “hallucinations” of generative AI tools have been widely publicized.  In one notable pending lawsuit, a Georgia radio host sued OpenAI (the creator of ChatGPT) for defamation claiming that ChatGPT falsely reported that the host had defrauded and embezzled funds from a non-profit organization.  It is not difficult to imagine that users of generative AI tools might face similar claims.  For example, the dissemination of an AI-created presentation or report containing false statements or figures might lead to legal claims for defamation, misrepresentation, fraud, or other causes of action.  The takeaway—buyer beware when using AI tools. 

AI uses and probably keeps whatever is put into it.  AI tools process data input by users to provide a desired outcome.  Many AI tools also retain inputted user data to further develop and enhance their data sets.  The legal implications of that fact are potentially significant.  Data privacy laws in some states and countries may regulate the sharing of certain types of data with an AI tool.  Inputting client or other third-party data into an AI tool may inadvertently violate contracts containing non-disclosure provisions.  Inputting protected employee data (such as personal health information) could also cause issues under privacy laws. Finally, ChatGPT faces multiple copyright lawsuits from authors and other content creators alleging that ChatGPT included their content in its data set and infringed on their intellectual property by using that content.  It’s not clear how this litigation will play out, but it is conceivable that users of generative AI tools could face similar claims. 

Regulation is coming.  Currently, there are very few laws that regulate AI specifically, but that will change.  President Biden recently issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”  The order is broad and directs various federal agencies to consider necessary regulation in a variety of contexts, including the effect of AI on labor standards and job quality.  California has also proposed sweeping legislation governing the use of AI.  What will ultimately be regulated remains to be seen.  Use of AI in certain activities or industries will likely be regulated.  And in light of some predictions about the displacement of human workers as a result of AI, it is conceivable that even broader regulation may be enacted.  Users of AI tools will need to monitor continuing legal developments. 

What should I be doing?

  • Assess how AI is currently being used in your workplace.  Even if you don’t think employees are using AI in their work, they are.  Understanding how AI is being used in your business will help to assess both risks and opportunities.
  • Set rules.  Use of AI tools carries legal and practical risks.  Establish rules and policies about who may use AI tools and under what circumstances.
  • Set operational guardrails.  Implement processes to ensure compliance with the rules and policies you set.  For instance, if certain tools are unauthorized, block web access to those tools. 
  • Understand how the technology works.  The greatest risk is often the one you don’t know about.  Knowing how authorized AI tools work will help to avoid unwanted compliance surprises. 
  • Review terms of service.  Terms of service should inform you how your data is being used and what rights you do or don’t have.  Make sure what you think is happening is actually happening. 
  • Review outputs.  AI can be wrong.  Double-check outputs to confirm they are accurate.