Efficiency at What Cost? Creating Internal AI Policies Mitigates Legal and Business Risks Now

By Jameson E. Tibbs

September 27, 2023 | Articles

As artificial intelligence grows more popular in business, companies are only scratching the surface of AI’s potential to streamline processes, automate tasks, and generate content at low cost. With headlines focused on polarizing and intriguing applications of AI systems and popular chatbots like ChatGPT, Google Bard, and Bing Chat, the use of AI in a business’s day-to-day operations flies under the radar. While the convenience of delegating repeatable and mundane tasks to chatbots and AI is undeniable, businesses must be aware of the covert risks that seemingly harmless use of AI can carry. Addressing AI’s role in business operations through an AI use policy protects a business immediately and offers a distinct advantage over traditional, reactive compliance policies.
 
At a minimum, the unrestricted use of AI in business operations poses immediate threats to data privacy, intellectual property, and sensitive business plans. Because these risks affect businesses directly and immediately, executives and leaders must consider how to mitigate them despite the general lack of legislation governing the use of AI. Implementing an AI use policy mitigates these immediate risks while preparing businesses for impending regulation of AI.
 
At a high level, executives should consider the following questions when adopting a corporate AI use policy:
 
  1. What are the risks associated with the use of AI in my business? 
  2. How should the use of AI be regulated within my business if there are so few regulations on AI to begin with? 
  3. What should I do when AI laws and regulations are enacted that impact my business? 

These questions are fundamental when drafting an AI use policy tailored to a particular business.
 
1. What are the risks associated with the use of AI in my business? 
 
While the risks associated with the use of AI are always specific to a business’s industry and operations, many apply to most businesses:
 
  • Disclosure of trade secrets, other intellectual property, and confidential or highly sensitive business information. 
    • Example: An employee copies an email from a supervisor discussing highly sensitive business plans into an AI chatbot to draft a response. Unbeknownst to the employee, the chatbot uses the contents of the email as training data. Upon receiving a similar prompt from a competitor, the chatbot generates a response that discloses the company’s sensitive business plans. 
  • Data privacy of the business, its customers, clients, and employees. 
    • Example: An employee uses an AI chatbot that has not been vetted or approved by the company to process a customer warranty complaint containing the customer’s personal data. The chatbot’s terms and conditions do not protect personal data received from users and explicitly require the user to indemnify the AI chatbot provider against damages arising from use of the system. Liability for the disclosure of the customer’s personal information is likely to fall on both the employee and the company. 
  • Risk of dissemination of misinformation to customers.  
    • Example: A company’s marketing employee, facing a tight deadline to roll out an advertisement for a new promotion, uses generative AI to draft the ad copy. The piece is published without human review and contains a false statement about the promotion’s terms. Consumers are misled by the misinformation, which not only harms the company’s reputation but may also prompt affected consumers to file false-advertising complaints under state and federal law. 
  • Risk of noncompliance with law, regulations, regulatory guidance, and industry-standard practices. 
    • Example: The Equal Employment Opportunity Commission (EEOC) recently settled its first-ever lawsuit involving AI bias in hiring. A company that used an AI system to screen job applications allegedly programmed the system to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. The EEOC brought suit against the company, which settled for $365,000. 

While the examples above may not expose a business to immediate legal liability, each highlights a risk to a business’s reputation, perception, and goodwill. Even taking the skeptical view that actual liability from the unrestricted use of AI systems is unlikely, the cost of protecting against these risks is relatively low, and a proper AI use policy mitigates any potential liabilities they may create.

2. How should the use of AI be regulated within my business if there are so few regulations on AI to begin with? 
 
While there are relatively few laws and regulations tailored to the use of AI by businesses, employees’ use of AI should be governed by an internal AI use policy that accounts for the specifics of the business, incorporates guidelines and reporting systems, and is integrated thoughtfully into the business’s hierarchy. A properly crafted policy accounts for the shortfalls of unregulated AI use while addressing business-specific solutions for the proper use of AI. Because the risks of unregulated AI use are not necessarily legal risks, but business risks, the policy must be oriented with those business risks in mind. In the absence of outside regulation, businesses must implement their own governance policies to adequately address the business risks arising from the use of AI.

3. What should I do when AI laws and regulations are enacted that impact my business? 

As laws and regulations are enacted, an internal AI use policy must be flexible enough to adapt to the challenges they present, and it must be regularly maintained, reviewed, and audited based on how it performs in practice.
 
Implementing flexible policies helps decision-makers respond quickly to feedback as well as to new laws and regulations. By ensuring that policies are regularly reviewed, updated, and audited, businesses build both practices that mitigate the risk of liability and a record of compliance.

A well-tailored and thoughtful AI use policy provides immediate value and lays the foundation for your business to prepare for the age of AI and beyond. Unlike most compliance policies, which arise in reaction to legislation or regulation, an AI use policy anticipates regulation of AI while mitigating the covert business risks associated with its unfettered use. For more information and advice on crafting an AI use policy for your business, please reach out to Brandon Lê (ble@lippes.com) or Jameson Tibbs (jtibbs@lippes.com). 