Think Your Employees Aren’t Using ChatGPT?

Nicholas J. Fiorenza, Esq.

Think again. Survey data consistently shows that use of ChatGPT and other generative artificial intelligence (AI) platforms is expanding rapidly. While difficult to quantify precisely, surveys suggest that most employees in "white collar" businesses regularly use such platforms to complete or assist with work tasks, and that a majority of those employees do not tell their employers about it.

AI in the workplace is not a "stick your head in the sand" issue. Just as in the early days of automation and robotics in industry, AI use by employees will continue to expand. Expect it to be used for mundane tasks such as writing everyday emails as well as for complex project work, including the development of marketing plans, sales presentations, and human resources programs.

For all its benefits, using AI to generate work product is fraught with pitfalls. Basing your business's work product on false information, infringing another entity's protected intellectual property, inadvertently divulging your own proprietary information, and the related potential damage to your business's reputation are just a few of the risks. Moreover, it is increasingly difficult to distinguish work product originally developed by an employee from work product generated through AI.

Employers would therefore do well to address AI issues at the same time they craft their overall cybersecurity policies. Take the time to develop a thoughtful policy and offer related training to your employees. Be transparent when addressing the following questions: What is acceptable use of generative AI? What are the expectations for advising the employer when and how it is used at work? How does an AI policy interact with related employee policies concerning performance, confidentiality requirements, and privacy? What are the consequences of using generative AI in a manner inconsistent with the employer's expectations?

Employers should work closely with employment counsel to stay ahead of this rapidly developing issue.
