April 30, 2023

How to Create the Best ChatGPT Policies


HR teams need to ask and answer many questions before setting policies to guide employees' use of OpenAI's ChatGPT and other generative artificial intelligence tools, legal experts say.

The content-producing technology is growing more popular by the day, and companies in multiple industries are excited about its prospects while also curious, if not fearful, about where it can lead.

One-third of U.S. workers believe their job will rely more on workplace automation in the next few years, according to a survey of 521 workers conducted by AmeriSpeak Omnibus for SHRM in early April. However, 91 percent said their current job duties haven't been impacted by AI at all.

Meanwhile, a poll of 62 HR leaders in February by consulting firm Gartner found that about half of them were formulating guidance on employees' use of ChatGPT, Bloomberg reported.

John Carrigan, an attorney with Cozen O'Connor in Los Angeles, said employers should consider to what extent their existing policies might already address some of the "new" issues raised by AI.

"For instance, there have been some reports of employees who believe ChatGPT makes them so productive that they can secretly take on multiple full-time jobs without informing their employer," he said. "And while that may seem like a novel problem, it could be something that would already be addressed appropriately by an existing moonlighting or conflict-of-interest policy."

From there, employers should think about areas where they wouldn't want to see AI used and then set clear guidelines, Carrigan said.

"When deciding how much to allow or limit the use of ChatGPT or similar tools, it makes sense to strategize about the sorts of tasks for which the AI might be used, as well as the strengths and limitations of AI tools for that purpose," he said.

Carrigan added that it's also good practice to designate point people to oversee AI usage and troubleshoot problems as they arise.

"This is an area where we expect to see a tremendous change in a fairly short time, both in terms of the capabilities of the technology itself and in legal responses to the use of the technology," he said. "So, employers need to be prepared to be flexible."

Weigh Your Risk Tolerance

Attorney Jenn Betts, shareholder and co-chair of the technology practice group at Ogletree Deakins in Pittsburgh, said companies should determine their level of risk tolerance and write policies that clearly spell out expectations.

"Chances are, conservatively-postured companies will not be comfortable with allowing widespread use of generative artificial intelligence, given questions about the technology, its accuracy, and security," she said. "Many organizations have serious concerns related to data security and accuracy."

Even organizations that are willing to embrace generative AI have concerns about preserving and protecting their own data, as well as potentially the data of their clients or customers, Betts said.

"Employers developing generative artificial intelligence policies typically forbid any confidential information from being uploaded or used as part of their ChatGPT efforts," she noted.

However, banning the technology may not be the right move. Gartner said in an FAQ this month that blocking ChatGPT outright may give organizations "a false sense of compliance" and instead lead to "shadow" ChatGPT usage by employees.

"A sensible approach is to monitor usage and encourage innovation, but ensure the technology is only used to augment internal work and with properly qualified data, rather than in an  unfiltered way with customers and partners," Gartner said.

[SHRM members-only toolkit: Using Artificial Intelligence for Employment Purposes]

Protecting Proprietary Information

Gartner also advised that all employees who use ChatGPT should be instructed to treat the information they post as if they were posting it on a public site (e.g., a social networking site or a public blog).

"They should not post personally identifiable information, company or client information that is not generally available to the public," the firm said. "There are currently no clear assurances of privacy or confidentiality. The information posted may be used to further train the model."

Microsoft, a longtime partner of OpenAI, will introduce privacy assurances for its Azure OpenAI Service, just as it does for its other software services, according to Gartner.

Fact-Checking ChatGPT and Employees' Knowledge Levels

Janice Agresti, an associate attorney at Cozen O'Connor in New York City, said that when deciding what course to take in setting a policy, employers should consider the type of work their organization does; how familiar their workforce is with developing technologies and those technologies' limitations; how much of the work is output driven; and the workforce's areas of expertise. ChatGPT still makes a lot of mistakes, and users need to be able to recognize them.

"If the technology user is an expert, for example, then that individual can fact-check the output and modify it according to their expertise. In that case, the individual is more likely to use ChatGPT as a starting point, as opposed to a final output, which poses less reputational, legal and other risks for a company," Agresti said.

She continued, "Further, if the user is aware of ChatGPT's limitations, then they would take steps to ensure that the output is checked for facts, legality, etc. Again, this would pose less of a reputational, legal and other risks for a company."

On the other hand, Agresti said that if the user is not quite sure about how ChatGPT functions or its limitations, or if the user is using the technology to seek information they are not familiar with, "that significantly increases the risks associated with [ChatGPT's] usage."

Employers also want to feel confident that any work product they put out with their name on it won't land them in hot water over copyright violations or plagiarism.

Considerations for Unionized Companies

Employers that are unionized should consider whether any of their policies regarding ChatGPT and other generative AI will constitute mandatory subjects of bargaining, Carrigan said.

"There is a lot of concern in particular regarding white-collar positions that some of the new AI tools may reduce the demand for certain jobs and might make certain employees obsolete," he said. "Unions are rightly going to want to have a place in those sorts of discussions."

Agresti said that ultimately, generative AI should only be used to enhance human performance, not replace it.

"If used, it should be used as a tool to assist in a user's work, and not as a substitute for the user's own creativity, good judgment or expertise," she said.

SAMPLE POLICY: WHEN GENERATIVE AI IS MOSTLY FORBIDDEN

Employees are not allowed to use ChatGPT or other third-party generative AI services to conduct business. This includes using such services to generate computer code or any kind of customer communication, even as a starting point. You are also prohibited from using the services to generate internal communications, policies or documentation, or any other materials intended to be used in running the business.

You are allowed to use such services for the purpose of educating yourself about how these services work, subject to the limitations stated below. As of today, all teams are approved to use the ChatGPT web UI (not including any API) for experimental and testing purposes only, provided that you do not use any company or third-party proprietary or confidential information, any personal information, or any customer or third-party data as an input. To use any other third-party generative services, you must obtain approval to use the service in accordance with our third-party approval process.

If you work on large language models (LLMs) as part of your job, do not test any third-party generative AI service to see how it handles certain types of inputs/requests or how its behavior differs from other LLMs.

And please do not provide any feedback to any third-party service provider without receiving specific authorization from the company's legal and corporate architecture teams.

Article written by: Orville Lynch, Jr.
Mr. Lynch is a member of the legendary two-time Ohio Civil Rights Hall of Fame Award-winning Lynch Family. He is a nationally recognized urban media executive with more than 20 years of diversity recruitment experience and a serial entrepreneur with numerous multimillion-dollar exits.