Mike Elgan
Contributing Columnist

The one real problem with synthetic media

Opinion
Dec 31, 2022 | 6 mins
Artificial Intelligence | Augmented Reality | Technology Industry

Companies are publishing DALL-E 2 and ChatGPT content into a legal void.


Real life comes at you fast. Fake life comes even faster.

Content creators, marketers, company bloggers, and others are rushing to take advantage of the new synthetic media trend.

You can see why. Art created with artificial intelligence (AI) offers a more flexible and original alternative to stale stock photography. And AI content generators, most notably ChatGPT, can write decent-quality blog posts, advertisements, and marketing content in seconds.

2022 turned out to be the year synthetic media tools went mainstream.

Most of the credit for this sudden turn toward synthetic media by millions of people goes to San Francisco-based OpenAI. The company, a for-profit firm owned by a nonprofit (both called OpenAI), was founded by Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys, and YC Research, and backed to the tune of $1 billion by Microsoft. OpenAI gets the credit because it is responsible for both DALL-E 2 and ChatGPT, the services that put AI art and uncanny AI chat on the map.

Hundreds of new products and online services have emerged in recent weeks enabling easy use of these foundational tools. But OpenAI is at the core of it.

The real problem with synthetic media

Furman University philosophy professor Darren Hick warned recently on Facebook that teachers and professors can expect a “flood” of homework essays written by ChatGPT.

We can expect “cheating” by company content creators, too.

Public synthetic media tools based on DALL-E 2 and ChatGPT save time and money by generating quality content fast. Companies are already using them for social posts, blog posts, auto-replies, and illustration.
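To see how low the barrier has become, consider a minimal sketch of the kind of script a content team might run. It assumes OpenAI’s Python SDK with an API key in the environment; the model choice and prompt are illustrative, not anything a particular company actually uses.

# pip install openai
# Minimal sketch: generating a social post with OpenAI's Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a marketing copywriter."},
        {"role": "user", "content": "Write a 60-word social post announcing our new product line."},
    ],
)

print(response.choices[0].message.content)  # seconds later: publishable copy

Nothing in that round trip tells the publisher where the model learned its phrasing, or whether a competitor running a similar prompt is getting nearly identical copy.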

Synthetic media promises a very near future in which advertisements are custom-generated for each customer, super-realistic AI customer service agents answer the phone even at small and medium-sized companies, and all marketing, advertising, and business imagery is generated by AI rather than by human photographers and graphic artists. The technology also promises AI that writes software, handles SEO, and posts on social media without human intervention.

Great, right? The trouble is that few are thinking about the legal ramifications.

Let’s say you want your company’s leadership presented on an “About Us” page on your website. Companies are now pumping existing selfies into an AI tool, choosing a style, then generating fake photos that all look as if they were taken in the same studio with the same lighting, or painted by the same artist with the same style and palette of colors. But the styles are often “learned” by the AI by processing what is, in legal terms, the intellectual property of specific photographers or artists.

Is that intellectual property theft?

You also run the risk of publishing content that’s similar or identical to ChatGPT content published elsewhere, at best getting downgraded in Google Search for duplicate content and at worst being accused of (or sued for) plagiarism.

For example, let’s say a company uses ChatGPT to generate a blog post, making minor edits. Copyright may or may not protect that content, including the bits generated by AI.

But then a competing company tasks ChatGPT with writing another blog post, and it generates language that’s identical in expression to the first. After minor edits, that copy goes online.

In this case, who is copying whom? Who owns the rights to the language that’s identical in each case? OpenAI? The first poster? Both?

It could be that if the second ChatGPT user never saw the first user’s content, it’s not technically plagiarism. If that’s the case, we could be facing a situation in which hundreds of sites are getting identical language from ChatGPT but no person is technically copying any other person.

Adobe is accepting submissions of AI-generated art, which it will sell as stock “photography,” claiming ownership of the images with the intention of preventing others from copying and using them without payment. Does it, or should it, have the right to “own” these images, especially if their style is based on the published work of an artist or photographer?

The greatest legal exposure may come from the blind publication of outright errors, which ChatGPT is notorious for producing. (Hick, the Furman professor, caught one student using ChatGPT because her essay was flawlessly written and totally wrong.)

It could also generate defamatory, offensive, or libelous content, or content that violates someone’s privacy.

When AI’s words transgress, whose transgression is it?

OpenAI grants permission to use ChatGPT’s output but requires you to disclose that it’s AI-generated content.

But copyright cuts both ways. Most ChatGPT output is generic and anodyne where the sources on a topic are many. But on topics where sources are few, ChatGPT itself may be infringing copyright. I asked ChatGPT to tell me about my wife’s business, and the AI described it perfectly, in my wife’s own words. OpenAI’s terms and conditions allow use of ChatGPT’s output; in this case, that means claiming to allow use of my wife’s copyrighted expression, permission she granted neither to OpenAI nor to its users.

ChatGPT is presented to the world as an experiment, and its users are contributing to its development with their inputs. But companies are using this experimental output in the real world already.

The problem is that important laws and legal precedents have not been written yet; putting synthetic media into the world means that future law will apply to present content.

The rulings are just starting. The US Copyright Office recently ruled that a comic book using AI art is not eligible for copyright protection. That’s neither a statute nor a court ruling, but it is a precedent that may be considered in the future.

OpenAI greenlights the use of DALL-E 2 and ChatGPT output for commercial purposes. In doing so, it passes the legal burden to users, who may be lulled into complacency about the appropriateness of that use.

My advice: Don’t use synthetic media for your business in any way. Yes, use it for getting ideas, for learning, for exploration. But don’t publish words or pictures generated by AI, at least until there’s a known legal framework for doing so.

AI-generated synthetic media is arguably the most exciting realm in technology right now. Some day, it will transform business. But for now, it’s a legal third rail you should avoid.