AI's Creative Dilemma
This week, a group of prominent authors, including John Grisham and Jodi Picoult, sued OpenAI over alleged copyright infringement in the training of ChatGPT on their works. This is not the first such case: in February, Getty Images sued Stability AI, alleging that it copied 12 million images as training data for its text-to-image generator.
Intellectual property has been an ongoing discussion for centuries, adapting to each technological and creative wave. We've established performance rights for composers, recognised photography as a form of art and grappled with questions surrounding recorded music, streaming and downloads. Those who came of age in the 90s will remember how Napster revolutionised music sharing, sparked copyright debates and provoked fury from the music industry.
Now, generative AI is presenting new challenges around intellectual property. Smartphone apps can mimic voices. In April 2023, an AI-generated song mimicking Drake and The Weeknd, "Heart on My Sleeve", went viral before being taken down across all streaming platforms. But by that time it had already achieved 600,000 Spotify streams, 15 million TikTok views and 275,000 YouTube views.
A human can be inspired by sources without facing copyright issues; in fact, this is often seen as homage rather than theft. With AI, however, the speed of learning and the precision of the output make us uncomfortable. The technology amplifies actions that were previously possible only on a smaller scale: we would have to spend thousands of hours practising to achieve what AI can do in minutes.
Another issue concerns the consumption and sharing of news. Web crawler tools are widely available: these scrape and monitor content from any website. If an AI aggregates and summarises content from newspapers and then shares a summary of the news with me, the industry believes it should be compensated.
The US Copyright Office has opened a public comment period on AI and copyright, addressing three main questions: (i) how AI models should use copyrighted data in training; (ii) whether AI-generated material can be copyrighted when no human is involved; (iii) how copyright liability would work with AI. In the UK, a similar consultation was carried out but no action has been taken. The challenge is not just about applying existing laws, but about redefining them for AI.
Tools like ChatGPT analyse inputs without retaining or storing the content. They are designed to find patterns; this is very different from copying songs from the radio onto a cassette (I am including this reference for the older readers). AI is evolving very quickly, and AI models will likely become less reliant on specific data sources, producing valuable results from less data.
Looking ahead, the challenge isn't just about what AI consumes (the input) but what it produces (the output). AI can generate entirely new and original content based on its training data. AIs are tools today, but will they eventually be considered artists? It seems inevitable that, culturally if not legally, AI-generated content will be consumed in the same way as other creative content.
I am optimistic that, because humans love to create and to seek authentic experiences, there will be a world in which AI art and human art live side by side. Historical debates around innovation and authenticity have always found a way forward and, while lawyers work through the thorny legal issues, we also need to think about our ethical position.
What do you think? Should original creators be compensated when their works are used to train AI models? What about the output: should there be revenue sharing with the original creators? Could an AI own the copyright to material it produces, or must there always be a human "author"?
Senior Manager @ Advisense
I love the questions you pose with all of the issues surrounding AI. I don't have answers but am currently looking for them alongside others in my organisation. To give you my thoughts, however: if a songwriter's lyrics and music are copied even in a small way by other artists, we've seen countless lawsuits over the years; Marvin Gaye's estate in particular has called numerous artists' work into question. With AI, I would say: what's the difference?

For paintings and other works, artists have been copying techniques for generations, inspired by the Picassos, Warhols and Manets of the world. However, copies of sunflowers are generally viewed as copies, even the best ones, and are not worth as much. So I would ask (because I don't know): if a graphic designer were to create imagery for a product that went to mass media, and it was inspired by Munch's The Scream, would the producers need to pay for use of the inspiration? If the answer is no, then why should an AI company?

Next up, news outlets expecting to be rewarded for AI collating their news and reporting. Today, the BBC and probably other media outlets summarise the newspapers in their morning broadcasts. Do they pay for their review of The Times and The Guardian? To be cont'd