In an article for The Times, Partner Nick Eziefula argues that The New York Times' lawsuit against OpenAI feels like a ‘line in the sand’ moment in the battle that has been brewing between content owners and AI developers over generative AI.
What a year for OpenAI and its biggest investor, Microsoft. Despite heated issues in the boardroom, OpenAI reportedly ended 2023 by fundraising at a $100bn valuation. Microsoft has now made the ChatGPT-powered Bing ‘copilot’ available to the world. Not bad for a product most people hadn’t heard of in 2021.
Exponential growth. But at what cost? Possibly a hefty legal bill for copyright infringement and reputational harm, if the New York Times prevails in its lawsuit against OpenAI and Microsoft.
2023 saw a flurry of litigation as content owners sought to enforce their rights against AI developers. In that wider war, the New York Times’ suit is something of a ‘line-in-the-sand’ moment. The paper is (rightly) worried that AI-generated summaries will keep readers on Bing and off its website, reducing subscriber revenue.
Many jurisdictions broadly protect copyright works from being copied without consent. This generally means that – absent permission and payment – infringement may occur where copyright material is ingested to train an AI model. Caveats, like ‘fair dealing’ exceptions, are often narrow in scope.
Infringement cases are complex and are sometimes settled – at this stage it is unclear whether the New York Times’ suit will go all the way to judgment. Even if it does, this US case won’t set a precedent that is directly applicable to other jurisdictions – here in the UK, we keenly await judgment on the substantive issues in litigation between Getty Images and Stability AI. Yet, given the prominence of the parties and the fundamental nature of the issues at hand, the outcome of the New York Times’ dispute with OpenAI and Microsoft will likely have a profound impact on the approach to AI regulation globally.
An intriguing aspect of this case is the allegation that hallucinations in Bing’s AI-generated responses falsely attribute content to the New York Times. Jurisdictions approach this type of claim differently, but the New York court will likely have to grapple with difficult questions. Does the average user understand the risk of hallucinations? Did the New York Times actually suffer any harm as a result of them?
Ultimately, in an era where the tendency is to build first and ask questions later, the New York courts (and the US Supreme Court, should the case proceed to appeal) will be under pressure to apply copyright law in a manner that balances the desire to support technological innovation against the need to protect the livelihood of those whose material has been used. Transparency is key – as emphasised in the provisionally agreed EU AI Act. My hope is that the US courts require AI models to be transparent when using copyright material, facilitating the implementation of workable licensing regimes.
Nick's article was published in The Times, 11 January 2024.