Secondary ruling in Getty’s UK showdown with Stability AI begs the primary question

In a long-awaited decision, the High Court has ruled that the Stable Diffusion AI tool was not an infringing copy for the purposes of secondary copyright infringement, as it did not contain or store any copies of the copyright works on which it was trained.[1] The model was capable of being an "article" (which could be intangible), and the model weights of the various Stable Diffusion versions were altered during training by exposure to infringing copies of copyright works. Yet the weights themselves did not, at any stage, store the actual visual information of the works, but only learned patterns and features.
The claimants had abandoned their core claim for primary copyright infringement, as the unauthorised reproduction during the training process took place outside the UK. So, key questions about primary infringement in the context of generative AI remain unanswered.
Background
The first five claimants were members of the Getty Images group, which licenses photographs, videos and illustrations to individuals and business users. The sixth claimant was a US company that produces commercial images for Getty to license. The defendant was Stability AI, whose image-generation tool Stable Diffusion, based on its deep-learning AI model, generates synthetic images in response to prompts from users.
At the centre of the claimants' original case was the claim that Stability AI, without permission, scraped millions of "visual assets" owned or exclusively licensed by Getty and used them unlawfully to train and develop the various iterations of Stable Diffusion. The claimants originally put forward several claims, the main ones being:
- primary copyright infringement by training Stable Diffusion on copyright works without consent (along with infringement of database rights);
- secondary copyright infringement by importing an article known to be an infringing copy of copyright works;
- copyright infringement in the form of synthetic images generated by Stable Diffusion that reproduced a substantial part of copyright works; and
- trade mark infringement by allowing users to make essentially identical copies of copyright works, some of them featuring the claimants' watermarks.
Abandoned claims
The claimants initially argued that the training and development of Stable Diffusion on copyright works without consent amounted to primary copyright infringement under the Copyright, Designs and Patents Act 1988 (CDPA) on the basis that those works were downloaded onto servers in the UK.
Stability AI maintained that the training took place on cloud hosting and processing services overseas and applied for summary judgment accordingly. Yet the claim was allowed to proceed, on the basis that (a) the defendant and members of its development team were based in the UK and (b) disclosure might shed further light on where the training and development actually took place.
Nonetheless, the claimants were forced to abandon their training claim after acknowledging that they could not provide sufficient evidence that the training and development of the model took place in the UK. That concession dashed the hopes of many rights-holders who had wished to see the English courts examine whether ingesting copyright works without permission for AI training purposes amounts to infringement.
The output claim had originally turned on the ability of UK users of Stable Diffusion to generate synthetic images that reproduced a substantial part of the copyright works using text, image and certain combined prompts. Yet the claimants abandoned their output claim when Stability AI altered the product's functionality so as to block such prompts, making it difficult to prove loss, and thereby leaving the claimants with no case for seeking relief in the form of damages or an injunction on that ground.
Accordingly, the main issues before the court were reduced to (a) the secondary copyright-infringement claim and (b) the registered and unregistered trade mark claims.
Secondary infringement
After the withdrawal of the training and output claims, the claimants' copyright case hinged on the secondary infringement claim under sections 22 and 23 of the CDPA.
- Under section 22, copyright is infringed where a person, without a licence from the copyright owner, imports an "article" into the UK (other than for private or domestic use) "which is, and which he knows or has reason to believe is, an infringing copy of the work".
- Section 23 is worded similarly, but relates to the possession of, or dealing with, an infringing copy.
The claimants argued that Stable Diffusion amounted to an "article" and an "infringing copy" for such purposes. They accepted that the "model weights" – the parameters that recognise patterns and features in the data on which the model is trained – did not themselves store copies of particular works, and that the training took place outside the UK. But they argued that the making of the model weights would have infringed the copyright in the works had those weights been made in the UK – i.e. contrary to section 27, under which an article is also an "infringing copy" if its making in the UK would have infringed the copyright in the work in question, or breached an exclusive licence agreement relating to the work.
Stability AI countered that (a) an "article" could only be a tangible object and (b) an article can only be an "infringing copy" if it actually contains a copy of the copyright work in question.
The analysis of the judge, Mrs Justice Joanna Smith DBE, focused on whether Stable Diffusion was capable of being an "article" for such purposes, and whether it was an "infringing copy".
Meaning of "article"
There is no statutory definition of "article" under the CDPA. The claimants argued that section 22 was sufficiently broad in its wording to encompass an article whose creation involved copyright infringement.
The judge held that an "article" was not intended to be limited to a tangible object, as Parliament's desired scope for the concept was broad, and so was "capable of being an electronic copy stored in intangible form". Besides, in the judge's view, the CDPA must be interpreted in accordance with the "always speaking" principle,[2] taking account of changes that have occurred since the statute was enacted, including technological developments that the legislators might not have foreseen. So, given that intangible means of storage such as cloud storage, which are now commonplace, had not been invented when the CDPA was drafted, the failure to refer to such intangible means directly in the legislation was unlikely to indicate a deliberate policy decision to exclude them. Further, section 17 (which defines "copying") expressly provides for copying by storage "in any medium by electronic means".
Analysis of "infringing copy"
To train Stable Diffusion, images needed to be downloaded onto cloud servers and copies temporarily made in the memory of computer graphics processing units during the training process. Stability AI accepted that "at least some" copyright works and images from the claimants' websites were used in that training.
Stability AI argued that an infringing copy under section 27 must be a reproduction of a copyright work. The claimants, on the other hand, maintained that section 27 requires the making of the infringing copy to constitute infringement, and they pointed to section 17(6) of the CDPA as providing that "copying in relation to any description of work includes the making of copies which are transient or are incidental to some other use of the work". The claimants sought to rely on Sony v Ball, in which it was held that an article becomes infringing “because of the manner in which it is made”,[3] since retention of a copy is not required.
For the judge, the critical issue was whether an AI model that derives or results from a training process involving the exposure of model weights to infringing copies is itself an infringing copy. The judge accepted Stability AI’s submission that an infringing copy must be a copy, and held that the model weights, although altered by exposure to copyright works, had "never contained or stored an infringing copy". The judge contrasted this scenario with the position in Sony v Ball, in which a RAM chip briefly contained a copy of a copyright work and so was an infringing copy, but only for the short time that it contained the copy.
In the judge's view, the wording of section 17 precluded the potential for an article that has never contained a copy from being an infringing copy. Further, sections 27(2) and (3) were "not concerned with a process which (while it may involve acts of infringement) ultimately produces an article which is not itself an infringing copy". Rather, the central question was whether the article itself was a copy.
On the facts, the model weights were merely the product of the patterns learned during training and, on that basis, the judge found that the claimants' argument that the AI model became an infringing copy as soon as it was made was "entirely misconceived". Accordingly, the claimants' claims of secondary infringement failed.
Strikingly, the judge noted that the fact that the development of Stable Diffusion involved the unauthorised reproduction of copyright works – through storing the images locally and in cloud computing resources, and then exposing the model weights to those images – was "of no relevance" to the question of whether the article itself was an infringing copy.
The judge also made an interesting lesser finding that gaining access to a model via a remote hosted service that is provided from ex-UK servers does not involve importation or a transfer of a copy of the model to the UK. In that scenario (and by contrast with downloadable models), no copy of the model is ever provided to the user, and all inference and output synthesis takes place outside the UK. But the nature of access was irrelevant in this case, given the judge's finding that the model contained no relevant infringing copies in any event.
Trade marks
Despite abandoning the output claim, the claimants pleaded that some synthetic images produced by Stable Diffusion generated watermarks that were identical or similar to the trade marks of Getty Images and iStockPhoto, amounting to trade mark infringement under section 10 of the Trade Marks Act 1994, as well as passing-off.
Stability AI accepted that it might be "possible" to "push" Stable Diffusion into generating synthetic images featuring watermarks, but argued that the claimants' experiments using different prompts amounted to a "wilful contrivance". It acknowledged that the appearance of watermarks in an earlier version of the model was "non-trivial", but maintained that real-world users would not use the prompts entered by the claimants in their tests, because such users would try to avoid generating images bearing watermarks.
The claimants were required to establish, as a threshold question, that at least one user in the UK would have encountered a watermark. While the court made such findings in relation to earlier iterations of the model, the judge described those findings as "historic and extremely limited in scope". The claimants did not plead a case based on the likely number of watermarks – or, more importantly, infringing watermarks – that may have been generated by users in the UK. So, the claimants succeeded in their claims under sections 10(1) and 10(2), but only partially, i.e. solely for a limited number of iterations and access routes in respect of which UK instances had been shown.
The judge rejected the claimants' claims under section 10(3), on the basis that the use of the signs did not involve (a) detriment to distinctive character or reputation or (b) unfair advantage. In particular, the claimants could not produce any evidence of an actual or likely change in economic behaviour in relation to detriment to distinctive character or reputation. The passing-off claim was not considered, given that the claimants had submitted that it would "stand or fall" with the trade mark claim, and as far as the judge was concerned added nothing to her findings in that regard.
Comment
What was initially seen as a test case between rights-holders and AI firms ended in a narrower, technical and intricate judgment that examines the complex nature of AI development in great detail, but ultimately delivers a rather literal reading of secondary-infringement rules drafted in the era of physical piracy, a world apart from offshore cloud clusters and machine learning.
The outcome is, on the face of it, unsatisfactory for creators and rights-holders, who had hoped to see their central grievances against AI firms aired in court. But nothing in the ruling appears to suggest that AI developers would have a defence to unauthorised copying itself, or to substantial copying resulting from the use of a generative AI tool. So, we await a more comprehensive test case, including, in particular, the outcome of the parallel case between the parties that is continuing in the US, in which Getty's more substantive claims remain to be adjudicated.[4]
In terms of secondary copyright infringement, it seems counter-intuitive that an article built on an infringing basis does not amount to an infringing article. That finding turned on quite specific facts, which might be distinguishable in other cases. Further, while the High Court's reasoning on the central legal principle is understandable in light of the particular legislative provisions before the court, the defendant's escape from liability on a logical technicality raises the question of whether policy-makers might ultimately seek to update the secondary-infringement rules.
Meanwhile, AI firms will need broader certainty that their business models rest on a firm legal foundation, rather than relying on technicalities on a case-by-case basis. Besides, internationally available GenAI tools will need to satisfy copyright rules in all relevant jurisdictions, and a partial success in a UK case is no guarantee of wider lawfulness.
At the time of writing, the UK government is still preparing its response to the AI and copyright consultation,[5] which is expected to address concerns around how creators and rights-holders can, in practice, opt out of having their copyright works used to train AI models. More generally, while some fundamental questions remain unanswered by this case, one thing seems likely: in a world of rapid change driven by innovative and disruptive technology, it may not always be possible for the UK courts to rely on pre-AI law to resolve all of the novel issues posed by unlicensed copying in a globalised AI context. Ultimately, striking a balance between the interests of AI firms and the creative industries may require some form of legislative intervention and/or self-regulatory licensing schemes.
Article written for European Intellectual Property Review.




