Generative AI in Video Games – A Series

Welcome to Lee & Thompson’s three-part series on Generative Artificial Intelligence in Video Game Development – Risks, Opportunities and Legal Challenges

Written by Julian Ward, Andy Florence and Josh Colby

Introduction


The past 12 months have seen a massive increase in the use of artificial intelligence (“AI”) in the video games sector. Developers and publishers alike have been keen to embrace this technology in a variety of scenarios: be it the creation of in-game art or 3D assets; increasing the speed and reducing the cost of development; creating more immersive and realistic interactions with characters; creating adaptive storylines; or helping businesses moderate in-game and online interactions. The potential uses of AI in the games sector appear endless.

However, as with the adoption of any new technology, it is critical for publishers, developers, and related organisations to recognise the inherent risks associated with AI.

The use of AI in video game development raises a number of legal and commercial risks and considerations. These range from the initial stages of training AI right through to the creation of computer-generated works and their subsequent exploitation.

This guidance note is split into three parts, in which we will highlight the current legal issues being grappled with and outline the differing approaches being taken across various territories.

Over the course of the series we will cover:


Generative AI refers to artificial intelligence systems that can create a wide variety of data based on user prompts, including text, images, and audio-visual works (e.g., deepfakes).


These systems are created through a multi-step process that involves data collection and training. Initially, a large dataset representing the desired output is collected, such as images, text, or music. For example, if you wanted to train an AI to generate pictures of faces, the dataset would be a collection of photographs of people’s faces. This dataset serves as the training data for the generative AI model. While there are a number of ways to train an AI, the main point to note is that humans are not involved in “teaching” the AI what is contained in the dataset. Instead, the AI itself accesses the contents of the dataset to analyse the information and build an understanding of the relationships and links in that data. Following this initial training, the AI will then undergo an iterative process of validation and tuning where the output of the AI is tested, and feedback provided so that it can gradually improve its performance. Again, this is not a human-led process, but an automated one. Over time the output from the AI will improve to the point that it is hard for humans to distinguish real data from synthetic data that was created by the AI.

One example of this training process is a Generative Adversarial Network (GAN), which consists of two main components: a generator and a discriminator. The generator is trained to create synthetic samples based on the training data. Carrying on from the example of an AI used to generate images of human faces, after analysing the training set data the generator will try to generate a series of pictures of individual faces. Initially these faces will not be particularly high quality and will be fairly easy to distinguish from the original images in the dataset. At the same time, the discriminator is trained to distinguish between real data and the synthetic data created by the AI. In our example, the discriminator would be shown a combination of synthetic images created by the generator alongside some original images from the dataset. The role of the discriminator is then to identify which images are synthetic and which are real.

The training process is an adversarial feedback loop: the generator tries to fool the discriminator, and the discriminator improves its ability to differentiate real from synthetic samples. Through iterative training, both components of the model gradually improve their performance, so the generator is able to create more realistic output, while the discriminator improves its ability to distinguish the synthetic data from the real data. Once the training is complete, the generator can be used to generate new content that will appear very realistic and hard for humans to distinguish from the original content.
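The adversarial loop described above can be sketched in code. The following is a deliberately minimal, illustrative toy – a one-dimensional GAN trained with plain NumPy on samples from a Gaussian “real” dataset, rather than on images of faces – and every name and parameter in it is our own assumption for illustration, not drawn from any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps random noise z to a synthetic sample x = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c),
# outputting the probability that x is a real sample.
w, c = 0.1, 0.0

lr, batch = 0.02, 64
for step in range(3000):
    real = rng.normal(4.0, 1.25, batch)   # samples from the "real" dataset
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                      # the generator's synthetic samples

    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    gc = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * gw
    c += lr * gc

    # Generator update: push D(fake) towards 1, i.e. try to fool
    # the discriminator (non-saturating log D(fake) objective).
    d_fake = sigmoid(w * fake + c)
    upstream = (1 - d_fake) * w
    a += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print(f"real mean ~ 4.0, generated mean = {np.mean(fake):.2f}")
```

Even in this toy setting the two components improve together: early on the discriminator easily separates the two distributions, and as training proceeds the generator's samples should drift towards the real data's mean, making them harder to tell apart.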

Although this technology is not new, advancements in machine learning algorithms have caused generative AI tools such as OpenAI’s ChatGPT and DALL-E, Midjourney, and Google’s Bard/Gemini to take the spotlight throughout 2023 and into 2024. The growing wave of generative AI is dominating the daily headlines – we have already seen some fantastic examples beyond just the creation of static assets for games, with studios starting to integrate AI within the game to create more realistic responses from in-game characters and even using AI to help create the narrative of the story.


In order for copyright to subsist in a work in the UK, the work must fall into one of the categories of ‘work’ in the UK’s Copyright, Designs and Patents Act 1988 (“CDPA”).[1]

These categories include:

  • Original literary, dramatic and musical works (s.1) which are recorded in some way, “in writing or otherwise” (s.3(2));
  • Original artistic works (s.1);
  • Sound recordings, films and broadcasts (s.1); and
  • Typographical arrangement of published editions (s.1).

In the context of video game development, the most relevant categories of work include literary, musical, and artistic works, all of which must be original to be protected as a copyright work. A work will be ‘original’ if it has been created through the author’s skill, judgement, and individual effort and is not copied from other works.[2]

Similarly, under EU law, originality is a requirement for copyright protection and is determined by assessing whether the work is “the author’s own intellectual creation”.[3]

Currently, it is unclear whether AI-generated works meet the originality requirement in the UK and EU, due to the limited creative input by the human user. Work arising out of a simple selection of prompts for the AI tool to generate the work is unlikely to amount to the author’s skill, judgement or effort or the author’s “intellectual creation”. This is a factual assessment which will vary on a case-by-case basis.

Therefore, whether AI-generated work will be protected by copyright will depend on the level of input by the human author using the AI tool. As a rule of thumb, the more involvement and choices made by the human user, the higher the likelihood of establishing originality and copyright protection arising. However, generative AI and its impact on a work’s originality has not yet been substantially tested by the courts in these territories and should therefore be approached with caution.

[1] Copyright, Designs and Patents Act 1988

[2] Ascot Jockey Club Ltd v Simons [1968] 64 WWR 411

[3] Infopaq International A/S v Danske Dagblades Forening (Case C‑5/08)


By comparison, the US copyright law regime, while also requiring that copyright works be original, focuses more on the element of “human creativity”[1]. Human input is essential before copyright protection is afforded to a work, as demonstrated by the US courts’ consistent refusal of copyright protection for works created by mechanical processes and animals, among other things[2].

In March 2023, the US Copyright Office issued guidance on “Works Containing Material Generated by Artificial Intelligence”[3] and confirmed that works entirely produced by AI are not protected by copyright where the human user’s input only amounts to providing prompts or commands to create the end-result. The US Copyright Office states that the starting question is whether the work is “basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” For AI-generated works, the question is whether the contributions made by the AI tool are a “mechanical reproduction” or instead the author’s “own original mental conception to which [the author] gave visible form.”

Put plainly, the US Copyright Office likens the use of prompts to providing instructions to a commissioned artist: the commissioner, however, has not satisfied the requirements for copyright protection because human authorship is absent. The AI tool makes the creative choices when complying with the chosen prompts, and these are the creative choices which, if made by a commissioned artist, would give rise to copyright protection. Whereas a commissioned artist would normally assign their rights in the copyright work to the commissioner, the AI tool cannot own the copyright work as it is not a human author, and therefore there is no copyright in existence to assign.

However, the US Copyright Office does provide practical examples of when computer-generated work may be protected by copyright. In each example, the human user makes a greater creative contribution compared with the simple selection of prompts. For example, the human user can exercise their free and creative choices to select and arrange AI-generated material in such a way that “the resulting work as a whole constitutes an original work of authorship.” Similarly, the human author can choose to make amendments and/or additions to the AI-generated work. In both examples, copyright protection will only be afforded to the human-created elements of the work – e.g. the arrangement, amendments and additions.

In order for copyright to be enforceable in the US, authors are required to register their work with the US Copyright Office; an obligation not required under the UK or EU copyright systems. This additional step has meant that there have already been a number of tests of the US system and of the application of the guidance to works created by an AI. Interestingly, the Copyright Office has taken a very strict interpretation of the need for human involvement, and all applications so far to protect an AI-generated work have been rejected. This includes works that involved a huge number of human prompts and refinements in their creation.[4] These cases provide an interesting contrast to the approach taken in other jurisdictions, including the approach in China, outlined below.

[1] Refusal to Register A Recent Entrance to Paradise, where the application listed the AI as the author of the work, and Refusal to Register SURYAST, which involved an AI tool used to alter a photograph taken by the applicant but the creative decisions specific to the alterations were held to be taken by the AI.

[2] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018) held that animals lack statutory standing under the US Copyright Act and cannot therefore own the copyright in a selfie.

[3] United States Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (March 2023)

[4] Refusal to Register Théâtre D’Opéra Spatial, which involved the use of 624 revisions and text prompts to arrive at the final image.


China has also recently addressed the issue of copyright subsistence, in the Beijing Internet Court case of Li v Liu. In this case, Mr Li alleged that Ms Liu had infringed his copyright by reusing, on her blog, an image he had created using AI.

The image in question was originally created by Mr Li using the Stable Diffusion AI model. In order to create the image, Mr Li entered around 30 prompts into the AI, along with around 120 “negative prompts” to refine the image and remove elements that he did not want present in the final output. For example, some of the negative prompts instructed the AI to avoid errors in the images it had created, such as “missing fingers” and “out of focus”.

In determining whether or not the image in question should be protected by copyright, the court applied a four-step test:

  • Is the work within the scope of literature, art and science?
  • Is the work original?
  • Does the work contain a certain form of expression?
  • Is the work the result of intellectual achievement?

In analysing whether the work was the “result of intellectual achievement”, the court took into account a number of specific choices made by Mr Li, including his selection of an AI tool that would create an image in the style he wanted. It also looked at the creative choices made by Mr Li in refining the image and applying certain parameters to achieve the desired output.

Ultimately the court held that the image in question reflected Mr Li’s intellectual investment and that it should be protected by copyright.


In the context of developing video game assets, whether or not an AI asset will be protected by copyright will be a primary concern and should be determined at the outset. Without copyright protection, developers will need to consider whether it is worthwhile investing in AI-generated assets which they will not be able to protect against copycats.

One approach would be avoiding the use of AI tools to generate core assets (e.g., main characters) which the developer will likely wish to protect from being copied. AI tools can still be used for less important assets, similar to Ubisoft’s utilisation of Ghostwriter to create “barks” for NPCs or High on Life’s background art, which will be of less importance to the developer.

Another approach is to ensure all AI-generated works include sufficient human input to give rise to copyright protection. This could be achieved by a mixture of employee training and an internal policy/handbook requiring certain steps (human input) before AI-generated material can be incorporated into a game. With the exception of the approach in the United States, it is likely that the courts will apply a quantitative test to assess whether the human involvement has been sufficient for the asset itself to be regarded as the author’s intellectual creation. Looking at the approach in the Li v Liu case, it is clear that the detailed documentation of the steps taken by Mr Li was an important part of this assessment, so studios would be advised to keep sufficiently detailed notes of the prompts used when creating assets, to give the best argument that the AI was merely a tool for the author’s creation.

It will be interesting to see the case law in this area evolve in determining how much involvement is sufficient to qualify for copyright protection. For example, when looking at the image created by Mr Li, the court considered previous iterations of the image, but would each of these qualify for copyright protection? And if not, at what point were Mr Li’s involvement and refinement of the prompts sufficient for the image to qualify?


Find out more

For more information about our experience and work, visit our dedicated Video Games and Digital & Tech pages.