Understanding the ICO consultation on generative AI

The landscape of artificial intelligence (AI) is rapidly evolving, presenting both opportunities and challenges, particularly in the realm of data protection and privacy.

On 12 April 2024, the Information Commissioner’s Office (ICO) took a proactive stance in addressing these challenges by launching a call for evidence on the application of the accuracy principle of the UK General Data Protection Regulation (UK GDPR) to generative AI models. This most recent consultation on generative AI focuses on the link between the specific purpose for which a generative AI model will be used and the degree of accuracy that purpose demands, and warns that inaccurate training data can lead to inaccurate outputs.

Key considerations

Purpose-driven accuracy:

The need for accuracy in generative AI outputs depends on the specific purpose of the application. Models used for decision-making or providing factual information require higher precision because the information they produce is being relied on. This differs from a model developed for a purely creative purpose, where factual accuracy of the outputs is not the priority. For instance, the consultation contrasts models employed to triage customer queries, which would need to uphold a higher level of accuracy, with those used to generate ideas for video game storylines.

Training data impact:

The accuracy of generative AI outputs is influenced by the quality of training data. Developers must curate training data carefully, ensuring it reflects the intended purpose and complies with data protection principles. Transparent communication of accuracy limitations to deployers and end-users is essential to mitigate risks associated with inaccurate outputs.

Transparency & information rights:

Developers and deployers must clearly inform users about the statistical accuracy and intended use of generative AI applications. Monitoring user interactions enhances both transparency and accountability, in line with individuals’ rights to understand AI-driven decisions.

Clear communication among developers, deployers, and end users is paramount to ensure the model’s final application aligns appropriately with its level of accuracy.

Why is it important?

The ICO emphasises that the use of inaccurate training data can lead to erroneous outputs, thereby breaching the accuracy principle. Consequences of such inaccuracies extend beyond data integrity issues, potentially causing damage, distress, and reputational harm to individuals and organisations alike. Non-compliance may also result in enforcement action from the ICO and liability for compensation payable to affected individuals.

Practical considerations for developers & deployers

Developers:

  • Consider whether the statistical accuracy of the generative AI model’s output is sufficient for its intended purpose, particularly where that purpose demands precision.
  • Conduct a comprehensive assessment of training data to ensure accuracy, relevance, and compliance with data protection principles. Document the impact of training data on model outputs to facilitate informed decision-making.
  • Ensure transparent communication of accuracy limitations and clear expectations to deployers and end-users, as this is essential to mitigate risks associated with inaccurate outputs.
  • Monitor user interactions and feedback to identify areas for improvement and ensure compliance with accuracy requirements.

Deployers:

  • Anticipate and address the potential impact of inaccurate training data and outputs on individuals before deployment, such as by limiting user queries.
  • Ensure transparent communication regarding the statistical accuracy and intended usage of the application.
  • Continuously monitor the application’s usage to enhance public information and, if needed, refine restrictions on its usage.

Please note that this information is for general guidance only and should not substitute professional legal advice. If you have specific concerns, we recommend consulting one of our legal experts.

If you want to develop or use AI for your business and would like to discuss the content of this article or any other concerns you may have, book a 30-minute FREE consultation or fill in the form below requesting a call back from Haroon Younis, Partner & Head of Commercial.
