
Understanding the ICO consultation on generative AI

The landscape of artificial intelligence (AI) is rapidly evolving, presenting both opportunities and challenges, particularly in the realm of data protection and privacy.

On 12 April 2024, the Information Commissioner’s Office (ICO) took a proactive stance in addressing these challenges by launching a call for evidence on how the accuracy principle of the UK General Data Protection Regulation (UK GDPR) applies to generative AI models. This latest consultation focuses on the link between the specific purpose for which a generative AI model will be used and the degree of accuracy required, and warns that inaccurate training data can lead to inaccurate outputs.

Key considerations

Purpose-driven accuracy:

The need for accuracy in generative AI outputs depends on the specific purpose of the application. Models used for decision-making or for providing factual information require higher precision, because their outputs will be relied upon. By contrast, where a generative AI model is developed for a purely creative purpose, the accuracy of its outputs is not a priority. For instance, the consultation contrasts models used to triage customer queries, which must uphold a higher level of accuracy, with those used to generate ideas for video game storylines.

Training data impact:

The accuracy of generative AI outputs is influenced by the quality of training data. Developers must curate training data carefully, ensuring it reflects the intended purpose and complies with data protection principles. Transparent communication of accuracy limitations to deployers and end-users is essential to mitigate risks associated with inaccurate outputs.

Transparency & information rights:

Developers and deployers must clearly inform users about the statistical accuracy and intended use of generative AI applications. Monitoring user interactions enhances both transparency and accountability, in line with individuals’ rights to understand AI-driven decisions.

Clear communication among developers, deployers, and end users is paramount to ensure the model’s final application aligns appropriately with its level of accuracy.

Why is it important?

The ICO emphasises that the use of inaccurate training data can lead to erroneous outputs, thereby breaching the accuracy principle. Consequences of such inaccuracies extend beyond data integrity issues, potentially causing damage, distress, and reputational harm to individuals and organisations alike. Non-compliance may also result in enforcement action from the ICO and liability for compensation payable to affected individuals.

Practical considerations for developers & deployers


For developers:

  • Consider whether the statistical accuracy of the generative AI model’s output is sufficient for its intended purpose, particularly where that purpose demands precision.
  • Conduct a comprehensive assessment of training data to ensure accuracy, relevance, and compliance with data protection principles. Document the impact of training data on model outputs to facilitate informed decision-making.
  • Communicate accuracy limitations and clear expectations to deployers and end users transparently; this is essential to mitigate the risks associated with inaccurate outputs.
  • Monitor user interactions and feedback to identify areas for improvement and ensure compliance with accuracy requirements.

For deployers:

  • Anticipate and address the potential impact of inaccurate training data and outputs on individuals before deployment, for example by limiting user queries.
  • Ensure transparent communication regarding the statistical accuracy and intended usage of the application.
  • Continuously monitor the application’s usage to enhance public information and, if needed, refine restrictions on its usage.

Please note that this information is for general guidance only and should not substitute professional legal advice. If you have specific concerns, we recommend consulting one of our legal experts.


