Insight
The landscape of artificial intelligence (AI) is rapidly evolving, presenting both opportunities and challenges, particularly in the realm of data protection and privacy.
On 12 April 2024, the Information Commissioner’s Office (ICO) took a proactive stance in addressing these challenges by launching a call for evidence on the application of the accuracy principle of the UK General Data Protection Regulation (UK GDPR) to generative AI models. This latest consultation focuses on the link between the specific purpose for which a generative AI model will be used and the level of accuracy that purpose demands, and warns of the potential consequences of inaccurate training data producing inaccurate outputs.
Purpose-driven accuracy:
The need for accuracy in generative AI outputs depends on the specific purpose of the application. Models used for decision-making or for providing factual information require higher precision, as the information they produce will be relied on. This differs from a model developed for a purely creative purpose, where accuracy of the outputs is not the priority. For instance, the consultation contrasts models employed in triaging customer queries, which must uphold a higher level of accuracy, with those used to generate ideas for video game storylines.
Training data impact:
The accuracy of generative AI outputs is influenced by the quality of training data. Developers must curate training data carefully, ensuring it reflects the intended purpose and complies with data protection principles. Transparent communication of accuracy limitations to deployers and end-users is essential to mitigate risks associated with inaccurate outputs.
Transparency & information rights:
Developers and deployers must clearly inform users about the statistical accuracy and intended use of generative AI applications. Monitoring user interactions enhances both transparency and accountability, in line with individuals’ rights to understand AI-driven decisions.
Clear communication among developers, deployers, and end users is paramount to ensure the model’s final application aligns appropriately with its level of accuracy.
The ICO emphasises that the use of inaccurate training data can lead to erroneous outputs, thereby breaching the accuracy principle. Consequences of such inaccuracies extend beyond data integrity issues, potentially causing damage, distress, and reputational harm to individuals and organisations alike. Non-compliance may also result in enforcement action from the ICO and liability for compensation payable to affected individuals.
Please note that this information is for general guidance only and should not substitute professional legal advice. If you have specific concerns, we recommend consulting one of our legal experts.
Contact Us
If you want to develop or use AI in your business and would like to discuss the content of this article or any other concerns you may have, book a 30-minute FREE consultation or fill in the form below requesting a call back from Haroon Younis, Partner & Head of Commercial.