With the rise of AI technologies, particularly generative models capable of autonomously creating content, concerns have surfaced regarding the accuracy and reliability of the data they generate.
Commercial & Data Protection | 18 April 2024
Insight
The landscape of artificial intelligence (AI) is rapidly evolving, presenting both opportunities and challenges, particularly in the realm of data protection and privacy.
On 12 April 2024, the Information Commissioner’s Office (ICO) took a proactive stance in addressing these challenges by launching a call for evidence on the application of the accuracy principle of the UK General Data Protection Regulation (UK GDPR) to generative AI models. This latest consultation on generative AI focuses on the link between the specific purpose for which a generative AI model will be used and the level of accuracy that purpose requires, and warns that inaccurate training data can lead to inaccurate outputs.
Purpose-driven accuracy:
The need for accuracy in generative AI outputs depends on the specific purpose of the application. Models used for decision-making or for providing factual information require higher precision because their outputs will be relied upon. By contrast, where a generative AI model is developed for a purely creative purpose, accuracy of the outputs is not the priority. For instance, the consultation highlights the difference between models used to triage customer queries, which must uphold a higher level of accuracy, and those used to generate ideas for video game storylines.
Training data impact:
The accuracy of generative AI outputs is influenced by the quality of training data. Developers must curate training data carefully, ensuring it reflects the intended purpose and complies with data protection principles. Transparent communication of accuracy limitations to deployers and end-users is essential to mitigate risks associated with inaccurate outputs.
Transparency & information rights:
Developers and deployers must clearly inform users about the statistical accuracy and intended use of generative AI applications. Monitoring how users interact with the model enhances both transparency and accountability, in line with individuals’ rights to understand AI-driven decisions.
Clear communication among developers, deployers, and end users is paramount to ensure the model’s final application aligns appropriately with its level of accuracy.
The ICO emphasises that the use of inaccurate training data can lead to erroneous outputs, thereby breaching the accuracy principle. Consequences of such inaccuracies extend beyond data integrity issues, potentially causing damage, distress, and reputational harm to individuals and organisations alike. Non-compliance may also result in enforcement action from the ICO and liability for compensation payable to affected individuals.
Please note that this information is for general guidance only and should not substitute professional legal advice. If you have specific concerns, we recommend consulting one of our legal experts.
Contact Us
If you want to develop or use AI in your business and would like to discuss the content of this article, or any other concerns you may have, book a 30-minute FREE consultation or fill in the form below to request a call back from Haroon Younis, Partner & Head of Commercial.