Artificial intelligence (AI) is no longer a futuristic concept—it’s here, and it’s transforming industries at an unprecedented pace.

From automating routine tasks to enabling predictive analytics, AI offers immense potential for businesses to innovate and grow. However, the rapid adoption of AI has introduced a range of legal, regulatory, and ethical challenges that businesses must navigate carefully.

A 2024 McKinsey report found that 72% of organisations have adopted AI in at least one business function, up from approximately 50% in previous years. Yet McKinsey also reports that confidence in organisations' ability to mitigate AI-related risks lags well behind adoption. This gap highlights the urgent need for businesses to establish robust AI usage policies that address not only internal practices but also the broader supply chain.

Here we delve into the critical considerations for businesses developing an AI usage policy, offering strategic insight rather than a step-by-step guide. By understanding the complexities and nuances involved, businesses can position themselves for success in an increasingly AI-driven economy.

The AI landscape: Opportunities and challenges

AI is reshaping industries, from healthcare and finance to retail and manufacturing. According to PwC, AI could contribute $15.7 trillion to the global economy by 2030, making it one of the most significant technological advancements of our time. However, this growth is accompanied by significant risks, such as:

  • Data protection concerns: AI systems often rely on vast amounts of personal data, raising compliance issues under regulations like GDPR and UK GDPR. A 2022 survey by Cisco found that 74% of organisations have delayed AI deployments due to data privacy concerns.
  • Intellectual property (IP) risks: AI-generated content and outputs can blur the lines of IP ownership. For example, who owns the output of a generative AI tool trained on copyrighted material, particularly if the training or output generation occurs outside of the UK? Additionally, under applicable laws and contractual terms, do you retain exclusive ownership and authority over all materials inputted into AI systems to generate outputs? These questions remain a legal grey area in many jurisdictions.
  • Ethical and reputational risks: AI systems can inadvertently perpetuate bias or discrimination, causing reputational damage. A 2021 study by Stanford University found that 68% of consumers are concerned about AI being used unethically.
  • Supply chain vulnerabilities: AI adoption extends beyond internal operations to subcontractors, sub-processors, and suppliers. A breach or non-compliance in the supply chain (or uncertainty in ownership of, and intellectual property rights or deliverables from the supply chain) can have cascading effects, exposing businesses to significant liability.

Key strategic considerations for an AI usage policy

Adopting AI tools in business operations offers immense potential but also introduces significant risks. To mitigate these risks, businesses must develop a comprehensive AI usage policy that considers key factors such as industry, size, AI use cases, supply chain, and risk appetite. While policies will vary, several universal considerations must be addressed, including access control, approval processes, supply chain due diligence, commercial contracts, and monitoring. Below, we expand on these areas, explaining their importance and providing additional relevant considerations.

1. Controlling access to AI tools

Why it matters

Not all AI tools are created equal. Some may pose significant risks related to data protection, intellectual property (IP), completeness, or accuracy, while others may not align with the business’s ethical standards. Unauthorised or unvetted AI tools can lead to issues like data breaches, IP infringement, reputational damage, regulatory action, claims and financial loss.

Key actions:

  • Centralised approval process: Establish a formal process for approving AI tools, ensuring they meet legal, ethical, and technical standards before use.
  • Technological safeguards: Use firewalls, software management tools, or access controls restricting the use of unauthorised AI tools.
  • Regular audits: Conduct periodic reviews of AI tool usage to ensure compliance with the policy and identify any unauthorised tools.

Additional considerations:

  • Shadow IT risks: Employees may use unapproved AI tools to streamline tasks, bypassing the approval process. Educate staff on the risks of using unauthorised tools.
  • Tool diversity: Different departments may require different AI tools. Ensure the approval process is flexible enough to accommodate diverse needs while maintaining control.

2. Approval process for AI software

Why it matters

Before allowing the use of any AI tool, businesses must conduct thorough due diligence to ensure compliance with laws, mitigate risks, and align with ethical standards. This may involve a team effort, with members drawn from senior management, IT, HR, compliance, and operations.

Key actions:

  • Data protection compliance: Evaluate whether the tool complies with GDPR, UK GDPR, or other applicable data protection laws.
  • IP implications: Assess licensing terms and ownership of AI-generated outputs to avoid IP disputes.
  • Accuracy and completeness: Review the tool’s potential for producing incomplete or incorrect outputs, which could harm decision-making or customer trust.
  • Skill erosion: Assess whether excessive reliance on AI-generated outputs could diminish employees’ expertise and standards over time.
  • HR policies: Consider how the policy applies to, or interacts with, other HR policies.
  • SOPs: Consider how the policy applies to, or interacts with, standard operating procedures, the requirements of accreditation providers, and contractual commitments to customers or others.
  • Ethical implications: Assess the tool for potential biases, discriminatory outcomes, or other ethical concerns.

Additional considerations:

  • Vendor reputation: Research the AI tool provider’s reputation and track record for addressing issues like bias or data breaches.
  • Scalability: Ensure the tool can scale with the business’s growth and evolving needs.

3. Supply chain due diligence

Why it matters

AI usage extends beyond internal staff to subcontractors, sub-processors, and other supply chain partners. A breach or non-compliance in the supply chain can expose the business to significant liability.

Key actions:

  • Due diligence: Conduct thorough assessments of suppliers’ AI practices, particularly in cross-border supply chains where regulatory requirements differ.
  • Alignment with standards: Ensure suppliers adhere to the same standards for data protection, IP compliance, risk mitigation, and ethical AI use that apply within the business.
  • Gap analysis with customer contracts: Ensure, so far as commercially possible, that the AI-related requirements your customers impose on you are no more stringent (or riskier for you) than those you impose on your suppliers.
  • Monitoring: Regularly monitor suppliers’ use of AI tools to identify and mitigate potential risks.

Additional considerations:

  • Contractual obligations: Include AI-related clauses in supplier contracts, such as data protection requirements and audit rights.
  • Transparency: Encourage suppliers to be transparent about their AI practices and any incidents that may affect the business.

4. Commercial contract considerations

Why it matters

AI-related issues must be addressed in both upstream (supply chain) and downstream (customer) contracts. Businesses should ensure, where commercially feasible, that supplier requirements align with customer obligations. This includes commitments, standards, truth statements, IP ownership, usage rights, and liability limitations.

Key actions:

  • Data protection: Require suppliers to comply with applicable data protection laws and prohibit using personal data in AI systems without proper safeguards.
  • IP ownership: Clarify ownership and usage rights of AI-generated outputs to avoid disputes.
  • Liability and indemnities: Allocate liability for AI-related incidents, such as data breaches or IP disputes, and include indemnities for breaching AI-related obligations.
  • Ethical standards: Require suppliers to adhere to the business's ethical AI guidelines, including transparency and non-discrimination.

Additional considerations:

  • Cross-border issues: Address jurisdictional differences in data protection and IP laws when dealing with international suppliers or customers.
  • Future-proofing: Ensure contracts are flexible enough to accommodate evolving AI technologies and regulatory requirements.

5. Content for downstream customer contracts

Why it matters

In addition to allocating risk and responsibility and aligning supplier requirements and customer requirements, businesses must manage expectations and mitigate risks related to AI performance and data usage when providing AI-driven products or services to customers.

Key actions:

  • Transparency: Clearly explain how AI is used in the product or service and any associated risks.
  • Data usage: Specify how customer data will be used in AI systems and obtain necessary consent.
  • Performance guarantees: Manage customer expectations regarding the accuracy and reliability of AI-driven outputs.
  • Dispute resolution: Establish mechanisms for resolving disputes related to AI performance or outcomes.

Additional considerations:

  • Customer education: Provide customers with guidance on how to use AI-driven products or services responsibly.
  • Limitation of liability: Include clauses that limit the business’s liability for AI-related issues, such as incorrect outputs or data breaches.

6. Monitoring and enforcement

Why it matters

An AI usage policy is only effective if it is consistently enforced. Without monitoring and enforcement, businesses risk non-compliance and reputational damage.

Key actions:

  • Monitoring mechanisms: Implement tools and processes to detect unauthorised use of AI tools or non-compliant practices.
  • Consequences for violations: Establish consequences for policy violations, including disciplinary action for staff and termination of contracts for suppliers.
  • Regular reviews: Regularly review and update the policy to reflect evolving regulatory requirements and technological advancements.

Additional considerations:

  • Whistleblower protections: Encourage employees to report potential issues or breaches without fear of retaliation.
  • Third-party audits: Consider engaging independent auditors to assess compliance with the AI usage policy.

Some practical calls to action

While other action points may be relevant to your organisation and context, which we can explore with you on a confidential basis, we recommend the following as a starting point:

  1. Assess your AI landscape: Conduct a comprehensive review of how AI is used within your business and supply chain. Identify potential risks and areas for improvement.
  2. Engage legal and compliance experts: Work with professionals specialising in AI, data protection, IP, and supply chain law to develop a tailored AI usage strategy.
  3. Strengthen supplier relationships: Collaborate with supply chain partners to ensure alignment on AI-related standards and expectations.
  4. Educate your workforce: Provide staff training on the importance of responsible AI use and the specific requirements of your AI usage policy.
  5. Review and update contracts: Ensure that upstream and downstream contracts address AI-related issues and reflect your business’s risk appetite.
  6. Monitor regulatory developments: Stay informed about evolving AI regulations and industry best practices to ensure ongoing compliance.

Conclusion

Creating an AI usage policy is essential for businesses looking to harness AI’s potential while managing risks. By focusing on key areas such as access control, approval processes, supply chain due diligence, and contractual safeguards, businesses can build a solid framework for responsible AI adoption.

Now is the time to act: engage legal and compliance experts, strengthen supply chain oversight, and ensure your contracts align with evolving regulations. To support businesses in this process, we offer fixed-fee guidance on developing AI usage policies, with packages starting from £450 + VAT. Taking these steps now will help position your business as a leader in ethical and compliant AI use, ready to succeed in an AI-driven world.

Take the next step today—complete the form or book a free 30-minute consultation with one of our expert team to discuss your AI usage policy.
