In-house teams need to set the standard for ethical AI use

AI adoption is accelerating, and with it, the pressure on in-house legal teams to keep things safe, compliant, and ethical. But in reality, legal’s role in AI implementation can (and should) go much further than just red flags and disclaimers. Legal teams are uniquely positioned to set the standard for how AI is used responsibly across the organisation.

We’ve spoken to legal leaders across industries who are doing exactly that: balancing innovation with integrity, and helping their companies avoid reputational missteps while unlocking real value from AI tools.

Here’s what they’ve learned.

Legal is best placed to lead

The rollout of AI across organisations isn't just a job for tech teams; it's also a legal, ethical and reputational issue. And no one else in the business sits at that intersection quite like legal.

Whether it’s deciding what data can be used in an AI tool, reviewing vendor terms, or advising on copyright and IP risks from AI-generated content, legal already owns many of the decisions that will shape how AI works in practice.

As Ty Ogunade, Contracts Manager at GWI, put it:

“The main concern for us is around proper usage of AI tools. Making sure the information we're putting into tools like ChatGPT isn't being used to train it.”

And as Alex Love, Corporate Counsel at Algolia, added:

“The legal team's key role is giving a holistic view of the risks—both input and output. It's not just about what the tool produces, but what you feed into it.”

Start with a practical AI policy

A well-drafted AI policy doesn’t have to be a 30-page document. In fact, it shouldn’t be. Legal teams are increasingly creating concise, practical guidance that answers one key question:
“What can and can't we do with AI?”

That includes:

  • What types of data can be input into AI tools (especially customer or personal data)
  • Which tools are approved (and how new ones should be evaluated)
  • Who owns the output of an AI system
  • Guidance on copyright, plagiarism, and confidential information
  • Mandatory disclosures for AI-assisted content

And just as importantly: who to ask when something’s unclear.

Read our precedents on AI policy

Luis de Freitas, Managing Legal Counsel at Boston Consulting Group (BCG), noted:

“You have to be very clear about which tools are safe and how to use them responsibly—especially when you're rolling out across a global team.”

Collaboration builds trust

Setting AI policy is not a solo act. The most successful in-house teams we've spoken to don't just hand down rules; they co-create them.

That might mean running workshops with stakeholders from product, data, IT, and HR, or building a cross-functional AI taskforce to explore use cases and flag risks early.

As Luis explained:

“Leadership buy-in is really important... but you also need enthusiasts inside the legal team, people who want to test the tools and make the information flow.”

And it’s not just about tech implementation. Cheryl Gale, Head of Legal at Yoto, highlighted the importance of culture:

“Transparency is key, and we really do see that from the top down. Legal isn't just there to say no, we're a core part of every business conversation.”

AI governance is not a one-and-done

The rules are evolving, and your policy should too. AI laws are developing rapidly across the UK, EU, and US, and with them come new obligations around explainability, transparency, and risk classification.

That's why some in-house teams are putting in place lightweight AI governance models: regular check-ins, training sessions, and shared registers of approved tools. It's not bureaucracy; it's future-proofing.

As Laura Dietschy, Commercial Legal Lead at Palantir, said:

“The mistake people make is buying point solutions and slapping them on top of problems. You need to start with your data, your risks, your reality.”

In-house teams as ethical AI champions

AI is rewriting how businesses operate, and legal teams have a real opportunity to shape that transformation for the better.

By drafting clear, actionable policies, working cross-functionally, and staying on top of emerging risks, legal teams can go beyond compliance. They can become champions of ethical, effective, and empowered AI use.

And in a world where customers, regulators and employees are all watching closely, that leadership is more valuable than ever.

Need help building your AI policy? Read our precedent here.

