Responsible AI governance
What does good AI governance look like? The UK AI Safety Summit 2023 built on the government's intention to regulate the use of AI rather than the technology itself, focusing on the need for responsible engagement by owners and users of AI systems.
Best practice is still evolving on what a good AI governance strategy looks like, whether you are a tech start-up designing an AI system or a business wanting to integrate AI innovations to drive productivity. What is clear, though, is the need to innovate responsibly, building guardrails into internal and external processes so that AI-driven decision-making is transparent and fair. Some current issues we advise on:
- Copyright and database rights for AI datasets. Copying text, images or other data from any source for use in an AI system is a common pitfall; the provenance of the data needs to be known so that the owner's consent can be sought where necessary. Similarly, extracting or re-using data from a database could infringe the owner's database right.
- Commercialisation of AI models and systems. The broad AI ecosystem of rights holders and technology providers behind any AI system could affect a commercial investment strategy, for example through licence fee payments for training data.
- Governance for use and monitoring of AI systems. Many enterprises are unaware of AI systems that may be used internally or incorporated into their supply chains. Internal audits and AI governance policies are becoming an increasingly useful way to control and regulate the use of AI tools by employees, sub-contractors and suppliers, including the need for oversight to verify and validate AI-driven findings, outputs and decisions.
To find out more about our broad AI expertise within our Technology, Media and Telecoms Practice, click here.