AI policy
Drafted by ChatGPT based on the EU’s AI Act
1. Purpose
This policy sets out the principles and rules governing the use of Artificial Intelligence (AI) tools and systems within Lyons Bennett Limited in alignment with the European Union Artificial Intelligence Act (EU AI Act). It ensures responsible, ethical and secure use of AI, particularly in relation to client confidentiality, regulatory compliance and reputational risk.
Artificial Intelligence (AI) describes computer systems that can perform tasks usually requiring human intelligence, such as visual perception, speech recognition or translation between languages.
2. Scope
This policy applies to all employees, contractors and third parties engaged by Lyons Bennett Limited who may use or interact with AI tools in the course of any professional activity.
3. AI risk classification and use guidelines
Following the EU AI Act’s risk-based framework, AI systems and activities are classified into four categories:
Unacceptable, High Risk, Limited Risk and Minimal Risk.
3.1. UNACCEPTABLE RISK – STRICTLY PROHIBITED
The following uses are prohibited due to confidentiality, legal and regulatory concerns:
- Uploading or inputting pre-announcement or non-public information (e.g. earnings, M&A, IPO data, draft regulatory filings) into any AI system, including ChatGPT, GitHub Copilot, Bard or similar tools.
- Using AI to generate or manipulate content that could mislead stakeholders regarding the origin or authenticity of communications.
- Training AI models using client-owned data without explicit contractual permission.
3.2. HIGH RISK – RESTRICTED USE, SUBJECT TO APPROVAL
These uses carry significant confidentiality or compliance concerns and must be approved in writing by the Head of Compliance or the designated Data Compliance Officer (Lee Sargent).
- Inputting anonymised pre-announcement text into AI systems after removing all identifying figures, names and context.
- Using AI to assist in the creation of client-facing financial narratives or reports that may affect investor sentiment.
- Automating decision-making or producing outputs that may be interpreted as formal financial advice.
Note: All outputs must be independently reviewed and approved by a qualified human before dissemination.
3.3. LIMITED RISK – PERMITTED WITH OVERSIGHT
These uses are generally acceptable but require discretion and periodic audit:
- Using AI to redraft internal or external communications to adjust tone, grammar or clarity (e.g. email, website copy), excluding confidential or identifiable client information.
- Deploying AI-powered chatbots or virtual assistants on client or corporate websites, provided there is clear disclosure that users are interacting with an AI system.
- Using generative design tools (e.g. Adobe Firefly, Midjourney) where inputs or prompts are not derived from sensitive client content.
Employees must:
- Ensure outputs are reviewed for accuracy and appropriateness.
- Clearly label AI-generated content when applicable.
3.4. MINIMAL RISK – GENERALLY PERMITTED
These activities are considered low risk and are permitted without prior approval, provided they do not involve confidential or client-specific data:
- Using tools such as Adobe Acrobat’s AI-powered ‘wizard’ to summarise public documents or extract regulatory content with referenced sources.
- AI-based proofreading, sub-editing, grammar checking or document reformatting.
- Automated transcription of publicly available or internal training materials (e.g. via Otter.ai, Whisper or Adobe Sensei).
Employees are encouraged to validate results for completeness and relevance.
4. Governance and oversight
- Lee Sargent, Data Compliance Officer, will review AI tool usage annually.
- Employees must complete mandatory AI Ethics & Compliance training annually.
- Any AI-related incidents or concerns (e.g. data leaks, misleading outputs) must be reported immediately to the Data Compliance Officer.
5. Policy breach
Violations of this policy may result in disciplinary action, including termination of employment, and may carry legal consequences where regulatory or client confidentiality breaches occur.
6. Contacts
For questions or approvals related to AI use, please contact: Neil Duncanson or Lee Sargent.
Appendix A: Approved AI tools
A maintained list of company-approved AI tools and their permitted use cases is available on request.