Thought pieces

AI and authenticity: a tool, not a voice

“Although machines are becoming key readers of reports (through UKSEF, GEO, tags and schema), it’s still – for now, anyway – humans who make the final decisions, spend the money and need to trust what they’re reading.”

Project Director

Jen Human


As I try to finish the latest season of Black Mirror, it’s hard not to feel like reality has already lapped fiction. AI is no longer a concept we’re preparing for. It’s here, there, fast and everywhere.

But despite its speed and scale, one thing is clear: AI doesn’t care.

It doesn’t worry about deadlines, client relationships or the reputation that comes from consistent, thoughtful, human work. That’s our job – and we take it seriously at LB.

So, while AI is already changing everything, we think a sprinkle of level-headedness goes a long way in times of disruption (and as we were reminded at the recent IR Society conference – disruption is here to stay, so get used to it!).

Anyway. Before we all retrain as plumbers – let’s not catastrophise. At LB, we don’t panic. We certainly won’t ignore it and hope it all goes away. We’re actively exploring how AI fits into investor communications, storytelling and design (and, just as importantly, where it doesn’t).

Authority and authenticity

It’s easy to fall for the illusion of intelligence. AI sounds authoritative as it churns out content. But it also gets things wrong – often with confidence and sometimes with consequences. There’s no accountability and no skin in the game – it’s not a friend, ally or enthusiastic intern, but a hungry algorithm chewing through prompts and energy (hello, Scope 3 emissions).

We’ve already seen AI-generated copy ‘pass’ AI detectors as human – which is fairly sinister in itself – while human writers are second-guessing their own voice. One of our team was genuinely sad to see someone on LinkedIn say they’d stopped using em-dashes, Oxford commas and bullet points – worried they’d be mistaken for a bot. Hardly progress!

On this point, though, we believe authenticity will only become more valuable, as will the human relationships that underpin good communication and decision making. Without delving too deeply into ‘sloptimisation’ and the dead internet theory, there’s an ever more important role for good-quality, meaningful, human-written content amid the mass of AI-generated articles – words on a page that may never have been written, and may never be read, by a human. A strange thought indeed.

Slippery legislation

Our role at the moment is to help our clients navigate the grey area before regulation catches up – in other words, best practice. The EU is certainly leading the way with its AI Act, trying to get ahead of the inevitable scandals I expect will plague the next decade (and many other countries are scrambling to do similar – see the fascinating tracker at Herbert Smith Freehills Kramer). In such a fast-moving world, I wonder if anyone’s tried to get ChatGPT to draft AI regulation. Perhaps one day it will be a self-governing tool (certainly better than the alternative scenario).

We’ve conducted research into how AI is being referenced in annual reports in 2025: mostly as a market opportunity or a financial risk, with a few companies proudly (and sensibly) talking about their policies and due diligence. But we think companies probably aren’t talking or thinking about it enough. Looking at the bigger picture, AI used badly could pose real threats to each of the environmental, social and governance pillars companies are trying to uphold, not to mention the enormous financial risks and opportunities up for grabs. Reputation is fragile, increasingly at stake, and can be compromised in an instant.

Who’s reading? Who’s writing?

Although machines are becoming key readers of reports (through UKSEF, GEO, tags and schema), it’s still – for now, anyway – humans who make the final decisions, spend the money and need to trust what they’re reading. And making sure messages are clear, concise and not misinterpreted is more important than ever, as readers rely increasingly on AI summaries of documents – whether intentionally, or through search engines’ tempting summaries that give us answers instead of links (that’s GEO doing the work).

In short, AI is bustling its way into all of our lives, like it or not – in our smartphones, on our laptops, in our Google results and apps. It’s the uninvited guest you can’t really un-invite, because it’s already cooked a three-course meal, washed up and made the after-dinner cocktails – and you kind of hope it comes again.

Our advice is to use and embrace AI – but with intention, policies, ground rules and people who care.

Because shortcuts often cut short the most important parts, like detail, nuance, tone and connection.

Until the next big leap (Artificial Conscientiousness, anyone?), we’ll keep doing what we do best: producing human-readable content for humans, with the machine-readable layer added as an overlay, rather than the primary objective. Content that feels real (because it is).

Signing off.
— Jen (not Gen) Human (not AI)