Mike Talbot Oct 31, 2025 11:21:54 AM 6 min read

Should we press ‘pause’ on AI?


By 2030, artificial intelligence (AI) could be worth more than £400bn to the UK economy – which is also when tech firms predict they will be ready to unleash ‘Artificial General Intelligence’ (AGI), capable of mimicking human intelligence.

From then on, it is fast forward towards SuperAI and then the ‘singularity’ first mooted in science fiction films when AI starts to build better versions of itself and…well, let’s not go there for now.


So, should we just strap ourselves in and enjoy the ride – embracing the opportunities and rewards along the way – or is this already getting a bit out of hand, with the risks and threats outweighing the potential rewards?

Currently, we are still working with ‘Narrow AI’ which is designed for specific tasks. It is not smarter than us, but a lot quicker – so we can already see the potential for productivity gains, but AGI would be able to adapt, reason, and solve problems beyond its specific programming.

In other words, it will be smarter too – and this is when things start to get really exciting, but also considerably more alarming. AGI will be much more creative and display ‘common sense’, unlike the current technology, which is still fundamentally data-driven and so suffers from “rubbish in, rubbish out”.

However, Narrow AI is already proving to be a bit of a handful for the building engineering sector, and we are in the grip of a 'triple whammy' of impacts – not all of them positive.

Havoc
Firstly, it is already playing havoc with recruitment. Employers are hesitating to recruit for roles that might, in the not-too-distant future, become obsolete thanks to the surge of AI. What does this mean for many engineering jobs, particularly those directly involved with data gathering and analysis?

Secondly, who is going to do the work that AI will create? Global spending on data centre development and associated infrastructure - largely to cope with the vast amounts of computing power required to deliver AI expansion - is forecast by investment bankers at Morgan Stanley to reach $3 trillion by the end of the decade.


In the UK, the government has designated data centres as "critical national infrastructure" with another 100 planned over the next five years. The current wrangling over the proposed 500,000 sq m ‘mega project’ at Teesworks in the Northeast gives us a glimpse of just how much physical engineering potential exists in this sector for building engineering firms among others.

What a dilemma: Should we push on with what will be the largest data centre project in the world or favour the alternative plan for the site which is a blue hydrogen plant seen as a key contributor to the government's Net Zero targets? Can we keep pace with whatever demand is unleashed?

Our sector is crucial to both these markets but the potential for specialist data centre engineering is by far the bigger – which brings us, thirdly, to the environmental question.

All this data management requires enormous amounts of energy for powering millions of computers and vast quantities of water to keep them cool, particularly those that continue to use evaporative cooling systems. The power density of AI chips is around ten times higher than that of earlier data servers and as the technology continues to develop, they will become even denser...and hotter.

Again, our industry will be a big part of the discussion around keeping them cool and, ideally, re-purposing waste heat for use in buildings and industry, as well as addressing the power challenge and finding more ways to reduce both energy and water consumption. Many of the solutions already exist, but can we scale them up in time?

There is another, wider issue to consider: Is AI also powering the next ‘dotcom-style’ financial market implosion? 54% of US fund managers surveyed by Bank of America said AI stocks were already in a ‘bubble’, i.e. over-valued and on course for a major price correction that could crash markets around the world.

Currently, investment in computing power and capacity by the major technology companies alone is $393bn a year and is projected to almost double by the end of the decade, dwarfing other financial market activity.

Most of that is being spent on data centres, the computer chips (largely designed by Nvidia, the world's most valuable company) and the huge amounts of energy and cooling needed to keep these assets operating.

Potential
So, no argument about the potential, lots of argument about the financial risk, and, at the recent BESA Annual Conference, AI legal specialist May Winfield gave the 300 industry professionals attending an insight into another crucial consideration: The legal and insurance implications.


The title of her keynote talk – ‘Implementing AI: Will you get sued?’ – was not exactly a promising start, and she set out in graphic detail all the areas where AI poses a risk to construction-related businesses. [The answer, by the way, is: ‘Yes, probably’.]

From having your designs stolen by AI training models to losing clients’ proprietary data to invalidating your insurance cover because your use of AI could be deemed “reckless and negligent” – she painted a bleak picture for anyone who was embedding AI into their business without first putting a specific management policy in place.

The fact that Buro Happold has even created Winfield’s position: global director of commercial, legal and digital risks, suggests this is a big concern for some engineering consultants and their clients, but how seriously is it being taken by contractors?

People will blithely set up AI-powered meeting recording apps without asking permission of those in attendance because it has become habitual. We all work in supply chains, but do we have any clue what AI tools our partners are using? How many people are even asking?

AI is not trained to comply with GDPR, so what’s happening to people’s data? Its use can also lead to copyright infringement, which could get very expensive, particularly if insurers back away from liability.

It was a powerful and eye-opening keynote talk that should have had delegates scurrying back to their offices (or, more likely, to their AI-driven apps) to find out how exposed they are.

She also advised them to “beware BYOAI” (bring your own AI) – i.e. what are your employees bringing into your office and onto your sites on their phones? – and to “avoid FOMO”: don’t use something just because everyone else is.

“You don’t need to be a risk expert, but you do need to be careful about what is being introduced into your systems by stealth,” she told the conference. “AI uses language that is familiar to us, so it is very easy to accept what it says, but you must question the answers it gives. It is not always right.”

AI is exciting and has huge potential for engineering projects, both by removing a lot of the “grunt work” and, very soon, by taking on some of the more creative tasks. So we need to be ready to embrace the opportunities – but not at any cost.

We need to ask ourselves some key questions about how we can best manage the risks so we can reap the full rewards, including putting legal guardrails in place, ensuring our people are trained and aware, and having the right insurance in place.

We don’t want to put a damper on the excitement around AI, but this is potentially the most powerful digital tool our industry has ever had to embrace – so it will pay to be prepared.

For more on this topic and others covered at the BESA Annual Conference click here.