
AI is not our enemy. That’s why you should still say ‘please’ to Alexa

Artificial intelligence will change the world – but not in the ways that we might fear. Neil D Lawrence, a professor of machine learning at the University of Cambridge, explains what to realistically expect.


In his book The Atomic Human: Understanding Ourselves in the Age of AI, Neil D Lawrence asks what artificial intelligence might mean for our identity as humans. He doesn’t believe that we should try to compete with computers, which are already able to communicate 30 million times faster than we can. He also sees plenty of scope for technology to serve people better; he cites, for example, an AI lab in Uganda that helps to reduce the impact of disasters such as floods and famines by working with the UN to offer better warning systems.

Lawrence, who is the DeepMind Professor of Machine Learning at the University of Cambridge and a senior AI fellow at the Alan Turing Institute, is also dismissive of some of the dystopian visions of where this technology could take us. He questions, for instance, the possibility of artificial general intelligence – the idea that machines could master any intellectual task that a human could.

Here, he sits down with David Phelan, Monocle’s technology correspondent, to survey the road ahead.

First of all, what should I call you?
Though my official title is DeepMind Professor of Machine Learning at the University of Cambridge, I don’t work for DeepMind, the Google-owned AI firm. It endowed the chair and put in a certain amount of money with which the university pays for the position. It’s a lot less money than you might think – academics are pretty cheap. I have a lot of job titles so people normally just call me Neil.

Should we be afraid of AI? In your book, you write that the possibility of machines becoming so clever that they operate beyond our control is slim.
To the extent that it could happen, it has already happened. A simplistic view of AI technologies leads to significant misunderstandings. It also propagates vivid, sci-fi-like narratives, for very human reasons. But machines going beyond our control – in banking, for instance – is an interesting issue because you’re talking about a world in which there was a whole human system of interacting with, editing and controlling data. Think of systems such as double-entry bookkeeping, in which you can train someone relatively quickly to understand what they’re doing and then they’re able to bring their whole human judgement to the task. What you’re seeing with the digital world as it stands is a disempowering of people, whether they’re in banking, law or accounting. The ability of human judgement to influence an individual decision, where the context is well understood, has been sort of eliminated.


Illustration: Peter Zhao

What are the benefits of AI? How can it help us?
I use the notion of an “artificial general vehicle” in my book to show the absurdity of the idea of “artificial general intelligence”. Is there a vehicle that’s right for all circumstances, whether you want to go to the shop at the end of your road or to get to New York? No. There’s no such vehicle. Exactly the same is true for intelligence. Artificial general intelligence is an absurd notion and deeply misleading when it comes to how businesses should proceed.

But you can imagine that, in 10 or 20 years, everyone in a business will be able to interact with a computer and steer it as finely as the best software engineer can today. That would be a major shift in power structures: to go from being restricted to a few hyperscalers [companies with huge data centres with enormous computing resources] that are able to build large-scale software systems to things being distributed throughout organisations. That would be deeply transformative.

Are there dangers connected to AI?
Our current structures in terms of software, hardware and behaviour are now distinctly out of date. The information infrastructure has shifted so profoundly that things are now not being done correctly. The problem is that none of us knows what the correct way is.

Today there is a whole business literature around what you should do but the only thing that we really know is that those who are pronouncing what we should be doing are likely to be wrong. Well, one of them might be right – but which one? What you’re seeing is a period of uncertainty when business leaders are faced with the difficult task of making decisions about technologies with which they’re not always intimately familiar. As a result, their business judgement tends to flip: people are all in or all out. But the truth is that there’s something in between.

If I tell you to pay attention to something, that might focus your thinking but then you won’t be scanning the horizon. We want our business leaders to be horizon-scanning as well as focusing on customer needs. Taking advantage of developments such as AI requires restoring business leaders’ confidence in the fact that their fundamental intuitions about businesses and customers still hold.

However, because the information infrastructure has been roughly constant for years, most businesses have split what they do into different parts for efficiency. Unfortunately, that tends to disconnect the business from its customer base and undermine agility. In short, the opportunities are enormous but the challenge that we tend to face is around re-examining structures and the culture, and how it’s servicing customer needs.

US scientist Roy Amara once said that people tend to overestimate the impact of new technologies in the short term while underestimating their long-term effects. Is that true with AI?
Bill Gates said something similar: that things in the short term happen more slowly than you expect but, in the long term, they happen much more quickly. That’s another challenge; the type of timeframe for making decisions really sits between those two. My advice to business leaders is to refine their communication machine because that would allow them to steer around what will be a complex and evolving landscape. We can talk about making efficiencies but people know about that. It’s about how they deploy and integrate AI in existing infrastructures.

How about sectors such as healthcare and education? What opportunities can AI offer them?
People have wanted to see what benefits it could bring for a decade but we have delivered virtually nothing because there’s a total separation between macroeconomic interventions and microeconomic need. There are disconnects between companies and their customers and, even more seriously, between governments and citizens. To the extent that there’s something dystopian about AI, that’s the root of the issue.

In a company such as Amazon, there’s a notion that you have to dive deep when anecdotes and data are suggesting conflicting things. People lean on data too much and I have seen Jeff Bezos quoted as saying that it’s usually the anecdote that’s right.

The digital systems that we now use give people the impression that things are OK from a data perspective. That’s problematic and reflects a weird desire to centrally control everything. We know that this doesn’t work. That’s what is negatively affecting education and local authorities: the people who are at the coalface have been undermined in their ability to deliver because their tools have become separated from something that, say, a normal teacher can work with. A nurse, like my wife, might spend 30 per cent of her time on data entry.

AI offers an enormous opportunity to put that right. People could sit down with a nurse and look at their day job today and give them a series of tools that could quite quickly be designed to support them in that data entry. We need a world where that nurse is capable of building that type of service themselves. And that’s a fundamental shift in our information infrastructure, which is going to take time. So, if deployed in a different way – not centralised but in smaller pockets – AI could have a significant impact.

Where do things need to change?
The ability to work with all of this innovation in software is currently guarded by a few digital providers. While they’re accruing power, the rest of us aren’t able to grow and benefit in terms of productivity. This has already affected health, education and social care. We are beholden to what large hyperscalers choose to deploy and they’re stuck in that loop. We need innovation by those who understand the role, the job and how to make things better, and AI is an extraordinary route to that.

Finally, a matter of manners. Should we be saying please and thank you to Alexa?
There are differing views on that. My answer is that I do because it’s about my own dignity. Voice services such as Alexa provide a human interface and play on our sense of interacting with other people. I think that means it’s keying deeply into my dignity and I don’t want to demean myself. What we see again and again with social media is that it’s degrading our social interactions. Now, I don’t know whether that’s right or wrong but it’s something that I’m thinking about. I’m choosing to say “please” – at least, when I remember.

This article originally appeared in the Opportunity Edition newspaper 2025, created in collaboration with UBS for its Asian Investment Conference in Hong Kong.
