Data conference: Time for us to rethink the thinking machine

Professor Shannon Vallor, an expert in technology and ethics, told The Scotsman’s data conference that the heightened focus on the existential threats of artificial intelligence (AI) was obscuring the benefits it could deliver.

She related how the technology’s late-1950s founding fathers conceived of AI as a “thinking machine”, but said exploration of the field has gone in many different directions since.

“If you just started hearing about AI [now] as a commercial reality, you might imagine it’s been a direct arrow,” she said. “But there has actually been a broad range of scientific approaches to AI, and the engineering techniques that facilitate it.”

She suggested the idea of a “thinking machine” was one that “understands the world, much as we do, or at least with a comparable level of reliability and aptitude, that understands the causal forces bringing together the world, that allows us to understand what might come next in our environment, and which understands the distinction between truth and falsehood”.

Shannon Vallor, Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence, University of Edinburgh. Image: Lisa Ferguson

But Vallor, director of the Centre for Technomoral Futures at Edinburgh Futures Institute, said today’s AI was different: “The most powerful machines today, in the domain of commercial artificial intelligence, lack those capacities and understanding. Yet they have very powerful capabilities.

“We’ve moved from the concept of a thinking machine as a cognitive benchmark that would need to be achieved and validated through a programme of rigorous scientific research and verification to narrowly task-based, engineering-driven, behavioural AI.”

She illustrated this by highlighting OpenAI’s stated ambition to create “highly autonomous systems that outperform humans at most economically valuable work”.

“If you asked all of humanity – or even just any minimally representative chunk – to create a list of our most desired innovations to benefit all of humanity, where would we put on that list ‘a machine that can outperform all economically valuable work’?

“I don’t even think it would be on your list. Maybe in a post-work world, where we’re all freed from the burdens of labour, but that’s also a world where we’re freed from the opportunities to exercise our agency in meaningful ways.”

Vallor continued: “This is a serious concern for us, not just because we’re not adequately planning for AI’s likely impact on the labour force, but because we’re ignoring the need for AI to earn a social licence to operate.”

She stressed she wanted to see AI advance in a positive way: “I want it to develop as a tool that can address many of our most urgent challenges – but getting to the point where we have social permission for that to happen is going to be increasingly endangered by the reckless and single-minded way we’re pursuing AI today.

“We need public support and consent for AI to develop its most beneficial applications, particularly since many of those will require public investment and support, not just private capital.

“We need AI to support improvements in the availability and storage of renewable energy, biomedical enhancements to fight disease, ways to make ourselves, our communities and our planet more resilient to climate change, and to arrest the acceleration of climate change.”

Vallor asked the audience what might happen if companies pushed forward without regulatory oversight, public safeguards or accountability for “harms created in AI’s wake”.

November’s AI summit at Bletchley Park might be seen by some as an effort to tackle this, to create appropriate regulation, she suggested, before continuing: “The focus of those calls is actually not on the need to create the mechanisms for accountability and safety in the present, but rather to create mechanisms for a highly speculative future. Right now, AI companies are working very hard to convince governments around the world – and largely succeeding – to ignore near-term challenges and push governments’ attention into the future, to focus their attention narrowly on so-called existential risks.

“The large majority of computer scientists and machine learning researchers believe this is a matter of speculative fiction. I’m talking about systems escaping our control or even developing malign intentions and causing human extinction. Some governments are taking those kinds of claims extremely seriously, and diverting all existing plans for AI governance in the present towards a programme of protection against a far future threat.

“There are very real safety challenges relating to the control of AI systems that we have today. But these are not new challenges, and they have nothing to do with futuristic fantasies of superhuman machines.”

The professor stressed that there is “a fair amount of good knowledge” to address the present risks of AI, adding: “We need to keep pushing that knowledge forward. AI isn’t a problem you solve; it will always be there as a challenge.”

Vallor concluded: “We need AI technologies to be safer, but we need them to be safer now – and safety must mean more than just permitting human survival, but a kind of safety that enables human agency and flourishing, which respects human rights and earns public trust.”

Later, Chris Williams, professor of machine learning at the University of Edinburgh, gave the assembled delegates an overview of 60 years of AI research at the university, one of the four top centres in early AI research – and the only one outside the US.

He described the variety of directions AI research had taken, and argued that the sharp focus on AI of late had come about because its applications had become useful.

He called for everyone to resist suggestions that AI was “magic”, saying: “Sometimes in the press, it can seem like AI is magic. It’s not – it’s a lot of hard work in engineering and data quality. It’s about developing systems, understanding their limitations, and understanding how the AI tools we build can be used in association with humans.”

Steph Wright, head of the Scottish AI Alliance, made a similar point: “What you are hearing – particularly from industry – is that AI is magic, it’s incredible.

“AI companies say they are the dictators of this narrative, and everyone must listen. But, ultimately, the drivers for commercial organisations aren’t about furthering humanity.”

She picked up on the “social licence” point made by Shannon Vallor, arguing: “There are lots of AI technologies like ChatGPT currently being afforded trust without earning it. What we’re trying to do at the Scottish AI Alliance is empower people with knowledge to contribute constructively to the conversation around AI.

“I think everyone needs to do a better job communicating outside our own circles about what AI is, what AI isn’t, and the ethics and trustworthiness of AI.”

Both Wright and Vallor told the conference that civil society had to be engaged in the summit at Bletchley Park, but this had to be done in a meaningful way.

Vallor said there was a vital role for higher places of learning in the unfolding debate: “Universities have the ability to bring technical and scientific experts together, and effect public engagement much more successfully than an isolated expert who can only speak in the language of their discipline. This is one of the reasons I came to Edinburgh – it was already ahead of the curve.”

The professor used the example of a different industry to show that it was possible to control potentially dangerous technological advances.

“We have successfully governed risky technologies again and again – sometimes in ways that promote industry co-operation by rewarding it, earning public trust, profitability and usability at the same time. That’s what civil aviation did.

“It’s tightly regulated, with international co-ordination and harmonisation of regulations to make sure international travel is safe. Operators, manufacturers – even passengers – have obligations and duties of care, and there are consequences for violation.”

In terms of using this model for AI regulation, Vallor suggested: “You start with establishing a duty of care, so the rights people already have in society are enforced, and with accountability of those responsible for developing, designing and deploying technologies.

“There’s no clear accountability regime [in AI now], which means responsible actors are disincentivised from taking risks and adopting the technology, because there’s no indication of what protection they have if something goes wrong.”

Fellow speaker Stephanie Hare agreed: “The answer appears to be more scrutiny, more regulation, more standardisation, and perhaps even licensing. Maybe you don’t put AI into certain areas of society without it being rigorously tested, just like we wouldn’t release certain drugs onto the market if we didn’t think they were going to be safe for most people.”