Data conference: Look to clear and present dangers first

David Lee hears keynote speaker Stephanie Hare call for less talk of existential risks around AI from Big Tech, and more transparency about the technology’s impact on the planet and its societies.
Keynote speaker Stephanie Hare. Image: Lisa Ferguson





 Keynote: Data is not Neutral

Stephanie Hare, Researcher, broadcaster and author





The Scotsman DataFutures, AI Futures Conference, Royal College of Physicians of Edinburgh



How will data and artificial intelligence (AI) shape the way we live and work in the future? How will they shape future healthcare, future energy supplies, the future workforce and more? And how is Edinburgh leading the way as it seeks to be Europe’s Data Capital of the future?

Big Tech companies are deliberately ramping up public fear of the existential risks around artificial intelligence (AI) to distract from the risks it poses now, according to a leading expert in the field.

Stephanie Hare, author of Technology is Not Neutral, told The Scotsman’s annual data conference that the focus should be on the “now risks” of AI rather than future existential risks, or “X-risks”.


Hare focused on the letters signed by tech leaders earlier this year that warned of “profound risks to society and humanity” from AI. The letters were a deliberate distraction from what is happening now, she said, because Big Tech firms don’t want to engage in that conversation.

She told the conference, Data Futures, AI Futures, hosted by The Scotsman and the Data-Driven Innovation (DDI) initiative, that this could be linked to climate change. While placing the scale of the AI X-risk on a par with pandemics and nuclear war, the letters made no mention of climate change.

Hare suggested this was because large data centres used by Big Tech need vast amounts of water and electricity to operate.

“We think about the cloud – such a beautiful image of nature comes to mind. We’re not thinking about warehouses with thousands and thousands of servers, humming with electricity. It is very energy-intensive and water-intensive [to cool the servers]. And we don’t have transparency around how much water data centres use, because they aren’t required to report it.

“These companies love to hoover up everyone’s data… but don’t like to share about their data centres. They consider it proprietary information – corporate secrets – but there is a water scarcity problem on the planet.”

Hare argued the increasingly widespread use of AI models could make things far worse.

“ChatGPT was made available to the public in November 2022, and became the fastest-growing app in history with 100 million active users by February 2023,” she said. “It’s hard to verify, but people are now saying up to 200 million.”

Hare’s interest in this water use was piqued by Professor Kate Crawford, a leading global researcher on the implications of AI, who said a typical ChatGPT exchange of 20-50 questions [depending on time and location] consumed half a litre of water.
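The scale implied by these figures can be sketched with simple arithmetic. The half-litre-per-exchange and 100-million-user numbers come from the talk; the sessions-per-user rate below is a purely illustrative assumption, not a figure anyone cited:

```python
# Back-of-envelope estimate of aggregate ChatGPT water use, using the
# figures cited in the talk: ~0.5 litres per exchange of 20-50 questions,
# and ~100 million active users (February 2023).
LITRES_PER_EXCHANGE = 0.5            # cited: one 20-50 question exchange
ACTIVE_USERS = 100_000_000           # cited: active users by February 2023
EXCHANGES_PER_USER_PER_MONTH = 10    # hypothetical assumption, illustration only

monthly_litres = LITRES_PER_EXCHANGE * ACTIVE_USERS * EXCHANGES_PER_USER_PER_MONTH

# Express in megalitres (1 ML = 1,000,000 litres) for readability.
print(f"~{monthly_litres / 1e6:,.0f} megalitres per month")  # → ~500 megalitres per month
```

Under these assumptions the total comes to roughly 500 megalitres a month; the point of Hare’s argument is that, without mandatory reporting, the real inputs to a calculation like this are unknown.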


“Every time you use ChatGPT, imagine taking half a litre of water and pouring it on the ground,” Hare told the event audience. “For me, this is a very powerful visual.

“Maybe if people knew what the water cost of ChatGPT was, perhaps they wouldn’t be so quick to use it. Having transparency around data allows you to make different choices. A community or country can decide to site a data centre near sources of water, so people know how much they are consuming.”

Coming back to “the letter”, Hare stressed that it had been signed by many very credible scientists and leaders in the field – who called for a pause on developing AI based on its X-risks for human beings.

“Who’s supposed to pause?” she asked. “The people who signed the letter are not going to stop, so what were they really asking for?

“Two months later they came out with a stronger statement, and when you tell me you’re doing technology on a par with nuclear war, you have my full attention.

“They were talking about the risk of extinction from AI, but they were not making the link clearly enough for me. As someone who is fascinated by this field, I like to look at what is said – and what is not said. That’s why I draw attention to the absence of any mention of climate change. They’re worried because climate change is coming at people right now. It’s definitely going to kill people in the future if we don’t seriously change our ways and take action – but the AI X-risks crew just don’t talk about it.”

Hare said there was a group of people pushing back against the X-risks narrative and calling for a focus on the “now risks” from AI – such as the bias and discrimination in facial recognition technology, which she reminded delegates has been widely criticised for working reliably only on Caucasian faces.

Some people are synthesising the X-risks and now-risks, Hare continued: “They are saying ‘Let’s do the now-risks and the future risks; why can’t we tackle all of them?’ I’m following this debate and paying very close attention to regulators.”


She highlighted two powerful regulators – Lina Khan, head of the US Federal Trade Commission, “who has been getting ready to take on Amazon for years”, and Margrethe Vestager, the EU’s Competition Commissioner.

“Neither is particularly persuaded by the X-risks argument, which is interesting, because there’s a lot of money going into X-risks,” Hare observed. “Vestager is looking at banks, social services and how AI might discriminate against you, while Khan is looking at companies. She’s saying if those signing letters about the existential threat of AI also fire their AI ethicists, we’ll start auditing you, and that might not be a good look. That’s American for ‘We’re coming for you.’”

However, despite US Vice President Kamala Harris hosting a group of tech leaders at the White House on the subject, Hare doesn’t expect anything major to happen before the Presidential election next year. But she does think the UK could take a global leadership role, as it prepares to host an inter-governmental AI summit at Bletchley Park on 1-2 November.

Ian Hogarth, who has highlighted the X-risks of AI, has been appointed by the UK Government to lead a task force looking at AI models, and specifically AI safety.

Westminster had “somehow found £100 million down the back of the sofa” to spend on AI, said Hare. “So what is that £100 million for? We get a two-day conference and Hogarth has been appointing people over the summer. Beyond that, we don’t really know. It’s a bit woolly.”

But Hare thinks the UK could build on historic work at Bletchley Park and current activity at the National Cyber Security Centre, part of GCHQ, to become “an international third party broker between the US and EU.” She added: “What if we had something like the International Atomic Energy Agency but for AI governance? What if we had a CERN for AI research? The UK is very well-placed for that.”

Hare went on to say the real action on AI regulation was coming from the EU, which is preparing to pass a landmark piece of legislation, the EU AI Act.

“One big thing they’re looking at that will make the UK uncomfortable is how to govern the use of facial recognition technology,” Hare told delegates. “We use this technology a lot, and not just police and security services. Pubs, shops and museums are running private watch lists and sharing them with the police. None of this has real oversight.


“The EU is taking a very different approach and I hope this will encourage people to stand up for themselves a bit more and get some oversight.”

Hare said that while the two politicians leading the charge – Dragos Tudorache from Romania and Brando Benifei from Italy – are little-known figures, they could nevertheless have a massive impact on Silicon Valley’s biggest firms.

“There’s a lot of concrete technology regulation coming down the pipe,” she said. “But most of this has nothing to do with carbon or water footprints.”

Tougher regulation requiring scrutiny of climate data could have a huge effect, she argued: “If you don’t have transparency on data, you cannot do effective governance, regulation and policymaking.

“The big thing is who gets priority over the need for water and electricity when the machines that run our entire cities are so embedded in human society. If you have a drought situation, do you shut off the water of a city of, say, 70,000 with a data centre based there powering the infrastructure for millions?

“We can’t do any of that kind of risk scenario planning if we don’t have the data. So perhaps we can all raise some collective democratic voice to say we would like to have an end to secrecy around the water and carbon footprint of AI models.

“But we don’t want to throw the baby out with the bathwater. There are lots of really exciting benefits that we will get from AI; we just want to do it sustainably, responsibly and ethically.

“The data on water and carbon footprints are knowable, but not publicly available. Companies need to be compelled to share it. Then we can have discussions about hosepipe bans, water restrictions, electricity prices, and energy bills.


“All those things are going to be turbo-charged as AI Large Language Models are integrated into our society on a much larger scale. Before we do that, we have a really exciting opportunity to get the building blocks right.”