New technology is posing ethical dilemmas that we must take a stand on – Dr Gina Helfrich

Society currently suffers from a slew of problems generated by the ‘dual use’ dilemma in technology.

Dual use means just what it says: that something can be used in two different ways – for good or ill. Nowadays, thanks to the nature of our digitally networked world, virtually any new tech invention will have the capability to be used for nefarious purposes in addition to its intended use.

Take social media. It can help you keep up with friends and family, but also facilitates the spread of misinformation and propaganda, to the point that Facebook is being accused of facilitating a genocide. Problems of dual use aren’t limited to sophisticated technologies. Take the humble hammer: you could use it to build a birdhouse or smash someone’s window.


The fact that new technologies can be used for good or for bad gives rise to a moral dilemma: if you can create something that would potentially benefit a lot of people, but it would also be used to harm a lot of people, should you still create it? Many would say ‘yes’, others ‘no’. It’s a dilemma!

Say you’re in the ‘yes’ camp and you create the thing. Then what? Anyone who manufactures or sells hammers has no real way to prevent you from using one for nefarious purposes (a pinkie swear is about the best they can ask for). Modern technology differs from hammers not only in complexity but also in scale – technology used badly can harm a lot of people.

There are three general ways to reduce the chances that new technologies will be used for harm. The first is features built into the product itself – think of parental controls in the app store, or content filters on YouTube. The second is self-regulation by industry – in the wake of George Floyd’s death, for instance, IBM, Microsoft, and Amazon introduced a temporary moratorium on providing facial recognition technologies to police. The third is regulation by governments – the General Data Protection Regulation (GDPR) is perhaps the best-known effort along these lines.

Regulation is the strongest approach, the most extreme form being an outright ban. Many argue that facial recognition technology should receive a blanket ban, for instance. However, regulation tends to move more slowly than innovation, which means that, in the meantime, we must fall back on companies to regulate themselves or to design their products in harm-reducing ways.

Self-regulation is met with a lot of understandable scepticism, as companies seem to respond only to a moral outcry from their workers or their customers. Workers forced Google to end a contract with the US Pentagon to develop AI technology that could have been used for lethal purposes.

Google staff prevented the company from developing facial recognition software for the US military (Picture: David McNew/AFP via Getty Images)

Companies must invest time, effort, and money in exploring how their products could be abused and plan in advance how to head off such cases. At present, it too often feels like companies take a wait-and-see approach: only once the harm has happened might a company decide to do something about it. Investors should incentivise taking greater care, and the public should demand it. We don’t have to accept dual-use technologies creating harms at scale, but we do have to stand up and demand better.

Dr Gina Helfrich is Baillie Gifford programme manager for the Centre for Technomoral Futures at Edinburgh Futures Institute, University of Edinburgh
