Gillian Docherty: what ethical issues will data encounter over the next 10 years?

Picture: John Devlin

In July, Jeremy Wright MP, the Secretary of State for Digital, Culture, Media and Sport, issued the government’s new Data Ethics Framework, part of its National Data Strategy. The framework aims to set out best practice for the use of data in the public sector and is a welcome update to an existing policy. On its publication, Wright said: “Ethics and innovation are not mutually exclusive. Thinking carefully about how we use our data can help us be better at innovating when we use it.”

The new framework is timely; whilst it solely relates to the public sector, it helps shine a light on the issues surrounding a growing debate on the application of data innovation, particularly artificial intelligence (AI) in our modern society.

Competing voices in the tech industry, academia and politics argue over the effects of AI and its potential threats. Proactively discussing these issues helps to ensure ethics and privacy considerations are central to how AI services are developed. The formation of the House of Lords Select Committee on Artificial Intelligence was a positive step, and its public call for evidence should help widen the debate.

In addition, a new Centre for Data Ethics and Innovation was announced in the UK Government’s 2017 autumn budget, with £9 million funding “to enable and ensure safe, ethical and ground-breaking innovation in AI and data-driven technologies”.

Data and artificial intelligence are inextricably linked and, while AI theory has existed for more than 60 years, the rapid growth of “Big Data” has acted as the fuel required for AI to work in practice. The AI industry’s wider development has been underpinned by technological advances in Cloud Computing – which significantly reduces costs to store and process data, opening the market to rapid innovation. As a result, AI start-ups no longer face such a large barrier to entering the market; large capital expenditure on infrastructure has been replaced by utility computing, where organisations pay for what they use.

The rise of the internet economy – reinforced by high-speed internet, mobile technology and the Internet of Things (IoT) – has driven the creation and growth of “big data”. The term did not even exist in 1998, when the current data protection legislation was introduced, but global data volumes are now estimated to double every two years, with that exponential growth expected to continue towards 2020 and beyond.

Data has the potential to revolutionise many sectors. One of the most prominent in terms of its current application and potential benefits is in healthcare, where new data-driven inventions are announced regularly.

Yet while innovation and wonderful potential abound, it is a sector that must take great care in how it pursues those new ideas in order to best protect patient privacy. In the UK, this is certainly an issue on the agenda for the NHS.

Although patient privacy is nothing new to the NHS, the increasing quantity of data, alongside the increasing use of data-driven innovation in all sectors, does raise new questions around ethics and privacy.

There are clear and obvious advantages for greater use of data in healthcare. But what is also apparent is that there are clear areas for improvement in processes and infrastructure required to protect patient privacy adequately.

The other prominent issue in healthcare and data ethics surrounds the use of patient-created data. With more and more patients recording their own data and providing it to healthcare professionals, there is the possibility for this data to be shared with pharmaceutical companies, researchers and multi-disciplinary teams. The key to the ethical use of patient data in healthcare lies in transparency: any business involved in the use of the public’s data will live or die by how clear and transparent its use of that data is.

Regardless of the proposed applications for AI, debates will persist around its potential to put people out of work. In healthcare, for example, if algorithms can detect cancer, what will that mean for radiographers? That said, there is also an argument to be made for AI’s potential to augment human performance in specialised areas, rather than wholly replacing it.

The prospect of AI causing job losses goes beyond healthcare. The present consensus is that, whilst AI will automate some people out of work, it will also create more new jobs than are lost through this process. According to Gartner, AI is expected to eliminate an estimated 1.8 million jobs worldwide by 2020; however, it is also estimated to create 2.3 million jobs globally by the same year, a net gain of 500,000 new jobs.

Closer to home, whether in the shape of drones, robotics, driverless cars or other innovations, some excellent Scottish businesses are already harnessing the power of AI, creating plenty of cause for optimism around the potential impact of artificial intelligence and automation on Scotland’s economy. PwC’s recently published Economic Outlook projects a net gain in jobs over the next 20 years: AI is expected to create 558,000 Scottish posts by 2037, while automation could displace 544,000 jobs over the same period, a net increase of 14,000.
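As a sanity check, the net figures quoted above are simply jobs created minus jobs displaced; a minimal sketch using the Gartner and PwC estimates cited in this article:

```python
def net_jobs(created, displaced):
    """Net employment change: jobs created minus jobs displaced."""
    return created - displaced

# Gartner's global estimates by 2020
print(net_jobs(2_300_000, 1_800_000))  # 500000 (net gain of 500,000 jobs)

# PwC's Scottish projections by 2037
print(net_jobs(558_000, 544_000))      # 14000 (net gain of 14,000 jobs)
```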

The rise in the testing of, and resulting publicity around, driverless cars means they are probably one of the quickest links the public makes between AI and any immediate impact on daily life. Debate around driverless cars and their growing production and use often quickly turns to hypothetical ethical dilemmas. There has already been a news report of a driverless car killing a pedestrian in the United States. How does a car “decide” when to stop, and how does it make difficult, instantaneous choices that could cause the loss of life? These may be extreme cases, but the debate around how a vehicle prioritises the life of its passengers against the lives of pedestrians is not likely to go away.

Government intervention in data innovation and its development is unlikely to cease in the coming years and, currently, the UK government appears to be taking charge of ensuring the fair use of driverless cars. In March it announced a three-year regulatory review to “pave the way for self-driving cars”, with the intention of getting driverless cars on the roads by 2021.

Large businesses are also starting to take more of an interest in data innovation and its potential impact, particularly when it comes to bridging a potential skills gap amongst employees. M&S is a recent high-profile example: enlisting the help of a tech company, it has committed to making its staff data literate, announcing a data academy to train employees in data skills across every area of the business. Its leadership team will also embark on a ‘data leadership’ programme, covering areas including machine learning and artificial intelligence.

It is imperative that training frameworks such as these have ethical skills infused within them, reaching everyone from data specialists through to the leaders and decision-makers who use that data. AI’s development and growing application in society is giving rise to ethical dilemmas. As frameworks and policies are developed and put into practice, there is now a strong need for practical action and the sharing of experiences, so that we can learn from real-life cases.

The data sector is estimated to be worth £20bn to the Scottish economy by 2020, and the country already boasts one of the most sophisticated data science landscapes in the world. It is important to maintain and encourage a growing pool of talented data professionals to help feed the growing demand for skills. Core to delivering such benefits is a commitment to working with the right morals, the right ethics and the right regulation.

Gillian Docherty is chief executive of The Data Lab