The importance of showcasing artificial intelligence's ethical integrity - Rachel Aldighieri comment
The global race for AI superiority has continued to gather pace throughout 2023 – with UK tech hubs like Edinburgh looking to attract economy-boosting international investors. Encouragingly, the Scottish capital was recently named the most “AI ready” city in the UK outside London, according to the latest SAS AI Cities Index 2023. It measured metrics like AI-related job ads, the number of AI companies in the city, educational opportunities, and the value of InnovateUK funding granted in each area.
These are all important metrics; however, many would argue that there are other key aspects of being AI ready – ethics and regulation. The worldwide media and public are correctly questioning whether AI is being developed too quickly without the frameworks in place to ensure it is truly used as a force for good.
Under the right circumstances, and developed with people’s needs placed front and centre, AI can be our man-made best friend. Yet in the wrong hands, it can entrench societal bias, generate misleading information at mass scale, and severely infringe on our human rights. To truly be seen as an international powerhouse across the AI ecosystem, Scotland must showcase its ethical integrity too.
Scottish Government’s vision for AI
Scotland’s AI Strategy launched in early 2021 with the vision for Scotland to become a world leader in the development and use of trustworthy, ethical, and inclusive AI. Hundreds of millions of pounds have been invested in institutions like the University of Edinburgh in recent years to help build Edinburgh’s tech status and, most importantly, to ensure organisations have the talent, education, guidance, and values in place to responsibly innovate.
The Scottish AI Alliance is tasked with delivering the vision outlined in Scotland’s AI Strategy. We are striving to play a leading role in ensuring that ethics is at the foundation of all AI technology. While AI regulation is essential as a deterrent for rogue actors whose intentions are deliberately unjust, it can also offer useful guidance and clarity for those intending to develop and use AI ethically – so we must continue with its development.
But this is only part of the solution. Ethics and self-governance have a huge role to play to supplement future regulation. Scotland’s AI Strategy places people at its heart and creates ethical and inclusive principles to build trust in AI. It aims to strengthen the AI ecosystem over the next five years through these methods.
For genuine actors in the AI ecosystem, this is where principles-based frameworks can help – but world-leader status can only be achieved with the support of all Scottish businesses. We must ensure that Scotland, spearheaded by Edinburgh, secures its status as a global leader in responsible AI innovation – it will be an integral unique selling point for securing much-needed international investment.
Ethics with people at its heart
We must all support ethical frameworks for AI’s development that place self-governance, transparency, and accountability at the forefront, essentially embedding into the design phase considerations about its potential impact on people – whether that be the end user, those civilians it indirectly affects, the staff maintaining and overseeing it, or even those it could eventually replace in a professional capacity.
Similar principles-based frameworks already exist in my industry. The Data & Marketing Association (DMA UK), the industry body for data-driven marketing, has its own framework for members – to help marketers ethically navigate the constantly evolving world of marketing and technology. In 2014, we created the DMA Code to make our industry more accountable, trustworthy, and inclusive. It is designed to transcend all iterations of technological development and focus on our behaviours and values as individuals and organisations.
To help the creative industries on our path to responsible AI innovation – particularly in the areas that affect our industry most, such as generative AI tools like ChatGPT – the DMA recently established a multi-discipline taskforce to develop AI guidance for marketers, which supports the Scottish AI Alliance’s approach.
An ethical AI framework can help us collaborate in publicly supporting a people-first approach that builds consumer trust, rather than relying on regulation to fill the trust gap – after all, a customer-centred approach is critical for sustainable business growth.
The human-AI team
As part of a people-first approach, AI will always require some human intervention to promote our values as moral beings – we must never forget the people it is intended to serve.
For these reasons, the human-AI team is our best future, as AI operates most effectively as a tool that assists and enhances our own abilities. Mike Bugembe of Decidable Global accurately describes it as a scenario where we can combine the best of human intuition, strategy, and experience with AI’s remarkable machine calculation and memory. This approach will create a range of new job opportunities across a variety of industries – altering the job market rather than diminishing human job prospects.
As business leaders, we must not lose our human connection with customers and people-centric values. By supporting ethical frameworks like the Scottish AI Alliance’s Scottish AI Playbook to supplement pending regulation, we can set our own high standards in terms of how we develop and use AI to engage with our customers and the wider world. This will help to attract international investment to supercharge the development of Scotland’s digital economy.
AI can be developed and used as a force for good, but we must now take an ethically minded approach to ensure any regulation created supports organisational infrastructures designed to put people first.