AI-powered cybercrime is the next threat to be faced by organisations - Freha Arshad
In a world where technology looks and sounds more human than ever before, fraudulent emails, scams and viruses are no longer easy to spot and avoid. Today’s cybercriminals are armed with a new and formidable weapon: generative AI. This revolutionary technology is helping businesses across sectors become more efficient, but it is also creating significant new threats.
From convincing video deepfakes to voice notes and texts that emulate real speech, phishing content and ransomware have become hyper-realistic. With malicious large language models available for purchase on the dark web, attackers can now craft highly deceptive and damaging cyberattacks.
While AI offers Scottish businesses opportunities to improve their productivity, it is also expanding the attack surface and providing cybercriminals with sophisticated new tools to exploit. According to a report from the World Economic Forum, produced in collaboration with Accenture, more than half of business executives believe attackers will have the upper hand over defenders over the next two years.
There has been a staggering 76 per cent increase in ransomware attacks since the end of 2022, targeting critical sectors such as manufacturing, education, and healthcare. In one case, a Hong Kong-based finance professional mistakenly paid out $25 million to a scammer following a deepfake video call that he believed was with his CFO.
We’re working with clients to help them defend against such attacks, so they can continue exploring and activating generative AI tools without increasing risk.
While generative AI poses a threat, it also holds the solution. Cybersecurity teams can integrate generative AI intelligence into security operations to speed up incident detection and response. Algorithms can be put to work by security analysts to automatically scan code and gain rich insights into malicious scripts for future prediction and detection. Vulnerable entry points can be protected by integrating deepfake detection technologies into email, video conferencing, phone and endpoint monitoring solutions.
New security capabilities are also vital for organisations to scale generative AI solutions effectively, efficiently, and with minimal risk of model disruption, data theft and manipulation.
However, businesses must also be cautious. As the threat landscape escalates, consolidating technology is essential: organisations should prioritise technologies they trust and carry out thorough risk assessments before deploying new ones.
Clear policies and processes will also be required to integrate generative AI security into governance frameworks. These should be informed by the latest cyber intelligence and aligned with regulations like the EU AI Act. Security awareness training across all teams will help to address vulnerabilities and ensure policies and processes are upheld.
As cybersecurity’s new era is shaped by the acceleration of generative AI, we find ourselves in a fine balancing act. The same technology that holds the promise of revolutionising our defences can also be exploited by cybercriminals against us.
Cybersecurity specialists must stay ahead by developing new skills, tools, and strategies to protect their organisations from cyber attackers armed with generative AI. As we continue to navigate this evolving landscape and extract value from generative AI, the role of Scottish cybersecurity professionals, and of cybersecurity education across all professions, is more crucial than ever in safeguarding our future and ensuring the stability of our economy.
Freha Arshad, Scotland Security Lead, Accenture