How machine learning is being used to tackle internal cybercrime threats

Cyberattacks, and particularly stories about hackers, cybercriminals, malware infections, and other external threats, continue to be at the centre of the news agenda. Headlines about the loss of millions of data records as part of a security breach, and the potentially devastating consequences of a cyberattack are a common occurrence.
Picture: Shutterstock

Worryingly, what we see covered in the media does not include smaller, targeted attacks, which were responsible for the majority of the £475 billion lost to cybercrime just last year – a financial figure that is only set to increase as we move towards an integrated digital economy.

According to the US telecoms conglomerate Verizon’s 2018 Data Breach Investigations Report, 30 per cent of breaches come from within an organisation rather than from an outside source. Nearly half of these insider breaches are intentional, while the rest are accidental. The vast majority of cyberattacks still fly under the radar.

From a security perspective, protecting against an insider compromise is quite different from defending against an external attack, which typically involves abnormal data flowing to an unusual destination and is therefore hard to disguise.

Gaining access to vulnerable devices and systems or escalating network privilege are generally much easier to perform from the inside.

Many security systems simply do not pay that much attention to what a known user is doing – especially in an environment built around implicit trust, or one where the majority of security resources are focused solely on perimeter control, via antivirus and firewalls.

Identifying malicious internal risks

An insider can be an employee, former employee, contractor, business associate, or a sophisticated attacker pretending to be an employee. Insiders may have legitimate access to computer systems, but what appears to be authorised access could actually be a user accidentally or intentionally misusing credentials to harm the organisation. A negligent insider could give improper access to others simply through lack of training or proper controls, while a malicious insider could attempt to steal information for financial gain, to benefit another organisation or country, or to exact revenge, for example through malicious software left running after leaving the company. This is not just a theoretical threat: whether recognised or not, it is exposing many businesses today.

Insider attacks can result in the theft of valuable data and Intellectual Property (IP), the exposure of potentially embarrassing or proprietary data to the public or competitors, and the hijacking or sabotaging of databases and servers. Customer and employee information, including personally identifiable information (PII) and personal health information (PHI), is a favourite target because it has the highest resale value on the Dark Web. IP and payment card information are the next most popular types of data to steal.

Because insiders already have continuous and trusted access, attacks and data exfiltration can happen over weeks or even months, giving an attacker more time to plan their strategy, cover their tracks, disguise data so it is difficult or impossible for security tools to identify, and keep data movement below the threshold of detection. Insiders can also take advantage of inconsistent security enforcement across ecosystems by moving data between core and multi-cloud environments to evade detection.
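To see why "low and slow" data movement defeats per-event thresholds, consider a minimal sketch of a cumulative detector. The class name, window size and byte limit below are illustrative assumptions, not any vendor's actual policy: the point is simply that summing outbound transfers over a long window can catch many small transfers that each stay under a single-event limit.

```python
from collections import deque


class SlowExfilDetector:
    """Illustrative sketch: flag cumulative outbound transfer over a
    long sliding window of events, catching 'low and slow' exfiltration
    where every individual transfer stays under a per-event threshold.
    Window size and limit are hypothetical policy values."""

    def __init__(self, window_events=1000, cumulative_limit=500_000_000):
        # deque with maxlen automatically discards the oldest events,
        # so the window slides forward as new transfers are recorded
        self.window = deque(maxlen=window_events)
        self.limit = cumulative_limit  # bytes allowed across the window

    def record(self, bytes_out):
        """Record one outbound transfer; return True if the cumulative
        total across the window now exceeds the limit."""
        self.window.append(bytes_out)
        return sum(self.window) > self.limit
```

A single 999-byte transfer against a 1,000-byte window limit raises no alert, but a steady drip of 11-byte transfers eventually trips the cumulative check, which is exactly the pattern a per-event rule misses.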

The added risk of the negligent insider

It is not unusual for organisations to give certain users more privilege than they have the skill to manage.

An executive who insists on being given escalated privilege to a database, for example, can do something as simple as change a field length and cause critical applications to malfunction. Whether such users are unaware of basic precautions for handling sensitive applications or information, are error-prone, or are simply careless, in most cases they do not intend any harm.

Data loss or exposure, however, does not have to be the result of the improper granting of privilege. Losing mobile devices, laptops, or thumb drives, failing to wipe discs and hard drives on discarded hardware, or even giving away business information when chatting on social networks, can result in mistakes that can be as costly as the deliberate attacks of others.

Eliminating security blind spots

The insider threat is not an easy problem to solve, mainly because of unknown unknowns, i.e. blind spots, which require organisations to collect the right type of data at the endpoint. To address this, many are turning to machine learning (ML) to fill security gaps.

Many, if not all, security vendors are incorporating some sort of ML into their latest solutions. This is why, in October 2018, Fortinet, a global leader in broad, integrated and automated cybersecurity solutions, acquired ZoneFox Limited, a privately-held cloud-based insider threat detection and response company headquartered in Edinburgh, which has been described as “a rising star of Scotland’s software sector”.

Grown out of the computing department at Edinburgh Napier University, ZoneFox uses ML to provide continuous endpoint monitoring – across desktops, laptops and servers – even when not connected to an organisational network, providing context and high-fidelity granular visibility about each user, file, device, process and online behaviour.

For Fortinet, integrating ZoneFox’s award-winning ML-based threat-hunting technology meant complementing their endpoint security solution to provide improved detection and response capabilities both on-premises and in the cloud, allowing users to better leverage ML to detect anomalous behaviour and provide an even faster response to insider threats.

ML enables organisations to spot patterns of behaviour, or signals in the noise, that would be all but impossible for humans. By combining accurate and essential user behaviour data with ML capabilities, organisations can more accurately monitor users on an endpoint-by-endpoint basis, and can gain deep visibility into what constitutes normal behaviour, and what does not.

Any time a user does something that the system considers outside of normal bounds, the organisation’s cybersecurity team is alerted. Using ML-based technologies, organisations are able to detect even subtle changes in behaviour, gaining a level of security proactivity that is not possible when relying on traditional prevention and detection systems.
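As a rough illustration of this baseline-and-alert idea, the sketch below keeps a running mean and variance of one behaviour metric per user (say, files accessed per hour, a hypothetical choice) using Welford's online algorithm, and flags observations that deviate sharply from the learned baseline. This is a generic anomaly-detection pattern under stated assumptions, not a description of Fortinet's or ZoneFox's actual models.

```python
import math


class BehaviourBaseline:
    """Illustrative per-user baseline of a single behaviour metric.
    Learns a running mean and variance online (Welford's algorithm)
    and flags values more than z_threshold standard deviations away."""

    def __init__(self, z_threshold=3.0, min_samples=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the mean
        self.z = z_threshold
        self.min_samples = min_samples  # learn before alerting

    def update(self, x):
        """Score x against the current baseline, then fold it in.
        Returns True if x looks anomalous for this user."""
        anomalous = False
        if self.n >= self.min_samples:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.z:
                anomalous = True
        # Welford update: incorporate x into the running statistics
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

A user whose metric normally hovers around 10 raises no alerts, while a sudden jump to 100 stands out immediately; in practice a system like this would track many metrics per endpoint and feed the scores to the security team rather than alert on a single number.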

Adding value to cybersecurity

The risk of insider threats is often bigger than we think, especially as networks become larger and more complex. Carelessness and malicious intent are the two major causes, but both can be mitigated. As the ability of ML security solutions to baseline network activity is refined, they can detect anomalous changes in behaviour, and even help identify and prevent certain behaviours before they occur. And since ML solutions largely manage themselves, the overhead of maintaining them is kept to a minimum.

But perhaps most importantly, ML is emerging at an opportune time, because the number of analysts required to sift through data is rapidly outpacing the number available. By automating this task, analysts are freed to focus on areas where they can add value, such as further refining their organisation’s cybersecurity practice.

Paul Anderson is Regional Director UKI for Fortinet.
