A horizon scan
A new (8 October 2024) report from the Parliamentary Office of Science and Technology (POST) on cyber crime and harm, based on a horizon-scan consultation of researchers, reveals significant concern about the expanding landscape of cyber crime and its harms, including:
- entities gaining unauthorised access to digital devices or networks, for example to commit fraud or collect and leverage confidential data to extort money. Entities include individuals, organised criminal groups, states and state-aligned groups
- online bullying
- assault
- cyberstalking
- coercive control
- the spread of mis- and disinformation
- radicalisation
Large-scale survey
The report draws on the April 2024 Cyber Security Breaches Survey, conducted by the Home Office and the Department for Science, Innovation and Technology, of 2,000 businesses and 1,004 charities. It found that half of the businesses and a third of the charities had experienced cyber crime in the previous 12 months, despite organisations increasingly adopting protections against the most common cyber attacks.
The report sets out the wide range of motivations for cyber crime, including financial gain, gathering confidential information, and influencing political discourse. The UK Government has identified China and Russia as the greatest state-based cyber threats, with Iran and North Korea also possessing cyber capabilities.
The report lists a number of key technologies used for cyber crime:
- Cryptocurrencies are a digital means of financial exchange not overseen by a central authority, and are increasingly used by criminals for money laundering, investment fraud and the online trade of illicit goods. Consumer exchanges can be hacked, and there are many cryptocurrency scams, such as fraudsters encouraging consumers to invest in non-existent new coins.
- The metaverse is a range of technologies that allow users to interact with each other in believable virtual worlds. Cyber security risks include identity fraud, virtual assaults, online child sexual exploitation and abuse, the recruitment of people into extremist organisations, and manipulation through the collection of users' personal and biometric data.
- Attackers can use generative AI to quickly produce realistic images and videos, known as 'deepfakes', along with realistic text and responses, and use them to manipulate victims into providing access to systems or information for online fraud.
- Social media is increasingly being used by state actors and other organised groups to gather sensitive political and military information, spread false information online and drive radicalisation, which presents a profound challenge for democratic institutions.
Challenges
The report lists a number of extremely concerning challenges relating to the growth of cyber crime. Unauthorised access to digital devices and networks could breach sensitive datastores, threaten personal privacy and put individuals at risk of physical harm, for example through stalking.
Just this June, 300 million pieces of blood test patient data were exposed from two NHS Trusts in an attack attributed to the hacker group Qilin, thought to be based in Russia.
Analysts at the Internet Watch Foundation are particularly concerned about a rise in AI-generated child sexual abuse material for sale on the dark web. This poses a risk of re-victimisation of known victims, as perpetrators use AI to manipulate existing child sexual abuse material into media featuring famous children and those already known to abusers. Cryptocurrencies are also particularly prevalent in the trade of child sexual abuse material.
Similarly, there have been numerous incidents of deepfake pornographic content of individuals, predominantly women, being shared online, leading to harassment, humiliation and distress. There is also concern that the increasing time we all spend interacting online has led to new forms of online abuse, such as attackers joining video calls to display violent or pornographic material.
Experts from the horizon-scanning consultation also noted the increasing role of online technology in coercive control, emotional abuse, cyberstalking and online bullying, with a survey by the domestic violence charity Refuge finding that 72% of service users had experienced online abuse.
County lines gangs have also increased their online efforts to recruit children into drug dealing via social media.
On a societal scale, many researchers highlighted concerns about the potential impacts of mis- and disinformation on society and democratic institutions, including false claims relating to the coronavirus pandemic, disinformation targeting politicians with the clear aim of influencing elections, and online radicalisation and the spread of extremist ideology.
Potential responses
In addition to identifying the key challenges, the report lists five main opportunities to respond, all of which, in my opinion, demonstrate the scale of the task:
- Employing AI to, for example, identify fraudulent emails.
- Designing technologies to be “secure-by-design and default” which is likely to be absolutely essential for critical UK infrastructure such as telecommunications, supply chains and the energy grid.
- The regulation of AI to try to ensure models are developed safely and protected against malicious use outside their intended purposes, such as generating disinformation.
- Reviewing legislation on a rolling basis to ensure that immersive environments are adequately protected from online harms.
- Limiting the spread of disinformation by attempting to prevent people from engaging with it and by producing good information.