
AI and the increased risk of cyberattacks


The Government has this morning released a report evaluating how AI is likely to increase the risk of cyberattacks by 2025, and UK Prime Minister Rishi Sunak delivered a speech on the subject. Matt Cooke, Cybersecurity Strategist at Proofpoint, and Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, discuss the messages from the Prime Minister’s speech.

AI will bring “a transformation as far-reaching as the Industrial Revolution” according to Rishi Sunak

The Prime Minister added that despite positive innovation, there are also “new dangers and new fears” that come with the introduction of AI. Matt believes that while AI tools can be used in nefarious ways, cybercriminals will continue to use tried-and-tested phishing techniques: why reinvent the wheel when they already have something that works?

Cyber threats require human interaction today, but AI will supercharge them

“Today’s threat landscape is characterised by attackers preying on human vulnerability. Research shows that nearly 99% of all threats require some sort of human interaction. Whether it is malware-free threats such as the different types of Business Email Compromise (BEC) or Email Account Compromise (EAC) like payroll diversion, account takeover, and executive impersonation, or malware-based threats, people are falling victim to these attacks day in and day out. And all it takes is one click from one individual for a cybercriminal to be successful.

“There’s no doubt that AI in the hands of cybercriminals can supercharge these threats, increasing the ease, speed, and volume of an attack, while also making social engineering attacks seem even more trustworthy. AI tools can assist cybercriminals in drafting convincing phishing emails, engaging in fraudulent phone calls, and even creating fake imagery to make their lures seem even more convincing, thus making victims more likely to fall for them. For example, attackers may use ChatGPT to apply writing styles and tone or conduct longer-form social engineering attacks.

“As threats evolve, humans alone can no longer scale to sufficiently secure against such attacks. From a defensive point of view, AI and ML are both critical components of a robust cybersecurity strategy. They provide the much-needed analysis and threat identification at scale that can be used by security professionals to minimise attack risk. They are also significantly faster and more effective than mind-numbing manual analysis and can quickly adapt to new and evolving threats and trends.

“One final point to make is that while AI tools can be used in nefarious ways, cybercriminals will continue to use tried and tested phishing techniques, because why reinvent the wheel when they already have something that works?”
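Matt’s point about AI and ML providing “analysis and threat identification at scale” comes down to automated classification: models learn the patterns of malicious messages and score new ones far faster than human reviewers could. The snippet below is a minimal, hypothetical sketch of that idea in Python using scikit-learn, with made-up example emails; it is not Proofpoint’s approach and leaves out the many other signals (sender reputation, URLs, attachments) a real email security product would analyse.

```python
# Illustrative sketch only: ML-based phishing detection at its simplest.
# Real platforms combine many signals; this uses the message text alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written training examples (1 = phishing, 0 = legitimate).
emails = [
    "Urgent: update your payroll details today or your salary will be delayed",
    "Reminder: the team meeting has moved to 3pm in room 4",
    "Your account has been suspended, verify your password immediately",
    "Attached are the minutes from yesterday's board meeting",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a classic baseline text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; a higher probability suggests a likely phishing attempt.
suspect = ["Please confirm your bank details to receive this month's salary"]
print(f"Phishing probability: {model.predict_proba(suspect)[0][1]:.2f}")
```

The value of this kind of model is not the toy example itself but that it can score millions of messages consistently and quickly, which is the “scale” Matt refers to.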

Put people at the heart of AI and educate them on increased risks of cyberattacks

Reacting to the UK Government’s discussion paper on the capabilities and risks of AI, Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, University of Surrey, said:

“AI, as a technology, has been with us for years and serves us daily, from search engines and social media, to drug discovery and combating climate change. We need to ensure that powerful AIs are aligned to human interests; nobody knows how to do that yet.

“The new report is a good summary of the state of AI, but we need to be careful about mixing messages about existential threat with economic opportunity; we need to be clear on both.

“As with all powerful technologies, criminals, terrorists and hostile nation states will find ways to misuse it. The problem isn’t AI, it’s bad people.

“The formation of the AI Safety Institute is a welcome development, especially as they will publish their evaluations publicly. A clear test of its viability will be whether it gets access to the AIs being developed in the US, China, and other countries.

“The importance of education was stressed by the Prime Minister. An understanding of the tremendous potential and pitfalls associated with this powerful technology is fast becoming a life skill. It’s through education that we can get beyond the age-old meme of AI being about killer robots and job losses.

“There are literally hundreds of organisations around the world that have published AI principles, describing how AI should be developed and deployed; however, there is very little progress in operationalising these principles, i.e. making them stick. I would hope that the UK AI Safety Institute looks at practical ways it can bring a principled approach to AI development.”

“There are companies investing billions in AI, primarily in the US and China, outpacing and outscaling anything individual governments can do. We need to recognise that the UK has very limited sovereign control over the development and deployment of Frontier AI, so international collaboration is paramount.”

You may also be interested in reading about how to futureproof your Assistant job with the rise of AI in the office.