Artificial intelligence is transforming cybercrime. Attacks are faster, more convincing, more personalised, and more scalable than ever before. This guide explains the key AI-powered threats and exactly how to protect yourself.
51%
of common passwords cracked by AI in under a minute
$25m
lost in a single deepfake video call fraud
5 secs
of audio needed to clone a voice with AI
The Threat Landscape Has Changed Fundamentally
The traditional advice of "look for bad spelling and grammar" is now dangerously outdated. AI tools available to anyone for free can produce flawless, persuasive text in any language, clone voices from a few seconds of audio, and generate convincing video. The volume of attacks has also increased dramatically — AI allows criminals to conduct thousands of personalised attacks simultaneously where before they could only manage dozens. Understanding these new threats is the first step to protecting yourself.
AI-Enhanced Phishing Emails
Critical
Large Language Models (LLMs) like ChatGPT can write perfectly grammatical, highly personalised phishing emails at scale. The old warning sign of poor spelling and grammar no longer applies. AI can scrape your LinkedIn, social media, and company website to craft emails that reference your real colleagues, projects, and recent events — making them almost indistinguishable from legitimate communications.
Real-World Example
In 2024, a Hong Kong finance worker was tricked into transferring $25 million after a deepfake video call appeared to show his CFO and other colleagues authorising the payment.
How to protect yourself
Focus on the request itself, not the writing quality. Is it asking you to do something unusual? Does the email address match the claimed sender's domain? When in doubt, call the person directly using a number you already have — never one provided in the email.
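The domain check described above can be partly automated. A minimal sketch in Python (the addresses and the lookalike domain below are hypothetical examples, not real organisations):

```python
from email.utils import parseaddr

def sender_domain(from_header: str) -> str:
    """Extract the domain of the actual sending address,
    ignoring the display name (which is trivially spoofed)."""
    _display_name, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

# The display name can claim anything; only the part after '@' matters.
# Note the digit '1' in place of the letter 'l' — a classic lookalike trick.
header = '"Jane Smith (CFO)" <jane.smith@examp1e-corp.com>'
print(sender_domain(header))                         # examp1e-corp.com
print(sender_domain(header) == "example-corp.com")   # False — lookalike domain
```

Even a check this simple catches many lookalike-domain attacks, though it cannot detect a compromised genuine account — which is why the call-back advice above still applies.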
Voice Cloning & Vishing
Critical
AI can now clone someone's voice from just a few seconds of audio — easily obtained from a voicemail, a YouTube video, or a social media post. Criminals use this to impersonate family members in distress (the 'grandparent scam'), or to impersonate executives authorising urgent wire transfers ('CEO fraud'). The cloned voice is convincing enough to fool even people who know the person well.
Real-World Example
In 2019, the CEO of a UK energy firm was tricked into transferring €220,000 after receiving a call from what he believed was his parent company's chief executive — the voice had been AI-cloned.
How to protect yourself
Establish a family 'safe word' known only to your family. If you receive an urgent call from a family member asking for money, ask for the safe word. For business calls, always verify financial requests through a separate, established communication channel — never authorise a payment based solely on a phone call.
Deepfake Video Scams
High
AI-generated video can now convincingly put words in anyone's mouth. Deepfakes have been used in video calls to impersonate executives, in fake investment advertisements featuring celebrities (Elon Musk, Martin Lewis), and in romance scams where criminals create a fake video identity over months of communication. The technology is improving rapidly and becoming accessible to non-technical criminals.
Real-World Example
Fake investment advertisements featuring AI-generated videos of Martin Lewis and other celebrities have defrauded thousands of UK residents of millions of pounds.
How to protect yourself
Look for unnatural blinking, inconsistent lighting on the face, blurry edges around the hairline, and audio that doesn't quite sync with lip movements. For video calls, ask the person to turn sideways or make an unexpected gesture — current deepfake tech struggles with unpredictable movements. Never invest based on a celebrity endorsement seen online.
AI-Powered Password Cracking
High
AI models trained on billions of leaked passwords have learned the patterns humans use when creating passwords. They can intelligently predict that 'P@ssw0rd' is a common substitution, that people add numbers at the end, that they capitalise the first letter, and that they use memorable words with predictable modifications. This makes dictionary-based attacks far more effective against passwords that seem 'clever' to humans.
Real-World Example
Research from Home Security Heroes (2023) found that AI could crack 51% of common passwords in under a minute, and 81% in under a month.
How to protect yourself
The defence is genuine randomness. Three genuinely random words (the NCSC's recommended approach), or a password manager generating a truly random string, defeats AI-powered cracking. Predictable patterns — even complex-looking ones — are vulnerable. Use our password checker to test yours.
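As a rough illustration of why randomness defeats pattern-learning, the guess space of three uniformly random words can be estimated directly. The 7,776-word figure below is the size of the EFF large wordlist used for diceware-style passphrases; the six-word mini list is only a stand-in for demonstration:

```python
import math
import secrets

# Three words chosen uniformly at random from a 7,776-word list give
# 7776^3 equally likely passphrases — there is no human pattern for an
# AI model to learn and exploit.
WORDLIST_SIZE = 7776  # size of the EFF large wordlist
entropy_bits = 3 * math.log2(WORDLIST_SIZE)
print(f"three random words: about {entropy_bits:.1f} bits of entropy")

# Illustrative mini word list; a real passphrase must draw from the full list.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern"]
# secrets (not random) is the right module for security-sensitive choices.
passphrase = "-".join(secrets.choice(WORDS) for _ in range(3))
print("example passphrase:", passphrase)
```

The key point: entropy comes from the *selection process* being random, not from the result looking complicated. 'P@ssw0rd!' looks complex but sits in a tiny, well-learned region of guess space.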
AI Chatbot Social Engineering
High
AI chatbots are being deployed to conduct social engineering at scale — engaging potential victims in extended, convincing conversations to build trust before making a fraudulent request. These bots can maintain context over weeks of messaging, making them difficult to distinguish from real people. They are used extensively in romance scams, cryptocurrency fraud ('pig butchering'), and fake job offer scams.
Real-World Example
The 'pig butchering' scam — where criminals build a romantic relationship over months before introducing a fake cryptocurrency investment — has defrauded victims of billions globally, with AI now automating much of the conversation.
How to protect yourself
Be sceptical of any online relationship that moves quickly to requests for money, personal information, or cryptocurrency. Reverse image search profile pictures. Propose a spontaneous video call and make unexpected requests (ask them to turn sideways or hold up a specific number of fingers). Never invest in anything introduced by someone you've only met online.
AI-Generated Fake News & Investment Scams
High
AI can generate convincing fake news articles, social media posts, and even entire fake news websites at scale. These are used to manipulate stock prices, spread political disinformation, create fake urgency around scams, and fabricate celebrity endorsements for fraudulent investment platforms. The content is often indistinguishable from legitimate journalism.
Real-World Example
Fake news articles claiming celebrities had endorsed a cryptocurrency investment platform have been used to defraud UK investors of millions of pounds.
How to protect yourself
Verify news through multiple established sources before acting on it. Use reverse image search on suspicious photos. Check the publication date and the website's domain registration date (a news site registered last month is suspicious). Never invest based on a news article you found on social media.
AI-Powered CAPTCHA Bypass & Account Takeover
Medium
AI tools can now solve CAPTCHAs and other bot-detection systems at scale, enabling automated credential stuffing attacks. When your username and password from one breach are tested against hundreds of other websites automatically, attackers can take over multiple accounts. AI makes this process faster, cheaper, and more effective than ever before.
Real-World Example
Credential stuffing attacks — using leaked username/password combinations from one breach to access other sites — have compromised millions of accounts at major companies including Netflix, Spotify, and PayPal.
How to protect yourself
Use a unique password for every website — a password manager makes this practical. Enable two-factor authentication (2FA) everywhere it's available. Check if your email has appeared in known breaches using our breach checker.
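Breach-checking services such as Have I Been Pwned's Pwned Passwords API use a technique called k-anonymity: your client sends only the first five characters of your password's SHA-1 hash and compares the returned candidate suffixes locally, so the password itself never leaves your machine. A sketch of the client-side step (no network request is made here):

```python
import hashlib

def hash_prefix_and_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest the way the Pwned Passwords
    range API expects: only the 5-character prefix is sent to the server;
    the 35-character suffix stays local and is compared against the
    suffixes the server returns for that prefix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hash_prefix_and_suffix("password")
print("sent to server:", prefix)      # 5BAA6 — reveals almost nothing
print("kept local:", suffix)
```

Because thousands of real hashes share any given 5-character prefix, the server learns essentially nothing about which password you checked.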
AI-Assisted Malware & Ransomware
Medium
AI is being used to write and improve malware code, making it easier for less technically skilled criminals to create effective malicious software. AI can also help malware evade detection by antivirus software by automatically modifying its code. Ransomware attacks — where criminals encrypt your files and demand payment — are increasingly targeting individuals, not just businesses.
Real-World Example
Security researchers have demonstrated that AI tools can write functional ransomware code when given the right prompts, lowering the technical barrier for cybercriminals significantly.
How to protect yourself
Keep all software and operating systems updated. Use reputable antivirus software. Back up your important files regularly to an offline or cloud location. Never open unexpected email attachments. Be cautious about what software you download and from where.
Your AI Security Checklist
These steps will protect you against the vast majority of AI-powered attacks.
Establish a family 'safe word' for emergency calls
Defeats voice cloning scams targeting family members.
Verify urgent financial requests via a separate channel
Never authorise a payment based solely on an email or phone call.
Use a password manager with unique passwords
AI can crack predictable passwords. Random strings defeat it.
Enable 2FA on all important accounts
Even if your password is stolen, 2FA blocks access.
Be sceptical of online relationships that escalate quickly
Especially those that introduce investment opportunities.
Reverse image search profile pictures of new contacts
Reveals if the photo is stolen from elsewhere.
Never trust a video call alone for financial authorisation
Deepfakes can convincingly impersonate known individuals.
Keep all software and operating systems updated
Closes vulnerabilities that AI-assisted malware exploits.
Back up important files regularly
Protects against AI-assisted ransomware attacks.
Report AI-generated scams to the NCSC and Action Fraud
Your report helps protect others and enables takedowns.
Phishing Guide
Learn to spot AI-enhanced phishing emails and social engineering.