There’s no denying that artificial intelligence (AI) has significantly impacted the world, and the cybersecurity landscape is certainly no exception. AI has brought about both positive and negative changes, affecting how we protect our data and how cybercriminals conduct their attacks.
Let’s explore some of the ways AI has transformed cybersecurity—for better and for worse.
AI as a Tool for Cybercriminals
Perfectly Curated Content: AI tools like ChatGPT have provided immense benefits across various industries, speeding up tasks from spell-checking to translations and quick calculations. Unfortunately, cybercriminals have been just as quick to adopt these tools.
Hackers are now leveraging AI to enhance their malicious activities. Take phishing email campaigns, for example. Creating a well-constructed, believable email used to require a significant amount of time, especially to ensure it was grammatically correct and personalized. With AI, this process can now be done in minutes. Text can be accurately translated, emails personalized, and web content generated quickly, making scams more convincing than ever.
In the past, spotting a fake website was relatively easy—poor grammar, obvious typos, or missing key elements like privacy policies or terms and conditions were common giveaways. Now, AI can generate high-quality content at an incredibly fast pace, making it harder for individuals to discern legitimate sites from fraudulent ones.
Enhanced Hacking Tools and Code: AI can also be weaponized to create illicit content. Hackers can jailbreak chatbots, bypassing their built-in safety restrictions to produce content such as ransomware code. Historically, hacking required a certain level of expertise and training. Now, with AI, even those with little experience can quickly get up to speed, using these tools to develop and enhance their malicious capabilities.
Voice and Video Technology: One of the more alarming aspects of AI is its ability to clone an individual’s voice or digital appearance, enabling vishing (voice phishing) scams and deep fakes (videos that appear real but are entirely fabricated).
This technology is particularly concerning because it is so believable. While many people are aware of phishing emails and texts, receiving a phone call from someone who sounds exactly like a friend, colleague, or boss could easily lead to a successful scam.
For instance, there was a case last year where a woman was nearly tricked into handing over a significant sum of money after receiving a call from someone posing as her daughter in a fake kidnapping scenario¹. In another incident, a worker in Hong Kong was scammed into wiring £20 million during a video call, tricked by deep fake technology. The hacker used pre-downloaded videos and voice cloning to convince the employee that they were speaking with their finance officer².
Deep fake technology is particularly dangerous because the familiarity of the person being impersonated often engenders trust. Hackers also exploit emotional triggers to prompt irrational actions, as seen in the fake kidnapping scenario. Moreover, deep fake ads are on the rise. Hackers use false celebrity endorsements to lure victims into downloading malicious apps or entering fake competitions.
Another concerning aspect of deep fakes and voice cloning is their potential use in online harassment, extortion, and cyberbullying. Hackers can create fake recordings, phone calls, videos, and images that could place someone in a compromising situation, with devastating personal and financial consequences, especially if ransoms are involved.
Finally, AI’s ability to bypass facial and voice recognition checks poses a significant threat. Hackers can use deep fakes and voice cloning to gain unauthorized access to accounts, apps, and devices, especially when combined with compromised information to create a convincing story.
The Positive Side of AI in Cybersecurity
While we’ve focused heavily on the negative impact of AI on the cyber threat landscape, it’s important to acknowledge the positives that AI brings to cybersecurity.
Enhanced Threat Protection: AI-powered cybersecurity tools can increase the efficiency and scope of threat intelligence monitoring. These tools can analyze data, identify trends, and scan for potential threats. AI is also becoming more integrated with antivirus solutions to detect phishing emails and malicious websites more effectively.
Training Support: Just as AI helps hackers create well-crafted content, it can also be used to quickly generate cybersecurity resources, training materials, and advice for businesses and individuals. AI-powered chatbots can offer instant support and guidance in response to cyber threats.
Coding Fixes: While hackers may use AI for coding, legitimate tech teams can also leverage AI to assist with code fixes, enhancements, and troubleshooting, making their work more efficient.
Potential for Scalability: The incorporation of AI into cybersecurity processes can lead to greater scalability, allowing businesses to handle larger volumes of work with improved efficiency.
Tips to Stay Protected
- Be cautious of urgency and emotions: Hackers often use these tactics in their attacks. Always question any requests, even if they appear to come from someone you know.
- Verify through another channel: Contact the person through a different means, or meet in person to confirm the request.
- Call customer service: If a message claims to be from a business, call their official customer service line to verify its legitimacy.
- Consider the request: If it involves money or sensitive information, treat it as a red flag.
Did you know that Copic’s medical liability insurance policies include embedded cyber liability coverage?
The coverage is designed to offer protection and support against growing cyber risks, and it also provides access to resources that you can utilize to proactively plan for and prevent cyber breaches.
¹ https://www.theguardian.com/us-news/2023/jun/14/ai-kidnapping-scam-senate-hearing-jennifer-destefano
² https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam
The claims handling and breach response services are provided by Beazley USA Services, a member of Beazley Group. Beazley USA Services does not underwrite insurance for Copic. Policies purchased through Copic are subject to Copic’s underwriting processes. CIC024_US_01/26 © Beazley plc [2026]. Reprinted with permission.
The information provided herein does not, and is not intended to, constitute legal, medical, or other professional advice; instead, this information is for general informational purposes only. The specifics of each state’s laws and of each circumstance may affect its accuracy and applicability; therefore, the information should not be relied upon for medical, legal, or financial decisions, and you should consult an appropriate professional for specific advice that pertains to your situation.
Article originally published in Copic’s Copiscope 1Q26 newsletter.
