Google has published a list of ways AI is currently being used by threat actors to more efficiently hack you

As AI continues to grow and make its way into everyday life, the alleged productivity gains do appear to be showing in some places. It just so happens that hacker groups are one of those places, and Google's Threat Intelligence Group has listed some of the many ways they use it. Welcome to the future.
In its latest report, it says, “In the final quarter of 2025, Google Threat Intelligence Group (GTIG) observed threat actors increasingly integrating artificial intelligence (AI) to accelerate the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development.”
"Our latest GTIG AI Threat Tracker report reveals how adversaries are integrating AI into operations. We detail state-sponsored LLM phishing, AI-enabled malware like HONESTCUE, and rising model extraction attacks. Read the report: https://t.co/6GIqxYxNDF" — February 12, 2026
One such use of AI is making hackers seem more reputable in conversation. "Increasingly, threat actors now leverage LLMs to generate hyper-personalized, culturally nuanced lures that can mirror the professional tone of a target organization or local language."
Google has also spotted AI being used in phishing scams to gather information about potential targets. "This activity underscores a shift toward AI-augmented phishing enablement, where the speed and accuracy of LLMs can bypass the manual labor traditionally required for victim profiling."
This is all before mentioning AI-generated code, with hackers such as APT31 using Gemini to automate vulnerability analysis and draft plans to test those vulnerabilities. The group also spotted 'COINBAIT', a phishing kit masquerading as a cryptocurrency exchange, "whose construction was likely accelerated by AI code generation tools."
Though mostly a proof of concept, Google has reportedly spotted malware that prompts a victim's AI tools to generate code for additional malware. This would make tracking down malware on a machine increasingly hard, as it continues to 'mutate'.
Google says, “The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.”
Just last week, we saw a phishing scam that uses AI to deepfake company CEOs in order to gain access to victims' cryptocurrency. AI is becoming more than just one tool in a hacker's toolbelt, and one has to hope defenders are gathering enough data to counteract it.
