Right, so Anthropic just dropped their August threat report and honestly, it's a bit of a wake-up call for those of us in security. The way cybercriminals are using AI now isn't just about better phishing emails. We now have evidence they're literally letting AI run entire operations.

Here's what caught my attention: one attacker used Claude Code to pull off data extortion against 17 organisations in a single month. We're talking hospitals, government agencies, emergency services, the whole enchilada. They weren't just using AI for advice; the AI was making strategic decisions about which data to steal, how much to demand (up to $500K in some cases), and even crafting psychologically targeted ransom notes. That's pretty bloody sophisticated.

Then you've got North Korean operatives using AI to completely fake their way into Fortune 500 tech companies. They're passing technical interviews and actually doing the work once they're hired, all whilst barely being able to code themselves. The AI is essentially their technical brain.

And get this: criminals who can barely string together basic code are now selling ransomware-as-a-service for a few hundred to a couple of thousand dollars. The barrier to entry for serious cybercrime has basically disappeared.

What does this mean for us in defence? Well, if attackers are using AI to move faster and scale their attacks, we need to be thinking about how to use it defensively too. I'm talking about behavioural analytics that can spot these AI-driven attack patterns, automated response systems that can keep pace with machine-speed attacks, and threat intelligence that can connect the dots across an entire campaign in real time. We need automation and AI/ML enrichment across the whole stack.

The thing is, we can't keep thinking about AI threats the same way we think about human-operated ones. These attacks move differently, adapt differently, and frankly, they're getting results.
I think we need to get serious about AI-centered attack and incident databases, by the security community and for the security community. Proper taxonomies and ontologies, proper intel sharing, proper defence. 🫵🏽 https://lnkd.in/eQte4nXb
How Hackers Use AI in Cyber Attacks
Summary
Hackers are increasingly using artificial intelligence to carry out cyber attacks, automating everything from targeted phishing emails to large-scale espionage with minimal human oversight. Artificial intelligence, or AI, refers to computer systems that can mimic human reasoning and decision-making, allowing cybercriminals to launch faster and more complex attacks.
- Strengthen password practices: Use strong, unique passwords and enable multi-factor authentication to reduce the risk of AI-directed attacks on your accounts.
- Review AI integrations: Examine how AI tools are used in your business and ensure they’re properly secured against hidden instructions or misuse by attackers.
- Monitor for unusual activity: Set up systems that can spot unfamiliar patterns or rapid changes, which are often signs of automated AI-driven cyber threats.
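The "monitor for unusual activity" advice above is concrete enough to sketch. Below is a minimal illustration in Python, assuming a simple (timestamp, source) event stream; the window size and request threshold are made-up numbers for demonstration, not a recommended baseline.

```python
from collections import defaultdict, deque

# Illustrative thresholds only: 50+ requests in a 10-second window is far
# above normal interactive use, hinting at machine-speed automation.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 50

def find_machine_speed_sources(events):
    """events: iterable of (timestamp_seconds, source_id) pairs,
    time-ordered per source. Returns the set of sources whose request
    rate exceeds the window threshold at any point."""
    windows = defaultdict(deque)
    flagged = set()
    for ts, source in events:
        win = windows[source]
        win.append(ts)
        # Drop events older than the sliding window.
        while win and ts - win[0] > WINDOW_SECONDS:
            win.popleft()
        if len(win) > MAX_REQUESTS_PER_WINDOW:
            flagged.add(source)
    return flagged
```

A real deployment would baseline per-source behaviour rather than use a fixed threshold, but the core idea is the same: AI-driven activity tends to stand out on rate and regularity long before it stands out on content.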
Artificial intelligence ushers in a golden age of hacking, experts say - Washington Post

Hackers are using AI’s immense capabilities to find ways into more networks — and turn their victims’ AI against them. While many business sectors are still weighing the pluses and minuses of generative AI, criminal hackers are jumping in with both feet. They have figured out how to turn the artificial intelligence programs proliferating on most computers against users to devastating effect, say cybersecurity experts who express deepening concerns about their ability to fend off cyberattacks.

Hackers can now turn AI into a kind of sorcerer’s apprentice, threat analysts say. Something as simple and innocuous as a Google calendar invite or an Outlook email can be used to task connected AI programs with spiriting away sensitive files without tripping any security alarms.

Compounding the problem is the rapid and sometimes ill-considered pace of new AI product deployments, whether by executives eager to please investors or employees on their own initiative, even in defiance of their IT departments. “It’s kind of unfair that we’re having AI pushed on us in every single product when it introduces new risks,” said Alex Delamotte, a threat researcher at security company SentinelOne.

Demonstrations at last month’s Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence. In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network.

A similar attack on Google’s Gemini didn’t even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker’s number, mimicking successful phishing scams.
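The hidden-instruction attacks described in that excerpt can at least be screened for before a document or email ever reaches an AI summarizer. A rough sketch follows; the invisible-character list and phrase patterns are my own illustrative assumptions, and real payloads will evade naive filters like this, so treat it as a demonstration of the idea rather than a defence.

```python
import re

# Zero-width and formatting characters commonly used to hide text from
# humans while remaining visible to an LLM (illustrative subset).
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

# Crude, assumption-laden patterns for instruction-like phrasing.
SUSPICIOUS = re.compile(
    r"(ignore (all |any )?(previous|prior) instructions"
    r"|send .{0,40}(password|credential)"
    r"|do not (mention|reveal))",
    re.IGNORECASE,
)

def screen_for_hidden_instructions(text):
    """Return a list of findings; empty means nothing matched."""
    findings = []
    if any(ch in INVISIBLE for ch in text):
        findings.append("invisible characters present")
    if SUSPICIOUS.search(text):
        findings.append("instruction-like phrasing")
    return findings
```

The more robust mitigation is architectural (treating all retrieved content as untrusted data, never as instructions), but a pre-filter like this illustrates why "the summary executed the attachment's orders" is a data/instruction confusion problem, not a malware problem.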
The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight. Already, security company Guardio has tricked Perplexity's agentic Comet browser into buying a watch from a fake online store and into following instructions from a fake banking email. #cybersecurity #AI #AIPowered #Hacking #AgenticAI
-
AI Breaks the Barrier: Anthropic Reports First Largely Autonomous Cyberattack

Introduction
Anthropic has uncovered what may be the first documented cyber espionage operation executed primarily by AI. A sophisticated Chinese-linked campaign hijacked the company’s Claude Code system to automate high-scale hacking operations, signaling a new and dangerous era in global cyber conflict.

How the Attack Worked
• Attackers used jailbreak techniques to bypass Claude’s protections by disguising their activity as legitimate defensive work.
• Claude Code was instructed to map targets, identify vulnerabilities, and generate exploit code.
• The AI harvested usernames, passwords, and classified data with minimal human oversight.
• It categorized stolen information by intelligence value, created backdoors, and documented the full operation for future use.
• Anthropic estimates 80 percent of the campaign was executed autonomously.
• The system hallucinated at times, but the scale and accuracy rate were still enough to break into several organizations.

Why This Campaign Stands Out
• Targets included government agencies, major tech firms, banks, and chemical companies.
• This marks the first known case of AI-driven espionage executed at global scale.
• The attackers exploited the speed, persistence, and parallelization capabilities of “agentic” AI systems.
• Previous incidents involved AI assisting criminals; this one shows AI conducting the majority of the kill chain on its own.
• Anthropic immediately strengthened its detection systems after discovering thousands of automated requests per second.

Growing Pattern of AI-Enabled Cybercrime
• Follows earlier misuse of Claude for ransomware creation and North Korean infiltration schemes.
• Highlights how rapidly AI tools are being weaponized despite safeguards.
• State-sponsored actors are now demonstrating operational integration of autonomous systems.
Conclusion: A Turning Point in Cybersecurity
This incident signals a structural shift: AI is no longer just a tool for hackers but an increasingly independent operator capable of executing complex, multi-phase attacks. The implications are profound. Defense systems must evolve toward real-time anomaly detection, continuous threat sharing, and stricter AI safety standards. The pace of offensive innovation is accelerating, and global security frameworks must move just as fast.

I share daily insights with 33,000+ followers across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation. Keith King https://lnkd.in/gHPvUttw
-
Commercial AI services are no longer just productivity tools—they're becoming force multipliers for threat actors of all skill levels. Our Amazon Threat Intelligence team recently observed this firsthand: a Russian-speaking attacker with limited technical capabilities used off-the-shelf AI to compromise over 600 enterprise security devices across 55+ countries in just five weeks. Poor operational security on their part gave us a rare window into exactly how they worked. This wasn't a sophisticated state-sponsored operation. The attacker used AI like an assembly line for cybercrime—generating custom tools, creating step-by-step attack plans, and automating reconnaissance at a scale that would have previously required an entire team of skilled operators. When they hit well-defended targets, they moved on rather than persisting. Their advantage wasn't technical depth; it was AI-augmented speed and efficiency against organizations with basic security gaps: exposed management interfaces, weak passwords, and missing multi-factor authentication. Here's what matters: the fundamentals still work. Organizations with strong credential hygiene, MFA, and proper network segmentation successfully blocked these attacks. And while AI is lowering the barrier to entry for attackers, it's an equally powerful tool for defenders—helping security teams detect threats faster, automate response at scale, and stay ahead of evolving tactics. As attack volumes grow from both skilled and unskilled adversaries, the same defensive basics that protected against this campaign will remain your most effective countermeasure. Read the full technical analysis to see what AI-aided threat actors look like on the ground and how to defend your organization: https://lnkd.in/gKae33VV
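The "fundamentals still work" point above lends itself to a small illustration: the campaign succeeded mainly against accounts missing MFA or using weak credentials, which is exactly the kind of gap a routine inventory audit surfaces. The sketch below assumes a made-up account-record shape (`name`, `mfa_enabled`, `password`); no real IAM system exposes plaintext passwords like this, so it is purely a demonstration of the audit logic.

```python
# Tiny illustrative deny-list; real audits check against breached-password
# corpora at password-set time, not stored plaintext.
COMMON_PASSWORDS = {"password", "123456", "admin", "letmein", "welcome"}

def audit_accounts(accounts):
    """accounts: iterable of dicts with hypothetical keys
    'name', 'mfa_enabled', 'password'. Returns (name, issue) pairs."""
    issues = []
    for acct in accounts:
        if not acct.get("mfa_enabled"):
            issues.append((acct["name"], "MFA disabled"))
        if acct.get("password", "").lower() in COMMON_PASSWORDS:
            issues.append((acct["name"], "common password"))
    return issues
```

The design point mirrors the post: an AI-augmented attacker moving on from well-defended targets means the organizations that pass a check like this are, in practice, the ones that didn't get breached.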
-
New findings from OpenAI reinforce that attackers are actively leveraging GenAI. Palo Alto Networks Unit 42 has observed this firsthand: we've seen threat actors exploiting LLMs for ransomware negotiations, deepfakes in recruitment scams, internal reconnaissance, and highly tailored phishing campaigns. China and other nation-states in particular are accelerating their use of these tools, increasing the speed, scale, and efficacy of attacks. But we've also seen this on the cybercriminal side. Our research uncovered vulnerabilities in LLMs, with one model failing to block 41% of malicious prompts. Unit 42 has jailbroken models with minimal effort, producing everything from malware and phishing lures to instructions for creating a molotov cocktail. This underscores a critical risk: GenAI empowers attackers, and they are actively using it. Understanding how attackers will leverage AI to advance their attacks, and how they will exploit AI implementations within organizations, is crucial. AI adoption and innovation are occurring at breakneck speed and security can't be ignored. Adapting your organization's security strategy to address AI-powered attacks is essential.
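Figures like "failed to block 41% of malicious prompts" come from red-team harnesses that replay a prompt set against a model and score refusals. A bare-bones sketch of that measurement loop follows; `query_model` is a stand-in for whatever model interface is under test, and the refusal markers are a crude heuristic of my own (serious evaluations use a classifier or human review, since keyword matching over- and under-counts).

```python
# Illustrative refusal markers; real harnesses grade responses with a
# separate judge model or human raters.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

def refusal_rate(prompts, query_model):
    """Fraction of prompts the model appears to refuse.
    query_model: callable mapping a prompt string to a response string."""
    refused = 0
    for prompt in prompts:
        reply = query_model(prompt).lower()
        if any(marker in reply for marker in REFUSAL_MARKERS):
            refused += 1
    return refused / len(prompts)
```

Running a harness like this against your own deployed models, before attackers do, is the practical takeaway of the Unit 42 findings.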
-
🚨 AI in Offensive Cybersecurity: Two Significant Incidents in the Last 24 Hours 🚨

1. The First AI-Generated Ransomware: #PromptLock
ESET researchers Anton Cherepanov and Peter Strycek have uncovered PromptLock, the first known ransomware powered by artificial intelligence. Likely in its proof-of-concept (PoC) or development stage, PromptLock exploits OpenAI's gpt-oss:20b model via the Ollama API to dynamically generate Lua scripts on the fly. These scripts are used for reconnaissance, data exfiltration, and file encryption, making the malware inherently adaptable across Windows, Linux, and macOS systems.

2. s1ngularity Supply Chain Attack: Weaponizing AI Developer Tools
Eight malicious versions of the popular Nx build system were pushed to npm, introducing malware that abused AI developer tools like Claude, Gemini, and Amazon Q for system reconnaissance and sensitive data exfiltration. The attack targeted SSH keys, npm tokens, environment variables, and cryptocurrency wallet artifacts, amplifying the threat due to Nx's widespread use in JavaScript and TypeScript ecosystems.

Why These Incidents Matter
These incidents demonstrate that AI-powered attacks are no longer hypothetical. They are here, actively enabling new levels of automation and adaptability for attackers while reducing the technical barriers to writing malicious code. It is safe to assume that we will see an increase in these types of attacks in the near future.

Apart from the prompts you can see below, I'm attaching more context about both attacks (including IOCs and mitigation guidance) in the comments.

#Cybersecurity #AISecurity #SoftwareSupplyChainSecurity #OffensiveAI
-
Anthropic just disclosed the first documented case of a large-scale cyberattack executed with minimal human intervention. Claude Code was used to target 30+ organizations, succeeding in several cases. The degree of autonomy and the speed of the attack show how fundamentally cybersecurity is shifting in the era of AI agents.

Human intervention was still required at 4-6 critical decision points per hacking campaign. Beyond that, the AI performed 80-90% of the campaign autonomously: reconnaissance, vulnerability identification, exploit code generation, credential harvesting, and data exfiltration. At peak activity, it made thousands of requests (multiple per second), a speed impossible for human teams.

The attackers bypassed Claude's safeguards through jailbreaking: breaking operations into seemingly innocent micro-tasks and claiming they were conducting legitimate defensive security testing.

Each attack had three phases (see image below):

1. Initial Setup & Jailbreaking
Human operators selected targets and built an automated attack framework using Claude Code, using the jailbreak described above to get past its safeguards.

2. Reconnaissance & Vulnerability Analysis
Claude autonomously inspected target systems, identified high-value databases, and researched security vulnerabilities, in a fraction of the time a human team would need. It then wrote custom exploit code and reported findings back to operators.

3. Exploitation & Exfiltration
The AI harvested credentials, identified the highest-privilege accounts, created backdoors, extracted and categorized private data by intelligence value, then produced comprehensive attack documentation for future operations.

This incident shows that the barriers to sophisticated cyberattacks have dropped dramatically.
Less-resourced and less-skilled groups can now potentially execute operations in minutes that previously required entire teams of experienced hackers working for days. It is interesting to see how coding skill, one of the strongest seemingly benign capabilities of recent LLMs, is central to these attacks. Of course, the same capabilities that enable these attacks can also be used for defence: Anthropic's own Threat Intelligence team used Claude extensively to analyze the enormous dataset from this investigation. The fundamental question is how to ensure that defensive capabilities stay ahead and that human teams can use them effectively. Detailed reporting of these incidents is the first step. Check out the blog post and the full report in the comments for more details. #ai #security #genai
-
Anthropic just confirmed the first real AI-operated cyber attack. I think it's time to stop saying "AI helps attackers" and acknowledge something less comfortable: AI is now capable of being the operator. Anthropic's GTG-1002 report makes this clearer than anything we've seen so far.

Not an AI suggesting payloads. Not an AI generating syntax. An AI model running almost the entire kill chain on its own.

Here's what actually happened: the attackers built an orchestration layer around Claude using MCP servers. Claude executed standard security tools: network scanners, exploit frameworks, DB exploitation kits, password crackers, code-analysis engines. The model decomposed the operation into small tasks and chained them: recon → vulnerability discovery (including SSRF) → exploit development → exploitation → lateral movement → data exfiltration. It maintained long-term state across days, resumed campaigns, and adapted its actions based on each new finding. Humans stepped in only to approve escalation points. Roughly 80-90% of the tactical work was autonomous.

The attack wasn't about fancy malware. It was about identities, permissions, and access across SaaS and internal systems: harvesting credentials, mapping privilege boundaries, and pivoting using whatever identity gave the most reach.

At the same time, organizations are rapidly adopting AI internally (copilots, agents, SaaS-to-SaaS automations), and without real visibility into who is using what, with which permissions, and why, internal AI activity can become just as risky as an external threat.

The disruptive capability here wasn't the tooling; it was the automation of the entire attack process: a model planning, running, validating, and iterating on a cyber campaign by orchestrating commodity tools at machine speed.

If until now we talked about "putting AI in the SOC," this report shows the opposite side of the equation: the offensive world has already stepped into autonomous orchestration.
And anyone building modern defense needs to think as if the adversary on the other side already looks like this, because now, they do.
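The "whatever identity gave the most reach" observation above can be modelled crudely on the defensive side: given your permission grants, rank identities by how much access a compromise of each would yield, and harden the top of the list first. The (identity, resource) pair format below is an assumption for illustration; real environments also need transitive reach through role assumption, which this sketch omits.

```python
from collections import defaultdict

def rank_identities_by_reach(grants):
    """grants: iterable of (identity, resource) permission pairs.
    Returns identities sorted from widest to narrowest direct reach,
    i.e. the most attractive pivot targets first."""
    reach = defaultdict(set)
    for identity, resource in grants:
        reach[identity].add(resource)
    return sorted(reach, key=lambda ident: len(reach[ident]), reverse=True)
```

Over-privileged service accounts almost always float to the top of a ranking like this, which is exactly the class of identity the GTG-1002 operator pivoted through.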
-
The AI Cyber Arms Race: What GTG-1002 Means for Your Organization

The Watershed Moment We Knew Was Coming
Anthropic just confirmed what security experts have been expecting: the first AI-orchestrated cyber attack is now a reality. In mid-September, Chinese state-sponsored actors (GTG-1002) weaponized Claude Code and Model Context Protocol against approximately 30 major organizations spanning tech, finance, chemical manufacturing, and government sectors. Multiple targets were successfully breached.

This wasn't AI assisting humans. This was AI doing the work. The attackers used role-play prompts to make Claude believe it was conducting legitimate security testing. From there, the AI executed nearly the entire attack chain:
- Mapping attack surfaces and scanning infrastructure
- Identifying vulnerabilities and researching exploits
- Developing custom payloads and exploit chains
- Harvesting and validating credentials
- Escalating privileges and moving laterally
- Querying systems and sorting valuable data

What humans did: spent 2-10 minutes reviewing each phase before authorizing the next step. That's it. Near-autonomous execution with minimal oversight.

Why "We're Not a Target" No Longer Applies
The uncomfortable truth is that attacks historically required significant effort, so threat actors focused on high-profile targets. That calculation just changed. When AI can handle the tactical work at scale, attackers can extend sophisticated campaigns to smaller organizations that are typically even less prepared to defend themselves. The old logic: "We're too small to be worth their time." The new reality: every organization can be an economic target when AI does the heavy lifting.

The (Brief) Good News
The AI did hallucinate during operations, claiming credentials that didn't exist and flagging public information as critical discoveries. These errors required human validation, slowing the attack process. But this is cold comfort.
GTG-1002 still breached multiple high-value targets with minimal human effort.

The Only Path Forward: AI vs. AI
The critical insight here is that the same technology that enables these attacks also powers the defense. Organizations need AI-driven security capabilities to:
✓ Analyze mission-critical volumes of data to ensure integrity
✓ Detect anomalies and threats before they spread
✓ Disrupt attacks in progress
✓ Enable rapid recovery to minimize disruption and data loss

Attackers are using AI to scale compromises. Defenders need AI to scale resilience.

The Bottom Line
We've entered a time when AI adoption in cybersecurity is table stakes. Organizations that delay implementing AI-powered detection, response, and recovery capabilities are choosing to fight tomorrow's battles with yesterday's tools. The question isn't whether to use AI in your security operations; it's how quickly you can deploy it effectively. Thoughts?
-
🚨 Autonomous cyberattacks have moved from theory to practice. Anthropic’s latest report details a campaign where the threat group GTG-1002 used Claude to run most of a multi-stage intrusion with almost no human touch. The system mapped vulnerabilities, generated exploits, moved across networks, and exfiltrated data at a pace no human team could match.

Two points matter:
1️⃣ AI acted as an operational agent, not an assistant
2️⃣ Deception, not technical failure, opened the door

The attacker simply convinced the model it was running legitimate defensive tests. Once the framing held, guardrails slipped.

For security leaders, the message is direct. Defenses built for human-paced attacks will not hold against autonomous execution. Detection must shift toward pattern recognition across phases, assets, and time, not individual requests.

I put together a short deck breaking down the GTG-1002 operation and the implications for cyber strategy in an AI-driven threat environment.

#Cybersecurity #AI #ThreatIntelligence #AISecuritySafety #CyberEspionage
————-
This post is based on analysis of the Anthropic report “Disrupting the first reported AI-orchestrated cyber espionage campaign” (November 2025). The insights shared here are intended to raise awareness and drive discussion about AI-enabled threats and the need for defensive evolution.
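The call for "pattern recognition across phases, assets, and time, not individual requests" can be sketched as a simple correlation rule: individually benign-looking events are grouped per source, and any source whose activity spans several kill-chain phases inside one time window gets escalated. The phase labels, window, and threshold below are illustrative assumptions, not a tuned rule.

```python
from collections import defaultdict

# Coarse kill-chain phases assumed for illustration; real detections would
# map to a richer taxonomy such as MITRE ATT&CK tactics.
KILL_CHAIN = ("recon", "exploit", "lateral", "exfil")

def spanning_sources(events, window=3600, min_phases=3):
    """events: iterable of (timestamp_seconds, source, phase).
    Returns sources whose events cover >= min_phases distinct kill-chain
    phases within any `window`-second interval."""
    by_source = defaultdict(list)
    for ts, src, phase in events:
        if phase in KILL_CHAIN:
            by_source[src].append((ts, phase))
    alerts = []
    for src, evs in by_source.items():
        evs.sort()
        for ts, _ in evs:
            # Distinct phases seen in the window ending at this event.
            phases = {p for t, p in evs if ts - window <= t <= ts}
            if len(phases) >= min_phases:
                alerts.append(src)
                break
    return alerts
```

No single event here would trip a per-request rule, which is exactly the point of the post: an AI operator chaining micro-tasks only becomes visible when detection correlates across the whole campaign.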