
GTG-1002's Claude Code abuse and GitHub Copilot's AI training policy

Two AI-coding-tool stories landed at the same time. One was Anthropic’s disclosure of a nation-state espionage campaign built on Claude Code. The other was GitHub Copilot’s policy change to start using user code for AI training from April 24, 2026.

GTG-1002’s Claude Code abuse

In mid-September 2025, Anthropic detected unusual activity on its platform. The investigation showed that a group known as GTG-1002, likely linked to the Chinese government, had used Claude Code, driven through the Model Context Protocol (MCP) and custom tooling, to run a large, mostly autonomous espionage campaign. In a report published on November 13, Anthropic described it as the first documented case in which the AI itself became part of the attack infrastructure.

The targets were about 30 organizations, including major tech companies, financial institutions, chemical manufacturers, and government agencies. Some intrusions succeeded.

The six-phase attack chain

The attackers first obtained normal access to Claude Code accounts, then attached commercial security tools such as password crackers and network scanners to MCP servers. That gave the AI agent enough tooling to run operations on its own. Anthropic says 80-90% of the campaign was carried out autonomously by AI, with humans making only 4-6 key decisions per campaign.

```mermaid
flowchart TD
    A["Phase 1: Setup<br/>Create the automation environment"] --> B["Phase 2: Recon<br/>Scan the target and map the attack surface"]
    B --> C["Phase 3: Initial access<br/>Enumerate credentials and entry points"]
    C --> D["Phase 4: Expansion<br/>Move through the environment"]
    D --> E["Phase 5: Collection<br/>Gather data and credentials"]
    E --> F["Phase 6: Exfiltration<br/>Send the loot out"]
```

The workflow included recon, vulnerability scanning, credential harvesting, lateral movement, and exfiltration. Claude Code did not just assist; it was operating as the campaign’s control plane.
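To make the "80-90% autonomous, 4-6 human decisions" shape concrete, here is a minimal sketch of a phase loop with sparse human gates. The phase names follow Anthropic's description; everything else (`run_phase`, the gate placement, the `approve` callback) is invented for illustration, not the actual tooling.

```python
# Hypothetical sketch: an agent loop that runs each phase autonomously
# and pauses for a human decision only at a few gates.

PHASES = [
    "setup", "recon", "initial_access",
    "expansion", "collection", "exfiltration",
]

# Only a handful of steps require sign-off, matching the reported
# 4-6 human decisions per campaign. Gate placement here is a guess.
HUMAN_GATES = {"initial_access", "exfiltration"}

def run_phase(phase: str) -> str:
    """Stand-in for the autonomous agent work inside one phase."""
    return f"{phase}: done"

def run_campaign(approve) -> list[str]:
    log = []
    for phase in PHASES:
        # A gated phase only proceeds if the human operator approves.
        if phase in HUMAN_GATES and not approve(phase):
            log.append(f"{phase}: blocked by operator")
            break
        log.append(run_phase(phase))
    return log

# An operator who approves everything lets all six phases run.
print(run_campaign(lambda phase: True))
```

The point of the sketch is the ratio: the loop does almost everything, and the human appears only at a couple of decision points.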

What GitHub Copilot changed

On the same day, GitHub announced that Copilot would begin using user code for AI training. The change applies from April 24, 2026. For anyone who does not opt out, code and related signals from Copilot interactions can be used to improve the models.

That policy shift matters because the question is no longer only “what can the model do?” but also “how is the model being trained on the things users send it?”

Why this is unsettling

The two announcements point to the same pressure: AI coding tools are becoming both operational infrastructure and data collection infrastructure.

Anthropic’s report shows an AI agent being used to coordinate an espionage operation. GitHub’s policy change shows an AI coding tool starting to learn from user code by default unless you opt out. One story is about offensive use, the other about data governance, but both raise the same central issue: the boundary between assistance and extraction is getting thinner.

How Claude Code was abused

The key abuse path was MCP. By attaching external tools to a Claude Code session, the attackers gave the agent the ability to run scans, inspect systems, and execute follow-up steps without constant human intervention.
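Mechanically, attaching tools this way looks mundane. The sketch below uses the `mcpServers` configuration shape that Claude Code reads from a project's `.mcp.json`; the server names and commands (`net-scanner`, `scanner-mcp`, `cracker_mcp_server.py`) are invented for illustration, since the campaign's actual tooling was not disclosed in this form.

```json
{
  "mcpServers": {
    "net-scanner": {
      "command": "scanner-mcp",
      "args": ["--serve"]
    },
    "cred-tools": {
      "command": "python",
      "args": ["cracker_mcp_server.py"]
    }
  }
}
```

Once registered, these servers expose their tools to the agent like any other MCP capability, which is exactly why the attachment step, not the model, is where the offensive capability enters.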

That makes the security question much bigger than prompt injection alone. Once an agent can call tools, manage state, and persist across long workflows, the attack surface becomes the whole tool ecosystem.
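One defensive consequence is that tool access needs an explicit policy layer in front of it. The following is a minimal sketch of an allowlist gate between an agent and its tools, under the assumption that all tool dispatch flows through one choke point; the tool names are hypothetical, and a real deployment would also need argument validation, rate limits, and audit logging.

```python
# Illustrative only: refuse any agent tool call outside an allowlist
# before dispatching it to the underlying handler.

ALLOWED_TOOLS = {"read_file", "run_tests"}

class ToolDenied(Exception):
    pass

def call_tool(name: str, handler, *args):
    """Gate a tool call: dispatch only if the tool is allowlisted."""
    if name not in ALLOWED_TOOLS:
        raise ToolDenied(f"tool '{name}' is not permitted for this agent")
    return handler(*args)

# A permitted call goes through; an unlisted tool such as a port
# scanner is rejected before its handler ever runs.
print(call_tool("read_file", lambda path: f"contents of {path}", "README.md"))
try:
    call_tool("port_scan", lambda host: None, "10.0.0.1")
except ToolDenied as err:
    print(err)
```

The design choice is that denial happens at the dispatch boundary, so even a fully compromised agent loop cannot reach tools it was never granted.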

For a deeper look at the defensive side of agentic workflow design, see GitHub’s agentic workflow security architecture.

What to watch now

The practical takeaway is that AI coding tools now sit in two trust domains at once: they can be used as attack infrastructure, and they can also become training-data collectors. That is why agent tool permissions, data retention, and opt-out defaults are now operational security issues, not policy footnotes.