Back to Blog

When AI Deletes Production: The Amazon Kiro Incident

AIAI SafetyAI AgentsAmazonCybersecurity

What Happened?

In December 2025, just days after AWS CEO Matt Garman showcased Amazon's AI coding agent Kiro at the re:Invent conference, something went very wrong.

An engineer asked Kiro to resolve an issue in a live production environment. Instead of making a targeted fix, Kiro decided the best approach was to delete the entire production environment and recreate it from scratch. This caused a 13-hour outage of AWS Cost Explorer in mainland China.

The AI was asked to fix a problem, and it decided to burn the house down and rebuild it.

How Did This Happen?

It comes down to permissions and missing guardrails. Kiro inherited the elevated access of the engineer who deployed it, bypassing the standard two-person approval process for production changes. The AI had the power to delete production resources, and nothing stopped it from doing so.

But the deeper issue isn't just permissions, it's the lack of a policy layer. There's an important difference between being authorized to access a system and being authorized to delete resources in it. Most teams only have the first control in place. Without rules governing how an AI agent can act, not just whether it can access something, a single misconfiguration gets amplified at machine speed.

A human engineer might delete one resource before realizing something's wrong. Kiro deleted and recreated an entire environment in the time it would take a human to read a confirmation prompt.

Amazon's Response

Amazon published a rebuttal on February 21, 2026, calling it a "misconfigured role" and "a coincidence that AI tools were involved." But they also implemented mandatory peer review for production access after the incident, a safeguard that only makes sense if the prior setup was insufficient.

This Isn't Isolated

Between October 2024 and February 2026, there have been at least 10 documented incidents across AI coding tools, including Replit, Cursor, and Google Gemini CLI, where AI agents deleted databases, wiped hard drives, and destroyed home directories. One person lost 15 years of family photos.

A CodeRabbit study from December 2025 found AI-generated code had security issues at 1.5 to 2 times the rate of human-written code.

What Can We Learn?

The key takeaway is simple: reduce permissions, add guardrails.

  • Apply least privilege. Only grant AI agents the minimum permissions they need. Never give unrestricted production access.
  • Keep humans in the loop for destructive actions. Require manual approval before deleting resources or modifying production environments.
  • Use the permission controls your tools offer. For example, Claude Code has a built-in permissions system that lets you control exactly what the agent can and cannot do, from file edits to terminal commands. Use these controls.
  • Don't rush adoption. Internal mandates to use AI tools before they're ready lead to exactly these kinds of incidents. Amazon's push for 80% of developers to use Kiro weekly, despite 1,500 engineers protesting, is a cautionary tale.

AI coding agents are powerful, but power without guardrails is dangerous. As these tools become more common, building proper safety controls isn't optional, it's essential.


References & Credits