HyperAI



Amazon Denies AI Caused Major Outage, Blames Human Error Instead

Amazon has pushed back against a report published by the Financial Times claiming that an AI system caused a major outage in its cloud services. The tech giant clarified the timeline and root cause of the disruption, firmly rejecting the assertion that its AI tools were responsible.

On February 19, 2026, the Financial Times released a story alleging that Amazon Web Services (AWS) suffered a 13-hour outage in mid-December 2025, primarily affecting the Cost Explorer service — a key dashboard used by customers to monitor and manage their cloud spending. The report claimed that an AI-powered "agentic" assistant named Kiro played a central role, allegedly making autonomous decisions to delete an entire environment in an attempt to resolve the issue.

The narrative quickly gained traction, stoking widespread concern about the risks of deploying AI systems in critical infrastructure. Critics worried about the potential for autonomous agents to act without oversight, especially in complex systems like cloud computing.

However, less than 24 hours after the report's release, Amazon issued a public correction. The company stated that the entire account of events was inaccurate and that the outage was not caused by AI. Instead, Amazon attributed the disruption to a human error during a routine maintenance task.

According to Amazon, the incident occurred when an engineer manually executed a command intended to restart a specific service component. Due to a misconfiguration and a lack of proper safeguards, the command inadvertently triggered a cascading failure across multiple interconnected systems, leading to the prolonged downtime. Amazon emphasized that no AI system — including Kiro — was involved in the decision-making process or execution of any actions during the outage. The company also noted that Kiro and similar AI tools are used in development and testing environments, not in production operations, and are subject to strict review and approval protocols.
The company reiterated its commitment to responsible AI use, stating that AI tools are designed to assist engineers, not replace them. It also confirmed that internal reviews are underway to strengthen operational procedures and prevent similar incidents in the future. While the Financial Times report cited four sources, including current Amazon employees, Amazon said that the information provided was incomplete and taken out of context. The company stressed that the incident was a result of human oversight, not autonomous AI behavior. The episode underscores the growing public anxiety around AI’s role in critical systems, even as companies work to ensure transparency and control. Amazon’s swift response highlights the importance of accurate reporting, especially when it comes to high-stakes technology infrastructure.
