Coinbase Customers Left Hot Under the Collar After AWS Cloud Data Center Overheats, Disrupting Service

A multi-zone AWS disruption highlighted how deeply CX now depends on shared cloud infrastructure resilience


Published: May 8, 2026

Nicole Willing

An AWS infrastructure issue quickly rippled through customer experience across cryptocurrency trading, online betting, and financial markets, disrupting Coinbase and FanDuel for several hours and reportedly affecting CME Group-linked systems.

Coinbase acknowledged the issue publicly through its support channels, stating that “customers may be experiencing degraded performance.”

Coinbase customers attempting to trade on the platform faced disruptions across web and mobile services on May 7, and many quickly took to social media platforms like X to express their frustration with the company.

The company later confirmed the disruption was linked to an AWS outage and repeatedly reassured users that “Your funds are safe.”

Coinbase added further context in a post on X, noting that its infrastructure is designed to withstand a single-zone failure, but the incident extended beyond that boundary.

“Coinbase systems are designed to be resilient to a single zone outage, and are designed to recover quickly if this happens. In this case, we observed failures impacting multiple AWS zones, which caused an extended outage of core trading services.”

That design approach reflects standard cloud resiliency planning, but its limits become clear when a disruption spans multiple availability zones within the same region, undermining redundancy strategies intended to isolate single points of failure.
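Coinbase has not published its architecture, but the pattern it describes is a common one: spread identical capacity across every availability zone in a region so that losing any single zone leaves the service running. As a rough, hypothetical sketch only (the AMI ID and instance type below are placeholders, and the raw boto3 calls stand in for what would normally be an Auto Scaling group), zone-spread provisioning might look like this:

```python
# Illustrative sketch only: spread identical capacity across every available
# AZ in a region, so losing one zone (the single-zone failure Coinbase says
# its systems are designed for) does not take down the service.
import boto3

REGION = "us-east-1"
AMI_ID = "ami-0123456789abcdef0"  # placeholder, not a real deployment value
INSTANCE_TYPE = "t3.micro"        # placeholder size

ec2 = boto3.client("ec2", region_name=REGION)

# Find every zone the region currently reports as available.
zones = [
    az["ZoneName"]
    for az in ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )["AvailabilityZones"]
]

# Launch one instance per zone; a production system would express this as
# an Auto Scaling group balanced across AZs rather than direct launches.
for zone in zones:
    ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
    print(f"requested capacity in {zone}")
```

The limitation this incident exposed is visible in the sketch itself: every launch still lands in the same region, so a multi-zone event in US-EAST-1 defeats the redundancy unless capacity also spans regions.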

Coinbase said the primary issues tied to the AWS outage were “fully resolved” as of Friday morning, though the company indicated its investigation remains ongoing.

“Details may change as our investigation progresses and more information is received from AWS’s official retrospective, once published,” Coinbase stated on X.

Multi-Zone Disruption Exposes Cloud Dependency Risk

The outage, which followed widespread disruptions at AWS, Microsoft, and Cloudflare in late 2025, again highlighted the growing customer experience risks tied to hyperscale cloud infrastructure failures.

According to status updates from Amazon’s cloud services arm, the disruption originated from “increased temperatures within a single data center” in Northern Virginia, affecting systems inside the US-EAST-1 region. AWS said it was “bringing additional cooling capacity online” while rerouting traffic away from the impacted availability zone.

The disruption continued on May 8, with AWS stating at 8:58 AM PDT:

“We continue our efforts to work towards the recovery of the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We are making progress towards the restoration of the cooling system capacity that is required to recover the affected hardware in the impacted zone.”

The update added that some customers would continue to see their affected EC2 instances and EBS volumes impaired until the affected racks were brought back online in phases.

“We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources in one of the unaffected zones.” The timeline for full recovery was still expected to be several hours.
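AWS's guidance maps onto a standard recovery pattern: restore the latest EBS snapshot into a healthy zone and attach it to a replacement instance there. A hedged boto3 sketch, with all IDs below as placeholders (and noting that use1-az4 is a zone ID that maps to different zone names in each AWS account), might look like this:

```python
# Sketch of AWS's stated recovery path: restore an EBS snapshot into a
# healthy AZ and attach it to a replacement instance launched there.
# All IDs are placeholders, not values from the incident.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SNAPSHOT_ID = "snap-0123456789abcdef0"  # latest snapshot of the impaired volume
AMI_ID = "ami-0123456789abcdef0"        # replacement machine image
HEALTHY_AZ = "us-east-1b"               # any zone whose zone ID is not use1-az4
                                        # for this account

# Restore the snapshot as a new volume in the unaffected zone.
volume = ec2.create_volume(SnapshotId=SNAPSHOT_ID, AvailabilityZone=HEALTHY_AZ)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Launch a replacement instance in the same healthy zone.
instance = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": HEALTHY_AZ},
)["Instances"][0]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance["InstanceId"]])

# Attach the restored volume to the replacement instance.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=instance["InstanceId"],
    Device="/dev/sdf",
)
```

The approach recovers only to the most recent snapshot, which is one reason snapshot cadence matters when a zone-level failure strands live volumes.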

Coinbase eventually restored trading services after around six hours of disruption, although the company temporarily placed markets into “Cancel Only” mode while systems stabilized.

The disruption comes just days after AWS and Coinbase announced a new partnership tied to AWS AgentCore payments in Amazon Bedrock, with the service being developed alongside Coinbase and Stripe. AWS described the offering as “the first managed end-to-end payment infrastructure for autonomous AI agents,” reflecting the expanding operational relationship between major cloud providers and financial platforms.

CX Risk Grows as Cloud Concentration Deepens

For CX leaders, the outage served as another reminder that customer trust increasingly depends on the resilience of invisible third-party systems. Modern customer experience ecosystems are built on interconnected layers of cloud providers, APIs, identity systems, payments infrastructure, and third-party services, where even localized failures can cascade into customer-facing disruptions.

When cloud outages occur, customers rarely distinguish between the platform they use and the infrastructure provider behind it. The failure becomes part of the brand experience.

That creates significant pressure on support operations during incidents. Social media complaints surged as users reported failed transactions, login issues and delayed transfers while searching for updates across status pages and support channels. Reddit users shared concerns about pending purchases and unavailable wallets as Coinbase support redirected customers to AWS health updates.

Observers noted the outage affected additional financial platforms beyond Coinbase, including trading infrastructure tied to CME Group.

AWS described the event as a temperature-related failure inside a single data center. The company experienced another overheating-related disruption in recent months, adding to growing scrutiny of data center cooling and resiliency as cloud and AI workloads continue to intensify.

Outages now test several customer-facing disciplines simultaneously, from crisis communications speed and transparency of status updates to omnichannel support coordination and recovery messaging once systems return.

As enterprises continue consolidating operations onto hyperscale cloud platforms, the operational question is shifting from whether outages will happen to how effectively organizations communicate through them.
