Cloudflare’s Massive Outage: What Really Happened on November 18, 2025
So, let me walk you through what actually unfolded during Cloudflare’s major outage on November 18, 2025 — because it was a huge moment for the Internet, and the story behind it is surprisingly human, messy, and technical all at once.
Everything began around 11:20 UTC, when Cloudflare’s global network suddenly started failing to deliver core traffic. For regular users, this showed up everywhere as those frustrating Cloudflare error pages: sites wouldn’t load, apps stalled, and parts of the Internet basically felt “broken.” And naturally, people wondered: was this a cyberattack? A massive DDoS? As it turns out, nothing malicious triggered it.
Instead, it was a simple-but-devastating internal issue: a change in database permissions. A permissions update on one of Cloudflare’s ClickHouse database clusters caused a metadata query used by the Bot Management system to return duplicate rows, so the feature configuration file it generated came out at roughly double its normal size. That oversized file was automatically pushed to machines all across Cloudflare's global network, and this is where things spiraled.
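To make that failure mode concrete, here’s a minimal, hypothetical sketch of the doubling effect. This is my illustration, not Cloudflare’s actual code or schema: the idea is simply that if the query building the feature list suddenly sees the same columns through a second database, every feature appears twice and the output doubles in size.

```rust
// Hypothetical sketch of the doubling described above; all names and
// values are illustrative, not Cloudflare's actual code or schema.
fn build_feature_file(rows: &[(&str, &str)]) -> String {
    // Each metadata row (feature name, type) becomes one line of the file.
    rows.iter()
        .map(|(name, ty)| format!("{name}:{ty}"))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    // Before the permissions change: one metadata row per feature.
    let before = vec![("bot_score", "Float64"), ("ja3_fingerprint", "String")];

    // After: the same columns are visible through a second database,
    // so an unqualified metadata query returns every row twice.
    let after: Vec<_> = before.iter().chain(before.iter()).cloned().collect();

    println!("normal file:    {} bytes", build_feature_file(&before).len());
    println!("oversized file: {} bytes", build_feature_file(&after).len());
}
```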
The software running on Cloudflare’s frontline proxy servers had a strict size limit for these bot-detection feature files. When the oversized file arrived, the system basically choked: it couldn’t load the file, panicked, and began returning HTTP 5xx errors across the ecosystem. What made this even more confusing is that the file was regenerated every five minutes, and because the permissions change had only reached some of the database nodes, each run produced either a good file or a bad one depending on where the query landed. So the entire network would appear to recover for a moment, then break again. That fluctuation made the team initially think they might be under attack.
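As a rough sketch of that hard limit (the constant, names, and error handling below are assumptions for illustration; the real proxy code is Cloudflare’s own), the pattern looks like a loader that rejects any file above a preallocated capacity, with the rejection surfacing to clients as a 5xx:

```rust
// Hypothetical sketch of a hard feature limit in a config loader; the
// constant and names are assumed for illustration.
const MAX_FEATURES: usize = 200; // assumed preallocated capacity

fn load_features(file: &str) -> Result<Vec<&str>, String> {
    let features: Vec<&str> = file.lines().collect();
    if features.len() > MAX_FEATURES {
        // An oversized file fails the capacity check. If the caller does
        // not handle this error gracefully, the request path breaks.
        return Err(format!(
            "feature file has {} entries, limit is {MAX_FEATURES}",
            features.len()
        ));
    }
    Ok(features)
}

fn handle_request(config_file: &str) -> u16 {
    match load_features(config_file) {
        Ok(_features) => 200, // normal request handling would go here
        Err(_) => 500,        // what users saw: HTTP 5xx across the network
    }
}

fn main() {
    let normal = "bot_score\nja3_fingerprint";
    let oversized = "feature\n".repeat(400);
    println!("normal:    {}", handle_request(normal));     // 200
    println!("oversized: {}", handle_request(&oversized)); // 500
}
```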
Eventually, every database node was producing the bad data, and the outage locked into a stable failing state. By 14:30 UTC, Cloudflare had stopped the propagation of the faulty file and replaced it with a known-good version. From that moment, traffic began flowing again. But the full recovery wasn’t instant: systems had to be restarted, queues cleared, and heavy load worked through. Total restoration came at 17:06 UTC.
During the outage, major Cloudflare services were hit hard: Turnstile authentication wouldn’t load, Workers KV threw errors, the dashboard login failed for many users, and Access authentication basically crumbled unless someone already had an active session. Even spam detection accuracy dipped briefly. And since Cloudflare is deeply embedded across the entire Internet, the outage echoed everywhere.
Cloudflare acknowledged the severity immediately. They were blunt: this should never have happened. They apologized, and they committed to preventing anything similar in the future — including stronger validation systems, more kill switches, and better fail-safes across their proxy architecture.
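A hedged sketch of what one such fail-safe could look like (my illustration of the general pattern, not Cloudflare’s actual design): validate each newly generated config against basic invariants before it propagates, and keep serving the last known-good version whenever validation fails.

```rust
// Illustrative fail-safe pattern: never propagate a config that fails
// validation; fall back to the last known-good version instead.
const MAX_FEATURES: usize = 200; // assumed limit, mirroring the sketch above

struct ConfigStore {
    last_known_good: String,
}

impl ConfigStore {
    /// Accept the candidate only if it passes validation; otherwise keep
    /// the previous version active so one bad file can't take down the fleet.
    fn try_update(&mut self, candidate: String) -> bool {
        if Self::validate(&candidate) {
            self.last_known_good = candidate;
            true
        } else {
            false // fail safe: the old config stays in service
        }
    }

    fn validate(config: &str) -> bool {
        let n = config.lines().count();
        n > 0 && n <= MAX_FEATURES
    }
}

fn main() {
    let mut store = ConfigStore {
        last_known_good: "bot_score\nja3_fingerprint".to_string(),
    };
    let accepted = store.try_update("feature\n".repeat(400));
    println!("oversized accepted: {accepted}"); // false: old config kept
}
```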
In short, one small database permissions change cascaded into one of Cloudflare’s most disruptive outages since 2019. It wasn’t an attack, but it was a reminder of how fragile and interconnected the Internet really is.