
Cloudflare Down: Many Parts of the Internet Just Stopped Working


  • Author

    Claire Vinali

  • Published

    18 Nov 2025

  • Reading Time

    22 mins

One in five of the world’s top million sites use Cloudflare. Today, many of these sites in Australia have slowed or stopped working. This outage has caused delays and errors in online activities.

We’re keeping an eye on how this outage affects online services. It’s causing problems with logins, payments, and app connections. This is affecting many areas, from shopping to healthcare and government services.

Our team is monitoring the Cloudflare status page and other important signals. We’re also checking updates from major internet providers. We aim to understand the extent of the problem and find ways to fix it quickly.

Our goal is to help leaders and engineers deal with the situation. We want to identify the main issues and find solutions. Recovery will be slow, but we’re working to get things back to normal as soon as possible.

Key Takeaways

  • Wide Cloudflare outage impacting sites, apps, and APIs across Australia.
  • Symptoms include failed logins, payment errors, and intermittent timeouts.
  • We’re tracking Cloudflare status plus ISP telemetry for verified signals.
  • Impact varies by region, provider, and device; mobile and NBN may differ.
  • Focus now: triage, communicate clearly, and stabilise critical user flows.
  • Expect staggered recovery as routes converge on a steady path.

Breaking: Widespread Cloudflare outage disrupts services across Australia

Reports are flooding in as Cloudflare goes down across major cities. Sites that use Cloudflare are slowing down. This is causing big problems for online stores, media, and software services.

What’s happening right now

Many Australian websites and apps are timing out or showing errors. Pages load slowly, API calls fail, and some domains won’t resolve. It seems like the edge is unstable, not completely down.

  • Frequent 502, 522, and 525 errors on Cloudflare‑proxied sites
  • Intermittent success on refresh, suggesting partial path degradation
  • Checkout and authentication steps failing under load

Cloudflare problems are hitting e-commerce, banking, media, and SaaS platforms, and the failures are occurring at the edge.

Early reports from users and ISPs

Users are seeing mixed results, which points to regional issues. Social threads from Telstra, Optus, and Aussie Broadband customers suggest upstream network problems on the path to Cloudflare.

  • Elevated timeouts hitting Cloudflare endpoints
  • Some mobile sessions succeed while fixed lines stall
  • ISP advisories cite third‑party network impacts

Where Cloudflare is down, we see more retries and abandoned sessions. This suggests packet loss and SSL handshake failures at busy points.

Initial timelines and first confirmed impacts

The outage started quickly in Sydney, Melbourne, Brisbane, Perth, and Adelaide. The impacts were wide and immediate.

  • E-commerce frontends failing to render product and cart views
  • Media sites returning gateway errors during peak traffic
  • SaaS dashboards unreachable and API‑driven mobile apps degraded
  • Authentication flows affected where Cloudflare Access or Turnstile sits in path

Businesses are seeing more cart abandonment and support tickets as Cloudflare issues continue. Cloudflare problems are also affecting DNS resolution for proxied domains. This makes things worse when Cloudflare is down and traffic is high.

Cloudflare status and official communications

We keep an eye on the cloudflare status portal for updates. When there’s a cloudflare outage, we look at the incident state and region notes. This helps us tell our teams what’s happening and what to do.

Latest updates from the Cloudflare status page

The dashboard shows components like DNS and Network. Each incident has a UTC timestamp and a status like Identified or Resolved.

  • Check component threads to see if they match your stack.
  • Map update times to your alerts to confirm.
  • If issues persist after a Resolved note, suspect lingering cache problems or origin overload on your side.

We watch the cloudflare status feed and our monitors. This helps us know if the outage is Cloudflare’s or a local issue.

Statements from Cloudflare engineering and PR

Cloudflare’s engineering and PR share updates during outages. They acknowledge the impact, explain what they’re doing to fix it, and when we can expect more news.

  • After things stabilise, they usually share what caused the problem.
  • They talk about routing, DNS paths, and edge reachability.
  • Use their updates to plan your internal comms.

We share these updates with our clients. This way, they see a clear picture of the outage and any server issues.

How to interpret incident severity and components affected

Severity labels tell us how urgent it is. Degraded Performance means slower speeds and occasional timeouts. Partial Outage affects some PoPs or products. Major Outage means widespread errors.

  • DNS: can cause random failures in domain resolution.
  • Network/Edge: affects how well Anycast reaches and routes.
  • Cache/Proxy: can cause 502 or 522 errors when origin paths fail.
  • Zero Trust/Access: can block remote staff from apps.

We match our runbooks to the components on the cloudflare status page. This ensures our responses are up-to-date with the outage and server issues.
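
To make that mapping concrete, here is a minimal Python sketch; the component names mirror the wording on the status page, while the runbook actions are hypothetical placeholders for your own procedures:

# Hypothetical mapping from Cloudflare status components to internal runbook steps.
# Component names follow the status page; the actions are placeholders for your own playbooks.
RUNBOOK = {
    "DNS": ["Switch critical lookups to a secondary resolver", "Lower TTLs on key records"],
    "Network": ["Check Anycast reachability from each capital city", "Compare mobile and NBN paths"],
    "Cache/Proxy": ["Watch for 502/522 spikes", "Prepare origin scaling and cache-everything rules"],
    "Zero Trust/Access": ["Publish fallback sign-in guidance for staff", "Relax non-critical policies temporarily"],
}

def actions_for(affected_components):
    """Return the runbook steps that match the components flagged on the status page."""
    steps = []
    for component in affected_components:
        steps.extend(RUNBOOK.get(component, ["No runbook entry for " + component + "; escalate manually"]))
    return steps

if __name__ == "__main__":
    # Example: the status page flags DNS and Cache/Proxy as degraded.
    for step in actions_for(["DNS", "Cache/Proxy"]):
        print("-", step)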

Cloudflare Down

When we say Cloudflare Down, it usually doesn’t mean a complete shutdown. Instead, we often see small issues like edge routing problems, DNS glitches, or proxy issues. These can cause cloudflare errors in our browsers. The impact can vary depending on where you are and how your internet service provider connects to Cloudflare’s network.

In the browser, we might see messages saying the gateway is down, failed SSL connections, slow loading, and then timeouts. Apps can have problems too, like API calls failing, webhooks not working, and sign-in issues. If Cloudflare is down for back-office tools, dashboards might not work, and Workers or KV can be slow.

For businesses, this can mean delayed sales, missed opportunities, and pressure on service level agreements. Teams might have to use manual workarounds or phone support while engineers fix the cloudflare error.

Technically, Cloudflare Down usually points to misrouted traffic, congested links, or a misconfiguration. How quickly fixes propagate varies by carrier and city, even after the core issue is resolved.

Our goal is to keep services running while the network stabilises, and to reduce risk when a Cloudflare failure makes our services hard to reach.

Symptom | User Impact | Likely Layer | Business Risk
Gateway messages or timeouts | Pages stall or fail to load | Edge proxy and routing | Lost sessions and cart drop‑off
SSL handshake failures | Secure pages rejected | TLS termination at the edge | Trust issues and support load
API and webhook delays | Apps appear “frozen” | Network congestion or queuing | Order lag and reporting gaps
Dashboard unreachable | Ops can’t view metrics | Control plane or DNS path | Slow response to incidents
Regional inconsistency | Works on 4G, fails on NBN | ISP peering and Anycast routes | Unpredictable customer experience

What services and regions are impacted

We’re watching how a cloudflare outage affects real customers in Australia. When the cloudflare network goes down, it messes with checkout processes, media, and dashboards. We focus on fixing issues by city, ISP, and device to tackle the most urgent problems first.

Major platforms, apps, and sites reporting issues

Retail sites struggle with carts and CDPs when scripts time out. Media pages load text but fail on videos or fonts. SaaS platforms face slow or missing dashboards, and fintech APIs see more errors where Cloudflare handles TLS.

  • Commerce: delayed checkouts, promo engines, and tag managers.
  • Media: article pages degrade as assets from Cloudflare edges fail.
  • SaaS: admin panels and analytics widgets show intermittent timeouts.
  • Finance: webhook and endpoint retries increase under load.

Cloudflare problems seem bigger because of dependency chains. Scripts, fonts, and API aggregators can block a page, even if the main site is fine.

Geographic hotspots: capital cities and regional Australia

Sydney and Melbourne see traffic first, leading to saturation. Then, Brisbane, Perth, Adelaide, Canberra, and Hobart feel the effects. Regional users face different outcomes based on their ISP’s connection to Cloudflare.

  • Sydney and Melbourne: early packet loss and higher latency near peering points.
  • Other capitals: sporadic errors as routes reconverge.
  • Regional Australia: outcomes vary by ISP backhaul and cache locality.

This is why the same cloudflare outage can look severe in one city yet mild in another within minutes.

Differences between mobile, NBN, and enterprise networks

Mobile carriers might use different Cloudflare edges, keeping sessions stable. NBN fixed-line often converges at capital exchanges, which raises sensitivity if those edges strain. Enterprise networks with dedicated transit or private peering can avoid congested paths altogether.

  • Mobile: better continuity if egress hits healthier edges.
  • NBN: higher exposure to city peering hotspots while the Cloudflare network is down.
  • Enterprise: distinct behaviour; private routes can sidestep public congestion.

Segment | Typical Symptoms | Primary Dependency | Suggested Immediate Focus
Retail & Ecommerce | Cart hangs, payment iframe timeouts | CDN assets, tag managers, TLS at edge | Protect checkout path; defer non‑critical scripts
Media & Publishing | Slow article loads, missing images/video | Static asset delivery, font hosting | Prioritise core HTML; lazy‑load heavy assets
SaaS Platforms | Dashboard latency, widget errors | API gateways, WAF rules | Cache API reads; reduce third‑party calls
Fintech & Payments | Webhook retries, TLS handshakes failing | Edge TLS termination, Anycast routing | Queue webhooks; monitor retry logic
Mobile Networks | Variable but often stable sessions | Carrier routing to alternate edges | Shift users to mobile fallback where safe
NBN Fixed‑Line | Timeouts near peak in capitals | Peering at city exchanges | Throttle non‑essential assets in affected cities
Enterprise | Mixed; sometimes unaffected | Private peering or dedicated transit | Leverage private paths; segment traffic

We measure impact by segment and city, then steer traffic where multi‑CDN or resolver options exist. This helps keep customers moving even while cloudflare problems continue to surface across the stack.

Cloudflare outage causes: what we know so far

We’re tracking technical signals to find out what’s real. When the cloudflare status changes, we look for patterns. This helps businesses plan and engineers fix networks.

Key point: a Cloudflare outage rarely has a single cause. It often starts with one trigger, then spreads across routing, DNS, and edge capacity. We wait for stabilisation before drawing conclusions.

Potential triggers: network routing, DNS, or data centre disruptions

First, routing. BGP anomalies can misdirect traffic or block it. This looks like reachability loss. If paths flap, latency and packet loss increase quickly.

Second, DNS. Disruptions at authoritative or resolver layers can cause NXDOMAIN or timeouts. Users see sites as “down” even if they’re not.

Third, data centre or PoP events. Power incidents, fibre cuts, or software rollouts can reduce proxy capacity. This is when cloudflare server issues appear as timeouts or SSL handshake failures.

Historical context: previous Cloudflare problems and patterns

Past incidents often came from configuration rollouts that touched WAF or Firewall rules. We’ve also seen BGP route leaks and DNS resolvers overload during large DDoS mitigation.

These patterns usually settle after rollback, route dampening, and traffic rebalancing. As the cloudflare status improves, error rates plateau and then decrease.

Why edge and Anycast architectures matter during incidents

Cloudflare uses Anycast to announce the same IPs from many sites. This boosts resilience and speed in normal times. During instability, path asymmetry can create pockets of failure by region.

While BGP converges, the “some users fine, others not” effect appears. We look for recovering announcements, lower latency, and fewer 5xx errors to judge when a cloudflare outage is easing.

How to check if Cloudflare is not working for you

When websites are slow or don’t load, we first check if it’s a local issue or a Cloudflare problem. We aim to find out quickly, using clear steps. We watch the cloudflare status and check our own network before taking action.


Using the Cloudflare status dashboard and third‑party monitors

We start with the official cloudflare status page and sign up for updates. This gives us detailed information on the problem. We also check Downdetector trends and our APM for real-time error spikes.

This helps us understand if the problem is local or widespread. It also helps us prepare for deeper tests.

Diagnostic steps: DNS lookups, traceroute, and HTTP checks

We use quick network tests to find where requests fail. These tests help us identify if the problem is at the edge, DNS, or origin.

  • DNS: run dig yourdomain.com +trace to see delegation, then dig @1.1.1.1 yourdomain.com to compare resolver behaviour.
  • Traceroute/MTR: trace to the Cloudflare‑proxied hostname and to the origin. Packet loss at the edge suggests Cloudflare problems; loss at the origin points to your server.
  • HTTP: use curl -I https://yourdomain.com for status codes, then curl -svo /dev/null https://yourdomain.com to inspect TLS and timing.

These tests help us find out if the problem is with routing, DNS, or the application. They also tell us if Cloudflare is the main issue.
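
For teams who want to script these first checks, here is a minimal sketch using only the Python standard library; yourdomain.com is a hypothetical placeholder, and the output is a quick signal rather than a full diagnosis:

"""Quick first-pass checks: DNS resolution and HTTP reachability for a Cloudflare-proxied host."""
import socket
import time
import urllib.error
import urllib.request

HOST = "yourdomain.com"  # hypothetical placeholder; use your own hostname

def check_dns(host):
    """Resolve the hostname and print the addresses the local resolver returns."""
    try:
        infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        addresses = sorted({info[4][0] for info in infos})
        print("DNS OK:", host, "->", ", ".join(addresses))
    except socket.gaierror as exc:
        print("DNS FAILED for", host, ":", exc)

def check_http(host, timeout=10.0):
    """Fetch the homepage and report the status code and elapsed time."""
    url = "https://" + host + "/"
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            elapsed = time.monotonic() - started
            print("HTTP", response.status, "from", url, "in", round(elapsed, 2), "seconds")
    except urllib.error.HTTPError as exc:
        # Cloudflare 502/522/525 responses arrive here as HTTP errors.
        print("HTTP error", exc.code, "from", url, "(often an edge or origin problem)")
    except (urllib.error.URLError, socket.timeout) as exc:
        print("Request to", url, "failed:", exc)

if __name__ == "__main__":
    check_dns(HOST)
    check_http(HOST)

If the DNS step fails but the HTTP step works from another network, the fault is probably in the resolver path rather than at the edge.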

Determining if it’s a local ISP issue or a global event

Next, we compare across networks. Test via a mobile hotspot and your office NBN. If mobile works and NBN doesn’t, it might be a Cloudflare PoP issue.

  • Switch resolvers: try 1.1.1.1, 8.8.8.8, and 9.9.9.9. Improved results after a swap indicate a DNS path problem.
  • Check multiple sites behind Cloudflare. If many fail, review the cloudflare status for multi‑region incidents.

Use this quick guide: failures on one ISP only imply a local issue; problems in one city or user cohort hint at a regional fault; broad 5xx errors across multiple components on the Cloudflare status page suggest a global Cloudflare problem. A scripted version of the resolver comparison is sketched below.
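
Here is a small sketch of that resolver comparison, assuming dig is installed locally; the hostname is a hypothetical placeholder:

"""Compare answers for one hostname across several public resolvers using dig."""
import subprocess

HOST = "yourdomain.com"  # hypothetical placeholder
RESOLVERS = {"Cloudflare": "1.1.1.1", "Google": "8.8.8.8", "Quad9": "9.9.9.9"}

for name, resolver in RESOLVERS.items():
    # +short prints only the answer records; a short timeout keeps a dead resolver from blocking the loop.
    result = subprocess.run(
        ["dig", "@" + resolver, HOST, "+short", "+time=3", "+tries=1"],
        capture_output=True,
        text=True,
    )
    answer = result.stdout.strip() or "no answer"
    print(name, "(" + resolver + "):", answer)

Matching answers across resolvers suggest the DNS layer is healthy and the fault sits at the edge or the origin; a resolver that fails or returns nothing points to a DNS path worth routing around.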

Common Cloudflare errors and what they mean

When services slow down, a cloudflare error often pops up. These codes help us understand what’s going wrong when Cloudflare is stressed or parts of the network are down. We break them down so teams can quickly respond without getting lost in logs.

Error 502/522/525 explained

502 Bad Gateway means Cloudflare couldn’t get a valid response from the origin. This could be due to an origin outage, unstable edge paths, or a sudden surge. The issue can vary by location.

522 Connection Timed Out indicates a problem with the TCP handshake. This might be due to network filters, busy links, or firewalls at the origin. Even good origins can time out if routes keep changing.

525 SSL Handshake Failed shows a problem with the TLS negotiation between Cloudflare and the origin. This could be due to certificates, ciphers, or an overloaded edge. It might resolve as things settle down.

Gateway, SSL, and timeout issues during network instability

In busy times, edge congestion can cause different errors across regions. A site might show a 502 in Sydney and a 522 in Perth within minutes. This is normal when routes change and queues get busy.

  • Gateway errors often rise first as proxies struggle to reach origins.
  • Timeouts follow when handshakes queue and drop under load.
  • SSL failures spike if TLS retries collide with packet loss.

We see these signals as snapshots, not final verdicts. Patterns in multiple requests tell the real story of a cloudflare error.

What users can safely try without making things worse

  • Wait 30–60 seconds, then refresh once. Rapid retries only add load when parts of Cloudflare’s network are shaky or down.
  • Swap to a mobile hotspot or another network to check if the path via your ISP is the culprit.
  • For admins, avoid knee-jerk DNS flips. Lower TTLs, review cache rules, and confirm origin capacity before bypassing the proxy.
  • Explain the code in plain English to customers and offer alternatives like phone orders or email, reducing pressure while cloudflare problems resolve.

Impact on Australian businesses, media, and government services

When Cloudflare goes down, businesses feel it right away. Online sales slow down, and bookings fail. Forms that help with leads and support stop working, causing delays.

SaaS companies risk breaking their service level agreements. This is because they use Cloudflare for key services. It leads to more questions and longer wait times.

Media groups face their own challenges. Slow page loads and missing images are common. Streaming services can freeze or drop, affecting live broadcasts.

Government sites and public info pages can be slow or fail. People trying to access important information face errors. Staff who sign in through Cloudflare Access may get stuck in authentication loops, slowing their work.

We handle these issues with calm and transparency. We create incident pages outside the affected area. We pause ads and emails to avoid wasting resources.

We only share updates when we’re sure of the facts. This way, we keep everyone informed without causing confusion.

What helps right now?

  • Set up a simple status page outside Cloudflare to keep people updated.
  • Turn off non-essential site features until Cloudflare is back up.
  • Work with ISPs and hosting providers to confirm the problem before fixing it.

For finance, retail, media, health, and public sector teams, the goal is to keep things running. We focus on keeping key services available. We document any issues for future reference and protect customer trust while we fix the problem.

What site owners and developers should do during Cloudflare server issues

When “Cloudflare not working” alerts start to spike, we act quickly and keep things simple. We monitor the Cloudflare status page, protect the origin, and keep customers updated. Our goal is to maintain steady service, even when Cloudflare server issues ripple across the network.

Incident playbook: status pages, failover, and customer comms

  • Host a lightweight status page off your primary stack. Use separate DNS and a non‑Cloudflare CDN so updates remain visible during cloudflare server issues.
  • Enable failover: if you run multi‑CDN, shift traffic; if not, expose a direct origin path for critical routes after security checks.
  • Communicate early. Acknowledge the degraded behaviour, list workarounds, and give the time of the next update. Keep language plain and timestamps local to AEST/AEDT.

Bypassing or adjusting DNS and CDN configurations temporarily

  • Reduce DNS TTLs to 60–300 seconds for agility while we watch cloudflare status and routing changes.
  • For essential hostnames, toggle to DNS‑only after confirming origin IPs are shielded by a firewall, access rules, and rate controls (a sketch of this change follows the list).
  • Relax strict WAF rules that block legitimate retries, but log events and watch for abuse or credential stuffing.
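
As one hedged illustration of the DNS‑only toggle described above, here is a minimal sketch against Cloudflare’s public v4 API; the zone ID, record ID, and API token are placeholders you must supply, and the exact request shape should be verified against current Cloudflare documentation before use:

"""Sketch: drop the TTL and switch one record to DNS-only via Cloudflare's v4 API."""
import json
import os
import urllib.request

API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]  # placeholder: a scoped API token
ZONE_ID = "your-zone-id"      # placeholder
RECORD_ID = "your-record-id"  # placeholder

url = "https://api.cloudflare.com/client/v4/zones/" + ZONE_ID + "/dns_records/" + RECORD_ID
payload = {
    "ttl": 300,        # a short TTL keeps the record agile during the incident
    "proxied": False,  # DNS-only: traffic goes straight to the origin, bypassing the Cloudflare proxy
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": "Bearer " + API_TOKEN, "Content-Type": "application/json"},
    method="PATCH",
)
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())
    print("success:", result.get("success"), "errors:", result.get("errors"))

Only run a change like this after confirming the origin is firewalled and has the capacity to take direct traffic, and keep it easy to reverse once the status page shows recovery.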

Rate limiting, cache rules, and origin scaling considerations

  • Expand cache‑everything rules on static paths once edges stabilise, to cut origin load and smooth the spikes caused by retry traffic while Cloudflare is not working.
  • Adjust rate limits to reduce false positives during retry storms; prefer sliding windows and per‑IP or per‑token logic (a minimal sketch follows this list).
  • Scale origin capacity: add instances, raise connection limits, and verify SSL ciphers and certificates to avoid 525 errors during recovery from cloudflare server issues.
  • After stabilisation, document what worked, update runbooks, and weigh multi‑resolver or multi‑CDN options to reduce single‑vendor risk while tracking cloudflare status over time.
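
To illustrate the sliding‑window, per‑IP or per‑token logic mentioned in the list above, here is a minimal in‑process sketch; real deployments would typically keep this state in shared storage such as Redis rather than a single process:

"""Minimal sliding-window rate limiter keyed by IP address or API token (in-process sketch only)."""
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key):
        """Record one request for `key` and report whether it stays within the window limit."""
        now = time.monotonic()
        window = self.hits[key]
        # Drop timestamps that have slid out of the window.
        while window and now - window[0] > self.window:
            window.popleft()
        if len(window) >= self.limit:
            return False
        window.append(now)
        return True

if __name__ == "__main__":
    # Allow 5 requests per 10 seconds per client; a retry storm from one IP gets throttled, not blocked forever.
    limiter = SlidingWindowLimiter(limit=5, window_seconds=10)
    for attempt in range(8):
        print("request", attempt + 1, ":", "allowed" if limiter.allow("203.0.113.7") else "throttled")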

How long will the Cloudflare network be down?

We focus on recovery signs, not guessing times. When Cloudflare goes down, we watch live updates and the status page. This tells us if things are getting better quickly or slowly.

Reading recovery signals from BGP, latency, and error rates

First, we check BGP routes. If they start to stabilise, we’re close to recovery. Then, we look at latency from Australia to Cloudflare edges. Lower round‑trip times mean traffic is flowing well.

Error rates also matter. We want 5xx errors to fall and TLS handshake success rates to rise. When these signals move in the right direction, recovery is under way.
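
A minimal sketch of watching two of those signals, latency and error rate, from a single probe; the endpoint is a hypothetical placeholder, and one probe is a sanity check rather than a substitute for proper telemetry:

"""Poll a Cloudflare-proxied endpoint and report latency plus failure rate as rough recovery signals."""
import time
import urllib.error
import urllib.request

URL = "https://yourdomain.com/"  # hypothetical placeholder endpoint
SAMPLES = 10

def probe(url, timeout=10.0):
    """Return (status_code, elapsed_seconds); a status of 0 means the request never completed."""
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status, time.monotonic() - started
    except urllib.error.HTTPError as exc:
        return exc.code, time.monotonic() - started
    except Exception:
        return 0, time.monotonic() - started

if __name__ == "__main__":
    results = []
    for _ in range(SAMPLES):
        results.append(probe(URL))
        time.sleep(5)  # space the probes out so we do not add to the retry storm
    failures = sum(1 for status, _ in results if status == 0 or status >= 500)
    healthy = [elapsed for status, elapsed in results if 0 < status < 500]
    average = sum(healthy) / len(healthy) if healthy else 0.0
    print("failed or 5xx:", failures, "of", SAMPLES, "| average healthy latency:", round(average, 2), "seconds")

Falling failure counts and shrinking latency across repeated runs are the trend we are looking for before declaring recovery.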

Typical restoration phases for large‑scale outages

  • Containment: Quick fixes and reroutes limit damage while the team finds the problem.
  • Stabilisation: Errors decrease, but some areas might be slow for a bit.
  • Normalisation: Caches get back to normal, and origin loads balance out, with fewer errors.
  • Post‑mortem: Cloudflare shares what went wrong and how to avoid it, showing things are fixed.

When to escalate and when to wait

Do we wait or act? If Cloudflare is actively working on the problem, most users can still reach your sites, and the trend is improving, waiting a little longer is wise.

But, if your area is not improving, even when others are, it’s time to escalate. Share traceroutes, request IDs, and times to help support find the issue.

If downtime is close to your recovery time or affects critical services, act fast. This ensures your services keep running while Cloudflare fixes the problem.

Conclusion

A big Cloudflare Down event has hit users and businesses in Australia. It affected DNS, edge routing, and proxy performance. The impact varied by ISP, city, and network type.

Many saw timeouts, SSL errors, or slow paths. Others experienced regional failovers and inconsistent reachability. These issues are real but can be managed with the right steps.

What should we do now? We keep everyone updated with a clear status page and regular updates. We run detailed diagnostics to find the root cause.

We apply fixes that can be easily undone. This includes lowering TTLs and scaling origin capacity. Each change is simple to reverse when things get back to normal.

What comes next? We expect a gradual recovery as things get back to normal. Cloudflare will share a review of what happened. We’ll use this to improve our services for Australian teams.

We’ll keep an eye on things and help you stay resilient. We’ll guide you on how to handle future issues. Our goal is to support you through this and make your digital operations stronger.

FAQ

What’s happening with Cloudflare right now in Australia?

Cloudflare is experiencing a widespread outage. This is causing slow loads and gateway errors for sites using its CDN, DNS, WAF, and security stack. Many services are intermittently unreachable, leading to issues like checkout failures and login problems.

How do I check the current Cloudflare status?

Visit the official Cloudflare status page for live updates. Look for markers like “Identified,” “Monitoring,” or “Resolved.” Also, check third-party monitors like Downdetector and your own tools.

Are these Cloudflare problems affecting all websites?

No, the impact varies. It depends on the region, ISP peering, and Cloudflare components used. Many Australian users report errors, but some retries succeed.

Which Australian cities and networks are hit the hardest?

Sydney and Melbourne are early hotspots. Ripple effects are seen in Brisbane, Perth, Adelaide, Canberra, and Hobart. Mobile carriers, NBN fixed-line, and enterprise networks are affected differently.

What are the immediate business impacts of the Cloudflare outage?

Expect failed payments and abandoned carts. Support tickets and staff access issues are also common. Media sites and government portals may time out.

What do common Cloudflare errors 502, 522, and 525 mean?

502 means Cloudflare received an invalid response from the origin. 522 means the TCP connection to the origin timed out. 525 means the TLS handshake with the origin failed. These errors can occur even with healthy origins due to edge congestion.

How can we tell if it’s Cloudflare down or a local ISP fault?

Compare results across networks. Test via a mobile hotspot and your office NBN. Try different DNS resolvers. If symptoms change, it points to peering or DNS path issues.

What should site owners do during Cloudflare server issues?

Publish a status page off Cloudflare. Reduce DNS TTLs for agility. Consider toggling critical hostnames to DNS-only after validating origin security and capacity.

Is it safe to bypass Cloudflare temporarily?

Only if you’ve prepared. Confirm origin IPs are protected, scaled, and compliant. Lower TTLs first, then selectively switch essential hostnames. Monitor SSL compatibility to avoid 525 errors.

How long do Cloudflare outages usually last?

Large incidents go through phases: containment, stabilisation, normalisation, and post-mortem. We watch BGP stability, latency, and falling 5xx rates. Recovery can be intermittent by region.

What are Cloudflare’s likely root causes during events like this?

Common triggers include BGP routing anomalies, DNS disruptions, software rollouts affecting WAF or proxy layers, and isolated PoP issues. Cloudflare typically mitigates by rolling back configs, rebalancing traffic, and dampening unstable routes.

How does Anycast impact Cloudflare network down events?

Anycast improves resilience by advertising the same IPs from many locations. During an incident, path asymmetry can create regional pockets of failure. That’s why some users report normal performance while others see errors until BGP paths reconverge.

What can users try without making things worse?

Refresh after 30–60 seconds. Test on a different network, such as mobile. If payments fail, try an alternative channel offered by the merchant, like phone orders or delayed payment links. Avoid repeated checkout attempts that may duplicate orders.

Which services are most affected when Cloudflare is not working?

E-commerce checkouts, SaaS dashboards, media assets, fintech APIs, and any site relying on Cloudflare’s TLS termination and caching are most affected. Third-party scripts, tag managers, and fonts hosted via Cloudflare can also stall page rendering even if your origin remains healthy.

How should we communicate with customers during a Cloudflare outage?

Be direct and plain-spoken. Acknowledge the Cloudflare outage, explain known errors, offer alternatives for urgent tasks, and give the time of the next update. Avoid speculative ETAs. Update when Cloudflare status or your telemetry confirms improvement.

When should we escalate and to whom?

If critical journeys remain impaired after global recovery signals improve, escalate to Cloudflare support with traceroutes, request IDs, and timestamps. Internally, activate failover if the outage exceeds your RTO or impacts regulated workflows such as payments or healthcare.

What signals show the Cloudflare outage is easing?

Reduced error rates, faster TLS handshakes, stable BGP routes, and improved RTTs to Cloudflare edges from Australian probes. You’ll also see fewer 502/522/525 errors and fewer timeouts on retries.

How can we reduce risk from future Cloudflare server issues?

Document today’s lessons, refine your runbooks, and consider multi-CDN or multi-resolver strategies. Keep a lightweight status site off Cloudflare, validate origin scale, and set sensible cache rules. These steps help contain impact when the Cloudflare network is down.

Where can we see official Cloudflare outage communications?

Check the Cloudflare status portal and @Cloudflare on X for engineering and PR updates. Expect brief notes during live incidents and a concise root-cause summary after stabilisation.

Are Cloudflare Workers, Pages, or Zero Trust affected?

They can be. Workers and KV may see latency spikes, and Zero Trust or Access can disrupt staff authentication. Watch the status components and correlate with your monitoring to confirm which services drive user-visible failures.

Does switching DNS resolvers help during a Cloudflare error?

Sometimes. Moving to 1.1.1.1, 8.8.8.8, or 9.9.9.9 can bypass a resolver path issue. If errors persist across resolvers, the problem is likely at the Cloudflare edge or in the route to your origin.
