A few days after setting up my simple firewall with ferm on Debian 12, everything seemed perfect. The host was quiet, the rules were clean, and I finally had a minimal setup that did exactly what I wanted: blocking everything inbound, keeping outbound open, and allowing SSH only from the local LAN.

It worked flawlessly, until one morning, Cloudflare Tunnel started to crawl.

It wasn’t a full failure. The tunnel connected, but everything behind it became painfully slow. Grafana dashboards that used to appear instantly now took half a minute to load. At first, I thought Cloudflare was having a bad day. But soon I noticed that even local services connected via 127.0.0.1 were lagging. Something deeper was wrong.

I started retracing my steps, beginning with the firewall configuration that I’d written so confidently.
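That configuration looked roughly like this (a reconstruction; the 192.168.1.0/24 LAN subnet is a placeholder for my actual network):

```ferm
# /etc/ferm/ferm.conf -- one rule set applied to BOTH address families
domain (ip ip6) {
    table filter {
        chain INPUT {
            policy DROP;                                   # block everything inbound
            mod state state INVALID DROP;
            mod state state (ESTABLISHED RELATED) ACCEPT;
            interface lo ACCEPT;                           # loopback
            # SSH from the LAN only; @ipfilter keeps the IPv4
            # subnet out of the generated ip6tables rules
            saddr @ipfilter((192.168.1.0/24)) proto tcp dport ssh ACCEPT;
        }
        chain OUTPUT {
            policy ACCEPT;                                 # outbound stays open
        }
        chain FORWARD {
            policy DROP;
        }
    }
}
```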

That one line was the silent culprit. By using both ip and ip6, ferm applied the same set of rules to IPv4 and IPv6. It seemed elegant, but IPv6 behaves differently.

It absolutely depends on ICMPv6 for neighbor discovery, router solicitation, and path MTU discovery. If those packets are blocked, IPv6 doesn’t fail loudly; it just hangs, waiting, before reluctantly falling back to IPv4. That’s what caused the slowness.

Modern services like cloudflared, the daemon behind Cloudflare Tunnel, prefer IPv6 whenever possible. When ferm silently dropped ICMPv6, the tunnel kept trying IPv6, timing out before retrying over IPv4. The connection worked, but only after several seconds of wasted time.

The fix was almost laughably simple.
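All it took was allowing ICMPv6 in an ip6-only domain block, something along these lines (a sketch of the idea, not my literal diff):

```ferm
# IPv6 cannot function without ICMPv6: neighbor discovery,
# router advertisements, and path MTU discovery all ride on it.
domain ip6 {
    table filter chain INPUT {
        proto ipv6-icmp ACCEPT;
    }
}
```

ferm merges this with the shared `domain (ip ip6)` block, so the rule lands only in the ip6tables INPUT chain. You could narrow it to specific ICMPv6 types (the neighbor-discovery types 133–136 plus packet-too-big), but on an end host, accepting the protocol wholesale is a common and reasonable default.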

After reloading ferm, the difference was instant. Cloudflare Tunnel connected smoothly again, no lag, no waiting.

It wasn’t Cloudflare’s fault; it was my firewall quietly blocking a protocol it didn’t understand.

A gentle reminder that IPv6 isn’t optional anymore.

Docker’s Silent Struggle

Just as I was celebrating that fix, another issue surfaced. Docker containers couldn’t reach the internet. Uptime Kuma failed to ping its targets, and PMM exporters couldn’t send their metrics out. Disabling ferm made everything normal again, clear evidence that something in the firewall was cutting Docker off.

Docker’s networking model is built around its own bridge interfaces and internal NAT. It injects iptables rules dynamically and expects to manage the FORWARD chain. By setting a global policy DROP, I had unintentionally overridden those assumptions.

I had written a rule that looked right but wasn’t.
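Sketched from memory, the FORWARD chain looked something like this:

```ferm
chain FORWARD {
    policy DROP;
    mod state state (ESTABLISHED RELATED) ACCEPT;
    # Looks right: let Docker's bridge forward traffic...
    interface docker0 ACCEPT;
    outerface docker0 ACCEPT;
}
```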

The catch: Docker Compose (and most modern setups) no longer uses docker0 exclusively. Each stack creates its own network bridge, with names like br-3f1b8e.... Those weren’t covered by my rule, so containers on those bridges were alive but trapped, unable to reach out.

The fix was once again surprisingly small.
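Adding an interface wildcard alongside docker0 covered every bridge Docker creates (again a sketch, not my exact file):

```ferm
chain FORWARD {
    policy DROP;
    mod state state (ESTABLISHED RELATED) ACCEPT;
    # docker0 is the default bridge; br-+ wildcard-matches every
    # user-defined bridge that Docker Compose creates
    interface (docker0 br-+) ACCEPT;
    outerface (docker0 br-+) ACCEPT;
}
```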

The br-+ wildcard matches all user-defined Docker bridges, letting containers communicate freely while keeping the firewall strict. After adding it, Uptime Kuma began pinging again, PMM exporters connected, and everything returned to normal.

The Final Working Configuration

Here’s the final, balanced ferm configuration that works perfectly with Cloudflare Tunnel and Docker, without exposing unnecessary ports:
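A reconstruction of that configuration (the LAN subnet and SSH port are placeholders; adapt them to your network):

```ferm
# /etc/ferm/ferm.conf
domain (ip ip6) {
    table filter {
        chain INPUT {
            policy DROP;
            mod state state INVALID DROP;
            mod state state (ESTABLISHED RELATED) ACCEPT;
            interface lo ACCEPT;
            # SSH from the LAN only; @ipfilter keeps the IPv4
            # subnet out of the generated ip6tables rules
            saddr @ipfilter((192.168.1.0/24)) proto tcp dport ssh ACCEPT;
        }
        chain OUTPUT {
            # Cloudflare Tunnel is outbound-only, so an open
            # OUTPUT policy is all it needs -- no inbound ports.
            policy ACCEPT;
        }
        chain FORWARD {
            policy DROP;
            mod state state (ESTABLISHED RELATED) ACCEPT;
            # Docker's default bridge plus every Compose-created one
            interface (docker0 br-+) ACCEPT;
            outerface (docker0 br-+) ACCEPT;
        }
    }
}

# IPv6-only: ICMPv6 is mandatory for neighbor discovery and PMTUD
domain ip6 {
    table filter chain INPUT {
        proto ipv6-icmp ACCEPT;
    }
}

# IPv4-only: allow plain ping
domain ip {
    table filter chain INPUT {
        proto icmp icmp-type echo-request ACCEPT;
    }
}
```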

This setup keeps inbound traffic locked down, allows Cloudflare Tunnel and Docker to operate normally, and still enforces strict control on the host.

It’s a small refinement, but one that makes the difference between a server that “mostly works” and one that works perfectly.

Sometimes, what slows a system isn’t a missing rule, but a missing understanding. IPv6 quietly expects its own rules, and Docker networks are more dynamic than they appear.

Once you account for both, ferm becomes what it’s meant to be: clean, predictable, and fast.

Lesson learned: test both IPv4 and IPv6, and never assume Docker’s world ends at docker0.

Security is about precision, not paranoia.
