So there I was, for the second time, staring at a screen full of AdGuard Home logs complaining that it couldn’t reach its upstream DNS servers. The first time this happened, I spent a day battling the router and my ISP until I thought I had fixed it. The ISP’s DNS “protections” were gone, and I was happy! But NOO, it had to come back!
So back in I dove. And I had a CLEVER idea. A truly galaxy-brain moment. “Aha!” I thought. “The default DNS servers are probably just blocked regardless. What if I throw a whole pile of alternatives in there and keep whatever works?” So I did. Anything that errored out got removed, and in the end I was left with just the IPv6 DNS servers. I watched in triumph as all the red error messages vanished. I was soooooo proud. I leaned back in my chair, basking in the glow of my own genius.
That feeling lasted about five minutes, until a tiny, nagging thought crept in: “…for my DNS resolver to use an IPv6 address… doesn’t the resolver itself need an IPv6 address?”
Yes. Yes, it does.
So began the simple, two-minute fix that would consume my life. All I had to do was add ip6=auto to my container’s config and make sure the host was ready. I fired up the shell, tapped in the changes, and Bob’s your uncle, I was done.
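For the curious, that “change” really is a one-liner on the Proxmox host. A sketch, where the container ID 101 and the bridge vmbr0 are stand-ins for whatever yours are:

pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,ip6=auto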
What’s the worst that could happen?
It All Goes Wrong
The first sign of trouble? ERR_CONNECTION_REFUSED. Not just on my AdGuard container, but on every single guest on the host. My “simple fix” hadn’t just failed; it had taken my entire network hostage.
And so began the descent into madness.
Suspect #1: The Host’s Config. Maybe the host was the problem. The Internet suggested a weird kernel setting: accept_ra=2. Something about making the host accept IPv6 router advertisements even when it’s forwarding traffic. Fine. Let’s do it.
sysctl -w net.ipv6.conf.vmbr0.accept_ra=2
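And to see whether it took, a plain iproute2 peek at the bridge (vmbr0 being the default Proxmox bridge name):

# An inet6 line with "scope global" means the host picked up an address
ip -6 addr show dev vmbr0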
And it worked! The host got an IPv6 address! But were the containers fixed? Nope. Still connection refused. FFS.
The Real Villain Reveals Itself (Or So I Thought)
At this point, things were dire. Nothing made sense. I went through every possibility.
An IP Conflict? I checked the MAC addresses and ARP tables. Nope. All correct.
A firewall inside the container? I checked for ufw and nftables. BINGO! A failed nftables service inside AdGuard! This had to be it! I disabled it, rebooted the container, and… nothing. Still connection refused.
This was me at my most paranoid: the firewall hadn’t blocked anything before IPv6 entered the picture, so why would it now? Didn’t care, firewall off!
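For reference, the checks looked roughly like this, assuming a Debian-flavoured container (your service names may vary):

# Any duplicate or stale neighbours on the LAN?
ip neigh show

# Is anything actually filtering traffic inside the container?
systemctl status nftables
nft list ruleset

# Kill the failed firewall unit with fire
systemctl disable --now nftables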
Nope. This defied all logic. The service was running. The firewall was off. There was no IP conflict. What was left? I was losing my mind. I threw in the towel: remove all the changes, go back to IPv4 only.
Everything came out: the lines in the config, the interface options, all of it. Everything went back to exactly how it was. EVERYTHING!
Defeated, I made a vow.
“Never will I ever install IPv6 ever again 😀 Should I ever tackle the 6 again, everyone shall call me craven!”
But… HOW?!
Rebooted the hypervisor host. Fresh system. I waited. The host came back online. I held my breath and tried to access the AdGuard UI. It loaded. IT LOADED! I was back!
Then, out of sheer curiosity, I SSH’d into the AdGuard container and typed ip a. And I saw it.
inet6 2a00:23c8:10f:901:be24:11ff:fe58:5add/64 scope global
It had an IPv6 address.
HOOOOOOOOOOOOOOOW?!?! We just REMOVED all the IPv6 settings! The .conf file was clean! My brain melted. I thought it was a ghost, a remnant of an old DHCP lease that would vanish and plunge me back into darkness.
And then, after about two hours of IPv6 DHCP research, it was all explained: the final, mind-blowing truth.
It wasn’t a ghost. It was SLAAC. IPv6 is designed to be automatic. My router had been offering it all along, and the container’s OS had been listening by default. The only reason it never worked before was that the Proxmox host bridge was in a “DENIED” state, silently swallowing the router advertisements instead of passing them on. Remember that command?
sysctl -w net.ipv6.conf.vmbr0.accept_ra=2
That one, single command. That wasn’t just a fix for the host. That was the master switch. It “woke” the bridge and allowed the router’s IPv6 announcements to finally flow through to ALL the containers. I checked my other containers. They all had IPv6 addresses.
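If you want to see this mechanism for yourself: the kernel exposes accept_ra per interface, and tcpdump can show the advertisements landing (the ip6[40] filter is the usual ICMPv6-type trick and assumes no extension headers):

# accept_ra: 0 = never accept RAs, 1 = accept only when NOT forwarding,
# 2 = accept even while forwarding (what a router/bridge host needs)
sysctl net.ipv6.conf.vmbr0.accept_ra

# Watch router advertisements (ICMPv6 type 134) arriving on the bridge
tcpdump -i vmbr0 'icmp6 and ip6[40] == 134'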
O……M……G……….
Back to making this permanent!!!
echo "net.ipv6.conf.vmbr0.accept_ra = 2" > /etc/sysctl.d/99-proxmox-ipv6.conf
sysctl -p /etc/sysctl.d/99-proxmox-ipv6.conf
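Files in /etc/sysctl.d/ are re-applied on every boot, so this survives reboots. A quick sanity check, which should print 2:

cat /proc/sys/net/ipv6/conf/vmbr0/accept_ra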
So, in the end, my quest to enable IPv6 broke everything, forced me to fix a deep, hidden problem I never knew I had, and resulted in… a fully working IPv6 deployment.
I guess you can all call me Craven now.