TL;DR: I forgot to add “ssl” to the listen [::]:443 statement in nginx; I had only added it to listen 443 ssl default_server. As a result, all IPv6 connections attempting HTTPS were hitting what nginx treated as a plain HTTP port, and so they all broke.
Symptoms
Yesterday, while doing a bit of work on the IAAN server, I suddenly got browser warnings that it could not connect to my website. I immediately started investigating, changing things here and there, and eventually got it working again. What I didn’t know was what had actually happened; I had just restarted nginx, Nomad and some pods until it worked. Today I went to the office and noticed that my phone also showed errors that it could not connect to my website, as in “refused to connect”. But when I checked on my work laptop, everything was fine?
It only started clicking in my head when I was back home on my home WiFi network and the website worked again on my phone. I switched back to 5G: it stopped working. On my home desktop, I switched to the WiFi network of my ISP’s CPE: it stopped working. I checked what external IP I was showing up as when it didn’t work: IPv6. I checked what external IP I showed up as when it did work: IPv4.
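(If you want to check the same thing from a terminal, you can force curl to use one address family at a time; icanhazip.com is just an example what-is-my-IP service here, and example.org stands in for my actual domain.)
$ curl -4 https://icanhazip.com   # the public IPv4 address you appear as
$ curl -6 https://icanhazip.com   # the public IPv6 address you appear as
$ curl -4 -I https://example.org  # fetch the site over IPv4 only
$ curl -6 -I https://example.org  # fetch the site over IPv6 only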
Ok, but what’s wrong on my server that IPv6 clients cannot connect?
The website not working on my phone.
Verifying my nginx config
… did not notice anything wrong. I suspected that the upstream block had something to do with it:
upstream website {
    server 127.0.0.1:8000;
}
So I tried using localhost instead, so that if an AAAA record lookup was done, it would return the IPv6 localhost address:
upstream website {
    server localhost:8000;
}
I also tried making a specific IPv6 upstream:
upstream website_ipv6 {
    server [::1]:8000;
}
Note that all of this would have worked, if only I had added ssl to the listen statement.
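For reference, the problem boiled down to something like the following server block (certificate paths and server name here are placeholders, not my actual config):
server {
    # What I had: the IPv6 listener was missing "ssl",
    # so nginx spoke plain HTTP on port 443 for IPv6 clients.
    # listen 443 ssl default_server;
    # listen [::]:443;

    # What it should be: both listeners terminate TLS.
    listen 443 ssl default_server;
    listen [::]:443 ssl;

    server_name example.org;                           # placeholder
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;    # placeholder path

    location / {
        proxy_pass http://website;  # the upstream block from above
    }
}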
Changing my HashiCorp Nomad config
I added a host network for IPv6:
host_network "lokaal_ipv6" {
  cidr = "::1/128"
}
where the CIDR is matched against the IP addresses assigned to your network interfaces to decide which address the host network maps to.
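In this case ::1/128 simply matches the loopback address; you can check which interface carries an address in a given CIDR with something like the following (output will vary slightly per system):
$ ip -6 addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever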
And then in my Job HCL I added:
job "website" {
datacenters = ["dc1"]
group "website" {
network {
mode = "host"
# snip
port "website_port_ipv6" {
static = 8000
to = 80
host_network = "lokaal_ipv6"
}
}
task "deploy_website" {
driver = "podman"
config {
ports = ["website_port", "website_port_ipv6"]
# snip
}
}
}
}
This also worked :)
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0f559108bfb docker.io/library/nginx:latest nginx -g daemon o... 18 seconds ago Up 18 seconds ago 127.0.0.1:8000->80/tcp, 127.0.0.1:8000->80/udp, ::1:8000->80/tcp, ::1:8000->80/udp deploy_website-5cfd1759-5177-868b-9d80-fd0aa7c4bf98
I was also able to successfully curl the website locally with curl http://[::1]:8000/.
Changes to podman
I read a Red Hat article on Podman’s IPv6 networking support. Although it clearly mentioned that containers normally do not get an IP of their own and you access them via port forwarding, I did learn a few things. For example, that in Podman v4 netavark is the default network backend, which I confirmed:
$ podman info | grep network
networkBackend: netavark
I also checked the network settings of a running container:
$ podman container inspect 42600fbcd6e5 | grep -i network
"NetworkSettings": {
"NetworkMode": "slirp4netns",
I checked the podman networks that were configured:
$ podman network inspect podman
[
    {
        "name": "podman",
        "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
        "driver": "bridge",
        "network_interface": "podman0",
        "created": "2024-10-22T18:40:17.442769338Z",
        "subnets": [
            {
                "subnet": "10.88.0.0/16",
                "gateway": "10.88.0.1"
            }
        ],
        "ipv6_enabled": false,
        "internal": false,
        "dns_enabled": false,
        "ipam_options": {
            "driver": "host-local"
        }
    }
]
And I decided to create an IPv6-enabled one:
$ podman network create --ipv6 podman_ipv6
podman_ipv6
$ podman network inspect podman_ipv6
[
    {
        "name": "podman_ipv6",
        "id": "d6abe0a882fe62c224f825f8e6371ad9218cf75595c49cf08f52874a1ca9111e",
        "driver": "bridge",
        "network_interface": "podman1",
        "created": "2024-10-22T18:40:56.723305273Z",
        "subnets": [
            {
                "subnet": "10.89.0.0/24",
                "gateway": "10.89.0.1"
            },
            {
                "subnet": "fd49:8cad:1ae0:49d9::/64",
                "gateway": "fd49:8cad:1ae0:49d9::1"
            }
        ],
        "ipv6_enabled": true,
        "internal": false,
        "dns_enabled": true,
        "ipam_options": {
            "driver": "host-local"
        }
    }
]
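For completeness, attaching a container to that network would have looked something like this (a hypothetical sketch with a made-up container name; as explained below, I ended up not needing it):
$ podman run -d --name website_ipv6_test --network podman_ipv6 docker.io/library/nginx:latest
$ podman container inspect website_ipv6_test | grep -i ipv6   # look for the address assigned from the fd49:... subnet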
But anyway, none of that mattered, because the real problem was simply the forgotten “ssl” in nginx. Since nginx terminates the TLS connection, it does not really matter whether the upstream proxy host is reached over IPv4 or IPv6; adding IPv6 there would have been more of a gimmick than anything useful, so I reverted all the IPv6 config in Nomad and Podman once I had figured everything out.