I was staring at an SSLHandshakeException at 1:45 AM on a Thursday.
The Java application ran perfectly on my M3 Mac. It connected to MongoDB 7.0.5. It processed the data. But the second I pushed it into a local Docker container, it choked. Just a massive wall of stack traces complaining about PKIX path building and unrecognized certificates.
You probably know this pain.
Local works. Container fails.
My first instinct was to jump into the container and poke around the network layer to see what was happening.
docker exec -it my-java-app /bin/sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
Right. Distroless images. Great for security, absolute garbage for late-night troubleshooting. There is no shell. There is no curl. There is no keytool. Just my compiled bytecode and a stripped-down JVM.
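For context, the build in question looks roughly like this — a sketch, with a placeholder jar name and a generic Maven build stage rather than my actual pipeline:

```dockerfile
# Build stage: a full JDK image with a shell and build tools
FROM eclipse-temurin:21 AS build
WORKDIR /src
COPY . .
RUN ./mvnw -q package -DskipTests

# Runtime stage: distroless, so no shell, no package manager, no debugging tools
FROM gcr.io/distroless/java21-debian12
COPY --from=build /src/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

Everything after that second FROM is just the JVM and the jar, which is the whole point — and the whole problem at 1:45 AM.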
In the past, I would have temporarily swapped the base image from gcr.io/distroless/java21-debian12 to a fat Ubuntu image, added an apt-get install -y curl dnsutils, and rebuilt the whole thing. That takes about four minutes on our current build setup. Four minutes of context switching where I inevitably open another tab, read something unrelated, and forget what I was actually trying to fix. I did that twice before remembering we have better tools now.
But if you are running a recent version of Docker Desktop (docker debug shipped as a beta in 4.27 and went GA in 4.33), you don’t need to rebuild your images to debug them. The docker debug command is right there. It drops you into a shell with a full toolkit attached to your running container’s namespaces, without requiring those tools to actually exist in your image.
docker debug my-java-app
# You are now in a diagnostic shell attached to the container
Instantly, I had a shell. I didn’t have to restart the failing JVM process or modify my Dockerfile.
I checked the network layer first to make sure the container could actually see the database.
nc -zv mongodb-primary 27017
Connection to mongodb-primary 27017 port [tcp/*] succeeded!
Connection succeeded. So Docker’s internal DNS was fine. The bridge network was routing correctly.
Next up, the TLS certificates. This is where things get messy with Java. The JVM maintains its own cacerts keystore, entirely separate from the OS-level certificates. I pulled the certificate directly from the MongoDB container using openssl from my debug shell.
openssl s_client -showcerts -connect mongodb-primary:27017 < /dev/null
The issuer was our internal staging CA. I then dumped the contents of the JVM truststore inside the distroless image to see what it actually knew about.
/usr/bin/keytool -list -keystore /etc/ssl/certs/java/cacerts -storepass changeit | grep "staging"
Nothing.
The internal CA certificate wasn’t in the Debian 12 distroless truststore. My local Mac had it installed in the system keychain, which my local JDK was picking up automatically. That meant everything worked flawlessly right up until the moment I tried to run it in an isolated environment. The container was flying blind.
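The mismatch is easy to confirm from inside the JVM itself. Here’s a small diagnostic class — a sketch I keep around for exactly this, not code from the app in question — that loads whatever truststore the runtime actually resolved and filters the trusted issuers for a keyword:

```java
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;

public class TrustStoreDump {

    // Returns the subject DN of every CA the default TrustManager accepts.
    // Initializing with a null KeyStore makes the JVM resolve its default
    // truststore: $JAVA_HOME/lib/security/cacerts, unless
    // -Djavax.net.ssl.trustStore overrides it.
    static List<String> trustedIssuers() throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        List<String> names = new ArrayList<>();
        for (var tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager x509) {
                for (X509Certificate cert : x509.getAcceptedIssuers()) {
                    names.add(cert.getSubjectX500Principal().getName());
                }
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        List<String> issuers = trustedIssuers();
        System.out.println("Trusted CAs: " + issuers.size());
        issuers.stream()
                .filter(dn -> dn.toLowerCase().contains("staging"))
                .forEach(dn -> System.out.println("  " + dn));
    }
}
```

Run it with the same JAVA_TOOL_OPTIONS as the container and the staging filter comes back empty — the same answer keytool gave, but from the code path the application actually uses.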
The fix was stupidly simple, with one catch: the JVM can’t read a raw PEM .crt as a truststore, so I first baked the CA cert into a PKCS12 keystore with keytool, then mounted that file and pointed the JVM at it in my compose file.
keytool -importcert -noprompt -alias staging-ca -file ./certs/staging-ca.crt -keystore ./certs/staging-truststore.p12 -storetype PKCS12 -storepass changeit
services:
  my-java-app:
    image: my-app:latest
    volumes:
      - ./certs/staging-truststore.p12:/app/certs/staging-truststore.p12:ro
    environment:
      JAVA_TOOL_OPTIONS: >-
        -Djavax.net.ssl.trustStore=/app/certs/staging-truststore.p12
        -Djavax.net.ssl.trustStoreType=PKCS12
        -Djavax.net.ssl.trustStorePassword=changeit
One caveat: setting javax.net.ssl.trustStore replaces the default cacerts entirely, so if the app also talks to endpoints signed by public CAs, import those into the same file too.
Stop installing curl in your production Dockerfiles. Learn the debug CLI. And stop treating your containers like burner phones, rebuilding a fat image just to throw it away after one look inside.
Docker’s official documentation on the docker debug command explains how it allows you to “attach a diagnostic shell to a running container” without modifying the container image.