With all the nice cloud systems out there, it's tempting to just set up an account with Quay or Docker Hub and push/pull everything from there. They're great services and incredibly convenient, it's true, but running your own secure private registry has benefits.
What's so good about a secure private registry?
- Prevents (one avenue for) proprietary server code leaking.
- Improves startup time for workloads because pulling images is a local operation.
- Reduces costs by cutting down on inbound internet bandwidth, since locally hosted images never have to cross your internet link.
- Smaller attack surface by using only approved images from your registry.
What approaches are we NOT tackling?
- Disabling the public registry
Kubernetes does not pull images directly; rather, it calls a system tool (the container runtime) to do the pulling. That means changing the default registry is a per-node configuration change. That's fairly invasive and requires care when spinning up new nodes, so I don't want to go that route. Plus, that change lives in a different place for every install flavor out there. If you do decide to lock down public image access, be very aware that running your sole image registry inside Kubernetes can lead to a snake-eating-its-tail problem: if every node doesn't already have a valid image to launch the registry with, the registry can't come up to serve any other images. If you want to completely lock things down, my suggestion is to run a separate tiny Kubernetes cluster with public image access and only a registry in it. That way there's no single point of failure.
- Pull-through cache
The standard registry does not allow a single running instance to be both a private registry you can push to and a pull-through cache; you can do one or the other, but not both at once. A pull-through cache is also only useful if it intercepts all image requests, which again requires modifying every node's configuration, as above, so I'm not going to bother.
- Making our private registry the default
I wanted all my images to pull through my private registry, for caching purposes. I also wanted to be able to refer to images that I produce without specifying my registry server in every pod declaration. However, like the other features above, this comes back to modifying the node configuration, which I want to avoid. The other downside is that manifests become less portable if you don't use explicit server prefixes. Maybe that's not a strong reason, but I'm going to accept it and back away slowly.
Configuring a Secure Private Registry
With all the things we're not going to do out of the way, let's get on with it. Each step takes a few commands, and it's not a very complicated process, but a couple of preconditions make it go quickly. I'm running haproxy-ingress and cert-manager, so incoming connections use HTTPS (which is why it's secure) while TLS is terminated at the ingress, before traffic reaches the registry pod. You can let the registry do all the certificate handling itself, but if you have an ingress, it's cleaner to do it this way. If you use a different ingress or want to provide a Certificate manually, just adjust the manifest slightly.
1. Start Running a Registry
a) You need a subdomain to get an HTTPS certificate, so start by creating a DNS record for your new registry (I'll use registry.example.com throughout).
b) Generate a username/password pair for the registry (preferably something less obvious than testuser:testpassword).
~$ docker run --entrypoint htpasswd httpd:2 -Bbn testuser testpassword
testuser:$2y$05$TgD7YaW1Ld4EtOy2sJ0PNufxrP48u0NbxMQjAgDWxL7yQ4SwIyBBC
c) Save the following manifest to privateregistry.yaml. Update the htpasswd at the bottom with the output from above, fix the Certificate's domain name to match yours, and change the Ingress's class to match yours.
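Here's a minimal sketch of such a manifest. It assumes a cert-manager ClusterIssuer named letsencrypt-prod, the haproxy ingress class, and a hostPath volume for image storage; those names, and resource names like registry and registry-auth, are placeholders to adjust for your environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
        env:
        - name: REGISTRY_AUTH
          value: htpasswd
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: Registry Realm
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: /auth/htpasswd
        volumeMounts:
        - name: auth
          mountPath: /auth
        - name: storage
          mountPath: /var/lib/registry
      volumes:
      - name: auth
        secret:
          secretName: registry-auth
      - name: storage
        hostPath:                    # a PersistentVolumeClaim is a better choice for real use
          path: /var/lib/registry
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
  - port: 5000
    targetPort: 5000
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: registry-cert
spec:
  secretName: registry-tls
  dnsNames:
  - registry.example.com            # fix this to match your subdomain
  issuerRef:
    name: letsencrypt-prod          # assumes a ClusterIssuer with this name
    kind: ClusterIssuer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry
spec:
  ingressClassName: haproxy         # change to match your ingress class
  tls:
  - hosts:
    - registry.example.com
    secretName: registry-tls
  rules:
  - host: registry.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: registry
            port:
              number: 5000
---
apiVersion: v1
kind: Secret
metadata:
  name: registry-auth
stringData:
  htpasswd: "testuser:$2y$05$TgD7YaW1Ld4EtOy2sJ0PNufxrP48u0NbxMQjAgDWxL7yQ4SwIyBBC"  # paste your own htpasswd output here

Depending on your ingress controller, you may also need to raise the maximum allowed request body size so that large image layers can be pushed through it.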
d) Apply this manifest to your cluster:
~$ kubectl apply -f privateregistry.yaml
After a few seconds, you should be able to hit your new endpoint like this, and get the curly braces back, indicating the registry API is awake:
~$ curl --user testuser:testpassword https://registry.example.com/v2/
{}
2. Push Content to the Registry
Great! Now, let's try something simple to test it out. Here are the commands to pull down the classic hello-world image, retag it, and push it to our new registry. I would recommend tailing the logs of the registry container so you can see what's happening over there during these steps.
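The exact output will vary, but the sequence looks roughly like this (log in first so the push is authenticated; my-hello-world is just the name we're giving the retagged image):

~$ docker pull hello-world
~$ docker tag hello-world registry.example.com/my-hello-world
~$ docker login registry.example.com --username testuser --password testpassword
~$ docker push registry.example.com/my-hello-world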
Let's be extra-thorough and remove the local, cached versions of the image. This only affects the local docker image cache, not the registry server.
~$ docker rmi hello-world registry.example.com/my-hello-world
3. Start Running Workloads that Pull From the Registry
To recap, we have pushed some kind of image to our registry service. It's secured by a username and password, and the traffic is encrypted over HTTPS. That's a good foundation. Here's the simplest manifest that will pull the image from that registry; save it as test.yaml. Note the server name in the image field, which is the critical detail directing the pull mechanism to your private registry.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world
    image: registry.example.com/my-hello-world
Once you apply this to your cluster, you should see something like this, because Kubernetes doesn't yet have credentials to pull from your registry:
~$ kubectl apply -f test.yaml
pod/hello-world created
~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 ErrImagePull 0 4s
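To see why the pull is failing, describe the pod and check the Events section at the bottom; with an htpasswd-protected registry and no credentials, you should see the registry rejecting the pull as unauthorized:

~$ kubectl describe pod hello-world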
Time to give Kubernetes the password to the registry. First, we need to create a docker-registry secret, which we can do with this (very long) command:
~$ kubectl create secret docker-registry privateregistry --docker-server=registry.example.com --docker-username=testuser --docker-password=testpassword --docker-email=test@example.com
secret/privateregistry created
I don't like pasting random commands in without seeing what they do. So here's what a secret looks like... not very informative:
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5leGFtcGxlLmNvbSI6eyJ1c2VybmFtZSI6InRlc3R1c2VyIiwicGFzc3dvcmQiOiJ0ZXN0cGFzc3dvcmQiLCJlbWFpbCI6InRlc3RAZXhhbXBsZS5jb20iLCJhdXRoIjoiZEdWemRIVnpaWEk2ZEdWemRIQmhjM04zYjNKayJ9fX0=
kind: Secret
metadata:
  creationTimestamp: null
  name: privateregistry
type: kubernetes.io/dockerconfigjson
That magic-looking blob of text is just a base64-encoded JSON blob. Decoded, it's just this:
{"auths":{"registry.example.com":{"username":"testuser","password":"testpassword","email":"test@example.com","auth":"dGVzdHVzZXI6dGVzdHBhc3N3b3Jk"}}}
The gobbledygook in auth is yet another base64 encoding, this time of testuser:testpassword. Nothing magical here.
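If you want to verify this on your own cluster, you can pull the blob back out of the secret and decode it yourself:

~$ kubectl get secret privateregistry -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d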
Okay, we're at the finish line here. There are two ways to go: either 1) paste an imagePullSecrets snippet into every manifest that uses the new private registry, or 2) add that snippet to the ServiceAccount in your namespace. (Every namespace has its own ServiceAccount/default, which may cause problems if you move pods into a new namespace and forget to set this up, but it can also be used to separate users by namespace.) Let's just modify the default namespace, so it will set the imagePullSecrets automatically. All we do is get the current YAML for the ServiceAccount, make a couple of small changes, then re-apply the manifest. Here's how you get it:
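~$ kubectl get serviceaccount default -o yaml > serviceaccount.yaml

After editing, the file should look roughly like this. The metadata values below are placeholders (and your dump may contain extra fields, such as a secrets list on older clusters); keep whatever your cluster gave you:

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2024-01-01T00:00:00Z"
  name: default
  namespace: default
  # resourceVersion: "12345"
  uid: 00000000-0000-0000-0000-000000000000
imagePullSecrets:
- name: privateregistry

(For option 1, the same two imagePullSecrets lines would go under each Pod's spec instead.)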
Simply comment out the resourceVersion, add the imagePullSecrets lines, then kubectl apply -f serviceaccount.yaml to update it. Don't just paste this in! Use your YAML, not mine, or you'll break stuff.
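One last gotcha: the ServiceAccount's imagePullSecrets are added to pods at creation time, so the hello-world pod that's stuck in ErrImagePull won't pick up the change by itself. Delete it and apply the manifest again, and it should reach Running:

~$ kubectl delete pod hello-world
~$ kubectl apply -f test.yaml
~$ kubectl get pods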
Done!
At this point, as long as you push an image to your private registry, Kubernetes will find it when instructed. It's HTTPS-encrypted, username/password protected, and running locally on your cluster.
Troubleshooting
- I did find that the registry is sensitive to the host path used for its storage, and didn't like the folder being put in certain places. The symptom is an unhelpful "unsupported" error when you push an image.
- You can ratchet up the registry's log: level setting and see more details, but the default is often enough to figure out issues (one way to bump it is shown after this list).
- You will also notice that some container systems will access the registry first without auth, then attempt again with auth. The registry throws a warning about that first unauthenticated access. Nothing is actually wrong; the client is just checking whether authentication is required, and it's the container system's behavior that triggers the warning.
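One way to bump it, assuming the Deployment sketched earlier, is to add the registry's environment-variable override for the log.level setting to the container's env list:

- name: REGISTRY_LOG_LEVEL
  value: debug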
(Photo credits to Olga Zhushman and Andrea Piacquadio)