Setting up a Secure Private Registry in Kubernetes

With all the nice cloud systems out there, it's tempting to just set up an account with Quay or Docker Hub and push/pull everything from there.  They're great services and incredibly convenient, it's true, but running your own secure private registry has benefits.

What's so good about a secure private registry?

  1. Prevents (one avenue for) proprietary server code leaking.
  2. Improves startup time for workloads because pulling images is a local operation.
  3. Reduces bandwidth costs, since images are pulled over the local network instead of coming in from the internet.
  4. Smaller attack surface by using only approved images from your registry.

What approaches are we NOT tackling?

  • Disabling the public registry

Kubernetes does not pull images itself; it asks each node's container runtime to pull them.  That means changing the default registry is a per-node configuration change.  That's fairly invasive, requires care when spinning up new nodes, and the setting lives in a different place for every install flavor out there, so I don't want to go that route.  If you do decide to lock down public image access, be very aware that running your sole image registry inside Kubernetes can lead to a snake-eating-its-tail problem: if a node doesn't already have a valid image to launch the registry with, the registry can't serve any other images.  If you want to completely lock things down, my suggestion is to run a separate tiny k8s cluster with public image access and only a registry in it.  That way there's no single point of failure.

  • Pull-through cache

The standard registry does not allow a single instance to be both a private registry you can push to and a pull-through cache at the same time.  You can do one or the other, but not both in the same running instance.  A pull-through cache is also only useful if it intercepts all the image requests, which again requires modifying every node's configuration, as above, so I'm not going to bother.
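
For reference, a pull-through cache is just the same registry image running with a proxy section in its config.  Here's a sketch of that fragment, if you ever want a second, cache-only instance (it can't accept pushes):

proxy:
  remoteurl: https://registry-1.docker.io
  # username/password here only if you need authenticated upstream pulls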

  • Making our private registry the default

I wanted all my images to pull through my private registry, for caching purposes.  I also wanted to be able to refer to images that I produce without specifying my registry server in every pod declaration.  However, like the other features I'd like to support, this comes back to modifying the node configuration, which I want to avoid.  The other downside is that manifests become less portable if you don't use explicit server prefixes.  Maybe that's not a strong reason, but I'm going to accept it and back away slowly.

Configuring a Secure Private Registry

With all the things we're not going to do out of the way, let's get on with it.  Each step takes a few commands, but it's not a very complicated process.  We do need a few preconditions in place, though.  I'm running haproxy-ingress and cert-manager, so incoming connections use HTTPS (which is what makes this secure) and the TLS encryption is terminated at the ingress, before traffic reaches the registry pod.  You can let the registry do all the certificate handling itself, but if you have an ingress, it's cleaner to do it this way.  If you use a different ingress or want to provide a Certificate manually, just adjust the manifest slightly.
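
One assumption worth calling out: the Certificate in the manifest below references a ClusterIssuer named letsencrypt.  If you don't already have one, a minimal ACME issuer looks roughly like this sketch (the email and secret name are placeholders, and the solver's ingress class should match yours):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder - use a real address
    privateKeySecretRef:
      name: letsencrypt-account-key   # cert-manager stores the ACME account key here
    solvers:
    - http01:
        ingress:
          class: haproxy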

1. Start Running a Registry

a) You need a subdomain to get an HTTPS certificate, so go create a subdomain for your new registry.

b) Generate a user/password for the registry--preferably something less obvious than testuser:testpassword.

~$ docker run --entrypoint htpasswd  httpd:2 -Bbn testuser testpassword

testuser:$2y$05$TgD7YaW1Ld4EtOy2sJ0PNufxrP48u0NbxMQjAgDWxL7yQ4SwIyBBC

c) Save the following manifest to privateregistry.yaml.  Update the htpasswd at the bottom with the output from above.  Fix the Certificate's domain name to match yours.  Change the Ingress's class to match yours.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: registry-example-com
spec:
  commonName: registry.example.com
  dnsNames:
  - registry.example.com
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  secretName: registry-example-com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: privateregistry
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/ssl-redirect: "true"  # redirect port http -> https
  labels:
    app: privateregistry
spec:
  tls:
  - hosts:
    - registry.example.com
    secretName: registry-example-com
  rules:
  - host: registry.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: privateregistry
            port:
              name: http-web # send traffic to Service/privateregistry's http-web
---
apiVersion: v1
kind: Service
metadata:
  name: privateregistry
  labels:
    app: privateregistry
spec:
  ports:
  - port: 80
    name: http-web
    protocol: TCP
    targetPort: http-web
  selector:
    app: privateregistry  # send traffic to the registry pods
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: privateregistry
  labels:
    app: privateregistry
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: privateregistry   # deployment will track pods it generates because of this
  template:
    metadata:
      labels:
        app: privateregistry  # pods have this label, and Service and Deployment depend on it
    spec:
      containers:
      - name: registry
        image: registry:2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
          name: http-web
        livenessProbe: 
          initialDelaySeconds: 30
          timeoutSeconds: 30
          httpGet:
            path: /
            port: http-web
        readinessProbe: 
          initialDelaySeconds: 30
          timeoutSeconds: 30
          httpGet:
            path: /
            port: http-web
        resources:
          limits:
            cpu: 300m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: auth
          mountPath: /etc/docker/registry/htpasswd
          subPath: htpasswd
        - name: auth
          mountPath: /etc/docker/registry/config.yml
          subPath: config
        - name: containers
          mountPath: /var/lib/registry
      volumes:
      - name: containers
        hostPath:
          path: /tmp/privateregistry  # local folder on the host.  
          type: DirectoryOrCreate     # You'll probably want to use NFS or centralized storage instead
      - name: auth
        configMap:
          name: privateregistry-auth
          items:
          - key: htpasswd
            path: htpasswd
          - key: config
            path: config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: privateregistry-auth
  labels:
    app: privateregistry
data:
  # This defines what user/passwords will be accepted by the registry when talking to its API.
  htpasswd: |
    testuser:$2y$05$TgD7YaW1Ld4EtOy2sJ0PNufxrP48u0NbxMQjAgDWxL7yQ4SwIyBBC  # testuser:testpassword
  config: |
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
      headers:
        X-Content-Type-Options: [nosniff]
      secret: SomeKindOfSecretKeyYouShouldNotReveal  # this can be anything, but it should be set to something
    auth:
      htpasswd:
        realm: basic-realm
        path: /etc/docker/registry/htpasswd  # this is where the htpasswd will be mounted
Save this as privateregistry.yaml

d) Apply this manifest to your cluster:

~$ kubectl apply -f privateregistry.yaml
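
While that spins up, you can check that the pieces are coming together (the names are the ones from the manifest above):

~$ kubectl get pods -l app=privateregistry
~$ kubectl get certificate registry-example-com

The pod should go Ready, and the Certificate should show READY True once Let's Encrypt has issued it.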

After the pod passes its readiness probe and cert-manager has issued the certificate (give it a minute or so the first time), you should be able to hit your new endpoint like this and get the empty curly braces back, indicating the registry API is awake:

~$ curl --user testuser:testpassword https://registry.example.com/v2/
{}

2. Push Content to the Registry

Great!  Now, let's try something simple to test it out.  Here are the commands to pull down the classic hello-world image, retag it, and push it to the new registry.  I'd recommend tailing the logs of the registry container so you can see what's happening over there during these steps.
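
Tailing those logs is one command away (using the Deployment name from the manifest above):

~$ kubectl logs -f deployment/privateregistry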

~$ docker pull hello-world

Using default tag: latest
latest: Pulling from library/hello-world
docker.io/library/hello-world:latest

~$ docker tag hello-world registry.example.com/my-hello-world
~$ docker login -u testuser -p testpassword registry.example.com
~$ docker push registry.example.com/my-hello-world

Using default tag: latest
The push refers to repository [registry.example.com/my-hello-world]
f22b99068db9: Pushed

~$ docker logout registry.example.com
Logout is optional, of course.

Let's be extra-thorough and remove the local, cached versions of the image.  This only affects the local docker image cache, not the registry server.

~$ docker rmi hello-world registry.example.com/my-hello-world
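
If you want to be sure the image really made it to the server side, the registry's HTTP API can list repositories and tags using the same credentials; you should see something like this:

~$ curl --user testuser:testpassword https://registry.example.com/v2/_catalog
{"repositories":["my-hello-world"]}

~$ curl --user testuser:testpassword https://registry.example.com/v2/my-hello-world/tags/list
{"name":"my-hello-world","tags":["latest"]}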

3. Start Running Workloads that Pull From the Registry

To recap, we have pushed an image to our registry service.  It's secured by a username and password, sent over an encrypted HTTPS connection.  That's a good foundation.  Here's the simplest manifest that will pull the image from that registry.  Note the server name in the image field; that's the critical difference that directs the pull to your private registry.

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world
    image: registry.example.com/my-hello-world

Once you apply this to your cluster, you should see something like this, because Kubernetes doesn't yet have credentials for your registry:

~$ kubectl apply -f test.yaml
pod/hello-world created

~$ kubectl get pods
NAME                               READY   STATUS         RESTARTS   AGE
hello-world                        0/1     ErrImagePull   0          4s
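
If you want to confirm it's the credentials and not a typo, describe the pod; the events should show the registry refusing the pull (typically an unauthorized/401 message):

~$ kubectl describe pod hello-world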

Time to give Kubernetes the password to the registry.  First, we need to create a docker-registry secret, which we can do with this (very long) command:

~$ kubectl create secret docker-registry privateregistry --docker-server=registry.example.com --docker-username=testuser --docker-password=testpassword --docker-email=test@example.com

secret/privateregistry created

I don't like pasting random commands in without seeing what they do.  So here's what a secret looks like... not very informative:

apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5leGFtcGxlLmNvbSI6eyJ1c2VybmFtZSI6InRlc3R1c2VyIiwicGFzc3dvcmQiOiJ0ZXN0cGFzc3dvcmQiLCJlbWFpbCI6InRlc3RAZXhhbXBsZS5jb20iLCJhdXRoIjoiZEdWemRIVnpaWEk2ZEdWemRIQmhjM04zYjNKayJ9fX0=
kind: Secret
metadata:
  creationTimestamp: null
  name: privateregistry
type: kubernetes.io/dockerconfigjson

That magic-looking blob of text is just a base64-encoded JSON blob.  Decoded, it's just this:

{"auths":{"registry.example.com":{"username":"testuser","password":"testpassword","email":"test@example.com","auth":"dGVzdHVzZXI6dGVzdHBhc3N3b3Jk"}}}

The gobbledygook in auth is yet another base64 encoding, this time of testuser:testpassword.  Nothing magical here.
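
If you want to verify that yourself, pull the blob back out of the cluster and decode it; you should get the JSON above:

~$ kubectl get secret privateregistry -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d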

Okay, we're at the finish line here.  There are two ways to go: either 1) paste an imagePullSecrets snippet into every manifest that uses the new private registry, or 2) add that snippet to the default ServiceAccount in your namespace.  (Every namespace has its own ServiceAccount/default, which can bite you if you move pods into a new namespace and forget to set this up there too.  On the flip side, it also means you can give different namespaces different registry credentials, which is handy for separating users.)
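
Option 1 looks like this; it's the test pod from above with the secret referenced directly in the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  imagePullSecrets:
  - name: privateregistry   # the docker-registry secret created above
  containers:
  - name: hello-world
    image: registry.example.com/my-hello-world

That works, but you have to remember to add it to every manifest.  Let's go with option 2 instead and modify the default ServiceAccount, so it sets the imagePullSecrets automatically.  All we do is get the current YAML for the ServiceAccount, make a couple of small changes, then re-apply the manifest.  Here's how you get it: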

~$ kubectl get serviceaccount/default -n default -o yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-08-07T07:22:30Z"
  name: default
  namespace: default
  resourceVersion: "262"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 1c94cca0-812e-4d8a-b0a8-70563b705c4d
secrets:
- name: default-token-wmcgg
Save this as serviceaccount.yaml

Simply comment out the resourceVersion, add the imagePullSecrets lines, then run kubectl apply -f serviceaccount.yaml to update it.  (Leaving a stale resourceVersion in can make the apply fail with a conflict error.)  Don't just paste this in!  Use your YAML, not mine, or you'll break stuff.

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-08-07T07:22:30Z"
  name: default
  namespace: default
#  resourceVersion: "262"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 1c94cca0-812e-4d8a-b0a8-70563b705c4d
secrets:
- name: default-token-wmcgg
imagePullSecrets:
- name: privateregistry
This gives K8s the keys to the car.
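
To see it take effect, recreate the test pod.  (The ServiceAccount's imagePullSecrets only gets injected into pods created after the change, so the existing failing pod won't fix itself.)

~$ kubectl apply -f serviceaccount.yaml
~$ kubectl delete pod hello-world
~$ kubectl apply -f test.yaml
~$ kubectl get pods

The image should now pull cleanly.  Don't be surprised if the pod quickly shows Completed or even CrashLoopBackOff rather than staying Running; hello-world just prints its message and exits.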

Done!

At this point, as long as you push an image to your private registry, Kubernetes will find it when instructed.  It's HTTPS encrypted, user/password protected, and running locally on your cluster.

Troubleshooting

  • I did find that the registry is sensitive to the host path, and didn't like its storage folder being put in certain places.  When that goes wrong, pushes fail with a not-very-helpful "unsupported" error.
  • You can ratchet up the log level and see more detail (see the config fragment after this list), but the default is often enough to figure out issues.
  • You will also notice that some container tooling hits the registry without credentials first and only then retries with them.  The registry logs a warning about that first unauthenticated request.  Nothing is actually wrong; that's just the client discovering that authentication is required.
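
Raising the log level is a one-line change to the config section of the ConfigMap above (the registry accepts error, warn, info, or debug); re-apply the manifest and restart the registry pod afterwards:

log:
  level: debug
  fields:
    service: registry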

(Photo credits to Olga Zhushman and Andrea Piacquadio)