A public registry is managed by Docker and is open to everyone; anyone can upload Docker images, which makes it a great place to share open-source projects. A private registry, on the other hand, is managed by an organization or an individual and is used to store Docker images intended for internal use only.
Prerequisites
For the purposes of the demonstration, I will use two Ubuntu systems. The first system is a VM running Ubuntu 22.04, which will act as the Docker host, where we will create the private Docker registry; the nginx web server will serve as a reverse proxy. For the client system, I will use an Ubuntu distribution with Docker in WSL 2, from which we will pull a Docker image from our private Docker registry to launch a Docker container.
Set up a private Docker registry
To set up our private Docker registry, we will use a Docker image called registry. Docker allows us to run a private registry in a container. We can simply run a registry container on a Docker host alongside other containers.
First, connect to the Ubuntu system where you want to run the Docker registry, and make sure Docker is installed on it.
sudo apt update && sudo apt install docker.io -y
Now, create a directory called /docker_repo on your Docker host. We will mount this directory as a volume inside a registry container to work with a persistent data store for our Docker registry.
sudo mkdir /docker_repo
Once you have the directory created, launch a Docker registry container with the following command:
sudo docker run --detach \
  --restart=always \
  --name registry \
  --volume /docker_repo:/docker_repo \
  --env REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/docker_repo \
  --publish 5000:5000 \
  registry
Let's briefly discuss each option:
- The --detach option tells Docker to launch the new container in the background (daemon mode).
- The --restart=always option tells Docker to always restart this container, no matter what. This ensures that your registry container starts automatically when the host is restarted or if the container exits.
- The --name option gives a name to the container, which could be anything (registry, in our case).
- The --volume /docker_repo:/docker_repo option mounts the /docker_repo directory on the Docker host as the /docker_repo volume inside the container.
- The --env REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/docker_repo option defines an environment variable that the registry container uses to set /docker_repo as the root directory for storing all the Docker images we upload to our private registry. By doing so, we are essentially storing all our Docker images in the /docker_repo directory on the Docker host itself.
- The --publish 5000:5000 option maps TCP port 5000 of the Docker host to TCP port 5000 of the registry container. The registry listens on TCP port 5000 by default, but you can customize it if you want.
- The registry keyword at the end refers to the registry Docker image. This image didn't exist on our Docker host, so it is pulled automatically from Docker Hub.
You can now view the running registry container using the docker ps command, as shown in the screenshot:
sudo docker ps
sudo netstat -tlnp | grep :5000
The netstat command shows that your host is now listening on TCP Port 5000. At this point, your private Docker registry is ready for use.
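You can also query the registry's API directly on the Docker host to make sure it responds. The catalog endpoint of a fresh, empty registry should return roughly {"repositories":[]}:

# Query the registry's catalog endpoint (plain HTTP on port 5000 at this stage)
curl http://localhost:5000/v2/_catalog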
In the next section, we will install nginx to be used as a reverse proxy for the Docker registry. Note that nginx is completely optional; the Docker registry container can be used without it. However, we will still use it in this guide, as it gives us more control over and better visibility into the private registry. If you want to run the registry container entirely without nginx, you can tweak the docker run command, as shown below:
sudo docker run --detach \
  --restart=always \
  --name registry \
  --volume /certs:/certs \
  --volume /docker_repo:/docker_repo \
  --env REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  --env REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.techtuts.local.crt \
  --env REGISTRY_HTTP_TLS_KEY=/certs/registry.techtuts.local.key \
  --env REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/docker_repo \
  --publish 443:443 \
  registry
Here, we set most items (such as the certificate file path, registry port, and filesystem path) with environment variables and directly map TCP port 443 of the Docker host to TCP port 443 of the container. It will work just fine without nginx.
Install a TLS certificate
Docker, by default, uses an encrypted connection, unless configured otherwise. Since I am doing this as a demo, I will generate a self-signed certificate, which works for testing. However, you must understand that self-signed certificates are prone to man-in-the-middle (MitM) attacks. You might want to purchase and install a valid TLS certificate from a publicly trusted certificate authority.
To generate a self-signed certificate, use the openssl command, as shown below:
sudo mkdir /certs
sudo openssl req \
  -newkey rsa:4096 -nodes -sha256 \
  -keyout /certs/registry.techtuts.local.key \
  -addext "subjectAltName = DNS:registry.techtuts.local" \
  -x509 -days 365 \
  -out /certs/registry.techtuts.local.crt
We created a /certs directory on the Docker host and used the openssl command to store the certificate files in it. Make sure you replace the subjectAltName with your own domain name. When prompted, specify the same domain name there, too.
If you are planning to use a domain or subdomain, make sure you configure an A record (or CNAME record), depending on the certificate verification mechanism used by the certificate authority. I am using a nonroutable internal domain (registry.techtuts.local), so I will add an A record for the registry subdomain in my private DNS server and point it to the IP address of the Docker host.
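If you don't run a private DNS server in your lab, a simple workaround (my assumption here, not part of the original setup) is a static hosts entry on every machine that needs to reach the registry. The IP address below is just a placeholder for your Docker host:

# Map the registry name to the Docker host's IP address (placeholder IP, replace with your own)
echo "192.168.1.50 registry.techtuts.local" | sudo tee -a /etc/hosts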
Install the nginx reverse proxy
We are now ready to install nginx on the Docker host. We will configure it to pass all requests to the registry container running on TCP port 5000. To install nginx, run the following command:
sudo apt update && sudo apt install nginx -y
Once nginx is installed, use the sudo nano /etc/nginx/nginx.conf command to edit the main configuration file, and then add the following line under the http block:
client_max_body_size 2048m;
You can set the value to 0 to completely avoid upload limit errors. I set it to 2 GB. If you skip this step, you will most likely run into the 413 Request Entity Too Large error when uploading large Docker images.
Now run the command below to create a new server block for the domain (or subdomain) that you're planning to use with the Docker registry.
sudo nano /etc/nginx/sites-available/registry.techtuts.local.conf
Then add the following lines of code to this file:
server {
    listen 80;
    server_name registry.techtuts.local;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name registry.techtuts.local;

    # ssl params
    ssl_certificate /certs/registry.techtuts.local.crt;
    ssl_certificate_key /certs/registry.techtuts.local.key;
    ssl_protocols TLSv1.2;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 600;
    }
}
The additional headers are helpful for preserving the actual source IP address and protocol of the original request as it traverses the proxy server. Be sure to adapt this code to your environment. Once this file is created, enable the new server block, and restart the nginx service:
sudo ln -s /etc/nginx/sites-available/registry.techtuts.local.conf /etc/nginx/sites-enabled/
sudo systemctl restart nginx
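Optionally, you can check the configuration for syntax errors before restarting nginx:

# Validate the nginx configuration before (re)starting the service
sudo nginx -t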
To confirm that nginx is working correctly, you can use the curl command with the Docker registry endpoint URL https://registry.techtuts.local/v2/
curl -kL https://registry.techtuts.local/v2/
If you see an empty JSON object ({}) as the output, everything you have configured so far is working properly.
Create and customize a container
I will now launch a new Docker container using an nginx image, which will be pulled straight from Docker Hub:
sudo docker run --name webserver --detach nginx
sudo docker ps
sudo docker exec -it webserver /bin/bash
You can see in the screenshot that when I connected to the container and tried to run the ip addr, ping, or netstat commands, I received the bash: command not found error each time. This is because the original nginx image, pulled from Docker Hub, doesn't have these tools installed.
We will now install all the necessary tools in our webserver container, build a new image from this container, and upload it to our private Docker registry. You can use the same idea to package a business application in a container and then build and upload the customized image in your private registry. Then, whenever anyone on your team wants to deploy a new container using this image, it already contains everything needed.
To install the missing tools, run the following commands in the webserver container:
apt update
apt install iproute2 iputils-ping net-tools -y
You can see in the screenshot that all commands are now working because the iproute2, iputils-ping, and net-tools packages are now installed.
Create a custom Docker image
We will now create a new image from the webserver container with the following commands:
# view currently running containers
sudo docker ps

# commit the customizations of the webserver container to a new image
sudo docker commit webserver nginx-local

# view Docker images on your host
sudo docker image ls

# tag your new custom image
sudo docker tag nginx-local registry.techtuts.local/nginx
After we run these commands, our first custom Docker image is ready to be pushed to the private Docker registry. At this stage, you can remove the nginx-local tag with the sudo docker rmi nginx-local command. This will not delete the actual image but remove the tag only.
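As a side note, if you prefer a repeatable build over an interactive docker commit, roughly the same image could also be produced from a Dockerfile. The snippet below is only a sketch of that alternative approach, not part of the workflow used in this guide:

# A repeatable alternative to docker commit: build the same custom image from a Dockerfile
cat > Dockerfile <<'EOF'
FROM nginx
RUN apt update && apt install -y iproute2 iputils-ping net-tools
EOF
sudo docker build -t registry.techtuts.local/nginx .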
Push an image to a private Docker registry
Before we can actually push the image to the private Docker registry, we need to perform one more step because we are using a self-signed certificate.
If you try to push the image to a private registry running with a self-signed certificate, you will receive this error:
Get "https://registry.techtuts.local/v2/": x509: certificate signed by unknown authority
This error occurs because Docker, by default, expects registries to present a valid TLS certificate signed by a trusted certificate authority. To get around this error, we will run the sudo nano /etc/docker/daemon.json command to create/edit the daemon.json file and add our registry URL, as shown below.
Note: If you're using a public certificate, you should skip this step.
{ "insecure-registries" : ["https://registry.techtuts.local"] }
I will now split my terminal window and run the sudo docker logs -f registry command on the right side, while pushing the custom image to the private Docker registry with the following command:
sudo docker push registry.techtuts.local/nginx
In the right-side window, the Docker log shows a successful PUT operation, which essentially means that your image is now successfully uploaded to the private registry.
Using a custom Docker image
I will now switch to my other Ubuntu client. To view the current Docker images in the private registry, you can send a curl request to the registry endpoint URL, as shown below:
curl -kL https://registry.techtuts.local/v2/_catalog
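To list the tags of a particular repository (nginx, in our case), you can also query the registry's tags endpoint. The example response below assumes that only the image pushed earlier exists in the registry:

# List the tags of the nginx repository; expected output is roughly {"name":"nginx","tags":["latest"]}
curl -kL https://registry.techtuts.local/v2/nginx/tags/list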
To run a new Docker container using the custom image from the private registry, run the following command. Since we use a self-signed certificate, remember to add the same insecure-registries entry to the client's /etc/docker/daemon.json first:
sudo docker run --name webserver_custom --rm -it registry.techtuts.local/nginx /bin/bash
You can see in the screenshot that as soon as the container is launched with our new image, the ip addr show, ping, and netstat commands worked right away, since these packages are already included in our custom image.
View Docker registry logs
To view the Docker registry logs, you can use the following command:
sudo docker logs -f registry
Since we are using nginx, the nginx access log also contains detailed information about activity in our private Docker registry:
sudo tail -f /var/log/nginx/access.log
Conclusion
You just finished setting up the private Docker registry. You can now upload Docker images for internal use. We did not configure any authentication methods, so currently, anyone who knows your private registry URL will be able to pull and push Docker images. Thus, you should consider configuring authentication for your private Docker registry.
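As a rough sketch of what basic authentication could look like (not configured in this guide), the registry image supports an htpasswd file via environment variables. The user name, password, and /auth path below are placeholders; the run command otherwise mirrors the one used earlier:

# Create a bcrypt-hashed htpasswd file (placeholder credentials, replace with your own)
sudo mkdir /auth
sudo docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword | sudo tee /auth/htpasswd

# Re-create the registry container with basic authentication enabled
sudo docker run --detach \
  --restart=always \
  --name registry \
  --volume /auth:/auth \
  --volume /docker_repo:/docker_repo \
  --env REGISTRY_AUTH=htpasswd \
  --env REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  --env REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  --env REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/docker_repo \
  --publish 5000:5000 \
  registry

Clients would then need to run docker login registry.techtuts.local before pulling or pushing images.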
Please use JFrog Container Registry instead; it has deduplication, replication, proxying, and, above all, a web interface, plus the ability to overlay and inspect containers and repos. This guide works, but in my opinion, a GUI is better for gaining a basic understanding.
I am definitely going to try out the Jfrog container registry. Thank you for the hint.