- Using Docker with NAS and NFS - Mon, May 15 2023
Benefits of using Docker with NAS and NFS
Using NAS and NFS for remote storage in production environments with containerized applications has several benefits:
- Improved storage scalability: Docker with NAS and NFS allows you to store your containerized applications and their data on centralized storage, which can easily be scaled up or down as needed. This makes it easier to manage storage resources and ensures that you have enough storage capacity to meet your application's needs.
- Simplified storage management: By using NAS and NFS, Docker allows you to store and manage your application data in a central location. This makes it easier to back up, restore, and manage your data across different environments, reducing the complexity of managing storage across multiple systems.
- Increased data availability: With Docker and NAS/NFS, your data is stored in a centralized location, making it easier to share and access data across different systems and environments. This improves data availability and can help ensure that your applications are always running with the latest data.
- Improved application portability: Docker with NAS and NFS allows you to store and manage your containerized applications and their data separately, making it easier to move them between different environments without worrying about data loss or compatibility issues. This improves application portability and reduces the time and effort required to deploy your applications in new environments.
- Enhanced security: By using NAS and NFS with Docker, you can implement security measures such as access controls, encryption, and authentication to ensure that your data is secure and only accessible to authorized users.
Overall, using Docker with NAS and NFS provides a more scalable, manageable, and secure storage solution for containerized applications, making it easier to deploy and manage applications across different environments.
Configure the NFS share
First, make sure NFSv4 is enabled in TrueNAS since it offers many improvements compared to NFSv3. To do so, click Services, and ensure that the NFS service is enabled (toggled on). Then, click the Edit icon and select the Enable NFSv4 checkbox.
Now create a new dataset for Docker volumes in TrueNAS by going to Pools under the Storage menu. I named it docker-volumes.
Finally, create an NFS share by clicking Unix Shares (NFS) under the Sharing menu. See the following screenshot for reference:
Don't forget to select the root user under the Maproot User field. Without this, the Docker container will not be able to write anything to the NFS share and will get a Permission denied error, as shown in the screenshot below:
In addition, you can add authorized Docker hosts by specifying their IP addresses under Hosts. At this point, your NFS share is ready.
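Before touching Docker, you can verify from any Linux client that the export is actually visible on the network. This is a quick sanity check, assuming the TrueNAS server uses the IP address 192.168.0.45 from this article's example:

```shell
# Query the exports published by the TrueNAS server (adjust the IP to your
# environment). The showmount utility ships with the nfs-common package.
showmount -e 192.168.0.45
# A line such as "/mnt/nfs-datastore/docker-volumes <authorized hosts>"
# confirms that the share is exported to your Docker hosts.
```

If the share does not appear here, recheck the NFS service and share settings in TrueNAS before continuing.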
Configure the Docker host for NFS
Now install the nfs-common package to be able to use the NFS share on the Docker host. Since my Docker host is running Ubuntu, I will simply run the following command to install it:
sudo apt update && sudo apt install nfs-common
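With nfs-common installed, it can be worth mounting the share manually once before handing it to Docker. This is an optional sketch; the server IP and dataset path are taken from this article's example:

```shell
# Mount the TrueNAS share at a temporary mount point.
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o nfsvers=4 192.168.0.45:/mnt/nfs-datastore/docker-volumes /mnt/nfs-test

# Confirm that the Maproot setting works by writing and removing a test file.
sudo touch /mnt/nfs-test/write-test && sudo rm /mnt/nfs-test/write-test

# Clean up; Docker will mount the share itself later.
sudo umount /mnt/nfs-test
```

If the touch command fails with "Permission denied," revisit the Maproot User setting described above.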
Create an NFS Docker volume
You can now create an NFS Docker volume:
sudo docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.0.45,rw,nfsvers=4 \
  --opt device=:/mnt/nfs-datastore/docker-volumes \
  public_html
Let's briefly discuss the command options:
- --driver specifies a volume driver plugin for Docker, where local is the default driver. You can use other volume driver plugins to use a different type of remote storage with Docker.
- --opt specifies additional options, where:
- type=nfs denotes the volume type
- o=addr=192.168.0.45,rw,nfsvers=4 denotes the NFS server address, read/write mode, and NFS protocol version
- device=:/mnt/nfs-datastore/docker-volumes specifies the actual path in the NFS share where the Docker volume is created
- The public_html keyword at the end specifies the name of the Docker volume.
You can now use the sudo docker inspect public_html command to inspect your new Docker volume.
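If you only want to confirm the NFS options rather than read the full JSON output, docker inspect accepts a Go template via --format. A small sketch, assuming the volume was created with the options shown above:

```shell
# Extract just the NFS mount options stored for the volume.
sudo docker volume inspect public_html --format '{{ .Options.o }}'
# Based on the create command above, this should print:
# addr=192.168.0.45,rw,nfsvers=4
```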
Attach the NFS volume to a container
We can now launch a new container using the NGINX image and attach the new volume:
sudo docker run \
  --name webapp \
  --detach \
  --publish 80:80 \
  --mount type=volume,source=public_html,target=/usr/share/nginx/html \
  nginx
As you can see in the screenshot, I connected to the webapp container and created a new about.html file under the /usr/share/nginx/html directory.
On TrueNAS, we can see that the shared directory is now populated with the same HTML files we can see in the webapp container.
That's it. Your Docker container is now storing files in an NFS volume.
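The round trip can also be verified entirely from the Docker host. This is a sketch assuming the webapp container from above is running and port 80 on the host is free:

```shell
# Confirm the NFS volume is mounted at the expected path inside the container.
sudo docker exec webapp df -h /usr/share/nginx/html

# Write a test page through the container and fetch it through NGINX.
sudo docker exec webapp sh -c 'echo "<h1>About</h1>" > /usr/share/nginx/html/about.html'
curl http://localhost/about.html
```

The same about.html file should then be visible in the docker-volumes dataset on TrueNAS.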
If you already have some Docker volumes created on local storage, you can easily move all the content to the new Docker volumes hosted on TrueNAS. Check out my previous post to learn how to move a Docker volume.
NFS is bad for intensive read/write operations. For instance, if your image uses NFS as storage for a database (SQLite is a common case), you risk ending up with a malformed DB. It happened to me many times, so I finally migrated to Longhorn (in the case of k8s/k3s) or host mode for local containers.
In my opinion, NFS is not that bad as long as you are using version 4. NFSv3 certainly had a lot of problems that are addressed in NFSv4. The MySQL documentation also suggests using version 4.
Moreover, you need to tune the NFS settings to make it work for your environment.
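To illustrate what such tuning could look like, here is a hypothetical variation of the volume-create command from the article with hardened NFS mount options. The specific values are assumptions and depend on your workload and network:

```shell
# hard      - retry NFS requests indefinitely instead of returning I/O errors
# timeo=600 - wait 60 seconds (in tenths of a second) before retransmitting
# retrans=2 - retransmission attempts before a major timeout is reported
sudo docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.0.45,rw,nfsvers=4,hard,timeo=600,retrans=2 \
  --opt device=:/mnt/nfs-datastore/docker-volumes \
  db_data
```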
The MySQL docs don't recommend using NFS. They mention that NFS together with a SAN can be a reliable solution, but that's too much for a home lab or a small office. From my experience, network storage for a database is a bad idea (except for a SAN). A host with RAID is a much more reliable solution, as is Longhorn with data locality, and both are much faster than NFS.
I found only one suitable use case for NFS in my home cluster: a shared folder for backups.