To access remote storage from Docker containers, you must understand how Docker works with network-attached storage (NAS) and the Network File System (NFS) protocol. In this guide, I will explore the benefits of using Docker with NAS and NFS, and then I will explain how to set up an NFS share to access it from a Docker container. For demonstration, I will use TrueNAS with NFSv4.
By Surender Kumar

Benefits of using Docker with NAS and NFS

In production environments with containerized applications, using NAS and NFS for remote storage has several benefits:

  • Improved storage scalability: Docker with NAS and NFS allows you to store your containerized applications and their data on centralized storage, which can easily be scaled up or down as needed. This makes it easier to manage storage resources and ensures that you have enough storage capacity to meet your application's needs.
  • Simplified storage management: By using NAS and NFS, Docker allows you to store and manage your application data in a central location. This makes it easier to back up, restore, and manage your data across different environments, reducing the complexity of managing storage across multiple systems.
  • Increased data availability: With Docker and NAS/NFS, your data is stored in a centralized location, making it easier to share and access data across different systems and environments. This improves data availability and can help ensure that your applications are always running with the latest data.
  • Improved application portability: Docker with NAS and NFS allows you to store and manage your containerized applications and their data separately, making it easier to move them between different environments without worrying about data loss or compatibility issues. This improves application portability and reduces the time and effort required to deploy your applications in new environments.
  • Enhanced security: By using NAS and NFS with Docker, you can implement security measures such as access controls, encryption, and authentication to ensure that your data is secure and only accessible to authorized users.

Overall, using Docker with NAS and NFS provides a more scalable, manageable, and secure storage solution for containerized applications, making deploying and managing applications across different environments easier.

Configure the NFS share

First, make sure NFSv4 is enabled in TrueNAS since it offers many improvements compared to NFSv3. To do so, click Services, and ensure that the NFS service is enabled (toggled on). Then, click the Edit icon and select the Enable NFSv4 checkbox.

Enabling NFSv4 in TrueNAS

Now create a new dataset for Docker volumes in TrueNAS by going to Pools under the Storage menu. I named it docker-volumes.

Creating a dataset for Docker volumes in TrueNAS

Finally, create an NFS share by clicking Unix Shares (NFS) under the Sharing menu. See the following screenshot for reference:

Creating an NFS share for Docker in TrueNAS

Don't forget to select the root user in the Maproot User field; this disables root squashing for the share. Without it, the Docker container will not be able to write anything to the NFS share and will get a Permission denied error, as shown in the screenshot below:

The Docker container cannot write anything to the NFS share

In addition, you can add authorized Docker hosts by specifying their IP addresses under Hosts. At this point, your NFS share is ready.

Configure the Docker host for NFS

Now install the nfs-common package to be able to use the NFS share on the Docker host. Since my Docker host is running Ubuntu, I will simply run the following command to install it:

sudo apt update && sudo apt install nfs-common
Installing the NFS client tools on the Docker host

Create an NFS Docker volume

You can now create an NFS Docker volume:

sudo docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.0.45,rw,nfsvers=4 \
    --opt device=:/mnt/nfs-datastore/docker-volumes \
    public_html
Creating an NFS volume in Docker

Let's briefly discuss the command options:

  • --driver specifies a volume driver plugin for Docker, where local is the default driver. You can use other volume driver plugins to use a different type of remote storage with Docker.
  • --opt specifies additional options, where:
    • type=nfs denotes the volume type
    • o=addr=192.168.0.45,rw,nfsvers=4 denotes the NFS server address, read/write mode, and NFS protocol version
    • device=:/mnt/nfs-datastore/docker-volumes specifies the actual path in the NFS share where the Docker volume is created
  • The public_html argument at the end specifies the name of the Docker volume.
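If you deploy with Docker Compose rather than the CLI, the same volume can be declared in the Compose file. A minimal sketch using the values from the command above (the nginx service is only illustrative):

```yaml
services:
  webapp:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - public_html:/usr/share/nginx/html

volumes:
  public_html:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.0.45,rw,nfsvers=4"
      device: ":/mnt/nfs-datastore/docker-volumes"
```

The driver_opts keys map one-to-one to the --opt flags of the docker volume create command.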

You can now run the sudo docker volume inspect public_html command to inspect your new Docker volume.

Attach the NFS volume to a container

We can now launch a new container using the NGINX image and attach the new volume:

sudo docker run \
    --name webapp \
    --detach \
    --publish 80:80 \
    --mount type=volume,source=public_html,target=/usr/share/nginx/html \
    nginx
Launch a Docker container with an NFS volume mount

As you can see in the screenshot, I connected to the webapp container and created a new about.html file under the /usr/share/nginx/html directory.

On TrueNAS, we can see that the shared directory is now populated with the same HTML files we can see in the webapp container.

Viewing the NFS shared directory in TrueNAS

That's it. Your Docker container is now storing files in an NFS volume.
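When you are done experimenting, you can remove the container and the volume. Note that removing the volume only deletes Docker's reference to the share; the files themselves remain in the dataset on TrueNAS.

```shell
# Stop and remove the container, then remove the volume definition.
sudo docker rm -f webapp
sudo docker volume rm public_html
```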

If you already have some Docker volumes created on local storage, you can easily move all the content to the new Docker volumes hosted on TrueNAS. Check out my previous post to learn how to move a Docker volume.

4 Comments
  1. Sergsw 7 months ago

    NFS is bad for intense read/write operations. For instance, if your image uses NFS as storage for a database (SQLite is a common case), you risk ending up with a malformed database. It happened to me many times, so I finally migrated to Longhorn (in the case of k8s/k3s) or host mode for local containers.

  2. Sergsw 7 months ago

    The MySQL docs don't recommend using NFS. They mention that NFS together with a SAN can be a reliable solution, but that's too much for a home or small-office setup. In my experience, network storage for a database is a bad idea (except for a SAN). A host with RAID is a much more reliable solution, as is Longhorn with data locality, and both are much faster than NFS.

  3. Sergsw 7 months ago

    The only suitable use case I found for NFS on my home cluster is a shared folder for backups.

© 4sysops 2006 - 2023
