Use JuiceFS in Docker

If you simply need to access JuiceFS within a container, you can directly mount JuiceFS into the container using -v. Assuming the host mount point is /jfs:

docker run -d --name nginx \
  -v /jfs/html:/usr/share/nginx/html \
  -p 8080:80 \
  nginx
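
To verify that the container sees the shared data (a quick check, assuming /jfs is already mounted on the host and contains an html directory as in the example above):

# List the content served by Nginx, it should match /jfs/html on the host
docker exec nginx ls /usr/share/nginx/html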

Docker also opens up many other possibilities with JuiceFS; read the sections below for more.

Mount JuiceFS inside Docker container

If it's inconvenient to install the JuiceFS Client on the host, you can instead run it inside a container and then propagate the mount point to the host:

docker run -d --name=jfsmount --restart=always --privileged \
  --mount type=bind,source=/mnt,target=/mnt,bind-propagation=shared \
  -v /root/.juicefs/:/root/.juicefs \
  -v /var/jfsCache:/var/jfsCache \
  juicedata/mount:ee-4.9.23 juicefs mount myjfs /mnt/jfs -f

In the above demonstration:

  • The configuration directory /root/.juicefs is mapped into the container, so that configs are shared with the host.
  • The cache directory /var/jfsCache is mapped into the container, also shared with the host.
  • The -f option is used in the mount command so that the process runs in the foreground, allowing you to check JuiceFS Client logs with docker logs.
  • The myjfs file system is mounted at /mnt/jfs inside the container and, according to the bind propagation settings, shared with the host, so /mnt/jfs can be used directly from the host (see the verification commands below).
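
To confirm that the mount has been propagated to the host, a couple of standard commands can be used (a minimal sanity check, paths and names follow the example above):

# The mount point should be visible from the host, with JuiceFS as the file system type
df -h /mnt/jfs

# Since the client runs in the foreground (-f), its logs are available via docker logs
docker logs -f jfsmount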

When Ceph is used as the underlying object storage (only supported on-prem), the JuiceFS Client depends on the relevant Ceph libraries. This is another scenario where containerization helps, since no extra dependencies need to be installed on the host:

docker run -d --name=jfsmount --restart=always --privileged \
  --mount type=bind,source=/mnt,target=/mnt,bind-propagation=shared \
  -v /root/.juicefs/:/root/.juicefs \
  -v /var/jfsCache:/var/jfsCache \
  -v /etc/ceph:/etc/ceph \
  --env BASE_URL=$JUICEFS_CONSOLE_URL/static \
  juicedata/mount:ee-4.9.23 juicefs mount myjfs /mnt/jfs -f

Some extra caveats in the above demonstration:

  • The Ceph configuration directory /etc/ceph is mapped into the container and shared with the host (see the sketch of its typical contents below).
  • When using JuiceFS on-prem, the web console URL needs to be overridden via the BASE_URL environment variable.
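
As a rough illustration of what the mapped Ceph directory is expected to contain (actual file names depend on your Ceph deployment, the ones below are just common defaults):

# The cluster configuration and a keyring are what the Ceph libraries read at mount time
ls /etc/ceph
# ceph.conf  ceph.client.admin.keyring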

Docker volume plugin

If you wish to control mount points using Docker, so that different application containers may use different JuiceFS file systems, you can use our Docker volume plugin.

Every Docker plugin is itself a Docker image. The JuiceFS Docker volume plugin is packaged with both the JuiceFS Community Edition and JuiceFS Cloud Service clients; after installation, you can run this plugin and create JuiceFS volumes inside Docker.

Install the plugin with the following command, granting permissions when asked:

docker plugin install juicedata/juicefs
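
To verify the installation (a quick check using standard Docker commands):

# juicedata/juicefs should be listed with ENABLED set to true
docker plugin ls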

You can manage the volume plugin with the following commands:

# Disable the volume plugin
docker plugin disable juicedata/juicefs

# Upgrade plugin (need to disable first)
docker plugin upgrade juicedata/juicefs
docker plugin enable juicedata/juicefs

# Uninstall plugin
docker plugin rm juicedata/juicefs

Create volume

Create a new volume using the following command. If you forget how to obtain the relevant credentials, revisit Creating a file system:

docker volume create -d juicedata/juicefs \
  -o name=$VOL_NAME -o token=$JFS_TOKEN \
  -o accesskey=$ACCESS_KEY -o secretkey=$SECRET_KEY jfsvolume

If you need to pass extra environment variables to the auth/mount process (e.g. Google Cloud Storage), append them as -o env=FOO=bar,SPAM=egg.
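
To confirm that Docker has registered the volume (a quick check; the actual mount only happens when a container uses the volume):

# jfsvolume should be listed with juicedata/juicefs as its driver
docker volume ls

# Inspect the options the volume was created with
docker volume inspect jfsvolume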

Usage and management

# Mount the volume in a container
docker run -it -v jfsvolume:/opt busybox ls /opt

# After a volume has been unmounted, delete it using the following command
# Deleting a volume only removes the relevant resources from Docker; it doesn't affect data stored in JuiceFS
docker volume rm jfsvolume

Using Docker Compose

Example of creating and mounting a JuiceFS volume with docker-compose:

version: '3'
services:
  busybox:
    image: busybox
    command: "ls /jfs"
    volumes:
      - jfsvolume:/jfs
volumes:
  jfsvolume:
    driver: juicedata/juicefs
    driver_opts:
      name: ${VOL_NAME}
      token: ${JFS_TOKEN}
      access-key: ${ACCESS_KEY}
      secret-key: ${SECRET_KEY}
      # Pass extra environment variables using env
      # env: FOO=bar,SPAM=egg

Common management commands:

# Start the service
docker-compose up

# Shut down the service and remove Docker volumes
docker-compose down --volumes

Use in Docker Swarm

JuiceFS volume can be shared in Docker Swarm. Make sure that volume plugin juicedata/juicefs is installed on every worker node.

Pass options to --mount to create a service that mounts a JuiceFS volume:

docker service create --name nginx \
  --mount type=volume,volume-driver=juicedata/juicefs,source=jfsvolume,destination=/jfs,volume-opt=name=$VOL_NAME,volume-opt=token=$JFS_TOKEN,volume-opt=accesskey=$ACCESS_KEY,volume-opt=secretkey=$SECRET_KEY \
  nginx:alpine
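
To check that the service started properly (a quick sanity check using standard Docker commands; the service name follows the example above):

# List the tasks of the nginx service and their current state
docker service ps nginx

# On a node that runs a task, the volume should have been created
docker volume ls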

Troubleshooting

If the JuiceFS Docker volume plugin is not working properly, it's recommended to upgrade the volume plugin first, and then check logs to debug.

  • Collect JuiceFS Client logs, which are located inside the Docker volume plugin container itself:

    # locate the docker plugins runtime directory, your environment may differ from below example
    # container directories will be printed, directory name is container ID
    ls /run/docker/plugins/runtime-root/plugins.moby

    # print plugin container info
    # if container list is empty, that means plugin container didn't start properly
    # read the next step to continue debugging
    runc --root /run/docker/plugins/runtime-root/plugins.moby list

    # collect log inside plugin container
    runc --root /run/docker/plugins/runtime-root/plugins.moby exec 452d2c0cf3fd45e73a93a2f2b00d03ed28dd2bc0c58669cca9d4039e8866f99f cat /var/log/juicefs.log

    If the container doesn't exist (ls shows an empty directory), or juicefs.log doesn't exist, this usually indicates a bad mount; check the plugin logs to debug further.

  • Collect plugin logs, for example under systemd:

    journalctl -f -u docker | grep "plugin="

    Inside the plugin container, juicefs is called to perform the actual mount. If any error occurs, it will be shown in the Docker daemon logs, and the same applies to errors from the volume plugin itself.