On-premise deployment

System Requirements

  1. Linux x86_64 with FUSE support
  2. Python 2.7+ or 3.4+
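As a quick pre-flight sketch, the interpreter-version requirement can be checked with a small shell function. The `python_ok` helper below is purely illustrative and not part of the installer:

```shell
# python_ok is an illustrative helper (not part of the installation package):
# it succeeds if a version string satisfies the Python 2.7+ / 3.4+ requirement.
python_ok() {
  major=${1%%.*}          # text before the first dot
  rest=${1#*.}            # text after the first dot
  minor=${rest%%.*}       # second component (handles "3.10.2" too)
  { [ "$major" -eq 2 ] && [ "$minor" -ge 7 ]; } ||
  { [ "$major" -eq 3 ] && [ "$minor" -ge 4 ]; }
}

if python_ok "3.6"; then echo "Python version OK"; fi
```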

Get offline installation package

Unpack the offline installation package which contains two subdirectories meta and mount, corresponding to the JuiceFS metadata service and FUSE mount client, respectively.

The commands below assume that the meta or mount subdirectory has been copied to the /opt/juicedata/ directory on the target machine. This exact path is not required; choose whatever directory suits your environment.

Deploying metadata service

Copy the meta directory to /opt/juicedata/ on all metadata server nodes (the nodes with the IP addresses provided to the JuiceFS team beforehand) and execute the following command to set up and start the metadata service.

# cd /opt/juicedata/meta && ./install.sh $IP

$IP needs to be replaced with the IP address of this node.
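If you prefer not to look up each node's address by hand, it can be derived on the node itself. This is only a sketch: it assumes the first address reported by `hostname -I` is the one registered with the JuiceFS team, so verify the result before running the installer.

```shell
# Assumption: the first IPv4 address reported by `hostname -I` is this
# node's registered address. Double-check it before installing.
IP=$(hostname -I | awk '{print $1}')
echo "would run: ./install.sh $IP"
# cd /opt/juicedata/meta && ./install.sh "$IP"
```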

Deploying mount client

Copy the mount subdirectory to /opt/juicedata/ on the nodes where the JuiceFS filesystem needs to be mounted and execute the following command to install.

# cd /opt/juicedata/mount && ./install.sh $NAME

$NAME needs to be replaced with the actual filesystem name (provided by the JuiceFS team).

After successful execution, you will be prompted to populate the configuration file with the bucket, accesskey, and secretkey used to access the object storage. The value of the bucket key is the virtual-hosted-style endpoint URL, such as http://<bucket>.<object-storage-endpoint>[:port].
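The exact layout of the configuration file depends on your package version; conceptually it carries the three values above, along the lines of the following sketch (all values are placeholders):

```
bucket: http://mybucket.s3.example.com
accesskey: YOUR_ACCESS_KEY
secretkey: YOUR_SECRET_KEY
```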

Once the configuration is populated, mount the filesystem with the following command:

# cd /opt/juicedata/mount && bin/juicefs mount --no-update $NAME /jfs

Please refer to Mount on boot for setting up automatic mounting.

Please refer to Official Document for more information about command line usage.

Upgrade metadata service

Get the new offline installation package from JuiceFS team.

Copy the meta subdirectory of the unpacked installation package to /opt/juicedata/ on all metadata server nodes. Remember to back up the current meta directory on each node first:

# mv /opt/juicedata/meta /opt/juicedata/meta-$(date +%Y%m%d%H%M%S)

Execute the below command on each metadata server node, one at a time; do not run them in parallel, as that would interrupt mounted clients.

# cd /opt/juicedata/meta && ./install.sh $IP

Upgrade mount client

Copy the mount subdirectory from the unpacked installation package to the /tmp/ directory on all client nodes. Back up the current mount directory on each node first, then move the new mount directory into /opt/juicedata/:

# mv /opt/juicedata/mount /opt/juicedata/mount-$(date +%Y%m%d%H%M%S)
# mv /tmp/mount /opt/juicedata/
# cd /opt/juicedata/mount
# chmod a+x bin/jfsmount bin/juicefs
# cp bin/jfsmount /root/.juicefs
# bin/juicefs mount --no-update $NAME /jfs

Quota

JuiceFS supports setting the data size and inode limit for directories.

A quota rule consists of three values: Path, Inodes, and Capacity.

  • Path: absolute path to the directory; the * wildcard can be used for glob matching.
  • Inodes: the maximum total number of inodes in the directory and its subdirectories. When this limit is reached, no new files can be created in the directory or its subdirectories.
  • Capacity: the maximum total data capacity of the directory and its subdirectories. After the limit is reached, no more data can be written.
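As an illustration, a set of quota rules might look like the following (the paths and limits are made up):

```
Path          Inodes    Capacity
/subdir1      1M        100GiB
/subdir2/*    100K      50GB
```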

After setting a subdirectory's quota, you can mount this subdirectory with the --subdir option; the size displayed by the df command will then be the quota you set.

Note

  • The path needs to be an absolute path starting with /. For example, /subdir1 matches the directory subdir1 in that filesystem, and /subdir2/* matches all directories directly under subdir2, such as /subdir2/aaa and /subdir2/bbb, but does not match deeper subdirectories such as /subdir2/aaa/ccc.
  • The Inodes suffix can be K, M, G, T, P, and so on, e.g. 10K, 20M, 5G.
  • The Capacity suffix can be KB, MB, GB, TB, KiB, MiB, GiB, TiB, e.g. 50GB, 100GiB, 1TiB.
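The one-level wildcard semantics described in the note can be illustrated with a small shell sketch. The `matches` function below is purely illustrative (it is not JuiceFS code); it answers whether a quota rule's path pattern would apply to a given directory:

```shell
# Illustrative matcher for quota-rule paths (NOT JuiceFS code):
# a trailing /* matches directories exactly one level below the prefix.
matches() {  # matches PATTERN PATH -> success if the rule applies to PATH
  pat=$1; path=$2
  case $pat in
    */\*) prefix=${pat%/\*}
          rest=${path#"$prefix"/}            # strip "prefix/" if present
          [ "$path" != "$rest" ] &&          # prefix actually matched
          [ "${rest#*/}" = "$rest" ] ;;      # and nothing deeper remains
    *)    [ "$path" = "$pat" ] ;;            # no wildcard: exact match only
  esac
}

matches "/subdir2/*" /subdir2/aaa && echo "rule applies"
```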

Troubleshooting

  1. View the logs first. The log path of the metadata service is /var/log/meta.log; the log path of the mount client is /var/log/juicefs.log.
  2. The metadata service on each node has two processes: a supervisor process meta.py and a meta process (a subprocess of the meta.py process). The running binary and configuration reside in /root/.juicefs, except the meta.py script, which resides in /opt/juicedata/meta/bin. The running data directory of the metadata service is /var/lib/jfs/<meta-id>/; the *.jfs files in this directory are metadata snapshots (metadata.<version>.jfs) and changelogs (changelog.<version>.jfs).
  3. The mount client on each node has two processes: a supervisor process juicefs and a mount process (a subprocess of the juicefs process). The running binary and configuration reside in /root/.juicefs, except the juicefs script, which resides in /opt/juicedata/mount/bin. Read our troubleshooting doc for more information.