
Command Reference

Run juicefs by itself and it will print all available commands. In addition, you can add the -h/--help flag after any command to get more information, e.g., juicefs format -h.

NAME:
juicefs - A POSIX file system built on Redis and object storage.

USAGE:
juicefs [global options] command [command options] [arguments...]

VERSION:
1.2.0

COMMANDS:
ADMIN:
format Format a volume
config Change configuration of a volume
quota Manage directory quotas
destroy Destroy an existing volume
gc Garbage collector of objects in data storage
fsck Check consistency of a volume
restore restore files from trash
dump Dump metadata into a JSON file
load Load metadata from a previously dumped JSON file
version Show version
INSPECTOR:
status Show status of a volume
stats Show real time performance statistics of JuiceFS
profile Show profiling of operations completed in JuiceFS
info Show internal information of a path or inode
debug Collect and display system static and runtime information
summary Show tree summary of a directory
SERVICE:
mount Mount a volume
umount Unmount a volume
gateway Start an S3-compatible gateway
webdav Start a WebDAV server
TOOL:
bench Run benchmarks on a path
objbench Run benchmarks on an object storage
warmup Build cache for target directories/files
rmr Remove directories recursively
sync Sync between two storages
clone clone a file or directory without copying the underlying data
compact Trigger compaction of chunks

GLOBAL OPTIONS:
--verbose, --debug, -v enable debug log (default: false)
--quiet, -q show warning and errors only (default: false)
--trace enable trace log (default: false)
--log-id value append the given log id in log, use "random" to use random uuid
--no-agent disable pprof (:6060) agent (default: false)
--pyroscope value pyroscope address
--no-color disable colors (default: false)
--help, -h show help (default: false)
--version, -V print version only (default: false)

COPYRIGHT:
Apache License 2.0

Auto completion

To enable command completion, simply source the script provided in the hack/autocomplete directory. For example:

source hack/autocomplete/bash_autocomplete

Please note that auto-completion is only enabled for the current session. If you want to apply it to all new sessions, add the source command to .bashrc or .zshrc:

echo "source path/to/bash_autocomplete" >> ~/.bashrc

Alternatively, if you are using bash on a Linux system, you may just copy the script to /etc/bash_completion.d and rename it to juicefs:

cp hack/autocomplete/bash_autocomplete /etc/bash_completion.d/juicefs
source /etc/bash_completion.d/juicefs
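
If you use zsh, the same directory also ships a zsh completion script (the file name below assumes it is called zsh_autocomplete, as in the JuiceFS source tree); a minimal sketch:

# Load completion for the current zsh session
source hack/autocomplete/zsh_autocomplete

# Make it persistent for new zsh sessions (adjust the path to where the script lives)
echo "source /path/to/zsh_autocomplete" >> ~/.zshrc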

Admin

juicefs format

Create and format a file system. If a volume already exists at the same META-URL, this command skips the formatting step. To adjust configurations of existing volumes, use juicefs config.

Synopsis

juicefs format [command options] META-URL NAME

# Create a simple test volume (data will be stored in a local directory)
juicefs format sqlite3://myjfs.db myjfs

# Create a volume with Redis and S3
juicefs format redis://localhost myjfs --storage=s3 --bucket=https://mybucket.s3.us-east-2.amazonaws.com

# Create a volume with password protected MySQL
juicefs format mysql://jfs:mypassword@(127.0.0.1:3306)/juicefs myjfs
# A safer alternative
META_PASSWORD=mypassword juicefs format mysql://jfs:@(127.0.0.1:3306)/juicefs myjfs

# Create a volume with quota enabled
juicefs format sqlite3://myjfs.db myjfs --inodes=1000000 --capacity=102400

# Create a volume with trash disabled
juicefs format sqlite3://myjfs.db myjfs --trash-days=0

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see JuiceFS supported metadata engines for details.|
|NAME|Name of the file system|
|--force|overwrite existing format (default: false)|
|--no-update|don't update existing volume (default: false)|

Data storage options

|Items|Description|
|-|-|
|--storage=file|Object storage type (e.g. s3, gs, oss, cos) (default: file, refer to documentation for all supported object storage types)|
|--bucket=/var/jfs|A bucket URL to store data (default: $HOME/.juicefs/local or /var/jfs)|
|--access-key=value|Access Key for object storage (can also be set via the environment variable ACCESS_KEY), see How to Set Up Object Storage for more.|
|--secret-key value|Secret Key for object storage (can also be set via the environment variable SECRET_KEY), see How to Set Up Object Storage for more.|
|--session-token=value|session token for object storage, see How to Set Up Object Storage for more.|
|--storage-class value (Added in v1.1)|the default storage class|

Data format options

|Items|Description|
|-|-|
|--block-size=4M|size of block in KiB (default: 4M). 4M is usually a better default value because many object storage services use 4M as their internal block size, so using the same block size in JuiceFS usually yields better performance.|
|--compress=none|compression algorithm, choose from lz4, zstd, none (default). Enabling compression will inevitably affect performance. Of the two supported algorithms, lz4 offers better performance, while zstd comes with a higher compression ratio; search for their detailed comparison.|
|--encrypt-rsa-key=value|A path to the RSA private key (PEM)|
|--encrypt-algo=aes256gcm-rsa|encryption algorithm (aes256gcm-rsa, chacha20-rsa) (default: "aes256gcm-rsa")|
|--hash-prefix|For most object storages, if object storage blocks are sequentially named, they will also be stored close together in the underlying physical regions. Under intensive concurrent consecutive reads, this can cause hotspots and hinder object storage performance.<br />Enabling --hash-prefix adds a hash prefix to the name of the blocks (slice ID mod 256, see internal implementation), which distributes data blocks evenly across actual object storage regions, offering more consistent performance. Obviously, this option dictates the object naming pattern, so it must be specified when a file system is created and cannot be changed on the fly.<br />Currently, AWS S3 has already made improvements and no longer requires application-side optimization, but for other types of object storage this option is still recommended for large-scale scenarios.|
|--shards=0|If your object storage limits speed at the bucket level (or you're using a self-hosted object storage with limited performance), you can store the blocks into N buckets by hash of key (default: 0). When N is greater than 0, bucket should be in the form of %d, e.g. --bucket "juicefs-%d". --shards cannot be changed afterwards and must be planned carefully ahead.|
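
For example, a sharded volume with hashed block names might be created as follows (a sketch; the bucket URL and region are placeholders, and the %d placeholder is expanded to the shard index):

# Spread blocks across 4 buckets named juicefs-0 ... juicefs-3
juicefs format redis://localhost myjfs --storage=s3 --bucket="https://juicefs-%d.s3.us-east-2.amazonaws.com" --shards=4 --hash-prefix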

Management options

|Items|Description|
|-|-|
|--capacity=0|storage space limit in GiB, default to 0 which means no limit. Capacity will include trash files, if trash is enabled.|
|--inodes=0|limit the number of inodes, default to 0 which means no limit.|
|--trash-days=1|By default, deleted files are put into trash; this option controls the number of days before trash files are expired, default to 1, set to 0 to disable trash.|
|--enable-acl=true (Added in v1.2)|enable POSIX ACL, which is irreversible.|

juicefs config

Change the configuration of a volume. Note that after updating some settings, clients may not pick up the changes immediately; they take effect after a certain period of time, which is controlled by the --heartbeat option.

Synopsis

juicefs config [command options] META-URL

# Show the current configurations
juicefs config redis://localhost

# Change volume "quota"
juicefs config redis://localhost --inodes 10000000 --capacity 1048576

# Change maximum days before files in trash are deleted
juicefs config redis://localhost --trash-days 7

# Limit client version that is allowed to connect
juicefs config redis://localhost --min-client-version 1.0.0 --max-client-version 1.1.0

Options

|Items|Description|
|-|-|
|--yes, -y|automatically answer 'yes' to all prompts and run non-interactively (default: false)|
|--force|skip sanity check and force update the configurations (default: false)|

Data storage options

|Items|Description|
|-|-|
|--storage=file (Added in v1.1)|Object storage type (e.g. s3, gs, oss, cos) (default: "file", refer to documentation for all supported object storage types).|
|--bucket=/var/jfs|A bucket URL to store data (default: $HOME/.juicefs/local or /var/jfs)|
|--access-key=value|Access Key for object storage (can also be set via the environment variable ACCESS_KEY), see How to Set Up Object Storage for more.|
|--secret-key value|Secret Key for object storage (can also be set via the environment variable SECRET_KEY), see How to Set Up Object Storage for more.|
|--session-token=value|session token for object storage, see How to Set Up Object Storage for more.|
|--storage-class value (Added in v1.1)|the default storage class|
|--upload-limit=0|bandwidth limit for upload in Mbps (default: 0)|
|--download-limit=0|bandwidth limit for download in Mbps (default: 0)|

Management options

|Items|Description|
|-|-|
|--capacity value|limit for space in GiB|
|--inodes value|limit for number of inodes|
|--trash-days value|number of days after which removed files will be permanently deleted|
|--enable-acl (Added in v1.2)|enable POSIX ACL (irreversible); at the same time, the minimum client version allowed to connect will be upgraded to v1.2|
|--encrypt-secret|encrypt the secret key if it was previously stored in plain format (default: false)|
|--min-client-version value (Added in v1.1)|minimum client version allowed to connect|
|--max-client-version value (Added in v1.1)|maximum client version allowed to connect|
|--dir-stats (Added in v1.1)|enable dir stats, which is necessary for fast summary and dir quota (default: false)|
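
As a sketch of how several of these options combine (values are illustrative only):

# Enable directory stats, re-encrypt a plain-text secret key, and only allow v1.1+ clients to connect
juicefs config redis://localhost --dir-stats --encrypt-secret --min-client-version 1.1.0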

juicefs quota Added in v1.1

Manage directory quotas

Synopsis

juicefs quota command [command options] META-URL

# Set quota to a directory
juicefs quota set redis://localhost --path /dir1 --capacity 1 --inodes 100

# Get quota of a directory
juicefs quota get redis://localhost --path /dir1

# List all directory quotas
juicefs quota list redis://localhost

# Delete quota of a directory
juicefs quota delete redis://localhost --path /dir1

# Check quota consistency of a directory
juicefs quota check redis://localhost

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see "JuiceFS supported metadata engines" for details.|
|--path value|full path of the directory within the volume|
|--capacity value|hard quota of the directory limiting its usage of space in GiB (default: 0)|
|--inodes value|hard quota of the directory limiting its number of inodes (default: 0)|
|--repair|repair inconsistent quota (default: false)|
|--strict|calculate total usage of directory in strict mode (NOTE: may be slow for huge directory) (default: false)|
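
For instance, a strict consistency check of a single directory that also repairs any drifted usage records might look like this (a sketch; the path is hypothetical):

# Recalculate the usage of /dir1 accurately and fix the stored quota usage if inconsistent
juicefs quota check redis://localhost --path /dir1 --strict --repair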

juicefs destroy

Destroy an existing volume, deleting the relevant data in the metadata engine and object storage. See How to destroy a file system.

Synopsis

juicefs destroy [command options] META-URL UUID

juicefs destroy redis://localhost e94d66a8-2339-4abd-b8d8-6812df737892

Options

|Items|Description|
|-|-|
|--yes, -y (Added in v1.1)|automatically answer 'yes' to all prompts and run non-interactively (default: false)|
|--force|skip sanity check and force destroy the volume (default: false)|

juicefs gc

If for some reason an object storage block escapes JuiceFS management completely, i.e. the metadata is gone but the block still persists in the object storage and cannot be released, this is called an "object leak". If this happens without any special file system manipulation, it could well indicate a bug within JuiceFS; file a GitHub Issue to let us know.

Meanwhile, you can run this command to deal with leaked objects. It also deletes stale slices produced by file overwrites. See Status Check & Maintenance.

Synopsis

juicefs gc [command options] META-URL

# Check only, no writable change
juicefs gc redis://localhost

# Trigger compaction of all slices
juicefs gc redis://localhost --compact

# Delete leaked objects
juicefs gc redis://localhost --delete

Options

|Items|Description|
|-|-|
|--compact|compact all chunks with more than 1 slice (default: false)|
|--delete|delete leaked objects (default: false)|
|--threads=10|number of threads to delete leaked objects (default: 10)|

juicefs fsck

Check the consistency of a file system.

Synopsis

juicefs fsck [command options] META-URL

juicefs fsck redis://localhost

Options

|Items|Description|
|-|-|
|--path value (Added in v1.1)|absolute path within JuiceFS to check|
|--repair (Added in v1.1)|repair the specified path if it's broken (default: false)|
|--recursive, -r (Added in v1.1)|recursively check or repair (default: false)|
|--sync-dir-stat (Added in v1.1)|sync stat of all directories, even if they exist and are not broken (NOTE: it may take a long time for huge trees) (default: false)|

juicefs restore Added in v1.1

Rebuild the tree structure for trash files, and put them back into their original directories.

Synopsis

juicefs restore [command options] META HOUR ...

juicefs restore redis://localhost/1 2023-05-10-01

Options

|Items|Description|
|-|-|
|--put-back value|move the recovered files into their original directory (default: false)|
|--threads value|number of threads (default: 10)|

juicefs dump

Dump metadata into a JSON file. Refer to "Metadata backup" for more information.

Synopsis

juicefs dump [command options] META-URL [FILE]

# Export metadata to meta-dump.json
juicefs dump redis://localhost meta-dump.json

# Export metadata for only one subdirectory of the file system
juicefs dump redis://localhost sub-meta-dump.json --subdir /dir/in/jfs

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see JuiceFS supported metadata engines for details.|
|FILE|Export file path; if not specified, metadata will be exported to standard output. If the filename ends with .gz, it will be automatically compressed.|
|--subdir=path|Only export metadata for the specified subdirectory.|
|--keep-secret-key (Added in v1.1)|Export object storage authentication information; the default is false. Since it is exported in plain text, pay attention to data security when using it. If the exported file does not contain object storage authentication information, you need to use juicefs config to reconfigure it after the subsequent import is completed.|
|--threads=10 (Added in v1.2)|number of threads to dump metadata (default: 10)|
|--fast (Added in v1.2)|Use more memory to speed up dump.|
|--skip-trash (Added in v1.2)|Skip files and directories in trash.|
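
For example, to produce a compressed backup that also keeps the object storage credentials (handle the resulting file carefully, since the secret key is stored in plain text), something like the following should work:

# Dump metadata with 20 threads into a gzip-compressed file, keeping the secret key
juicefs dump redis://localhost meta-dump.json.gz --keep-secret-key --threads=20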

juicefs load

Load metadata from a previously dumped JSON file. Read "Metadata recovery and migration" to learn more.

Synopsis

juicefs load [command options] META-URL [FILE]

# Import the metadata backup file meta-dump.json to the database
juicefs load redis://127.0.0.1:6379/1 meta-dump.json

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see JuiceFS supported metadata engines for details.|
|FILE|Import file path, if not specified, it will be imported from standard input. If the filename ends with .gz, it will be automatically decompressed.|
|--encrypt-rsa-key=path (Added in v1.0.4)|The path to the RSA private key file used for encryption.|
|--encrypt-alg=aes256gcm-rsa (Added in v1.0.4)|Encryption algorithm, the default is aes256gcm-rsa.|

Inspector

juicefs status

Show status of JuiceFS.

Synopsis

juicefs status [command options] META-URL

juicefs status redis://localhost

Options

|Items|Description|
|-|-|
|--session=0, -s 0|show detailed information (sustained inodes, locks) of the specified session (SID) (default: 0)|
|--more, -m (Added in v1.1)|show more statistic information, may take a long time (default: false)|

juicefs stats

Show runtime statistics, read Real-time performance monitoring for more.

Synopsis

juicefs stats [command options] MOUNTPOINT

juicefs stats /mnt/jfs

# More metrics
juicefs stats /mnt/jfs -l 1

Options

|Items|Description|
|-|-|
|--schema=ufmco|schema string that controls the output sections (u: usage, f: FUSE, m: metadata, c: block cache, o: object storage, g: Go) (default: ufmco)|
|--interval=1|interval in seconds between each update (default: 1)|
|--verbosity=0|verbosity level, 0 or 1 is enough for most cases (default: 0)|

juicefs profile

Show profiling of operations completed in JuiceFS, based on the access log. Read Real-time performance monitoring for more.

Synopsis

juicefs profile [command options] MOUNTPOINT/LOGFILE

# Monitor real time operations
juicefs profile /mnt/jfs

# Replay an access log
cat /mnt/jfs/.accesslog > /tmp/jfs.alog
# Press Ctrl-C to stop the "cat" command after some time
juicefs profile /tmp/jfs.alog

# Analyze an access log and print the total statistics immediately
juicefs profile /tmp/jfs.alog --interval 0

Options

|Items|Description|
|-|-|
|--uid=value, -u value|only track specified UIDs (separated by comma)|
|--gid=value, -g value|only track specified GIDs (separated by comma)|
|--pid=value, -p value|only track specified PIDs (separated by comma)|
|--interval=2|flush interval in seconds; set it to 0 when replaying a log file to get an immediate result (default: 2)|

juicefs info

Show internal information for given paths or inodes.

Synopsis

juicefs info [command options] PATH or INODE

# Check a path
juicefs info /mnt/jfs/foo

# Check an inode
cd /mnt/jfs
juicefs info -i 100

Options

|Items|Description|
|-|-|
|--inode, -i|use inode instead of path (current dir should be inside JuiceFS) (default: false)|
|--recursive, -r|get summary of directories recursively (NOTE: it may take a long time for huge trees) (default: false)|
|--strict (Added in v1.1)|get accurate summary of directories (NOTE: it may take a long time for huge trees) (default: false)|
|--raw|show internal raw information (default: false)|

juicefs debug Added in v1.1

Collect and display information from multiple dimensions, such as the operating environment and system logs, to help locate errors.

Synopsis

juicefs debug [command options] MOUNTPOINT

# Collect and display information about the mount point /mnt/jfs
juicefs debug /mnt/jfs

# Specify the output directory as /var/log
juicefs debug --out-dir=/var/log /mnt/jfs

# Get the last up to 1000 log entries
juicefs debug --out-dir=/var/log --limit=1000 /mnt/jfs

Options

|Items|Description|
|-|-|
|--out-dir=./debug/|The output directory of the results, automatically created if the directory does not exist (default: ./debug/)|
|--limit=value|The number of log entries collected, from newest to oldest, if not specified, all entries will be collected|
|--stats-sec=5|The number of seconds to sample .stats file (default: 5)|
|--trace-sec=5|The number of seconds to sample trace metrics (default: 5)|
|--profile-sec=30|The number of seconds to sample profile metrics (default: 30)|

juicefs summary Added in v1.1

Show a tree summary of the target directory.

Synopsis

juicefs summary [command options] PATH

# Show with path
juicefs summary /mnt/jfs/foo

# Show max depth of 5
juicefs summary --depth 5 /mnt/jfs/foo

# Show top 20 entries
juicefs summary --entries 20 /mnt/jfs/foo

# Show accurate result
juicefs summary --strict /mnt/jfs/foo

Options

|Items|Description|
|-|-|
|--depth value, -d value|depth of tree to show (zero means only show root) (default: 2)|
|--entries value, -e value|show top N entries (sort by size) (default: 10)|
|--strict|show accurate summary, including directories and files (may be slow) (default: false)|
|--csv|print summary in csv format (default: false)|

Service

juicefs mount

Mount a volume. The volume must be formatted in advance.

JuiceFS can be mounted by root or by a normal user, but due to their privilege differences, the cache directory and log path will vary; read the descriptions below for more.

Synopsis

juicefs mount [command options] META-URL MOUNTPOINT

# Mount in foreground
juicefs mount redis://localhost /mnt/jfs

# Mount in background with password protected Redis
juicefs mount redis://:mypassword@localhost /mnt/jfs -d
# A safer alternative
META_PASSWORD=mypassword juicefs mount redis://localhost /mnt/jfs -d

# Mount with a sub-directory as root
juicefs mount redis://localhost /mnt/jfs --subdir /dir/in/jfs

# Enable "writeback" mode, which improves performance at the risk of losing objects
juicefs mount redis://localhost /mnt/jfs -d --writeback

# Enable "read-only" mode
juicefs mount redis://localhost /mnt/jfs -d --read-only

# Disable metadata backup
juicefs mount redis://localhost /mnt/jfs --backup-meta 0

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see JuiceFS supported metadata engines for details.|
|MOUNTPOINT|file system mount point, e.g. /mnt/jfs, Z:.|
|-d, --background|run in background (default: false)|
|--no-syslog|disable syslog (default: false)|
|--log=path|path of log file when running in background (default: $HOME/.juicefs/juicefs.log or /var/log/juicefs.log)|
|--force|force to mount even if the mount point is already mounted by the same filesystem|
|--update-fstab (Added in v1.1)|add / update entry in /etc/fstab, will create a symlink from /sbin/mount.juicefs to the JuiceFS executable if not existing (default: false)|

FUSE related options

|Items|Description|
|-|-|
|--enable-xattr|enable extended attributes (xattr) (default: false)|
|--enable-ioctl (Added in v1.1)|enable ioctl (support GETFLAGS/SETFLAGS only) (default: false)|
|--root-squash value (Added in v1.1)|mapping local root user (UID = 0) to another one specified as UID:GID|
|--prefix-internal (Added in v1.1)|add '.jfs' prefix to all internal files (default: false)|
|-o value|other FUSE options, see FUSE Mount Options|
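
A sketch combining several of these options (the UID:GID mapping is illustrative; the allow_other FUSE option may require user_allow_other in /etc/fuse.conf when mounting as a non-root user):

# Enable xattr, map the local root user to UID/GID 1000, and pass a raw FUSE option
juicefs mount redis://localhost /mnt/jfs -d --enable-xattr --root-squash 1000:1000 -o allow_other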

Metadata related options

|Items|Description|
|-|-|
|--subdir=value|mount a sub-directory as root (default: "")|
|--backup-meta=3600|interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: "3600")|
|--backup-skip-trash (Added in v1.2)|skip files and directories in trash when backing up metadata|
|--heartbeat=12|interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: "12")|
|--read-only|allow lookup/read operations only (default: false)|
|--no-bgjob|Disable background jobs; default to false, which means clients carry out background jobs by default, including:<br />• Clean up expired files in Trash (look for cleanupDeletedFiles and cleanupTrash in pkg/meta/base.go)<br />• Delete slices that are not referenced (look for cleanupSlices in pkg/meta/base.go)<br />• Clean up stale client sessions (look for CleanStaleSessions in pkg/meta/base.go)<br />Note that compaction isn't affected by this option; it happens automatically with file reads and writes: the client checks whether compaction is needed and runs it in the background (take Redis for example, look for compactChunk in pkg/meta/base.go).|
|--atime-mode=noatime (Added in v1.1)|Control atime (last time the file was accessed) behavior; the following modes are supported:<br />• noatime (default): atime is set only when the file is created or when SetAttr is explicitly called. Accessing and modifying the file will not affect atime; tracking atime comes at a performance cost, so this is the default behavior<br />• relatime: update inode access times relative to mtime (last time the file data was modified) or ctime (last time the file metadata was changed). Only update atime if it was earlier than the current mtime or ctime, or if the file's atime is more than 1 day old<br />• strictatime: always update atime on access|
|--skip-dir-nlink=20 (Added in v1.1)|number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20)|
|--skip-dir-mtime=100ms (Added in v1.2)|skip updating attributes of a directory if the mtime difference is smaller than this value (default: 100ms)|

Metadata cache related options

For metadata cache description and usage, refer to Kernel metadata cache and Client memory metadata cache.

|Items|Description|
|-|-|
|--attr-cache=1|attributes cache timeout in seconds (default: 1), read Kernel metadata cache|
|--entry-cache=1|file entry cache timeout in seconds (default: 1), read Kernel metadata cache|
|--dir-entry-cache=1|dir entry cache timeout in seconds (default: 1), read Kernel metadata cache|
|--open-cache=0|open file cache timeout in seconds (0 means disable this feature) (default: 0)|
|--open-cache-limit value (Added in v1.1)|max number of open files to cache (soft limit, 0 means unlimited) (default: 10000)|

Data storage related options

|Items|Description|
|-|-|
|--storage=file|Object storage type (e.g. s3, gs, oss, cos) (default: "file", refer to documentation for all supported object storage types).|
|--bucket=value|customized endpoint to access object storage|
|--storage-class value (Added in v1.1)|the storage class for data written by the current client|
|--get-timeout=60|the max number of seconds to download an object (default: 60)|
|--put-timeout=60|the max number of seconds to upload an object (default: 60)|
|--io-retries=10|Number of retries when the network is abnormal; the number of retries for metadata requests is also controlled by this option. If the number of retries is exceeded, an EIO "Input/output error" will be returned. (default: 10)|
|--max-uploads=20|Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes; with such a write pattern, increasing upload concurrency usually demands a higher --buffer-size, learn more at Read/Write Buffer. But for random writes around 100K, 20 might not be enough and can cause congestion at high load; consider using a larger upload concurrency, or try to consolidate small writes on the application end.|
|--max-stage-write=0 (Added in v1.2)|The maximum number of concurrent asynchronous writes of data blocks to the cache disk. If this limit is reached, blocks are uploaded directly to the object storage (this option is only valid when "Client write data cache" is enabled) (default: 0, i.e. no concurrency limit)|
|--max-deletes=10|number of threads to delete objects (default: 10)|
|--upload-limit=0|bandwidth limit for upload in Mbps (default: 0)|
|--download-limit=0|bandwidth limit for download in Mbps (default: 0)|

Data cache related options

|Items|Description|
|-|-|
|--buffer-size=300|total read/write buffering in MiB (default: 300), see Read/Write buffer|
|--prefetch=1|prefetch N blocks in parallel (default: 1), see Client read data cache|
|--writeback|upload objects in background (default: false), see Client write data cache|
|--upload-delay=0|When --writeback is enabled, you can use this option to add a delay before object storage upload; default to 0, meaning that upload begins immediately after write. Different units are supported, including s (second), m (minute), h (hour). If files are deleted during this delay, upload is skipped entirely; when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to Client write data cache.|
|--upload-hours (Added in v1.2)|When --writeback is enabled, data blocks are only uploaded during the specified time of day. The format of the parameter is <start hour>,<end hour> (the start hour is inclusive, the end hour exclusive, and the start hour may be greater than the end hour to span midnight), where <hour> can range from 0 to 23. For example, 0,6 means that data blocks are only uploaded between 0:00 and 5:59 every day, and 23,3 means that data blocks are only uploaded between 23:00 and 2:59 the next day.|
|--cache-dir=value|directory paths of local cache, use : (Linux, macOS) or ; (Windows) to separate multiple paths (default: $HOME/.juicefs/cache or /var/jfsCache), see Client read data cache|
|--cache-mode value (Added in v1.1)|file permissions for cached blocks (default: "0600")|
|--cache-size=102400|size of cached objects for read in MiB (default: 102400), see Client read data cache|
|--free-space-ratio=0.1|min free space ratio (default: 0.1); if Client write data cache is enabled, this option also controls write cache size, see Client read data cache|
|--cache-partial-only|cache random/small reads only (default: false), see Client read data cache|
|--verify-cache-checksum=full (Added in v1.1)|Checksum level for cache data. When enabled, a checksum is calculated on divided parts of the cache blocks and stored on disk, and used for verification during reads. The following strategies are supported:<br />• none: disable checksum verification; if local cache data is tampered with, bad data will be read;<br />• full (default): perform verification when reading the full block, use this for sequential read scenarios;<br />• shrink: perform verification on parts that are fully included within the read range, use this for random read scenarios;<br />• extend: perform verification on parts that fully include the read range; this causes read amplification and is only used for random read scenarios demanding absolute data integrity.|
|--cache-eviction=2-random (Added in v1.1)|cache eviction policy (none or 2-random) (default: "2-random")|
|--cache-scan-interval=1h (Added in v1.1)|interval to scan cache-dir to rebuild the in-memory index (default: "1h")|
|--cache-expire=0 (Added in v1.2)|Cache blocks that have not been accessed for longer than this time, in seconds, are automatically cleared (even if --cache-eviction is set to none, these cache blocks will be deleted). A value of 0 means never expire (default: 0)|
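
As an example of how the write cache options fit together (cache directories and sizes are hypothetical):

# Stage writes on two local SSDs, delay uploads by 10 minutes, and only upload between 23:00 and 6:59
juicefs mount redis://localhost /mnt/jfs -d --writeback --upload-delay=10m --upload-hours=23,7 --cache-dir=/ssd1/jfsCache:/ssd2/jfsCache --cache-size=204800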

Metrics related options

|Items|Description|
|-|-|
|--metrics=127.0.0.1:9567|address to export metrics (default: 127.0.0.1:9567)|
|--custom-labels|custom labels for metrics, format: key1:value1;key2:value2 (default: "")|
|--consul=127.0.0.1:8500|Consul address to register (default: 127.0.0.1:8500)|
|--no-usage-report|do not send usage report (default: false)|

juicefs umount

Unmount a volume.

Synopsis

juicefs umount [command options] MOUNTPOINT

juicefs umount /mnt/jfs

Options

|Items|Description|
|-|-|
|-f, --force|force unmount a busy mount point (default: false)|
|--flush (Added in v1.1)|wait for all staging chunks to be flushed (default: false)|

juicefs gateway

Start an S3-compatible gateway, read Deploy JuiceFS S3 Gateway for more.

Synopsis

juicefs gateway [command options] META-URL ADDRESS

export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
juicefs gateway redis://localhost localhost:9000

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see JuiceFS supported metadata engines for details.|
|ADDRESS|S3 gateway address and listening port, for example: localhost:9000|
|--log value (Added in v1.2)|path for gateway log|
|--access-log=path|path for JuiceFS access log|
|--background, -d (Added in v1.2)|run in background (default: false)|
|--no-banner|disable MinIO startup information (default: false)|
|--multi-buckets|use top level of directories as buckets (default: false)|
|--keep-etag|save the ETag for uploaded objects (default: false)|
|--umask=022|umask for new files and directories in octal (default: 022)|
|--object-tag (Added in v1.2)|enable object tagging API|
|--domain value (Added in v1.2)|domain for virtual-host-style requests|
|--refresh-iam-interval=5m (Added in v1.2)|interval to reload gateway IAM from configuration (default: 5m)|
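
For example, a background gateway that exposes top-level directories as buckets might be started like this (the credentials are placeholders):

export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
# Run in background, map top-level directories to buckets, and keep client-provided ETags
juicefs gateway redis://localhost localhost:9000 -d --multi-buckets --keep-etag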

Metadata related options

|Items|Description|
|-|-|
|--subdir=value|mount a sub-directory as root (default: "")|
|--backup-meta=3600|interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: "3600")|
|--backup-skip-trash (Added in v1.2)|skip files and directories in trash when backing up metadata|
|--heartbeat=12|interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: "12")|
|--read-only|allow lookup/read operations only (default: false)|
|--no-bgjob|Disable background jobs; default to false, which means clients carry out background jobs by default, including:<br />• Clean up expired files in Trash (look for cleanupDeletedFiles and cleanupTrash in pkg/meta/base.go)<br />• Delete slices that are not referenced (look for cleanupSlices in pkg/meta/base.go)<br />• Clean up stale client sessions (look for CleanStaleSessions in pkg/meta/base.go)<br />Note that compaction isn't affected by this option; it happens automatically with file reads and writes: the client checks whether compaction is needed and runs it in the background (take Redis for example, look for compactChunk in pkg/meta/base.go).|
|--atime-mode=noatime (Added in v1.1)|Control atime (last time the file was accessed) behavior; the following modes are supported:<br />• noatime (default): atime is set only when the file is created or when SetAttr is explicitly called. Accessing and modifying the file will not affect atime; tracking atime comes at a performance cost, so this is the default behavior<br />• relatime: update inode access times relative to mtime (last time the file data was modified) or ctime (last time the file metadata was changed). Only update atime if it was earlier than the current mtime or ctime, or if the file's atime is more than 1 day old<br />• strictatime: always update atime on access|
|--skip-dir-nlink=20 (Added in v1.1)|number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20)|
|--skip-dir-mtime=100ms (Added in v1.2)|skip updating attributes of a directory if the mtime difference is smaller than this value (default: 100ms)|

Metadata cache related options

For metadata cache description and usage, refer to Kernel metadata cache and Client memory metadata cache.

|Items|Description|
|-|-|
|--attr-cache=1|attributes cache timeout in seconds (default: 1), read Kernel metadata cache|
|--entry-cache=1|file entry cache timeout in seconds (default: 1), read Kernel metadata cache|
|--dir-entry-cache=1|dir entry cache timeout in seconds (default: 1), read Kernel metadata cache|
|--open-cache=0|open file cache timeout in seconds (0 means disable this feature) (default: 0)|
|--open-cache-limit value (Added in v1.1)|max number of open files to cache (soft limit, 0 means unlimited) (default: 10000)|

Data storage related options

|Items|Description|
|-|-|
|--storage=file|Object storage type (e.g. s3, gs, oss, cos) (default: "file", refer to documentation for all supported object storage types).|
|--bucket=value|customized endpoint to access object storage|
|--storage-class value (Added in v1.1)|the storage class for data written by the current client|
|--get-timeout=60|the max number of seconds to download an object (default: 60)|
|--put-timeout=60|the max number of seconds to upload an object (default: 60)|
|--io-retries=10|Number of retries when the network is abnormal; the number of retries for metadata requests is also controlled by this option. If the number of retries is exceeded, an EIO "Input/output error" will be returned. (default: 10)|
|--max-uploads=20|Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes; with such a write pattern, increasing upload concurrency usually demands a higher --buffer-size, learn more at Read/Write Buffer. But for random writes around 100K, 20 might not be enough and can cause congestion at high load; consider using a larger upload concurrency, or try to consolidate small writes on the application end.|
|--max-stage-write=0 (Added in v1.2)|The maximum number of concurrent asynchronous writes of data blocks to the cache disk. If this limit is reached, blocks are uploaded directly to the object storage (this option is only valid when "Client write data cache" is enabled) (default: 0, i.e. no concurrency limit)|
|--max-deletes=10|number of threads to delete objects (default: 10)|
|--upload-limit=0|bandwidth limit for upload in Mbps (default: 0)|
|--download-limit=0|bandwidth limit for download in Mbps (default: 0)|

Data cache related options

|Items|Description|
|-|-|
|--buffer-size=300|total read/write buffering in MiB (default: 300), see Read/Write buffer|
|--prefetch=1|prefetch N blocks in parallel (default: 1), see Client read data cache|
|--writeback|upload objects in background (default: false), see Client write data cache|
|--upload-delay=0|When --writeback is enabled, you can use this option to add a delay before object storage upload; default to 0, meaning that upload begins immediately after write. Different units are supported, including s (second), m (minute), h (hour). If files are deleted during this delay, upload is skipped entirely; when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to Client write data cache.|
|--upload-hours (Added in v1.2)|When --writeback is enabled, data blocks are only uploaded during the specified time of day. The format of the parameter is <start hour>,<end hour> (the start hour is inclusive, the end hour exclusive, and the start hour may be greater than the end hour to span midnight), where <hour> can range from 0 to 23. For example, 0,6 means that data blocks are only uploaded between 0:00 and 5:59 every day, and 23,3 means that data blocks are only uploaded between 23:00 and 2:59 the next day.|
|--cache-dir=value|directory paths of local cache, use : (Linux, macOS) or ; (Windows) to separate multiple paths (default: $HOME/.juicefs/cache or /var/jfsCache), see Client read data cache|
|--cache-mode value (Added in v1.1)|file permissions for cached blocks (default: "0600")|
|--cache-size=102400|size of cached objects for read in MiB (default: 102400), see Client read data cache|
|--free-space-ratio=0.1|min free space ratio (default: 0.1); if Client write data cache is enabled, this option also controls write cache size, see Client read data cache|
|--cache-partial-only|cache random/small reads only (default: false), see Client read data cache|
|--verify-cache-checksum=full (Added in v1.1)|Checksum level for cache data. When enabled, a checksum is calculated on divided parts of the cache blocks and stored on disk, and used for verification during reads. The following strategies are supported:<br />• none: disable checksum verification; if local cache data is tampered with, bad data will be read;<br />• full (default): perform verification when reading the full block, use this for sequential read scenarios;<br />• shrink: perform verification on parts that are fully included within the read range, use this for random read scenarios;<br />• extend: perform verification on parts that fully include the read range; this causes read amplification and is only used for random read scenarios demanding absolute data integrity.|
|--cache-eviction=2-random (Added in v1.1)|cache eviction policy (none or 2-random) (default: "2-random")|
|--cache-scan-interval=1h (Added in v1.1)|interval to scan cache-dir to rebuild the in-memory index (default: "1h")|
|--cache-expire=0 (Added in v1.2)|Cache blocks that have not been accessed for longer than this time, in seconds, are automatically cleared (even if --cache-eviction is set to none, these cache blocks will be deleted). A value of 0 means never expire (default: 0)|

Metrics related options

|Items|Description|
|-|-|
|--metrics=127.0.0.1:9567|address to export metrics (default: 127.0.0.1:9567)|
|--custom-labels|custom labels for metrics, format: key1:value1;key2:value2 (default: "")|
|--consul=127.0.0.1:8500|Consul address to register (default: 127.0.0.1:8500)|
|--no-usage-report|do not send usage report (default: false)|

juicefs webdav

Start a WebDAV server, refer to Deploy WebDAV Server for more.

Synopsis

juicefs webdav [command options] META-URL ADDRESS

juicefs webdav redis://localhost localhost:9007

Options

|Items|Description|
|-|-|
|META-URL|Database URL for metadata storage, see JuiceFS supported metadata engines for details.|
|ADDRESS|WebDAV address and listening port, for example: localhost:9007.|
|--cert-file (Added in v1.1)|certificate file for HTTPS|
|--key-file (Added in v1.1)|key file for HTTPS|
|--gzip|compress served files via gzip (default: false)|
|--disallowList|disallow listing a directory (default: false)|
|--log value (Added in v1.2)|path for WebDAV log|
|--access-log=path|path for JuiceFS access log|
|--background, -d (Added in v1.2)|run in background (default: false)|
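
A sketch of serving WebDAV over HTTPS in the background (the certificate paths are hypothetical):

# Serve WebDAV over HTTPS with gzip compression, running in background
juicefs webdav redis://localhost localhost:9007 --cert-file=/etc/ssl/jfs.crt --key-file=/etc/ssl/jfs.key --gzip -d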

Metadata related options

|Items|Description|
|-|-|
|--subdir=value|mount a sub-directory as root (default: "")|
|--backup-meta=3600|interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) (default: "3600")|
|--backup-skip-trash (Added in v1.2)|skip files and directories in trash when backing up metadata|
|--heartbeat=12|interval (in seconds) to send heartbeat; it's recommended that all clients use the same heartbeat value (default: "12")|
|--read-only|allow lookup/read operations only (default: false)|
|--no-bgjob|Disable background jobs; default to false, which means clients carry out background jobs by default, including:<br />• Clean up expired files in Trash (look for cleanupDeletedFiles and cleanupTrash in pkg/meta/base.go)<br />• Delete slices that are not referenced (look for cleanupSlices in pkg/meta/base.go)<br />• Clean up stale client sessions (look for CleanStaleSessions in pkg/meta/base.go)<br />Note that compaction isn't affected by this option; it happens automatically with file reads and writes: the client checks whether compaction is needed and runs it in the background (take Redis for example, look for compactChunk in pkg/meta/base.go).|
|--atime-mode=noatime (Added in v1.1)|Control atime (last time the file was accessed) behavior; the following modes are supported:<br />• noatime (default): atime is set only when the file is created or when SetAttr is explicitly called. Accessing and modifying the file will not affect atime; tracking atime comes at a performance cost, so this is the default behavior<br />• relatime: update inode access times relative to mtime (last time the file data was modified) or ctime (last time the file metadata was changed). Only update atime if it was earlier than the current mtime or ctime, or if the file's atime is more than 1 day old<br />• strictatime: always update atime on access|
|--skip-dir-nlink=20 (Added in v1.1)|number of retries after which the update of directory nlink will be skipped (used for tkv only, 0 means never) (default: 20)|
|--skip-dir-mtime=100ms (Added in v1.2)|skip updating attributes of a directory if the mtime difference is smaller than this value (default: 100ms)|

Metadata cache related options

For metadata cache description and usage, refer to Kernel metadata cache and Client memory metadata cache.

|Items|Description|
|-|-|
|--attr-cache=1|attributes cache timeout in seconds (default: 1), read Kernel metadata cache|
|--entry-cache=1|file entry cache timeout in seconds (default: 1), read Kernel metadata cache|
|--dir-entry-cache=1|dir entry cache timeout in seconds (default: 1), read Kernel metadata cache|
|--open-cache=0|open file cache timeout in seconds (0 means disable this feature) (default: 0)|
|--open-cache-limit value (Added in v1.1)|max number of open files to cache (soft limit, 0 means unlimited) (default: 10000)|

Data storage related options

|Items|Description|
|-|-|
|--storage=file|Object storage type (e.g. s3, gs, oss, cos) (default: "file", refer to documentation for all supported object storage types).|
|--bucket=value|customized endpoint to access object storage|
|--storage-class value (Added in v1.1)|the storage class for data written by the current client|
|--get-timeout=60|the max number of seconds to download an object (default: 60)|
|--put-timeout=60|the max number of seconds to upload an object (default: 60)|
|--io-retries=10|Number of retries when the network is abnormal; the number of retries for metadata requests is also controlled by this option. If the number of retries is exceeded, an EIO "Input/output error" will be returned. (default: 10)|
|--max-uploads=20|Upload concurrency, defaults to 20. This is already a reasonably high value for 4M writes; with such a write pattern, increasing upload concurrency usually demands a higher --buffer-size, learn more at Read/Write Buffer. But for random writes around 100K, 20 might not be enough and can cause congestion at high load; consider using a larger upload concurrency, or try to consolidate small writes on the application end.|
|--max-stage-write=0 (Added in v1.2)|The maximum number of concurrent asynchronous writes of data blocks to the cache disk. If this limit is reached, blocks are uploaded directly to the object storage (this option is only valid when "Client write data cache" is enabled) (default: 0, i.e. no concurrency limit)|
|--max-deletes=10|number of threads to delete objects (default: 10)|
|--upload-limit=0|bandwidth limit for upload in Mbps (default: 0)|
|--download-limit=0|bandwidth limit for download in Mbps (default: 0)|

Data cache related options

|Items|Description|
|-|-|
|--buffer-size=300|total read/write buffering in MiB (default: 300), see Read/Write buffer|
|--prefetch=1|prefetch N blocks in parallel (default: 1), see Client read data cache|
|--writeback|upload objects in background (default: false), see Client write data cache|
|--upload-delay=0|When --writeback is enabled, you can use this option to add a delay before object storage upload; default to 0, meaning that upload begins immediately after write. Different units are supported, including s (second), m (minute), h (hour). If files are deleted during this delay, upload is skipped entirely; when using JuiceFS for temporary storage, use this option to reduce resource usage. Refer to Client write data cache.|
|--upload-hours (Added in v1.2)|When --writeback is enabled, data blocks are only uploaded during the specified time of day. The format of the parameter is <start hour>,<end hour> (the start hour is inclusive, the end hour exclusive, and the start hour may be greater than the end hour to span midnight), where <hour> can range from 0 to 23. For example, 0,6 means that data blocks are only uploaded between 0:00 and 5:59 every day, and 23,3 means that data blocks are only uploaded between 23:00 and 2:59 the next day.|
|--cache-dir=value|directory paths of local cache, use : (Linux, macOS) or ; (Windows) to separate multiple paths (default: $HOME/.juicefs/cache or /var/jfsCache), see Client read data cache|
|--cache-mode value (Added in v1.1)|file permissions for cached blocks (default: "0600")|
|--cache-size=102400|size of cached objects for read in MiB (default: 102400), see Client read data cache|
|--free-space-ratio=0.1|min free space ratio (default: 0.1); if Client write data cache is enabled, this option also controls write cache size, see Client read data cache|
|--cache-partial-only|cache random/small reads only (default: false), see Client read data cache|
|--verify-cache-checksum=full (Added in v1.1)|Checksum level for cache data. When enabled, a checksum is calculated on divided parts of the cache blocks and stored on disk, and used for verification during reads. The following strategies are supported:<br />• none: disable checksum verification; if local cache data is tampered with, bad data will be read;<br />• full (default): perform verification when reading the full block, use this for sequential read scenarios;<br />• shrink: perform verification on parts that are fully included within the read range, use this for random read scenarios;<br />• extend: perform verification on parts that fully include the read range; this causes read amplification and is only used for random read scenarios demanding absolute data integrity.|
|--cache-eviction=2-random (Added in v1.1)|cache eviction policy (none or 2-random) (default: "2-random")|
|--cache-scan-interval=1h (Added in v1.1)|interval to scan cache-dir to rebuild the in-memory index (default: "1h")|
|--cache-expire=0 (Added in v1.2)|Cache blocks that have not been accessed for longer than this time, in seconds, are automatically cleared (even if --cache-eviction is set to none, these cache blocks will be deleted). A value of 0 means never expire (default: 0)|

Metrics related options

|Items|Description|
|-|-|
|--metrics=127.0.0.1:9567|address to export metrics (default: 127.0.0.1:9567)|
|--custom-labels|custom labels for metrics, format: key1:value1;key2:value2 (default: "")|
|--consul=127.0.0.1:8500|Consul address to register (default: 127.0.0.1:8500)|
|--no-usage-report|do not send usage report (default: false)|

Tool

juicefs bench

Run benchmarks, including read/write/stat tests on big and small files. For a detailed introduction to the bench subcommand, refer to the documentation.

Synopsis

juicefs bench [command options] PATH

# Run benchmarks with 4 threads
juicefs bench /mnt/jfs -p 4

# Run benchmarks of only small files
juicefs bench /mnt/jfs --big-file-size 0

Options

|Items|Description|
|-|-|
|--block-size=1|block size in MiB (default: 1)|
|--big-file-size=1024|size of big file in MiB (default: 1024)|
|--small-file-size=128|size of small file in KiB (default: 128)|
|--small-file-count=100|number of small files (default: 100)|
|--threads=1, -p 1|number of concurrent threads (default: 1)|

juicefs objbench

Run basic benchmarks on the target object storage to test if it works as expected. Read documentation for more.

Synopsis

juicefs objbench [command options] BUCKET

# Run benchmarks on S3
ACCESS_KEY=myAccessKey SECRET_KEY=mySecretKey juicefs objbench --storage=s3 https://mybucket.s3.us-east-2.amazonaws.com -p 6

Options

|Items|Description|
|-|-|
|--storage=file|Object storage type (e.g. s3, gs, oss, cos) (default: file, refer to documentation for all supported object storage types)|
|--access-key=value|Access Key for object storage (can also be set via the environment variable ACCESS_KEY), see How to Set Up Object Storage for more.|
|--secret-key value|Secret Key for object storage (can also be set via the environment variable SECRET_KEY), see How to Set Up Object Storage for more.|
|--session-token value (Added in v1.0)|session token for object storage|
|--block-size=4096|size of each IO block in KiB (default: 4096)|
|--big-object-size=1024|size of each big object in MiB (default: 1024)|
|--small-object-size=128|size of each small object in KiB (default: 128)|
|--small-objects=100|number of small objects (default: 100)|
|--skip-functional-tests|skip functional tests (default: false)|
|--threads=4, -p 4|number of concurrent threads (default: 4)|

juicefs warmup

Download data into the local cache in advance, to achieve better performance on the application's first read. You can specify a mount point path to recursively warm up all files under that path, or specify a file through the --file option to warm up only the files listed in it.

If the files that need warming up reside in many different directories, you should list their paths in a text file and pass it to the warmup command using the --file option, allowing juicefs warmup to download them concurrently, which is significantly faster than calling juicefs warmup multiple times with a single file each.

Synopsis

juicefs warmup [command options] [PATH ...]

# Warm up all files in datadir
juicefs warmup /mnt/jfs/datadir

# Warm up selected files
echo '/jfs/f1
/jfs/f2
/jfs/f3' > /tmp/filelist.txt
juicefs warmup -f /tmp/filelist.txt

Options

|Items|Description|
|-|-|
|--file=path, -f path|file containing a list of paths (each line is a file path)|
|--threads=50, -p 50|number of concurrent workers, default to 50. Reduce this number in low bandwidth environments to avoid download timeouts|
|--background, -b|run in background (default: false)|
|--evict (Added in v1.2)|evict cached blocks|
|--check (Added in v1.2)|check whether the data blocks are cached or not|
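
For example, the cache state of a directory can be inspected and then dropped (the paths are illustrative):

# Report how much of datadir is already cached
juicefs warmup --check /mnt/jfs/datadir

# Evict the cached blocks of datadir from the local cache
juicefs warmup --evict /mnt/jfs/datadir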

juicefs rmr

Remove all files and subdirectories, similar to rm -rf, except that this command deals with metadata directly (bypassing the kernel) and is therefore much faster.

If trash is enabled, deleted files are moved into trash. Read more at Trash.

Synopsis

juicefs rmr PATH ...

juicefs rmr /mnt/jfs/foo

juicefs sync

Sync between two storage systems, read Data migration for more.

Synopsis

juicefs sync [command options] SRC DST

# Sync object from OSS to S3
juicefs sync oss://mybucket.oss-cn-shanghai.aliyuncs.com s3://mybucket.s3.us-east-2.amazonaws.com

# Sync objects from S3 to JuiceFS
juicefs sync s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

# SRC: a1/b1,a2/b2,aaa/b1 DST: empty sync result: aaa/b1
juicefs sync --exclude='a?/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

# SRC: a1/b1,a2/b2,aaa/b1 DST: empty sync result: a1/b1,aaa/b1
juicefs sync --include='a1/b1' --exclude='a[1-9]/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

# SRC: a1/b1,a2/b2,aaa/b1,b1,b2 DST: empty sync result: b2
juicefs sync --include='a1/b1' --exclude='a*' --include='b2' --exclude='b?' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

As shown in the examples, the format of both source (SRC) and destination (DST) paths is:

[NAME://][ACCESS_KEY:SECRET_KEY[:TOKEN]@]BUCKET[.ENDPOINT][/PREFIX]

In which:

  • NAME: JuiceFS supported data storage types like s3, oss; refer to this document for a full list.
  • ACCESS_KEY and SECRET_KEY: The credentials required to access the data storage, refer to this document.
  • TOKEN: The token used to access the object storage, as some object storages support using a temporary token to obtain permission for a limited time.
  • BUCKET[.ENDPOINT]: The access address of the data storage service. The format may differ between storage types; refer to the document.
  • [/PREFIX]: Optional, a prefix for the source and destination paths that can be used to limit synchronization to data in certain paths only.
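
Putting the pieces together, a source address with embedded credentials and a prefix might look like the following (the keys, bucket, and prefix are placeholders):

# Sync only the "backup/" prefix from S3 into a JuiceFS volume, with credentials embedded in the SRC address
juicefs sync s3://ACCESS_KEY:SECRET_KEY@mybucket.s3.us-east-2.amazonaws.com/backup/ jfs://META-URL/backup/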
|Items|Description|
|-|-|
|--start=KEY, -s KEY, --end=KEY, -e KEY|Provide an object storage key range for syncing: --start is the first KEY and --end is the last KEY to sync.|
|--exclude=PATTERN|Exclude keys matching PATTERN. Refer to the "Filtering" document to learn how to use it.|
|--include=PATTERN|Include keys matching PATTERN, needs to be used with --exclude. Refer to the "Filtering" document to learn how to use it.|
|--match-full-path (Added in v1.2)|Use "Full path filtering mode", default is false. Refer to the "Filtering modes" document to learn how to use it.|
|--max-size=SIZE (Added in v1.2)|skip files larger than SIZE|
|--min-size=SIZE (Added in v1.2)|skip files smaller than SIZE|
|--max-age=DURATION (Added in v1.2)|Skip files whose last modification time exceeds DURATION, in seconds. For example, --max-age=3600 means to synchronize only files that have been modified within 1 hour.|
|--min-age=DURATION (Added in v1.2)|Skip files whose last modification time is no more than DURATION, in seconds. For example, --min-age=3600 means to synchronize only files whose last modification time is more than 1 hour before the current time.|
|--limit=-1|Limit the number of objects that will be processed, default to -1 which means unlimited.|
|--update, -u|Update existing files if the source files' mtime is newer, default to false.|
|--force-update, -f|Always update existing files, default to false.|
|--existing, --ignore-non-existing (Added in v1.1)|Skip creating new files on the destination, default to false.|
|--ignore-existing (Added in v1.1)|Skip updating files that already exist on the destination, default to false.|
|Items|Description|
|-|-|
|--dirs|Sync empty directories as well.|
|--perms|Preserve permissions, default to false.|
|--links, -l|Copy symlinks as symlinks, default to false.|
|--inplace (Added in v1.2)|When a file in the source path is modified, directly modify the file with the same name in the destination path, instead of first writing a temporary file and then atomically renaming it to the real file name. This option only makes sense when --update is enabled and the storage system of the destination path supports in-place modification of files (such as JuiceFS, HDFS, NFS); if the destination is an object storage, enabling this option has no effect. (default: false)|
|--delete-src, --deleteSrc|Delete objects from the source that already exist in the destination. Different from rsync, files won't be deleted on the first run; instead they will be deleted on the next run, after files have been successfully copied to the destination.|
|--delete-dst, --deleteDst|Delete extraneous objects from the destination.|
|--check-all|Verify the integrity of all files in source and destination, default to false. Comparison is done on byte streams, which comes at a performance cost.|
|--check-new|Verify the integrity of newly copied files, default to false. Comparison is done on byte streams, which comes at a performance cost.|
|--dry|Don't actually copy any file.|
|Items|Description|
|-|-|
|--threads=10, -p 10|Number of concurrent threads, default to 10.|
|--list-threads=1 (Added in v1.1)|Number of list threads, default to 1. Read concurrent list to learn its usage.|
|--list-depth=1 (Added in v1.1)|Depth of concurrent list operation, default to 1. Read concurrent list to learn its usage.|
|--no-https|Do not use HTTPS, default to false.|
|--storage-class value (Added in v1.1)|the storage class for destination|
|--bwlimit=0|Limit bandwidth in Mbps, default to 0 which means unlimited.|
|Items|Description|
|-|-|
|--manager-addr=ADDR|The listening address of the Manager node in distributed synchronization mode, in the format <IP>:[port]. If this option is omitted, it listens on a random local IPv4 address and a random port.|
|--worker=ADDR,ADDR|Worker node addresses used in distributed syncing, comma separated.|
|Items|Description|
|-|-|
|--metrics value (Added in v1.2)|address to export metrics (default: "127.0.0.1:9567")|
|--consul value (Added in v1.2)|Consul address to register (default: "127.0.0.1:8500")|
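
As a sketch of the cluster-related options (addresses are hypothetical; worker nodes are typically reached over password-less SSH and need the juicefs binary available):

# Run a distributed sync with two worker nodes, coordinated by the manager listening at 10.0.0.1:8081
juicefs sync --manager-addr=10.0.0.1:8081 --worker=10.0.0.2,10.0.0.3 s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/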

juicefs clone Added in v1.1

Quickly clone directories or files within a single JuiceFS mount point. The cloning process involves copying only the metadata without copying the data blocks, making it extremely fast. Read Clone Files or Directories for more.

Synopsis

juicefs clone [command options] SRC DST

# Clone a file
juicefs clone /mnt/jfs/file1 /mnt/jfs/file2

# Clone a directory
juicefs clone /mnt/jfs/dir1 /mnt/jfs/dir2

# Preserve the UID, GID, and mode of the file
juicefs clone -p /mnt/jfs/file1 /mnt/jfs/file2

Options

|Items|Description|
|-|-|
|--preserve, -p|By default, the executor's UID and GID are used for the clone result, and the mode is recalculated based on the user's umask. Use this option to preserve the UID, GID, and mode of the file.|

juicefs compact Added in v1.2

Perform fragmentation optimization, merging or cleaning up non-contiguous slices in the given directory to improve read performance. For detailed information, refer to "Status Check and Maintenance".

Synopsis

juicefs compact [command options] PATH

# Perform fragmentation optimization on the specified directory
juicefs compact /mnt/jfs

Options

|Items|Description|
|-|-|
|--threads, -p|Number of threads to concurrently execute tasks (default: 10)|