Frequently Asked Questions

Is JuiceFS ready for production?

Absolutely! Since its official commercial release in 2017, JuiceFS has been running in the production environments of a variety of internet and high-tech enterprises for more than 500 days, carrying many different workloads and over 1PB of data. In addition, every JuiceFS release must pass extensive testing before it ships.

Besides, JuiceFS is designed to be a highly available service (replicated across multiple availability zones), with a targeted uptime SLA of 99.95% per month. The availability of JuiceFS also depends on the availability of the underlying object storage, which is usually claimed to be highly available; for the actual SLA, please check the documentation of the public cloud you are using.

In addition, JuiceFS supports automatically replicating data to another object storage service in a different public cloud or region, for even higher availability and reliability.

Which public clouds and regions are supported?

Currently, all regions of Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Aliyun, TencentCloud, UCloud, QingCloud, Qiniu, Baidu Cloud, KSYun, NeteaseCloud, JD Cloud, and DigitalOcean are supported (some regions may not be open to the public).

Any public cloud that provides Linux instances and object storage can be easily supported by JuiceFS. If a public cloud or region is not listed in the web console, please contact us.

JuiceFS also supports private deployment in an enterprise's own public cloud VPC or data center; please contact us for details.

Which operating systems are supported?

JuiceFS is implemented with FUSE and can be used on Linux, BSD, and macOS systems that support FUSE. A Windows client is under development and will be released soon. Most Linux and BSD distributions have a built-in FUSE module; you need to install or compile it if it does not exist. On macOS, FUSE for macOS needs to be installed.
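As a quick check on Linux, the following sketch verifies whether the FUSE module is available before mounting (package and module handling vary by distribution, so treat the suggested commands in the comments as examples only):

```shell
# Check whether the kernel FUSE module is available (Linux).
if grep -qw fuse /proc/filesystems; then
  echo "FUSE is available"
else
  echo "FUSE module not loaded; try: sudo modprobe fuse"
  # On Debian/Ubuntu, the userspace tools come from: sudo apt-get install fuse
fi
```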

What is the performance of JuiceFS?

JuiceFS is a distributed file system. The latency of metadata operations is determined by 1 (read) or 2 (write) round trips between the client and the metadata service (usually 1-3 ms within the same region). The latency to first byte is determined by the performance of the underlying object storage (20-100 ms). Throughput for sequential reads/writes can be 50 MB/s - 400 MB/s, depending on network bandwidth and how well the data compresses.

JuiceFS is built with multiple layers of caching (invalidated automatically); once the cache is warmed up, the latency and throughput of JuiceFS can be close to those of a local file system (plus the overhead of FUSE).

Does JuiceFS support random read/write?

Yes, including random reads/writes issued through mmap. JuiceFS is currently optimized for sequential reading/writing; optimization for random reading/writing is a work in progress.
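To illustrate, the sketch below performs a random write and read at an arbitrary offset with standard tools. It uses a local temporary file as a stand-in, but the same calls work on any path inside a JuiceFS mount:

```shell
# Create a 16 KB file of zeros, then overwrite 1 byte at offset 8192 (a random write).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=4 2>/dev/null
printf 'X' | dd of="$f" bs=1 seek=8192 conv=notrunc 2>/dev/null
# Random read of that single byte back.
dd if="$f" bs=1 skip=8192 count=1 2>/dev/null   # prints: X
rm -f "$f"
```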

When will my updates be visible to other clients?

All metadata updates are immediately visible to all other clients. New data written by write() is buffered in the kernel or in the client; it is visible to other processes on the same machine, but not to other machines. Once flush(), fdatasync(), or close() is called, the buffered data is committed (uploaded to object storage and the metadata updated), and it becomes visible to all other clients once the call returns.

In other words, calling fdatasync() or close() forces the data to be uploaded to the object storage and the metadata to be updated, so that other clients can see the updates. This is also the strategy adopted by the vast majority of distributed file systems.
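For example, when another client needs to see a file you just wrote, flush it explicitly. The sketch below uses a local temporary file as a stand-in for a path on a JuiceFS mount (`sync FILE` requires GNU coreutils 8.24+):

```shell
f=$(mktemp)              # stand-in for a file on a JuiceFS mount
printf 'hello' > "$f"    # data may still be buffered by the kernel or client
sync "$f"                # force the data to be committed (flush + fsync)
cat "$f"                 # once sync returns, other clients can read 'hello'
rm -f "$f"
```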

How can I speed up moving small files into JuiceFS?

You can mount JuiceFS with the --writeback option, which writes small files to local disks first and then uploads them to object storage in the background. This can speed up copying many small files into JuiceFS.
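A sketch of such a mount (the volume name `myjfs` and the mount point `/jfs` are placeholders; check `juicefs mount -h` for the exact option spelling in your client version):

```shell
# Mount with client write-back caching enabled (placeholders: myjfs, /jfs).
juicefs mount --writeback myjfs /jfs
# Copy many small files; they land on local disk first and upload in the background.
cp -r ./small-files/ /jfs/
```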

Where are my existing files in object storage?

Existing files in the object storage are not accessible through JuiceFS. You can import them into JuiceFS very quickly using the juicefs import command; check out the user guide for details.

Can I mount JuiceFS without root?

Yes, JuiceFS can be mounted with juicefs by a non-root user. The default cache directory is `/var/jfsCache/JFS_NAME`; you should change it to a directory you have write permission for.
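A sketch of a non-root mount that redirects the cache to a writable directory. The flag spelling (`--cacheDir`) and the names `myjfs` and `~/jfs` are assumptions for illustration; confirm the actual option with `juicefs mount -h`:

```shell
# Mount as a non-root user, pointing the cache at a user-writable directory.
# --cacheDir spelling is an assumption; verify with `juicefs mount -h`.
mkdir -p ~/jfsCache ~/jfs
juicefs mount --cacheDir ~/jfsCache myjfs ~/jfs
```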

How is the size of JuiceFS calculated?

The size of JuiceFS is the sum of the sizes of all objects; each file or directory has a minimum billable size of 4KB (the same as the Azure Data Lake Store billing method). We recommend storing data in larger files to save costs and improve performance.

The real-time size of each directory (including all the files and subdirectories in it) can be checked in the web console.

How to upgrade JuiceFS clients?

Please run juicefs version --upgrade; check out the Command Reference for more details.

How to unmount JuiceFS?

Please check out the Getting Started guide.

How to contact us?

You can reach us via the live chat at the bottom right of any page, or send an email to hello AT juicefs.com.