
Set up object storage

Access Key and Secret Key

If your cloud service provider supports granting virtual machines access to object storage without credentials via a bucket access policy (like AWS IAM), you can omit those keys during juicefs auth or juicefs mount (provide empty values), see juicefs auth for details.

For users who prefer ease of use, we recommend granting full read/write and CreateBucket permissions to the API keys, so that the JuiceFS Client can create the bucket for you on the first successful mount.

But for environments with strict security policies, our minimum permission requirements are GetObject, PutObject, and DeleteObject. You can restrict access to the corresponding bucket, under the file system prefix path (defaults to juicefs-<VOL_NAME>). When running with minimum permissions, JuiceFS will not be able to create the object storage bucket for you, so create it manually in advance.

tip
  1. Since the JuiceFS client runs background jobs, like compaction, it still needs PutObject and DeleteObject permissions even when mounted with a read-only token. If you'd like read-only clients to be truly read-only against object storage, disable background jobs in the client access token.
  2. ListObjects is required for importing and replication.
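For reference, ListObjects corresponds to the s3:ListBucket action in an S3-style policy, which applies to the bucket itself rather than to objects. A minimal sketch of such a statement (the bucket name juicefs-example and prefix example/ are assumptions for illustration):

```json
{
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::juicefs-example"],
  "Condition": {"StringLike": {"s3:prefix": ["example/*"]}}
}
```

The prefix condition keeps listing restricted to the file system's own path, consistent with the minimum-permission setup described above.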

Common object storage

Amazon S3

Refer to "AWS security credentials". Moreover, if you have used an IAM role to grant permissions to applications running on Amazon EC2 instances, you can omit credentials during juicefs mount.
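As a sketch of omitting static credentials when an IAM role already grants the EC2 instance access to the bucket, pass empty key values to juicefs auth. The volume name and token below are placeholders, and flag names may vary between client versions; check juicefs auth --help for the exact spelling:

```shell
# Sketch: skip static keys when an EC2 IAM role grants bucket access.
# "myjfs" and YOUR_TOKEN are placeholders; empty values tell the client
# to rely on the instance's role-based credentials instead.
juicefs auth myjfs --token=YOUR_TOKEN --accesskey="" --secretkey=""
```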

A bucket policy example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::juicefs-example/example/*"
      ]
    }
  ]
}

Google Cloud Storage

First, create a project in the Google Cloud Platform console and note your Project ID:

GCP-project-ID

Download and install Cloud SDK:

curl https://sdk.cloud.google.com | bash

Run the following command after installation:

gcloud auth application-default login

Congratulations, authentication is done; this step only needs to be executed once.

Finally, run juicefs mount to mount your JuiceFS file system; the Project ID will be requested (you can also set it via the GOOGLE_CLOUD_PROJECT environment variable).
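For example, supplying the Project ID through the environment variable mentioned above avoids the interactive prompt (the project ID below is a placeholder):

```shell
# Set the Project ID so juicefs mount does not prompt for it.
# "my-gcp-project" is a placeholder; use your own Project ID.
export GOOGLE_CLOUD_PROJECT=my-gcp-project
echo "$GOOGLE_CLOUD_PROJECT"
```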

When you mount a file system with sudo, you should also run gcloud auth with sudo; otherwise JuiceFS may not be able to load the credentials.

If JuiceFS is used inside Compute Engine, it's recommended to grant the virtual machines full access to Storage API.

Azure Blob Storage

Currently, the service is only available in the Microsoft Azure China regions; contact us if you need support for other regions.

When JuiceFS uses Azure Blob Storage as the underlying storage, you need to create a storage account. Find Storage accounts in the left navigation panel.

Azure-storage-account

Create a new account under Storage accounts; the account name will be requested when mounting the JuiceFS file system, and the account kind should be "Blob storage".

Azure-create-storage-account

Enter the Access key of your storage account; there are two keys available.

Azure-storage-access-key

Backblaze B2

Create an application key with read and write permission on Application Keys.

The master application key is required for JuiceFS to create a bucket. It's recommended to create the bucket manually, using a name like juicefs-NAME, then create an application key with read-write access for JuiceFS.

IBM Cloud Object Storage

Accessing Cloud Object Storage requires an API Key and a Resource Instance ID; refer to Retrieving your instance ID.

DigitalOcean Spaces

Refer to How To Create a DigitalOcean Space and API Key.

Wasabi

Refer to Creating a Root Access Key and Secret Key.

Alibaba Cloud OSS

Obtain Access Key in the object storage console:

aliyun-oss-key-1

Create a key for JuiceFS mount:

aliyun-oss-key-2

A bucket policy example:

{
  "Statement": [
    {
      "Action": [
        "oss:DeleteObject",
        "oss:GetObject",
        "oss:HeadObject",
        "oss:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "acs:oss:*:*:juicefs-example/example/*"
      ]
    }
  ],
  "Version": "1"
}

Tencent Cloud COS

When using Tencent Cloud COS, mounting JuiceFS additionally requires a Tencent APPID, so we recommend filling the APPID into the Bucket field when creating the file system, using the format {bucket}-{APPID}. If you didn't specify an APPID when creating the file system, JuiceFS will ask for it interactively during mount. You can also specify the APPID in juicefs auth via the --bucket parameter, using the same {bucket}-{APPID} format.
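As a sketch of the --bucket form described above, appending the APPID to the bucket name in juicefs auth (the volume name, token, bucket name, and APPID are all placeholders):

```shell
# Sketch: pass the COS bucket with APPID appended, per the {bucket}-{APPID} format.
# "myjfs", YOUR_TOKEN, and 1250000000 are placeholders for your own values.
juicefs auth myjfs --token=YOUR_TOKEN --bucket=juicefs-myjfs-1250000000
```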

The APPID can be found in Account Info.

tencent-account-appid

Secret ID and Secret Key are managed in API Key Management; create a pair if none exists.

tencent-keys

A bucket policy example:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cos:DeleteObject",
        "cos:GetObject",
        "cos:HeadObject",
        "cos:PutObject"
      ],
      "Resource": [
        "qcs::cos:ap-guangzhou:uid/1250000000:juicefs-example-1250000000/example/*"
      ]
    }
  ],
  "Version": "2.0"
}

Huawei Cloud OBS

Refer to How Do I Manage Access Keys?.

Baidu Cloud BOS

Log in to the Baidu Cloud console and open Security Authentication from the account dropdown menu at the upper-right corner of the page.

baidu-bos-key

Kingsoft Cloud KS3

Refer to User Access Key Management.

QingCloud QingStor

Log in to the QingCloud console; you'll find Access Keys in the dropdown menu of your account at the upper-right corner.

qingcloud-key

Qiniu Kodo

Refer to How to get Access Key and Secret Key.

UCloud US3

Log in to the UCloud console; you'll find your API key under UAPI in the Monitoring management section of Products and Services.

ucloud-key

Ceph

Ceph provides two sets of APIs: RADOS and RGW. RADOS is the underlying protocol provided by Ceph, while RGW is an S3 gateway exposing standard S3 APIs. Connecting via RADOS is recommended as it bypasses RGW and achieves better latency. If you decide to use Ceph via S3, use it like any other S3 object storage service.

If the RADOS client protocol is used, JuiceFS relies on librados2, which supports Ceph >= 12.2. You'll need to provide the cluster name (e.g. ceph) and a user name (e.g. client.admin).
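When connecting via RADOS, the librados client typically reads the cluster configuration from /etc/ceph/<cluster>.conf and the user's keyring from the same directory. A minimal sketch of the client-side config, assuming the default cluster name ceph and a placeholder monitor address:

```ini
# /etc/ceph/ceph.conf — minimal client-side configuration sketch.
# mon_host is a placeholder; point it at your cluster's monitor(s).
[global]
mon_host = 192.168.1.10
```

The user name (e.g. client.admin) then selects which keyring entry, such as /etc/ceph/ceph.client.admin.keyring, is used to authenticate.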