Tresorit's Amazon S3-compatible API server is designed for direct REST API calls over HTTP(S) or for integration with rclone. It enables customers to securely sync, migrate, and manage data in Tresorit using familiar, industry-standard workflows. The server itself is distributed as a Docker image; this repository contains the configuration files and scripts required for deployment.
- S3-compatible REST API — list, create, and delete tresors; upload, download, and delete files; list folder contents
- Rclone integration — sync, mount, and transfer files using Rclone CLI
- Cloud migration — migrate data from other S3-compatible providers (Dropbox, AWS S3, Google Cloud Storage) to Tresorit
- Local-to-cloud sync — migrate from local drives or set up hybrid storage (on-prem NAS + Tresorit)
- Managed backups — automated backup of databases or VMs to Tresorit
- Archival storage — long-term archival for compliance or audit requirements
- S3 drop-in replacement — use Tresorit as the storage layer for tools like Apache Airflow, Spark, or Snowflake
- DevOps artifact storage — store CI/CD artifacts from GitHub Actions, GitLab CI, Azure Pipelines, etc.
- Workflow automation — integrate with Zapier, IFTTT, or n8n for event-driven document workflows
- Protocol bridging — use Rclone to expose Tresorit as FTP, SFTP, or WebDAV
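To use Rclone against the API, define an S3 remote in `rclone.conf`. The fragment below is a minimal sketch: the remote name `tresorit-demo` is taken from the mount example later in this README, the endpoint assumes the server is reachable at `https://localhost`, and the credential placeholders should be replaced with the values from your generated `credentials.json`. See `RCLONE_CONFIGURATION.md` for the full configuration.

```
[tresorit-demo]
type = s3
provider = Other
access_key_id = YOUR_CLIENT_ID
secret_access_key = YOUR_CLIENT_SECRET
endpoint = https://localhost
force_path_style = true
```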
- Docker and Docker Compose installed
- OpenSSL for running `scripts/setup_credentials.sh`
- Access to a Tresorit account with S3 API client access enabled
`docker-compose.yaml`
Why SSL/TLS is Required:
- API credentials are transmitted in request headers
- AWS SigV4 signatures are sent with every request
- Without TLS, credentials can be intercepted over the network
- Never expose port 3000 directly to the internet
Your reverse proxy should forward HTTPS traffic to http://localhost:3000 (or the container's internal address).
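As an illustration, here is a minimal nginx server block that terminates TLS and forwards to the container. This is a sketch: the server name and certificate paths are placeholders, and the body-size limit is sized for the API's 5 GB single-file upload cap.

```nginx
server {
    listen 443 ssl;
    server_name s3.example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Allow single-PUT uploads up to the API's 5 GB file size limit
    client_max_body_size 5g;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
```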
```sh
./scripts/setup_credentials.sh
```

The script creates:

- `credentials.json` - API credentials (project root, human-readable backup — not used by Docker)
- `secrets/postgres_user.txt` - Generated PostgreSQL username
- `secrets/postgres_password.txt` - Generated PostgreSQL password
- `secrets/database_url.txt` - Database connection URL
- `secrets/credentials_json.txt` - API credentials mounted as a Docker secret
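For reference, the credential files contain a JSON array of client credential objects — the verification command later in this README reads them with `jq` as `.[0].client_id` / `.[0].client_secret`. The shape below is illustrative, not exhaustive:

```json
[
  {
    "client_id": "<generated id>",
    "client_secret": "<generated secret>"
  }
]
```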
Note: It is highly recommended to set the `ENFORCE_PAYLOAD_VERIFICATION` environment variable to `true`. If you encounter issues with payload hashes, use the signing proxy example provided in `RCLONE_CONFIGURATION.md`. To find the flag, open `docker-compose.yaml` and search for `ENFORCE_PAYLOAD_VERIFICATION`.
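The flag lives in the service's environment section of `docker-compose.yaml`. The fragment below is a sketch — the service name `api` is a placeholder; use whatever name the file actually defines:

```yaml
services:
  api:
    environment:
      - ENFORCE_PAYLOAD_VERIFICATION=true
```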
```sh
docker-compose up -d
```

After the deployment is running, authenticate using the login script:

```sh
./scripts/login.sh
```

Check that the containers are running:

```sh
docker-compose ps
```

(Optional) Check that the logs have no errors:

```sh
docker-compose logs -f
```

(Optional) Confirm the S3 API is working (requires AWS CLI and jq):

```sh
AWS_ACCESS_KEY_ID=$(jq -r '.[0].client_id' secrets/credentials_json.txt) \
AWS_SECRET_ACCESS_KEY=$(jq -r '.[0].client_secret' secrets/credentials_json.txt) \
aws s3api --ca-bundle ./root.crt list-buckets --endpoint-url https://localhost
```

Note: If this returns a list of buckets (or an empty `Buckets` array), your production deployment is working correctly. The `./root.crt` file is created during the execution of `scripts/login.sh`.
```sh
# View logs
docker-compose logs -f

# Check status
docker-compose ps

# Restart
docker-compose restart

# Stop
docker-compose down
```
- Always use HTTPS in production
  - Configure your reverse proxy with SSL/TLS
  - Never expose the API over plain HTTP
- Backup credentials securely
  - Store them in a password manager
  - Encrypt backups: `gpg -c credentials.json`
- Monitor for unauthorized access
  - `docker logs tresorit-s3-api 2>&1 | grep -E '"status":(401|403)'`
- Serve this API on your own on-premise infrastructure
  - Tresorit's end-to-end encryption is only preserved when the server runs on your own on-premise infrastructure. If you deploy it to a third-party cloud provider (e.g. AWS, Azure, GCP), data is protected in transit by TLS between the client and your server, but it is present in unencrypted form on the cloud provider's host.
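The log-monitoring filter above can be tried offline against sample log lines. The JSON shape used here is hypothetical — match the filter to your actual log format:

```shell
# Two fake log lines: one accepted request, one rejected request.
printf '%s\n' \
  '{"status":200,"path":"/my-tresor/file.txt"}' \
  '{"status":401,"path":"/my-tresor/file.txt"}' \
  | grep -E '"status":(401|403)'
# Only the 401 line passes the filter.
```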
- Root folder naming rules for full S3 compatibility — Rclone, REST API

  Among the available S3 SDKs, there are significant differences in how strictly they enforce S3-conformant bucket naming. Some SDKs validate the bucket name (i.e. the vault name) before sending the request and reject the call if the name is invalid, while others let it go through.

  If you want a fully S3-conformant solution, follow the General purpose bucket naming rules when choosing the bucket (tresor) name.

  Note: This also means that errors may occur when querying tresors that were previously created with non-S3-conformant bucket names.
- By default, the API and Rclone are only aware of entries uploaded through the API — Rclone, REST API

  When listing folder entries, only the files uploaded via the API are shown. The reason for this is that object listing exposes an MD5 hash, which the API doesn't get from Tresorit.

  Workarounds:
  - Use `HeadObject` and `GetObject` on those files — once these endpoints have seen a file at least once, it will then appear in the file list.
  - Use the non-standard HTTP header `X-Return-Missing-Metadata`, which can be used with Rclone as well:

    ```sh
    rclone mount tresorit-demo:justsometesting ./mounted --header "X-Return-Missing-Metadata: true"
    ```

    In this case, the API can automatically traverse the given directory structure and return files that were not previously added via the API.

  Note: Depending on the depth of the folder structure and the number of unknown files, this can take several minutes. Not every S3-conformant tool supports defining custom headers, and even if it does, it may still time out when parsing a deeper directory tree.
- Upload file size limited to 5 GB per file — Rclone, REST API

  File uploads are limited to a maximum of 5 GB per file, but downloads can be larger — multipart upload for `PutObject` is not supported yet.
- Multipart upload is not supported — Rclone, REST API

  Amazon S3 has two common upload styles:
  - Single PUT (`PutObject`): one request, simpler.
  - Multipart upload: `CreateMultipartUpload` → `UploadPart` (N times) → `CompleteMultipartUpload`

  MinIO's SDK will often switch to multipart automatically above a size threshold, or when it thinks it's beneficial.

  Note: The current Tresorit API implementation does NOT implement the multipart endpoints, so if the SDK tries to use them, uploads will fail.

  Practical effect:
  - Large files that would normally upload via multipart must be uploaded as a single PUT instead.
  - This can cause:
    - Size limits depending on your server/proxy (see TL3)
    - Worse resilience (no per-part resume)
    - Worse performance for big files
    - No parallel part uploads
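Because the multipart endpoints are missing, Rclone's S3 backend should be told to keep every upload in a single PUT. In rclone, the `upload_cutoff` option controls when multipart kicks in, and its maximum of 5 GiB matches the single-PUT limit. A sketch, reusing the placeholder `tresorit-demo` remote name from the mount example (endpoint is an assumption):

```
[tresorit-demo]
type = s3
provider = Other
endpoint = https://localhost
upload_cutoff = 5Gi
```

The same setting is available on the command line as `--s3-upload-cutoff 5Gi`.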
- AWS S3 SDK requests must address a bucket in Path style — Rclone, REST API

  You must use `UsePathStyle=true`.
  - Virtual-hosted style (default on AWS): `https://my-bucket.s3.amazonaws.com/key`
  - Path style: `https://s3.amazonaws.com/my-bucket/key` (or for custom endpoints: `https://endpoint/my-bucket/key`)

  Many S3-compatible servers / custom endpoints (MinIO, self-hosted gateways, local dev, IP-based endpoints, non-wildcard TLS certs) don't work well with virtual-hosted style because:
  - DNS for `bucket.endpoint` may not exist, or
  - the TLS certificate might not match `bucket.endpoint`.

  Rclone often talks to non-AWS endpoints, so path style is a safer interoperability mode than virtual-hosted style.

  Practical effect: you must use URLs where the bucket is in the path, not in the hostname.
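For the AWS CLI, path-style addressing can be forced in `~/.aws/config`. The `default` profile below is an assumption — set it on whichever profile you actually use:

```ini
[default]
s3 =
    addressing_style = path
```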
- `BucketLookup` has to be set to `BucketLookupPath` in MinIO SDK config — Rclone, REST API

  MinIO itself is an open-source object storage server which implements the Amazon S3 API (at least a large practical subset of it); the MinIO SDK (e.g. `minio-go`) is the client side of that ecosystem.

  This is the same concept as in TL4, but expressed in MinIO's Go SDK settings: `BucketLookupPath` = bucket in the URL path (path-style).

  This flag must be set because the S3 compatibility layer only supports path-style bucket addressing, not bucket-as-subdomain.
- There is no way to directly interact with subfolders — Rclone, REST API

  It's not possible to create an empty folder (without files) via the API. However, it is possible to upload a file into a non-existent folder structure, which automatically creates the corresponding directory structure in Tresorit as well.

  This also means that if all files are deleted from a folder, the folder itself is deleted too.
- The file version history is lost when a file is moved or copied — Rclone

  Both move and copy operations create a completely new file object, so all previous versions of the moved or copied file (stored in Tresorit) are lost.
- SSO is not supported — Rclone, REST API

  SSO is not supported, but 2-Step Verification with TOTP can be set up.
- Cannot interact with special hidden folders — Rclone, REST API

  Hidden tresors (e.g. Engage rooms, Email Encryption) are currently not accessible via the API or Rclone.
- Not suitable for high-volume media streaming or low-latency storage — Rclone, REST API

  Due to the end-to-end encryption overhead, neither Tresorit nor the API built on top of it is suitable for low-latency use.
- The REST API cannot list the files of a selected subfolder — REST API

  The REST API can only list all files under the root tresor (bucket), recursively. However, Rclone can filter the response down to the files in the selected folder (as long as the API is aware of the files inside it, see #2).
- The REST API interface is limited to CRUD operations — REST API

  The REST API exposes a limited set of file/bucket operations:
  - Listing, creating, and deleting tresors (buckets)
  - Uploading, downloading, and deleting files
  - Listing all files within a tresor (bucket)

  The REST API isn't designed for full file system management, so it does NOT support:
  - Renaming files or folders
  - Moving files or folders
  - Accessing or downloading previous file versions

  For advanced file management or synchronization use cases, use the Tresorit desktop/mobile/web clients.
Executing `scripts/login.sh` results in:

```json
{"error":"LoginBlocked","message":"Login was unsuccessful","restriction_state":"ClientPlatformIsForbidden"}
```

This means that usage of this tool is not enabled on your account. If you believe this is a mistake, please contact our support. Please visit our website for contact information.