
Tresorit API

Tresorit's Amazon S3-compatible API server, designed for direct REST API calls over HTTP(S) or for integration with rclone. This tool enables customers to securely sync, migrate, and manage data in Tresorit using familiar, industry-standard workflows. The server itself is distributed as a Docker image; this repository contains the configuration files and scripts required for a deployment.


Features

  • S3-compatible REST API — list, create, and delete tresors; upload, download, and delete files; list folder contents
  • Rclone integration — sync, mount, and transfer files using Rclone CLI
  • Cloud migration — migrate data from other S3-compatible providers (Dropbox, AWS S3, Google Cloud Storage) to Tresorit
  • Local-to-cloud sync — migrate from local drives or set up hybrid storage (on-prem NAS + Tresorit)
  • Managed backups — automated backup of databases or VMs to Tresorit
  • Archival storage — long-term archival for compliance or audit requirements
  • S3 drop-in replacement — use Tresorit as the storage layer for tools like Apache Airflow, Spark, or Snowflake
  • DevOps artifact storage — store CI/CD artifacts from GitHub Actions, GitLab CI, Azure Pipelines, etc.
  • Workflow automation — integrate with Zapier, IFTTT, or n8n for event-driven document workflows
  • Protocol bridging — use Rclone to expose Tresorit as FTP, SFTP, or WebDAV
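
As a sketch of the protocol-bridging use case, Rclone's serve command can re-export a configured remote over another protocol. The remote name tresorit-demo and tresor name my-tresor below are placeholders; see RCLONE_CONFIGURATION.md for how to set up the remote:

```shell
# Expose a tresor over SFTP (the remote "tresorit-demo" must already be configured)
rclone serve sftp tresorit-demo:my-tresor --addr :2022

# The same pattern works for WebDAV and FTP
rclone serve webdav tresorit-demo:my-tresor --addr :8081
rclone serve ftp tresorit-demo:my-tresor --addr :2121
```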

Prerequisites

System Requirements

  • Docker and Docker Compose installed
  • OpenSSL for running scripts/setup_credentials.sh
  • Access to a Tresorit account with S3 API client access enabled

SSL/TLS Configuration Required

⚠️ You are responsible for configuring SSL/TLS using a reverse proxy or load balancer of your choice. This application runs on plain HTTP (port 3000) by default; a Caddy setup is included in our deployment configuration as an example. When you set up your own SSL/TLS, remove the caddy service from docker-compose.yaml.

Why SSL/TLS is Required:

  • API credentials are transmitted in request headers
  • AWS SigV4 signatures are sent with every request
  • Without TLS, credentials can be intercepted over the network
  • Never expose port 3000 directly to the internet

Your reverse proxy should forward HTTPS traffic to http://localhost:3000 (or the container's internal address).
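
For example, if you terminate TLS with nginx instead of the bundled Caddy example, a minimal sketch might look like the following (hostname and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name s3.example.com;

    ssl_certificate     /etc/ssl/certs/s3.example.com.pem;
    ssl_certificate_key /etc/ssl/private/s3.example.com.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        # Single-PUT uploads can be up to 5 GB, so raise the body size limit
        client_max_body_size 5g;
    }
}
```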


Deployment Steps

Step 1: Run Setup Script

./scripts/setup_credentials.sh

The script creates:

  • credentials.json - API credentials (project root, human-readable backup — not used by Docker)
  • secrets/postgres_user.txt - Generated PostgreSQL username
  • secrets/postgres_password.txt - Generated PostgreSQL password
  • secrets/database_url.txt - Database connection URL
  • secrets/credentials_json.txt - API credentials mounted as a Docker secret

Step 2: Start Production Deployment

Note: It is highly recommended to set the ENFORCE_PAYLOAD_VERIFICATION environment variable to true. If you then encounter issues with payload hashes, use the signing proxy example provided in RCLONE_CONFIGURATION.md. The flag is defined in docker-compose.yaml; search for ENFORCE_PAYLOAD_VERIFICATION.
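
One non-invasive way to set the flag is a Compose override file, which docker-compose merges automatically. The service name s3-api below is a placeholder; use the actual service name from docker-compose.yaml:

```yaml
# docker-compose.override.yml, merged automatically by `docker-compose up`
services:
  s3-api:                 # placeholder: use the service name from docker-compose.yaml
    environment:
      ENFORCE_PAYLOAD_VERIFICATION: "true"
```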

docker-compose up -d

Step 3: Login

After the deployment is running, authenticate using the login script:

./scripts/login.sh

Step 4: Verify Deployment

Check that the containers are running:

docker-compose ps

(Optional) Check that the logs contain no errors:

docker-compose logs -f

(Optional) Confirm the S3 API is working (requires AWS CLI and jq):

AWS_ACCESS_KEY_ID=$(jq -r '.[0].client_id' secrets/credentials_json.txt) \
AWS_SECRET_ACCESS_KEY=$(jq -r '.[0].client_secret' secrets/credentials_json.txt) \
aws s3api --ca-bundle ./root.crt list-buckets --endpoint-url https://localhost

Note: If this returns a list of buckets (or an empty Buckets array), your production deployment is working correctly. The ./root.crt file is created during the execution of scripts/login.sh.
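
Once the deployment is verified, an Rclone remote for it can be sketched as follows. The remote name is an example and the credential values are placeholders taken from secrets/credentials_json.txt; RCLONE_CONFIGURATION.md is the authoritative reference:

```ini
# ~/.config/rclone/rclone.conf
[tresorit-demo]
type = s3
provider = Other
endpoint = https://localhost
access_key_id = <client_id from secrets/credentials_json.txt>
secret_access_key = <client_secret from secrets/credentials_json.txt>
force_path_style = true
```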


Production Commands

# View logs
docker-compose logs -f

# Check status
docker-compose ps

# Restart
docker-compose restart

# Stop
docker-compose down

Security Best Practices

  1. Always use HTTPS in production

    • Configure your reverse proxy with SSL/TLS
    • Never expose API over plain HTTP
  2. Backup credentials securely

    • Store in password manager
    • Encrypt backups: gpg -c credentials.json
  3. Monitor for unauthorized access

    docker logs tresorit-s3-api 2>&1 | grep -E '"status":(401|403)'
  4. Serve this API on your own on-premise infrastructure

    • Tresorit's end-to-end encryption is only preserved when the server runs on your own on-premise infrastructure. If you deploy to a third-party cloud provider (e.g. AWS, Azure, GCP), data is protected in transit by TLS between the client and your server, but it is present in unencrypted form on the cloud provider's host.

Technical Limitations

  1. Root folder naming rules for full S3 compatibility (Rclone, REST API)

    Among the available S3 SDKs, there are significant differences in how strictly they enforce S3-conformant bucket naming. Some SDKs validate the bucket name (i.e. the tresor name) before sending the request and reject the call if the name is invalid, while others let it go through.

    If you want a fully S3-conformant solution, follow the AWS general purpose bucket naming rules when choosing the bucket (tresor) name.

    Note: This also means that errors may occur when querying tresors that were previously created with non-conformant bucket names.

  2. By default, the API and Rclone are only aware of entries uploaded through the API (Rclone, REST API)

    When listing folder entries, only files uploaded via the API are shown. The reason is that object listings expose an MD5 hash, which the API does not receive from Tresorit.

    Workarounds:

    • Use HeadObject and GetObject on those files — once these endpoints have seen a file at least once, it will then appear in the file list.
    • Use the non-standard HTTP header X-Return-Missing-Metadata, which can be used with Rclone as well:
      rclone mount tresorit-demo:justsometesting ./mounted --header "X-Return-Missing-Metadata: true"
      In the latter case, the API can automatically traverse the given directory structure and return files that were not previously added via the API.

    Note: Depending on the depth of the folder structure and the number of unknown files, this can take several minutes. Not every S3-conformant tool supports custom headers, and even those that do may time out when traversing a deep directory tree.

  3. Upload file size limited to 5 GB per file (Rclone, REST API)

    File uploads are limited to a maximum of 5 GB per file (downloads can be larger), because multipart upload for PutObject is not supported yet.

  4. Multipart upload is not supported (Rclone, REST API)

    Amazon S3 has two common upload styles:

    • Single PUT (PutObject): one request, simpler.
    • Multipart upload: CreateMultipartUpload → UploadPart (N times) → CompleteMultipartUpload

    MinIO's SDK will often switch to multipart automatically above a size threshold, or when it thinks it's beneficial.

    Note: The Tresorit API does NOT currently implement the multipart endpoints, so if an SDK tries to use them, uploads will fail.

    Practical effect:

    • Large files that would normally upload via multipart must upload as a single PUT instead.
    • This can cause:
      • Size limits depending on your server/proxy (see #3)
      • Worse resilience (no resume per-part)
      • Worse performance for big files
      • No parallel part uploads
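
    With the AWS CLI, for instance, the automatic switch to multipart can be avoided by raising the multipart threshold to the single-PUT ceiling. This only helps for files under 5 GB; larger files cannot be uploaded at all:

    ```shell
    # Raise the size above which the AWS CLI would switch to multipart uploads,
    # so files up to 5 GB are sent as a single PutObject request instead
    aws configure set default.s3.multipart_threshold 5GB
    ```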
  5. AWS S3 SDK requests must address a bucket in path style (Rclone, REST API)

    You must use UsePathStyle=true.

    • Virtual-hosted style (default on AWS): https://my-bucket.s3.amazonaws.com/key
    • Path style: https://s3.amazonaws.com/my-bucket/key (or for custom endpoints: https://endpoint/my-bucket/key)

    Many S3-compatible servers / custom endpoints (MinIO, self-hosted gateways, local dev, IP-based endpoints, non-wildcard TLS certs) don't work well with virtual-hosted style because:

    • DNS for bucket.endpoint may not exist or
    • TLS certificate might not match bucket.endpoint.

    Rclone often talks to non-AWS endpoints, so path style is the safer interoperability mode.

    Practical effect: you must use URLs where the bucket is in the path, not in the hostname.
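
    With the AWS CLI, for example, path-style addressing can be forced via its S3 configuration (the bucket name my-tresor is a placeholder):

    ```shell
    # Force path-style addressing: requests go to https://<endpoint>/<bucket>/<key>
    aws configure set default.s3.addressing_style path

    # Subsequent calls then address the bucket in the URL path
    aws s3api list-objects-v2 --bucket my-tresor --endpoint-url https://localhost
    ```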

  6. BucketLookup has to be set to BucketLookupPath in MinIO SDK config (Rclone, REST API)

    MinIO is an open-source object storage server that implements the Amazon S3 API (at least a large practical subset of it); the MinIO SDK (e.g. minio-go) is the client side of that ecosystem.

    This is the same concept as #5, but expressed in MinIO's Go SDK settings.

    • BucketLookupPath = bucket in the URL path (path-style)

    This flag must be set, because the S3 compatibility layer only supports path-style bucket addressing, not bucket-as-subdomain.

  7. There is no way to directly interact with subfolders (Rclone, REST API)

    It's not possible to create an empty folder (without files) via the API. However, it is possible to upload a file into a non-existent folder structure, which will automatically create the corresponding directory structure in Tresorit as well.

    This also means that if all files are deleted from a folder, the folder itself is deleted as well.
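
    Since empty folders cannot be created directly, a common workaround is to upload a small placeholder object whose key contains the desired path (bucket name and path are placeholders):

    ```shell
    # Uploading to a key under reports/2024/ creates that folder structure implicitly
    echo placeholder > .keep
    aws s3 cp .keep s3://my-tresor/reports/2024/.keep --endpoint-url https://localhost
    ```

    Keep in mind that deleting the placeholder later removes the folder again if it holds no other files.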

  8. The file version history is lost when a file is moved or copied (Rclone)

    Both move and copy operations create a completely new file object, so all previous file versions of the moved or copied file (stored in Tresorit) will be lost.

  9. SSO is not supported (Rclone, REST API)

    SSO is not supported, but 2-Step Verification with TOTP can be set up.

  10. Cannot interact with special hidden folders (Rclone, REST API)

    Right now hidden tresors (e.g. Engage rooms, Email Encryption) are not accessible via the API or Rclone.

  11. It's not suitable for high-volume media streaming or low-latency storage (Rclone, REST API)

    Due to the end-to-end encryption overhead, neither Tresorit nor the API built on top of it is well suited to low-latency use.

  12. The REST API cannot list the files of a selected subfolder (REST API)

    The REST API can only list all files under the root tresor (bucket), recursively.

    However, Rclone can filter the response down to the files in the selected folder as well (as long as the API is aware of the files inside it, see #2).
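
    For example, Rclone can narrow the listing down to one subfolder even though the underlying REST call returns the full recursive listing (remote, tresor, and path are placeholders):

    ```shell
    # Lists only files under reports/2024, filtered client-side by Rclone
    rclone ls tresorit-demo:my-tresor/reports/2024
    ```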

  13. The REST API interface is limited to CRUD operations (REST API)

    The REST API exposes a limited set of file/bucket operations:

    • Listing, creating, and deleting tresors (buckets)
    • Uploading, downloading, and deleting files
    • Listing all files within a tresor (bucket)

    The REST API isn't designed for full file system management. So, it does NOT support:

    • Renaming files or folders
    • Moving files or folders
    • Accessing or downloading previous file versions

    For advanced file management or synchronization use cases, customers should use Tresorit desktop/mobile/web clients.

Troubleshooting

Login fails with LoginBlocked / ClientPlatformIsForbidden

Executing scripts/login.sh results in:

{"error":"LoginBlocked","message":"Login was unsuccessful","restriction_state":"ClientPlatformIsForbidden"}

This means that use of this tool is not enabled on your account. If you believe this is a mistake, please contact our support.

Contact

Please visit our website for contact information.
