S3-Compatible Object Storage

Store unlimited files, backups, and media with AWS S3 API compatibility.

Overview

DanubeData Object Storage provides fully managed, S3-compatible storage powered by MinIO. It offers industry-standard S3 API compatibility, GDPR-compliant EU data residency, and simple transparent pricing.

Key Benefits

  • 100% S3 API Compatible: Use any S3 SDK, CLI, or tool without modifications
  • GDPR Compliant: All data stored in datacenters in Germany
  • Simple Pricing: €3.99/month includes 1TB storage and 1TB egress
  • Secure by Default: AES-256 encryption at rest, TLS 1.3 in transit
  • Built-in Browser: Manage files directly from the dashboard

Features

Core Storage

  • Multiple Buckets: Up to 10 buckets per team
  • Large File Support: Objects up to 5TB each
  • Multipart Uploads: Chunked uploads for large files with automatic recovery (see the sketch after this list)
  • Presigned URLs: Generate temporary access links for sharing
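
Multipart uploads usually need no hand-rolling: most SDKs chunk large files automatically. A minimal sketch with boto3 (bucket name, file name, and the thresholds are illustrative placeholders):

Python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Files larger than 64 MB are split into 16 MB parts; failed parts
# can be retried individually, which is where the recovery comes from.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=16 * 1024 * 1024,
)

s3.upload_file('backup.tar.gz', 'my-bucket', 'backups/backup.tar.gz', Config=config)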

Security & Access Control

  • Encryption: AES-256 encryption at rest, TLS 1.3 in transit
  • Access Keys: Per-bucket credentials with granular permissions
  • Public Access Control: Enable/disable public read access per bucket
  • Bucket Policies: Fine-grained access control with JSON policies (see the sketch after this list)
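
As an illustration, enabling public read access is an ordinary S3 bucket policy. A minimal sketch with boto3 (the bucket name my-bucket is a placeholder; adjust the Resource ARN to your own bucket):

Python
import json
import boto3

s3 = boto3.client('s3', endpoint_url='https://s3.danubedata.ro',
                  aws_access_key_id='YOUR_ACCESS_KEY',
                  aws_secret_access_key='YOUR_SECRET_KEY')

# Allow anonymous GetObject on every object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-bucket/*"],
    }],
}

s3.put_bucket_policy(Bucket='my-bucket', Policy=json.dumps(policy))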

Advanced Features

  • Object Versioning: Protect against accidental deletion
  • Lifecycle Rules: Automatic object expiration and cleanup (both shown in the sketch after this list)
  • CORS Support: Configure cross-origin access for web applications
  • Object Tagging: Organize objects with key-value tags
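
Versioning and lifecycle rules map onto standard S3 calls. A minimal sketch with boto3 (bucket name, prefix, and the 30-day retention are placeholders):

Python
import boto3

s3 = boto3.client('s3', endpoint_url='https://s3.danubedata.ro',
                  aws_access_key_id='YOUR_ACCESS_KEY',
                  aws_secret_access_key='YOUR_SECRET_KEY')

# Turn on versioning so overwritten or deleted objects stay recoverable.
s3.put_bucket_versioning(
    Bucket='my-bucket',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Automatically expire objects under logs/ after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-old-logs',
            'Filter': {'Prefix': 'logs/'},
            'Status': 'Enabled',
            'Expiration': {'Days': 30},
        }],
    },
)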

Monitoring & Management

  • Real-time Metrics: Storage size, object count, monthly costs
  • Object Browser: Built-in file management in dashboard
  • Usage Tracking: Per-bucket storage and traffic statistics

Use Cases

Media & File Storage

Store images, videos, documents, and user uploads for web and mobile applications.

Database Backups

Automatically back up your MySQL, PostgreSQL, and MariaDB databases to durable object storage.
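
As one hedged sketch of that pattern, a nightly PostgreSQL dump streamed straight into a bucket (database name, bucket, and credentials are placeholders; mysqldump works the same way):

Python
import subprocess
from datetime import date

import boto3

s3 = boto3.client('s3', endpoint_url='https://s3.danubedata.ro',
                  aws_access_key_id='YOUR_ACCESS_KEY',
                  aws_secret_access_key='YOUR_SECRET_KEY')

# Dump the database and stream it into Object Storage without
# writing an intermediate file to local disk.
dump = subprocess.Popen(['pg_dump', '-Fc', 'mydb'], stdout=subprocess.PIPE)
s3.upload_fileobj(dump.stdout, 'my-backups', f'mydb/{date.today()}.dump')
dump.wait()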

Static Website Hosting

Host static websites and single-page applications with high availability.

Data Archives

Archive infrequently accessed data with lifecycle rules for cost optimization.

CDN Origins

Use as origin storage for content delivery networks.

Application Logs

Store and archive application logs for compliance and debugging.

Access Methods

Path-Style Access

Text
https://s3.danubedata.ro/bucket-name/object-key

Virtual-Host Style Access

Text
https://bucket-name.s3.danubedata.ro/object-key

Both access styles are fully supported. Use whichever works best with your tools.
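
Most SDKs let you choose the style explicitly; for instance, boto3 exposes it through the client config (path style shown below; use 'virtual' for virtual-host style):

Python
import boto3
from botocore.config import Config

# addressing_style can be 'path', 'virtual', or 'auto'.
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    config=Config(s3={'addressing_style': 'path'}),
)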

Pricing

Simple, transparent pricing with generous included quotas.

Base Subscription

Plan             Price         Included Storage   Included Egress
Object Storage   €3.99/month   1 TB               1 TB

Overage Pricing

Resource             Price
Additional Storage   €3.85/TB/month
Additional Egress    €0.80/TB
Ingress (uploads)    Always free

Example Costs

Scenario                        Monthly Cost
500 GB storage, 200 GB egress   €3.99 (within included quota)
2 TB storage, 1 TB egress       €7.84 (€3.99 + €3.85 overage)
5 TB storage, 3 TB egress       €20.99 (€3.99 + €15.40 + €1.60 overage)

Note: Minimum billable object size is 64 KB. Smaller objects are billed as 64 KB.
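
To sanity-check the arithmetic, the example costs above can be reproduced directly from the published rates:

Python
# Reproduce the example costs from the published rates.
BASE, INCLUDED_TB = 3.99, 1
STORAGE_OVERAGE, EGRESS_OVERAGE = 3.85, 0.80  # EUR per TB

def monthly_cost(storage_tb: float, egress_tb: float) -> float:
    cost = BASE
    cost += max(storage_tb - INCLUDED_TB, 0) * STORAGE_OVERAGE
    cost += max(egress_tb - INCLUDED_TB, 0) * EGRESS_OVERAGE
    return round(cost, 2)

print(monthly_cost(0.5, 0.2))  # 3.99
print(monthly_cost(2, 1))      # 7.84
print(monthly_cost(5, 3))      # 20.99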

Technical Specifications

Specification              Value
S3 API Version             AWS S3 (2006-03-01)
Maximum Object Size        5 TB
Maximum Buckets per Team   10
Bucket Name Length         3-63 characters
Encryption at Rest         AES-256
Encryption in Transit      TLS 1.3
Data Location              Germany (EU)
Availability               99.9% SLA

S3 API Compatibility

DanubeData Object Storage supports all common S3 operations:

Bucket Operations

  • CreateBucket / DeleteBucket
  • ListBuckets
  • GetBucketLocation
  • GetBucketVersioning / PutBucketVersioning
  • GetBucketPolicy / PutBucketPolicy
  • GetBucketCors / PutBucketCors
  • GetBucketLifecycle / PutBucketLifecycle

Object Operations

  • PutObject / GetObject / DeleteObject
  • ListObjects / ListObjectsV2
  • CopyObject
  • HeadObject
  • GetObjectTagging / PutObjectTagging

Multipart Upload

  • CreateMultipartUpload
  • UploadPart
  • CompleteMultipartUpload
  • AbortMultipartUpload
  • ListMultipartUploads
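
High-level helpers (boto3's upload_file, for example) drive these operations for you; the raw flow looks like this sketch (bucket, key, and the 16 MB part size are placeholders):

Python
import boto3

s3 = boto3.client('s3', endpoint_url='https://s3.danubedata.ro',
                  aws_access_key_id='YOUR_ACCESS_KEY',
                  aws_secret_access_key='YOUR_SECRET_KEY')

upload = s3.create_multipart_upload(Bucket='my-bucket', Key='big-file.bin')
parts = []

# Upload the file in 16 MB chunks, recording each part's ETag.
with open('big-file.bin', 'rb') as f:
    part_number = 1
    while chunk := f.read(16 * 1024 * 1024):
        resp = s3.upload_part(
            Bucket='my-bucket', Key='big-file.bin',
            UploadId=upload['UploadId'],
            PartNumber=part_number, Body=chunk,
        )
        parts.append({'ETag': resp['ETag'], 'PartNumber': part_number})
        part_number += 1

# Stitch the parts together into the final object.
s3.complete_multipart_upload(
    Bucket='my-bucket', Key='big-file.bin',
    UploadId=upload['UploadId'],
    MultipartUpload={'Parts': parts},
)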

Presigned URLs

  • Generate temporary download/upload URLs
  • Configurable expiration (default: 60 minutes)
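
In boto3 this is a single call; ExpiresIn is given in seconds, so 3600 matches the 60-minute default (bucket and key are placeholders):

Python
import boto3

s3 = boto3.client('s3', endpoint_url='https://s3.danubedata.ro',
                  aws_access_key_id='YOUR_ACCESS_KEY',
                  aws_secret_access_key='YOUR_SECRET_KEY')

# Temporary download link, valid for one hour.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'report.pdf'},
    ExpiresIn=3600,
)
print(url)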

Comparison

DanubeData vs AWS S3

Feature         DanubeData                      AWS S3
Pricing         €3.99/month (1 TB included)     Pay per request + storage
Egress          1 TB included, then €0.80/TB    €0.09/GB (€90/TB)
Data Location   Germany only                    Multiple regions
Complexity      Simple                          Complex IAM & policies
GDPR            Compliant by default            Requires configuration

DanubeData vs Hetzner Object Storage

Feature     DanubeData                    Hetzner
Dashboard   Integrated with DanubeData    Separate Hetzner Cloud
Billing     Unified with other services   Separate
Support     Single provider               Hetzner support
Features    Versioning, lifecycle, CORS   Basic S3

Integration Examples

AWS CLI

Bash
# Configure AWS CLI
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY

# List buckets
aws --endpoint-url https://s3.danubedata.ro s3 ls

# Upload a file
aws --endpoint-url https://s3.danubedata.ro s3 cp file.txt s3://my-bucket/

# Download a file
aws --endpoint-url https://s3.danubedata.ro s3 cp s3://my-bucket/file.txt ./

Python (boto3)

Python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Upload file
s3.upload_file('local-file.txt', 'my-bucket', 'remote-file.txt')

# Download file
s3.download_file('my-bucket', 'remote-file.txt', 'local-file.txt')

# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])

Node.js (AWS SDK v3)

JavaScript
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'https://s3.danubedata.ro',
  region: 'fsn1',
  credentials: {
    accessKeyId: 'YOUR_ACCESS_KEY',
    secretAccessKey: 'YOUR_SECRET_KEY',
  },
  forcePathStyle: true,
});

// Upload file
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt',
  Body: 'Hello, World!',
}));

// Download file
const response = await s3.send(new GetObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt',
}));
const content = await response.Body.transformToString();

PHP (Laravel)

PHP
// config/filesystems.php
'disks' => [
    's3' => [
        'driver' => 's3',
        'key' => env('DANUBEDATA_S3_KEY'),
        'secret' => env('DANUBEDATA_S3_SECRET'),
        'region' => 'fsn1',
        'bucket' => env('DANUBEDATA_S3_BUCKET'),
        'url' => env('DANUBEDATA_S3_URL'),
        'endpoint' => 'https://s3.danubedata.ro',
        'use_path_style_endpoint' => true,
    ],
],

// Usage
Storage::disk('s3')->put('file.txt', 'Hello, World!');
$content = Storage::disk('s3')->get('file.txt');
$url = Storage::disk('s3')->temporaryUrl('file.txt', now()->addHour());

Go

Go
package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
            "YOUR_ACCESS_KEY",
            "YOUR_SECRET_KEY",
            "",
        )),
        config.WithRegion("fsn1"),
    )
    if err != nil {
        log.Fatal(err)
    }

    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.BaseEndpoint = aws.String("https://s3.danubedata.ro")
        o.UsePathStyle = true
    })

    // Use client for S3 operations
}

FAQ

Is it really 100% S3 compatible?

Yes! We use MinIO, which provides complete AWS S3 API compatibility. Any tool, SDK, or application that works with AWS S3 will work with DanubeData Object Storage.

Can I access my data from anywhere?

Yes. Object Storage is accessible from anywhere on the internet. Use access keys to authenticate, or generate presigned URLs for temporary public access.

What happens if I exceed my included quota?

You're automatically billed for overage at the rates shown above. There are no service interruptions; your storage continues to work normally.

How is billing calculated?

Storage is billed on the hourly average of GB stored. Traffic is billed on total egress (download) volume. Ingress (uploads) is always free.

Can I use this for website hosting?

Yes! Enable public access on your bucket and configure your DNS to serve static content directly from Object Storage.

How do I migrate from AWS S3?

Use the AWS CLI or any S3-compatible tool to copy data between AWS and DanubeData. Both use the same S3 API, so migration is straightforward.
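
A hedged sketch of such a copy with boto3, streaming objects between the two endpoints without touching local disk (bucket names and credentials are placeholders; for very large datasets a dedicated tool such as rclone may be more practical):

Python
import boto3

# Two clients: one pointed at AWS S3, one at DanubeData.
src = boto3.client('s3',
                   aws_access_key_id='AWS_KEY',
                   aws_secret_access_key='AWS_SECRET')
dst = boto3.client('s3', endpoint_url='https://s3.danubedata.ro',
                   aws_access_key_id='DD_KEY',
                   aws_secret_access_key='DD_SECRET')

# Page through the source bucket and stream every object across.
paginator = src.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='aws-bucket'):
    for obj in page.get('Contents', []):
        body = src.get_object(Bucket='aws-bucket', Key=obj['Key'])['Body']
        dst.upload_fileobj(body, 'danubedata-bucket', obj['Key'])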

Is my data replicated?

Yes. All data is replicated across multiple storage nodes for durability and high availability.

Questions? Contact support at support@danubedata.ro