# S3-Compatible Object Storage
Store unlimited files, backups, and media with AWS S3 API compatibility.
## Overview
DanubeData Object Storage provides fully managed, S3-compatible storage powered by MinIO. It offers industry-standard S3 API compatibility, GDPR-compliant EU data residency, and simple transparent pricing.
## Key Benefits
- 100% S3 API Compatible: Use any S3 SDK, CLI, or tool without modifications
- GDPR Compliant: All data stored in Germany datacenters
- Simple Pricing: €3.99/month includes 1TB storage and 1TB egress
- Secure by Default: AES-256 encryption at rest, TLS 1.3 in transit
- Built-in Browser: Manage files directly from the dashboard
## Features

### Core Storage

- Multiple Buckets: Up to 10 buckets per team
- Large File Support: Objects up to 5 TB each
- Multipart Uploads: Chunked uploads for large files with automatic recovery (see the sketch after this list)
- Presigned URLs: Generate temporary access links for sharing
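Most S3 SDKs handle multipart uploads transparently once a file crosses a size threshold. A minimal sketch with boto3 (the bucket name, file paths, and thresholds are illustrative, not service defaults):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Files above 100 MB are split into 100 MB parts uploaded in parallel;
# a failed part is retried on its own instead of restarting the upload.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=4,
)

s3.upload_file('backup.tar.gz', 'my-bucket', 'backups/backup.tar.gz', Config=config)
```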
### Security & Access Control
- Encryption: AES-256 encryption at rest, TLS 1.3 in transit
- Access Keys: Per-bucket credentials with granular permissions
- Public Access Control: Enable/disable public read access per bucket
- Bucket Policies: Fine-grained access control with JSON policies
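Bucket policies use the standard S3 JSON policy document format. As a hypothetical example, a policy granting anonymous read access only to objects under a `public/` prefix (bucket name and prefix are placeholders):

```python
import json
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Anonymous clients may GET objects under public/; everything else stays private.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-bucket/public/*"],
    }],
}

s3.put_bucket_policy(Bucket='my-bucket', Policy=json.dumps(policy))
```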
### Advanced Features

- Object Versioning: Protect against accidental deletion
- Lifecycle Rules: Automatic object expiration and cleanup (both shown in the sketch after this list)
- CORS Support: Configure cross-origin access for web applications
- Object Tagging: Organize objects with key-value tags
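Versioning and lifecycle rules are switched on through standard S3 calls. A boto3 sketch, assuming a bucket named `my-bucket` and a purely illustrative 90-day retention window:

```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Turn on versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket='my-bucket',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Expire objects under logs/ after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-old-logs',
            'Status': 'Enabled',
            'Filter': {'Prefix': 'logs/'},
            'Expiration': {'Days': 90},
        }],
    },
)
```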
### Monitoring & Management
- Real-time Metrics: Storage size, object count, monthly costs
- Object Browser: Built-in file management in dashboard
- Usage Tracking: Per-bucket storage and traffic statistics
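The dashboard reports these metrics directly. If you want to cross-check storage usage from the API side, a sketch that sums object sizes with a boto3 paginator (bucket name is a placeholder; large buckets may take a while to scan):

```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Sum object sizes page by page.
total_bytes, total_objects = 0, 0
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        total_bytes += obj['Size']
        total_objects += 1

print(f"{total_objects} objects, {total_bytes / 1024**3:.2f} GiB")
```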
## Use Cases

### Media & File Storage

Store images, videos, documents, and user uploads for web and mobile applications.

### Database Backups

Automatically back up your MySQL, PostgreSQL, and MariaDB databases to durable object storage.
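As one illustration of the pattern, a nightly job could dump a database and push the archive to a bucket. This sketch assumes `pg_dump` is on the PATH and a `db-backups` bucket exists (swap in `mysqldump` for MySQL/MariaDB):

```python
import subprocess
from datetime import date

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

dump_path = f"/tmp/mydb-{date.today()}.sql.gz"

# Dump the database and compress it before upload.
subprocess.run(f"pg_dump mydb | gzip > {dump_path}", shell=True, check=True)

s3.upload_file(dump_path, 'db-backups', f"postgres/{date.today()}.sql.gz")
```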
### Static Website Hosting

Host static websites and single-page applications with high availability.

### Data Archives

Archive infrequently accessed data with lifecycle rules for cost optimization.

### CDN Origins

Use as origin storage for content delivery networks.

### Application Logs

Store and archive application logs for compliance and debugging.
## Access Methods

### Path-Style Access

```
https://s3.danubedata.ro/bucket-name/object-key
```

### Virtual-Host Style Access

```
https://bucket-name.s3.danubedata.ro/object-key
```

Both access styles are fully supported. Use whichever works best with your tools.
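With boto3, the addressing style can be pinned explicitly if a tool misbehaves with the default. A sketch (either value works against the endpoints above):

```python
import boto3
from botocore.config import Config

# 'path' yields https://s3.danubedata.ro/bucket-name/...
# 'virtual' yields https://bucket-name.s3.danubedata.ro/...
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    config=Config(s3={'addressing_style': 'path'}),
)
```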
## Pricing

Simple, transparent pricing with generous included quotas.

### Base Subscription
| Plan | Price | Included Storage | Included Egress |
|---|---|---|---|
| Object Storage | €3.99/month | 1 TB | 1 TB |
### Overage Pricing
| Resource | Price |
|---|---|
| Additional Storage | €3.85/TB/month |
| Additional Egress | €0.80/TB |
| Ingress (uploads) | Always free |
### Example Costs

| Scenario | Monthly Cost |
|---|---|
| 500 GB storage, 200 GB egress | €3.99 (within included quota) |
| 2 TB storage, 1 TB egress | €7.84 (€3.99 base + €3.85 storage overage) |
| 5 TB storage, 3 TB egress | €20.99 (€3.99 base + €15.40 storage + €1.60 egress overage) |
Note: Minimum billable object size is 64 KB. Smaller objects are billed as 64 KB.
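The table rows follow directly from the published rates. A small calculator reproducing them (for simplicity, it ignores the 64 KB minimum object size and the hourly averaging described in the FAQ):

```python
BASE = 3.99            # EUR/month, includes 1 TB storage + 1 TB egress
STORAGE_RATE = 3.85    # EUR per additional TB stored
EGRESS_RATE = 0.80     # EUR per additional TB downloaded
INCLUDED_TB = 1.0

def monthly_cost(storage_tb: float, egress_tb: float) -> float:
    overage = max(storage_tb - INCLUDED_TB, 0) * STORAGE_RATE
    overage += max(egress_tb - INCLUDED_TB, 0) * EGRESS_RATE
    return round(BASE + overage, 2)

print(monthly_cost(0.5, 0.2))  # 3.99  (within included quota)
print(monthly_cost(2.0, 1.0))  # 7.84  (1 TB storage overage)
print(monthly_cost(5.0, 3.0))  # 20.99 (4 TB storage + 2 TB egress overage)
```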
## Technical Specifications
| Specification | Value |
|---|---|
| S3 API Version | AWS S3 (2006-03-01) |
| Maximum Object Size | 5 TB |
| Maximum Buckets per Team | 10 |
| Bucket Name Length | 3-63 characters |
| Encryption at Rest | AES-256 |
| Encryption in Transit | TLS 1.3 |
| Data Location | Germany (EU) |
| Availability | 99.9% SLA |
## S3 API Compatibility

DanubeData Object Storage supports all common S3 operations:

### Bucket Operations

- CreateBucket / DeleteBucket
- ListBuckets
- GetBucketLocation
- GetBucketVersioning / PutBucketVersioning
- GetBucketPolicy / PutBucketPolicy
- GetBucketCors / PutBucketCors
- GetBucketLifecycle / PutBucketLifecycle

### Object Operations

- PutObject / GetObject / DeleteObject
- ListObjects / ListObjectsV2
- CopyObject
- HeadObject
- GetObjectTagging / PutObjectTagging

### Multipart Upload

- CreateMultipartUpload
- UploadPart
- CompleteMultipartUpload
- AbortMultipartUpload
- ListMultipartUploads

### Presigned URLs

- Generate temporary download/upload URLs
- Configurable expiration (default: 60 minutes)
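Presigned URLs come straight from the SDKs. A boto3 sketch for a one-hour download link (bucket and key are placeholders; the service default expiry is 60 minutes):

```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Anyone holding this URL can download the object until it expires.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'report.pdf'},
    ExpiresIn=3600,
)
print(url)
```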
## Comparison

### DanubeData vs AWS S3

| Feature | DanubeData | AWS S3 |
|---|---|---|
| Pricing | €3.99/month (1 TB included) | Pay per request + storage |
| Egress | 1 TB included, then €0.80/TB | €0.09/GB (≈€90/TB) |
| Data Location | Germany only | Multiple regions |
| Complexity | Simple | Complex IAM & policies |
| GDPR | Compliant by default | Requires configuration |
### DanubeData vs Hetzner Object Storage
| Feature | DanubeData | Hetzner |
|---|---|---|
| Dashboard | Integrated with DanubeData | Separate Hetzner Cloud |
| Billing | Unified with other services | Separate |
| Support | Single provider | Hetzner support |
| Features | Versioning, lifecycle, CORS | Basic S3 |
## Integration Examples

### AWS CLI

```bash
# Configure AWS CLI credentials
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY

# List buckets
aws --endpoint-url https://s3.danubedata.ro s3 ls

# Upload a file
aws --endpoint-url https://s3.danubedata.ro s3 cp file.txt s3://my-bucket/

# Download a file
aws --endpoint-url https://s3.danubedata.ro s3 cp s3://my-bucket/file.txt ./
```
### Python (boto3)

```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Upload a file
s3.upload_file('local-file.txt', 'my-bucket', 'remote-file.txt')

# Download a file
s3.download_file('my-bucket', 'remote-file.txt', 'local-file.txt')

# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])
```
### Node.js (AWS SDK v3)

```javascript
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'https://s3.danubedata.ro',
  region: 'fsn1',
  credentials: {
    accessKeyId: 'YOUR_ACCESS_KEY',
    secretAccessKey: 'YOUR_SECRET_KEY',
  },
  forcePathStyle: true,
});

// Upload a file
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt',
  Body: 'Hello, World!',
}));

// Download a file
const response = await s3.send(new GetObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt',
}));
const content = await response.Body.transformToString();
```
### PHP (Laravel)

```php
// config/filesystems.php
'disks' => [
    's3' => [
        'driver' => 's3',
        'key' => env('DANUBEDATA_S3_KEY'),
        'secret' => env('DANUBEDATA_S3_SECRET'),
        'region' => 'fsn1',
        'bucket' => env('DANUBEDATA_S3_BUCKET'),
        'url' => env('DANUBEDATA_S3_URL'),
        'endpoint' => 'https://s3.danubedata.ro',
        'use_path_style_endpoint' => true,
    ],
],
```

```php
// Usage
use Illuminate\Support\Facades\Storage;

Storage::disk('s3')->put('file.txt', 'Hello, World!');
$content = Storage::disk('s3')->get('file.txt');
$url = Storage::disk('s3')->temporaryUrl('file.txt', now()->addHour());
```
### Go

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			"YOUR_ACCESS_KEY",
			"YOUR_SECRET_KEY",
			"",
		)),
		config.WithRegion("fsn1"),
	)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://s3.danubedata.ro")
		o.UsePathStyle = true
	})

	// List buckets to verify connectivity.
	out, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		log.Println(*b.Name)
	}
}
```
## FAQ

### Is it really 100% S3 compatible?

Yes! We use MinIO, which provides complete AWS S3 API compatibility. Any tool, SDK, or application that works with AWS S3 will work with DanubeData Object Storage.

### Can I access my data from anywhere?

Yes. Object Storage is accessible from anywhere on the internet. Use access keys to authenticate, or generate presigned URLs for temporary public access.

### What happens if I exceed my included quota?

You're automatically billed for overage usage at the rates shown above. There are no service interruptions; your storage continues to work normally.

### How is billing calculated?

Storage is billed based on the average GB stored per hour. Traffic is billed based on total egress (download) bytes. Ingress (upload) is always free.

### Can I use this for website hosting?

Yes! Enable public access on your bucket and configure your DNS to serve static content directly from Object Storage.

### How do I migrate from AWS S3?

Use the AWS CLI or any S3-compatible tool to copy data between AWS and DanubeData. Both use the same S3 API, so migration is straightforward.
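For modest datasets, a streaming copy between two clients is enough; for larger migrations, syncing through local disk with `aws s3 sync` or a tool like rclone may be more practical. A boto3 sketch (bucket names, region, and credentials are placeholders):

```python
import boto3

# Source: AWS S3 (region is an example); destination: DanubeData.
src = boto3.client(
    's3',
    region_name='eu-central-1',
    aws_access_key_id='AWS_ACCESS_KEY',
    aws_secret_access_key='AWS_SECRET_KEY',
)
dst = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# Stream every object from the source bucket into the destination bucket.
paginator = src.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='source-bucket'):
    for obj in page.get('Contents', []):
        body = src.get_object(Bucket='source-bucket', Key=obj['Key'])['Body']
        dst.upload_fileobj(body, 'destination-bucket', obj['Key'])
```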
### Is my data replicated?

Yes. All data is replicated across multiple storage nodes for durability and high availability.
## Next Steps
- Quick Start: Create Your First Bucket - Get started in 5 minutes
- Object Storage Security - Learn about access control
- Lifecycle Rules Guide - Automate data management
Questions? Contact support at support@danubedata.ro