Object Storage Quick Start
Create your first S3-compatible bucket and start storing files in 5 minutes!
Prerequisites
- A DanubeData account
- Basic familiarity with object storage concepts
Step 1: Create a Bucket
Via Dashboard
- Navigate to Object Storage in the sidebar
- Click Create Bucket
- Enter your bucket configuration:
  - Name: my-first-bucket (lowercase, 3-63 characters)
  - Region: EU (Falkenstein) - default
  - Versioning: Disabled (can enable later)
  - Public Access: Disabled (recommended)
- Click Create Bucket
Your bucket will be ready in a few seconds!
Via API
Bash
curl -X POST https://api.danubedata.ro/v1/storage/buckets \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-first-bucket",
"region": "fsn1",
"versioning_enabled": false,
"public_access": false
}'
Step 2: Create Access Keys
Access keys are required to interact with your bucket via the S3 API.
Via Dashboard
- Navigate to your bucket's detail page
- Click the Access Keys tab
- Click Create Access Key
- Configure the key:
  - Name: my-app-key
  - Permissions: Read & Write (or select specific permissions)
  - Expiration: Optional expiration date
- Click Create
- Important: Copy and save your Secret Key immediately - it won't be shown again!
Via API
Bash
curl -X POST https://api.danubedata.ro/v1/storage/buckets/{bucket_id}/access-keys \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-app-key",
"permissions": ["read", "write"]
}'
Response:
JSON
{
"id": "key_abc123",
"name": "my-app-key",
"access_key": "AKIAIOSFODNN7EXAMPLE",
"secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
Step 3: Configure Your S3 Client
AWS CLI
Install the AWS CLI and configure it:
Bash
# Install AWS CLI (if not already installed)
pip install awscli
# Configure credentials
aws configure
# AWS Access Key ID: YOUR_ACCESS_KEY
# AWS Secret Access Key: YOUR_SECRET_KEY
# Default region name: fsn1
# Default output format: json
Create an alias for easier use:
Bash
# Add to ~/.bashrc or ~/.zshrc
alias danubedata-s3='aws --endpoint-url https://s3.danubedata.ro s3'
alias danubedata-s3api='aws --endpoint-url https://s3.danubedata.ro s3api'
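With these aliases in place, the endpoint flag can be dropped from everyday commands, for example:
Bash
# List buckets and upload a file using the aliases defined above
danubedata-s3 ls
danubedata-s3 cp hello.txt s3://my-first-bucket/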
Environment Variables
Set up environment variables for your applications:
Bash
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
export AWS_ENDPOINT_URL="https://s3.danubedata.ro"
export AWS_REGION="fsn1"
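Recent AWS SDK and CLI releases also read AWS_ENDPOINT_URL from the environment; older boto3 versions ignore it and need endpoint_url passed explicitly. A minimal sketch assuming a recent boto3:
Python
import boto3
# Credentials, region, and endpoint are picked up from the environment variables above
s3 = boto3.client('s3')
print([bucket['Name'] for bucket in s3.list_buckets()['Buckets']])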
Step 4: Upload Your First File
Using AWS CLI
Bash
# Upload a single file
aws --endpoint-url https://s3.danubedata.ro s3 cp hello.txt s3://my-first-bucket/
# Upload a directory
aws --endpoint-url https://s3.danubedata.ro s3 cp ./my-folder s3://my-first-bucket/my-folder/ --recursive
# Upload with custom metadata
aws --endpoint-url https://s3.danubedata.ro s3 cp document.pdf s3://my-first-bucket/ \
--metadata '{"author":"john","department":"sales"}'
Using the Dashboard
- Navigate to your bucket
- Click Browse Objects or the Objects tab
- Click Upload
- Drag and drop files or click to browse
- Click Upload
Using Python
Python
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Upload a file
s3.upload_file('local-file.txt', 'my-first-bucket', 'remote-file.txt')
print("File uploaded successfully!")
# Upload with a progress callback
# boto3 calls the callback with the number of bytes sent in each chunk
from boto3.s3.transfer import TransferConfig
def progress_callback(bytes_transferred):
    print(f"Transferred chunk: {bytes_transferred} bytes")
config = TransferConfig(use_threads=True)
s3.upload_file(
    'large-file.zip',
    'my-first-bucket',
    'large-file.zip',
    Config=config,
    Callback=progress_callback
)
Step 5: Download Files
Using AWS CLI
Bash
# Download a single file
aws --endpoint-url https://s3.danubedata.ro s3 cp s3://my-first-bucket/hello.txt ./
# Download a directory
aws --endpoint-url https://s3.danubedata.ro s3 cp s3://my-first-bucket/my-folder/ ./my-folder/ --recursive
# Sync a directory (only downloads new/changed files)
aws --endpoint-url https://s3.danubedata.ro s3 sync s3://my-first-bucket/data/ ./local-data/
Using Python
Python
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Download a file
s3.download_file('my-first-bucket', 'remote-file.txt', 'local-file.txt')
# Get file content directly
response = s3.get_object(Bucket='my-first-bucket', Key='hello.txt')
content = response['Body'].read().decode('utf-8')
print(content)
Step 6: List Objects
Using AWS CLI
Bash
# List all objects
aws --endpoint-url https://s3.danubedata.ro s3 ls s3://my-first-bucket/
# List with details
aws --endpoint-url https://s3.danubedata.ro s3 ls s3://my-first-bucket/ --human-readable --summarize
# List objects in a "folder"
aws --endpoint-url https://s3.danubedata.ro s3 ls s3://my-first-bucket/images/
Using Python
Python
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# List objects
response = s3.list_objects_v2(Bucket='my-first-bucket')
for obj in response.get('Contents', []):
    print(f"{obj['Key']} - {obj['Size']} bytes - {obj['LastModified']}")
Step 7: Generate Presigned URLs
Share files temporarily without exposing your credentials.
Using Python
Python
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Generate download URL (valid for 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-first-bucket', 'Key': 'document.pdf'},
    ExpiresIn=3600  # seconds
)
print(f"Download URL: {url}")
# Generate upload URL (valid for 1 hour)
upload_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'my-first-bucket', 'Key': 'uploads/new-file.txt'},
    ExpiresIn=3600
)
print(f"Upload URL: {upload_url}")
Using AWS CLI
Bash
# Generate presigned URL (valid for 1 hour)
aws --endpoint-url https://s3.danubedata.ro s3 presign s3://my-first-bucket/document.pdf --expires-in 3600
Step 8: Delete Objects
Using AWS CLI
Bash
# Delete a single object
aws --endpoint-url https://s3.danubedata.ro s3 rm s3://my-first-bucket/hello.txt
# Delete multiple objects (folder)
aws --endpoint-url https://s3.danubedata.ro s3 rm s3://my-first-bucket/old-data/ --recursive
# Preview a delete without removing anything (dry run)
aws --endpoint-url https://s3.danubedata.ro s3 rm s3://my-first-bucket/important.txt --dryrun
Using Python
Python
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Delete a single object
s3.delete_object(Bucket='my-first-bucket', Key='hello.txt')
# Delete multiple objects
s3.delete_objects(
    Bucket='my-first-bucket',
    Delete={
        'Objects': [
            {'Key': 'file1.txt'},
            {'Key': 'file2.txt'},
            {'Key': 'folder/file3.txt'},
        ]
    }
)
Common Operations Cheat Sheet
| Operation | AWS CLI Command |
|---|---|
| List buckets | aws --endpoint-url https://s3.danubedata.ro s3 ls |
| List objects | aws --endpoint-url https://s3.danubedata.ro s3 ls s3://bucket/ |
| Upload file | aws --endpoint-url https://s3.danubedata.ro s3 cp file.txt s3://bucket/ |
| Download file | aws --endpoint-url https://s3.danubedata.ro s3 cp s3://bucket/file.txt ./ |
| Sync folder | aws --endpoint-url https://s3.danubedata.ro s3 sync ./local s3://bucket/remote |
| Delete file | aws --endpoint-url https://s3.danubedata.ro s3 rm s3://bucket/file.txt |
| Get bucket size | aws --endpoint-url https://s3.danubedata.ro s3 ls s3://bucket/ --recursive --summarize |
Next Steps
Now that you have your first bucket running, explore more features:
- Object Storage Product Overview - Full feature documentation
- Object Storage Security - Access control and encryption
- Lifecycle Rules - Automate data management
- Enable Versioning - Protect against accidental deletion
Troubleshooting
"Access Denied" Error
- Verify your access key and secret key are correct
- Check that your access key has the required permissions
- Ensure the bucket name is correct (case-sensitive)
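A quick way to test whether a key pair can reach a bucket is an S3 head-bucket call, for example:
Bash
# Succeeds silently if the credentials can access the bucket; errors otherwise
aws --endpoint-url https://s3.danubedata.ro s3api head-bucket --bucket my-first-bucket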
"Bucket Not Found" Error
- Bucket names must be globally unique
- Verify the bucket name spelling
- Check you're using the correct endpoint (https://s3.danubedata.ro)
Slow Uploads
- Use multipart upload for files larger than 100MB (see the sketch after this list)
- Enable use_threads=True in the boto3 TransferConfig
- Check your network connection
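As a sketch, multipart behavior can be tuned through boto3's TransferConfig (the threshold and chunk size below are illustrative, not recommendations):
Python
import boto3
from boto3.s3.transfer import TransferConfig
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)
# Split uploads above 100MB into 16MB parts sent over multiple threads
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=16 * 1024 * 1024,
    max_concurrency=8,
    use_threads=True
)
s3.upload_file('big-file.zip', 'my-first-bucket', 'big-file.zip', Config=config)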
SSL Certificate Errors
- Ensure you're using https:// not http://
- Update your SSL certificates: pip install --upgrade certifi
Need Help?
- Check our FAQ
- Contact support: support@danubedata.ro
Congratulations! You've successfully created your first bucket and uploaded files to DanubeData Object Storage!