Lifecycle Rules Guide

Automatically expire and delete objects based on their age, optionally scoped to a key prefix, to manage storage costs and enforce data retention policies.

Overview

Lifecycle rules run automatically in the background and delete objects after a specified number of days. This is useful for cleaning up temporary files, rotating backups, meeting compliance requirements, or keeping storage costs under control.

How It Works

  1. You define one or more rules, each with a unique ID
  2. Each rule specifies which objects to target (by prefix) and when to expire them (by age in days)
  3. The storage system evaluates rules periodically and deletes matching objects automatically
  4. Deleted objects cannot be recovered unless versioning is enabled
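
For example, the pieces above combine into a single rule like the following (the rule ID and prefix are illustrative):

JSON
{
  "ID": "expire-temp-files",
  "Status": "Enabled",
  "Filter": {"Prefix": "temp/"},
  "Expiration": {"Days": 14}
}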

Configuring Lifecycle Rules via Dashboard

  1. Navigate to Object Storage in the sidebar
  2. Click on your bucket to open the detail page
  3. Scroll to the Lifecycle Rules panel
  4. Click Create Lifecycle Rule
  5. Configure your rule:
    • Rule ID: A unique name to identify this rule (e.g., delete-old-logs)
    • Expiration (days): Number of days after object creation before deletion (1-36,500)
    • Object Prefix (optional): Only target objects whose key starts with this prefix (e.g., logs/ or temp/). Leave empty to apply to all objects
    • Enable this rule: Check to activate the rule immediately
  6. Click + Add Lifecycle Rule to add additional rules
  7. Click Save Rules

The rules are applied asynchronously. It may take a few moments for the changes to take effect.

Configuring via API

Create or Update Lifecycle Rules

Bash
curl -X PUT https://api.danubedata.ro/v1/storage/buckets/{bucket_id} \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "lifecycle_rules": [
      {
        "ID": "delete-old-logs",
        "Status": "Enabled",
        "Filter": {
          "Prefix": "logs/"
        },
        "Expiration": {
          "Days": 30
        }
      }
    ]
  }'

Remove All Lifecycle Rules

Bash
curl -X PUT https://api.danubedata.ro/v1/storage/buckets/{bucket_id} \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "lifecycle_rules": []
  }'

Configuring via AWS CLI

Since DanubeData Object Storage is fully S3-compatible, you can use the AWS CLI to manage lifecycle rules directly.

Set Lifecycle Rules

Create a lifecycle configuration file lifecycle.json:

JSON
{
  "Rules": [
    {
      "ID": "delete-old-logs",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Expiration": {
        "Days": 30
      }
    },
    {
      "ID": "cleanup-temp-files",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "tmp/"
      },
      "Expiration": {
        "Days": 7
      }
    }
  ]
}

Apply the configuration:

Bash
aws --endpoint-url https://s3.danubedata.ro s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json

View Current Rules

Bash
aws --endpoint-url https://s3.danubedata.ro s3api get-bucket-lifecycle-configuration \
  --bucket my-bucket

Remove All Rules

Bash
aws --endpoint-url https://s3.danubedata.ro s3api delete-bucket-lifecycle \
  --bucket my-bucket

Configuring via S3 SDKs

Python (boto3)

Python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Set lifecycle rules
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'delete-old-backups',
                'Status': 'Enabled',
                'Filter': {'Prefix': 'backups/'},
                'Expiration': {'Days': 90},
            },
            {
                'ID': 'cleanup-incomplete-uploads',
                'Status': 'Enabled',
                'Filter': {'Prefix': ''},
                'AbortIncompleteMultipartUpload': {
                    'DaysAfterInitiation': 7
                },
            },
        ]
    }
)

# View current rules
response = s3.get_bucket_lifecycle_configuration(Bucket='my-bucket')
for rule in response['Rules']:
    print(f"{rule['ID']}: prefix={rule.get('Filter', {}).get('Prefix', '*')}, "
          f"expires={rule.get('Expiration', {}).get('Days', 'N/A')} days, "
          f"status={rule['Status']}")

Node.js (AWS SDK v3)

JavaScript
import {
  S3Client,
  PutBucketLifecycleConfigurationCommand,
  GetBucketLifecycleConfigurationCommand,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'https://s3.danubedata.ro',
  region: 'fsn1',
  credentials: {
    accessKeyId: 'YOUR_ACCESS_KEY',
    secretAccessKey: 'YOUR_SECRET_KEY',
  },
  forcePathStyle: true,
});

// Set lifecycle rules
await s3.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: 'my-bucket',
  LifecycleConfiguration: {
    Rules: [
      {
        ID: 'delete-old-logs',
        Status: 'Enabled',
        Filter: { Prefix: 'logs/' },
        Expiration: { Days: 30 },
      },
    ],
  },
}));

// View current rules
const response = await s3.send(
  new GetBucketLifecycleConfigurationCommand({ Bucket: 'my-bucket' })
);
console.log(response.Rules);

PHP (Laravel)

PHP
use Aws\S3\S3Client;

$s3 = new S3Client([
    'endpoint' => 'https://s3.danubedata.ro',
    'region' => 'fsn1',
    'version' => 'latest',
    'use_path_style_endpoint' => true,
    'credentials' => [
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
    ],
]);

// Set lifecycle rules
$s3->putBucketLifecycleConfiguration([
    'Bucket' => 'my-bucket',
    'LifecycleConfiguration' => [
        'Rules' => [
            [
                'ID' => 'delete-old-logs',
                'Status' => 'Enabled',
                'Filter' => ['Prefix' => 'logs/'],
                'Expiration' => ['Days' => 30],
            ],
        ],
    ],
]);

Rule Configuration Reference

Required Fields

Field | Type | Description
ID | String (max 255) | Unique identifier for the rule
Status | Enabled or Disabled | Whether the rule is active

Expiration Options

Field | Type | Description
Expiration.Days | Integer (1-36,500) | Delete objects this many days after creation
Expiration.Date | ISO 8601 date | Delete objects after this specific date
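
As a sketch of the date-based variant (the ID, prefix, and date are illustrative; in standard S3 the date value must be midnight UTC):

JSON
{
  "ID": "expire-2024-campaign",
  "Status": "Enabled",
  "Filter": {"Prefix": "campaigns/2024/"},
  "Expiration": {"Date": "2026-01-01T00:00:00Z"}
}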

Filter Options

Field | Type | Description
Filter.Prefix | String (max 1,024) | Only apply to objects whose key starts with this prefix. Empty string or omitted = all objects

Additional Options

Field | Type | Description
NoncurrentVersionExpiration.NoncurrentDays | Integer (1-36,500) | Delete noncurrent versions after this many days (requires versioning)
AbortIncompleteMultipartUpload.DaysAfterInitiation | Integer (1-36,500) | Abort incomplete multipart uploads after this many days
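
For a versioned bucket, a rule using NoncurrentVersionExpiration might look like this (rule ID and prefix are illustrative):

JSON
{
  "ID": "expire-old-versions",
  "Status": "Enabled",
  "Filter": {"Prefix": "documents/"},
  "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
}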

Limits

  • Maximum 100 lifecycle rules per bucket
  • Each rule ID must be unique within the bucket

Common Use Cases

Rotate Log Files

Delete application logs older than 30 days:

JSON
{
  "ID": "rotate-logs",
  "Status": "Enabled",
  "Filter": {"Prefix": "logs/"},
  "Expiration": {"Days": 30}
}

Clean Up Temporary Files

Remove temp files after 24 hours:

JSON
{
  "ID": "cleanup-temp",
  "Status": "Enabled",
  "Filter": {"Prefix": "tmp/"},
  "Expiration": {"Days": 1}
}

Offsite Backup Retention

Keep backup snapshots for 90 days:

JSON
{
  "ID": "backup-retention",
  "Status": "Enabled",
  "Filter": {"Prefix": "backups/"},
  "Expiration": {"Days": 90}
}

Abort Stale Multipart Uploads

Clean up incomplete uploads that were never finished:

JSON
{
  "ID": "abort-incomplete-uploads",
  "Status": "Enabled",
  "Filter": {"Prefix": ""},
  "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
}

Multiple Rules on One Bucket

You can combine multiple rules targeting different prefixes:

JSON
[
  {
    "ID": "short-lived-cache",
    "Status": "Enabled",
    "Filter": {"Prefix": "cache/"},
    "Expiration": {"Days": 7}
  },
  {
    "ID": "medium-lived-logs",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Expiration": {"Days": 30}
  },
  {
    "ID": "long-lived-reports",
    "Status": "Enabled",
    "Filter": {"Prefix": "reports/"},
    "Expiration": {"Days": 365}
  }
]

Important Notes

  • Deletion is permanent: Once an object expires, it is deleted and cannot be recovered unless versioning is enabled on the bucket.
  • Evaluation delay: Rules are evaluated periodically, not at the exact moment an object reaches its expiration age. Objects may persist for a short time after their expiration date.
  • Prefix matching: Prefixes match from the start of the object key. A prefix of logs/ matches logs/app.log and logs/2024/error.log, but not old-logs/app.log.
  • Empty prefix: Omitting the prefix or setting it to an empty string applies the rule to all objects in the bucket.
  • Versioned buckets: Use NoncurrentVersionExpiration to clean up old object versions. The Expiration.Days rule only affects the current version.
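
As a sketch of that last point, on a versioned bucket a single rule can pair both fields so the current version expires after 90 days and older versions are removed 30 days after they become noncurrent (values illustrative):

JSON
{
  "ID": "versioned-retention",
  "Status": "Enabled",
  "Filter": {"Prefix": "documents/"},
  "Expiration": {"Days": 90},
  "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
}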

Troubleshooting

Rules Not Taking Effect

  • Verify the rule status is Enabled, not Disabled
  • Check the object prefix matches your objects (prefixes are case-sensitive)
  • Allow time for the rules to be evaluated; expiration is not instantaneous

Objects Not Being Deleted

  • The object may not have reached the expiration age yet. Expiration is based on the object's creation date, not its last modified date.
  • Ensure the prefix actually matches your object keys. Prefixes match literally from the start of the key: a prefix of logs (without a trailing slash) matches logs-archive/file.txt as well as logs/app.log, while logs/ matches only keys under that path.

Want to Temporarily Stop a Rule?

Set the rule's status to Disabled instead of deleting it. You can re-enable it later without reconfiguring.
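
Via the API or SDKs, this just means flipping the Status field in the rule definition, for example:

JSON
{
  "ID": "delete-old-logs",
  "Status": "Disabled",
  "Filter": {"Prefix": "logs/"},
  "Expiration": {"Days": 30}
}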

Questions? Contact support at support@danubedata.ro