S3-compatible object storage for files, images, and documents. Declare buckets in `catalog-info.yaml`; credentials and SDKs are provisioned automatically.

- Buckets are declared under `spec.storage` in your `catalog-info.yaml`.
- The `@insureco/storage` SDK picks up rotated credentials transparently before each operation.
- Zero config: no bucket creation, no credential management, no rotation code needed.
```yaml
# catalog-info.yaml
spec:
  storage:
    - name: default
      tier: s3-sm
```

The bucket name follows `{service}-{env}-{name}`. For example: `my-api-prod-default`.
A service can declare multiple buckets:

```yaml
spec:
  storage:
    - name: uploads
      tier: s3-md
    - name: exports
      tier: s3-sm
```
Each bucket gets its own env var. The default bucket uses `S3_BUCKET`; named buckets use `S3_{NAME}_BUCKET` (e.g., `S3_UPLOADS_BUCKET`).
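The naming convention can be sketched as a small helper (illustrative only: the platform performs this injection for you, and `bucketEnvVar` is not part of any SDK; the handling of non-alphanumeric characters is an assumption):

```typescript
// Illustrative helper: derive the injected env var name for a bucket.
// The platform does this automatically; this only documents the convention.
function bucketEnvVar(name: string): string {
  if (name === 'default') return 'S3_BUCKET'
  // Assumption: non-alphanumerics become underscores in the uppercased name.
  return `S3_${name.toUpperCase().replace(/[^A-Z0-9]/g, '_')}_BUCKET`
}

bucketEnvVar('default') // 'S3_BUCKET'
bucketEnvVar('uploads') // 'S3_UPLOADS_BUCKET'
```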
| Tier | Capacity | Gas/Month | USD/Month |
|---|---|---|---|
| `s3-sm` | 1 GB | 200 | $2 |
| `s3-md` | 5 GB | 800 | $8 |
| `s3-lg` | 25 GB | 3,000 | $30 |
| `s3-xl` | 100 GB | 10,000 | $100 |
You can upgrade a bucket's tier later by changing the tier value and redeploying. Data is preserved across tier changes.
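For example, growing the `uploads` bucket declared above from 5 GB to 25 GB is a one-line manifest change (tier values per the table above):

```yaml
spec:
  storage:
    - name: uploads
      tier: s3-lg  # was s3-md; existing objects are preserved
```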
| Variable | Description |
|---|---|
| `S3_HOST` | Object storage host |
| `S3_PORT` | Object storage port |
| `S3_ACCESS_KEY_ID` | Dynamic credential (auto-rotated) |
| `S3_SECRET_ACCESS_KEY` | Dynamic credential (auto-rotated) |
| `S3_BUCKET` | Default bucket name |
| `S3_{NAME}_BUCKET` | Named bucket (e.g., `S3_UPLOADS_BUCKET`) |
```bash
npm install @insureco/storage
```
```typescript
import { StorageClient } from '@insureco/storage'

// Auto-reads credentials from injected environment variables
const storage = StorageClient.fromEnv()

// Upload a file
const url = await storage.upload({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
  body: pdfBuffer,
  contentType: 'application/pdf',
})

// Generate a presigned download URL (1-hour TTL)
const downloadUrl = await storage.presign({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
  expiresIn: 3600,
})

// Delete a file
await storage.delete({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
})

// List objects under a prefix
const objects = await storage.list({
  bucket: process.env.S3_BUCKET,
  prefix: 'uploads/2024/',
})
```
The SDK sets Content-Type automatically based on the file extension. Override it explicitly when needed:
```typescript
await storage.upload({
  bucket: process.env.S3_BUCKET,
  key: 'data/export.csv',
  body: csvBuffer,
  contentType: 'text/csv; charset=utf-8', // override automatic detection
})
```
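The detection step can be pictured as a simple extension lookup. This is a sketch: the SDK's actual MIME table is internal to `@insureco/storage`, and `detectContentType` is illustrative, not an SDK export:

```typescript
// Illustrative: map a key's file extension to a Content-Type,
// falling back to a generic binary type. Not the SDK's real table.
const MIME_BY_EXT: Record<string, string> = {
  '.pdf': 'application/pdf',
  '.csv': 'text/csv',
  '.png': 'image/png',
  '.json': 'application/json',
}

function detectContentType(key: string): string {
  const dot = key.lastIndexOf('.')
  const ext = dot >= 0 ? key.slice(dot).toLowerCase() : ''
  return MIME_BY_EXT[ext] ?? 'application/octet-stream'
}

detectContentType('documents/invoice-001.pdf') // 'application/pdf'
detectContentType('data/archive.bin')          // 'application/octet-stream'
```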
| Method | Max Size | Notes |
|---|---|---|
| `storage.upload()` | 100 MB | Standard single-part upload |
| `storage.uploadMultipart()` | 5 GB | Use for files larger than 10 MB |

For files over 10 MB, prefer `uploadMultipart()`: it streams in chunks and is significantly more reliable for larger payloads.
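The reliability gain comes from splitting the payload into independently retryable parts. A rough sketch of the chunking idea (the 8 MiB part size is an assumption for illustration; the SDK handles all of this internally):

```typescript
// Illustrative: split a payload into fixed-size parts the way a
// multipart upload would. 8 MiB is an assumed part size, not the SDK's.
const PART_SIZE = 8 * 1024 * 1024

function splitIntoParts(body: Buffer, partSize: number = PART_SIZE): Buffer[] {
  const parts: Buffer[] = []
  for (let offset = 0; offset < body.length; offset += partSize) {
    // subarray clamps the end, so the final part may be smaller
    parts.push(body.subarray(offset, offset + partSize))
  }
  return parts
}
```

Each part can then be uploaded and retried on its own, so one failed chunk does not restart the whole transfer.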
If you prefer to use `@aws-sdk/client-s3` or another S3-compatible client directly, the injected env vars work with any S3-compatible library:
```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({
  endpoint: `http://${process.env.S3_HOST}:${process.env.S3_PORT}`,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
  },
  forcePathStyle: true,
  region: 'us-east-1', // required by the SDK, not used by the platform
})
```
NOTE: When using the raw SDK, you are responsible for re-reading credentials from `process.env` after rotation. The `@insureco/storage` SDK handles this automatically.
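One way to avoid stale credentials with a raw client is to resolve them from `process.env` at call time instead of capturing them once at startup. A minimal sketch of that pattern (the helper name is illustrative; with `@aws-sdk/client-s3` v3 you can also pass an async credentials provider function to the client):

```typescript
// Illustrative: resolve credentials at call time so a rotation that
// rewrites process.env is picked up by the next operation.
type S3Creds = { accessKeyId: string; secretAccessKey: string }

function currentCreds(): S3Creds {
  return {
    accessKeyId: process.env.S3_ACCESS_KEY_ID ?? '',
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY ?? '',
  }
}
```

The anti-pattern is reading the env vars once into a module-level constant: after a rotation, that snapshot keeps authenticating with revoked keys.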
For local dev, set env vars directly in `.env.local`. Credentials are only auto-injected inside a deployed pod:

```bash
# .env.local
S3_HOST=localhost
S3_PORT=9000
S3_ACCESS_KEY_ID=local-access-key
S3_SECRET_ACCESS_KEY=local-secret-key
S3_BUCKET=my-api-dev-default
```
Run a local S3-compatible server with Docker, with root credentials matching `.env.local` above:

```bash
docker run -p 9000:9000 -e MINIO_ROOT_USER=local-access-key \
  -e MINIO_ROOT_PASSWORD=local-secret-key minio/minio server /data
```
The `@insureco/storage` SDK handles credential refresh automatically; a raw S3 SDK requires re-reading credentials from env vars yourself.

Last updated: March 1, 2026