Object Storage

S3-compatible object storage for files, images, and documents. Declare buckets in catalog-info.yaml — credentials and SDKs are provisioned automatically.

How It Works

  1. Builder reads spec.storage from your catalog-info.yaml
  2. For each entry, it provisions a bucket in the platform object storage service and generates credentials via platform secrets management
  3. Credentials are injected into your pod automatically and auto-rotated — no restart required
  4. The @insureco/storage SDK picks up rotated credentials transparently before each operation

Zero config: no bucket creation, no credential management, no rotation code needed.

YAML Configuration

# catalog-info.yaml
spec:
  storage:
    - name: default
      tier: s3-sm

Bucket names follow the pattern {service}-{env}-{name}. For example, a service my-api in prod with a bucket named default gets my-api-prod-default.
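The naming scheme is simple enough to reproduce in code when you need to predict a bucket name, e.g. in tooling or tests. A minimal sketch (the bucketName helper is hypothetical, not part of any SDK):

```typescript
// Hypothetical helper illustrating the {service}-{env}-{name} naming scheme.
function bucketName(service: string, env: string, name: string): string {
  return `${service}-${env}-${name}`
}

console.log(bucketName('my-api', 'prod', 'default'))  // "my-api-prod-default"
```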

Multiple Buckets

spec:
  storage:
    - name: uploads
      tier: s3-md
    - name: exports
      tier: s3-sm

Each bucket gets its own env var. The default bucket uses S3_BUCKET. Named buckets use S3_{NAME}_BUCKET (e.g., S3_UPLOADS_BUCKET).
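The env var naming rule above can be captured in a small helper when you want to resolve a bucket's variable programmatically. A sketch (bucketEnvVar is hypothetical, for illustration only):

```typescript
// Hypothetical helper: derive the injected env var name for a bucket.
// The default bucket maps to S3_BUCKET; named buckets to S3_{NAME}_BUCKET.
function bucketEnvVar(name: string): string {
  return name === 'default' ? 'S3_BUCKET' : `S3_${name.toUpperCase()}_BUCKET`
}

console.log(bucketEnvVar('uploads'))  // "S3_UPLOADS_BUCKET"
```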

Storage Tiers

Tier     Capacity   Gas/Month   USD/Month
s3-sm    1 GB       200         $2
s3-md    5 GB       800         $8
s3-lg    25 GB      3,000       $30
s3-xl    100 GB     10,000      $100

You can upgrade a bucket's tier later by changing the tier value and redeploying. Data is preserved across tier changes.

Credential Variables

Variable               Description
S3_HOST                Object storage host
S3_PORT                Object storage port
S3_ACCESS_KEY_ID       Dynamic credential (auto-rotated)
S3_SECRET_ACCESS_KEY   Dynamic credential (auto-rotated)
S3_BUCKET              Default bucket name
S3_{NAME}_BUCKET       Named bucket (e.g., S3_UPLOADS_BUCKET)
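If you wire these variables up manually (for example, with a raw S3 client), it helps to fail fast on a missing variable rather than produce a confusing connection error later. A minimal sketch; loadStorageConfig is a hypothetical helper, not part of the platform:

```typescript
// Fail fast if a required injected variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`Missing required env var: ${name}`)
  return value
}

// Hypothetical helper bundling the variables from the table above.
function loadStorageConfig() {
  return {
    host: requireEnv('S3_HOST'),
    port: Number(requireEnv('S3_PORT')),
    accessKeyId: requireEnv('S3_ACCESS_KEY_ID'),
    secretAccessKey: requireEnv('S3_SECRET_ACCESS_KEY'),
    bucket: requireEnv('S3_BUCKET'),
  }
}
```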

Using the SDK

npm install @insureco/storage

import { StorageClient } from '@insureco/storage'

// Auto-reads credentials from injected environment variables
const storage = StorageClient.fromEnv()

// Upload a file
const url = await storage.upload({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
  body: pdfBuffer,
  contentType: 'application/pdf',
})

// Generate a presigned download URL (1-hour TTL)
const downloadUrl = await storage.presign({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
  expiresIn: 3600,
})

// Delete a file
await storage.delete({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
})

// List objects under a prefix
const objects = await storage.list({
  bucket: process.env.S3_BUCKET,
  prefix: 'uploads/2024/',
})


Content Type

The SDK sets Content-Type automatically based on the file extension. Override it explicitly when needed:

await storage.upload({
  bucket: process.env.S3_BUCKET,
  key: 'data/export.csv',
  body: csvBuffer,
  contentType: 'text/csv; charset=utf-8',  // override automatic detection
})
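Extension-based detection of this kind typically amounts to a lookup table keyed on the file extension, falling back to a generic binary type. A simplified sketch; this table and the detectContentType helper are illustrative, not the SDK's actual implementation:

```typescript
// Illustrative extension-to-MIME lookup; the SDK's real table is larger.
const MIME_TYPES: Record<string, string> = {
  '.pdf': 'application/pdf',
  '.csv': 'text/csv',
  '.png': 'image/png',
  '.json': 'application/json',
}

function detectContentType(key: string): string {
  const ext = key.slice(key.lastIndexOf('.')).toLowerCase()
  // Unknown or missing extensions fall back to a generic binary type.
  return MIME_TYPES[ext] ?? 'application/octet-stream'
}

console.log(detectContentType('documents/invoice-001.pdf'))  // "application/pdf"
```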

Size Limits

Method                      Max Size   Notes
storage.upload()            100 MB     Standard single-part upload
storage.uploadMultipart()   5 GB       Use for files larger than 10 MB

For files over 10 MB, prefer uploadMultipart() — it streams in chunks and is significantly more reliable on larger payloads.
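The decision rule above can be made explicit in code. A sketch, assuming uploadMultipart() accepts the same options as upload() (the chooseUploadMethod helper is hypothetical):

```typescript
const MULTIPART_THRESHOLD = 10 * 1024 * 1024   // 10 MB, per the guidance above
const SINGLE_PART_LIMIT = 100 * 1024 * 1024    // 100 MB cap for upload()

// Pick the upload method based on payload size. Above 100 MB, multipart
// is the only option; above 10 MB it is the recommended one.
function chooseUploadMethod(sizeBytes: number): 'upload' | 'uploadMultipart' {
  return sizeBytes > MULTIPART_THRESHOLD ? 'uploadMultipart' : 'upload'
}

console.log(chooseUploadMethod(1024))  // "upload"
```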

Using the Raw S3 SDK

If you prefer to use @aws-sdk/client-s3 or another S3-compatible client directly, the injected env vars work with any S3-compatible library:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({
  endpoint: `http://${process.env.S3_HOST}:${process.env.S3_PORT}`,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
  },
  forcePathStyle: true,
  region: 'us-east-1',  // required by SDK, not used by the platform
})

NOTE: When using the raw SDK, you are responsible for re-reading credentials from process.env after rotation. The @insureco/storage SDK handles this automatically.

Local Development

For local dev, set env vars directly in .env.local. Credentials are only auto-injected inside a deployed pod:

# .env.local
S3_HOST=localhost
S3_PORT=9000
S3_ACCESS_KEY_ID=local-access-key
S3_SECRET_ACCESS_KEY=local-secret-key
S3_BUCKET=my-api-dev-default

Run a local S3-compatible server with Docker:

docker run -p 9000:9000 \
  -e MINIO_ROOT_USER=local-access-key \
  -e MINIO_ROOT_PASSWORD=local-secret-key \
  minio/minio server /data

The MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values match the credentials in .env.local above.

Key Facts

  • Buckets are created on first deploy and persist across subsequent deploys
  • Credentials are auto-rotated by the platform — no manual rotation needed
  • The @insureco/storage SDK handles credential refresh automatically; using a raw S3 SDK requires reading credentials manually from env vars
  • Storage gas is charged monthly based on tier, not per-operation
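Because gas is a flat monthly charge per tier, a service's total storage cost follows directly from its declared buckets. A sketch using the Storage Tiers table above (the monthlyStorageGas helper is hypothetical):

```typescript
// Monthly gas per tier, taken from the Storage Tiers table.
const TIER_GAS: Record<string, number> = {
  's3-sm': 200,
  's3-md': 800,
  's3-lg': 3_000,
  's3-xl': 10_000,
}

// Total monthly gas for a service's declared buckets. Flat per tier,
// not per-operation; unknown tiers contribute zero.
function monthlyStorageGas(tiers: string[]): number {
  return tiers.reduce((sum, tier) => sum + (TIER_GAS[tier] ?? 0), 0)
}

// The "Multiple Buckets" example above: uploads (s3-md) + exports (s3-sm).
console.log(monthlyStorageGas(['s3-md', 's3-sm']))  // 1000
```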

Last updated: March 1, 2026