Serverless vs. Containers: When to Use What
2024-12-03
A practical breakdown of serverless functions and containerised workloads — the real trade-offs beyond the marketing, and how to choose based on your actual use case.
The serverless vs. containers debate generates a lot of noise. Most of it comes from people who have a financial or tribal interest in one side. This article tries to cut through that and give you a decision framework based on what I have seen work (and fail) in real deployments.
What Serverless Actually Means
"Serverless" does not mean no servers — it means you do not manage them. You give your cloud provider a function, and they handle:
- Provisioning and scaling compute
- OS patching and runtime updates
- Idle capacity (you pay only for invocations)
The canonical offerings: AWS Lambda, Google Cloud Functions, Azure Functions, Cloudflare Workers.
What you give up: control over the runtime environment, execution time limits (15 minutes on Lambda), and any persistent local state between invocations.
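Concretely, the unit of deployment is a single handler function. A minimal sketch of what that looks like in Python — the `handler(event, context)` signature is the real Lambda convention, but the payload shape here is invented for illustration:

```python
import json

def handler(event, context):
    """Entry point the platform invokes: `event` carries the trigger
    payload, `context` carries runtime metadata (request ID, time remaining)."""
    name = event.get("name", "world")  # hypothetical field, for illustration
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything outside this function — provisioning, scaling, patching — is the provider's problem, which is the whole pitch.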
What Containers Actually Mean
Containers (Docker + an orchestrator like Kubernetes) package your application with its dependencies and run it consistently anywhere. You define the image; the orchestrator handles scheduling across nodes.
What you gain: complete control over the runtime, no time limits, persistent connections (databases, WebSockets), and the ability to run anything from a web server to a background worker.
What you take on: managing the cluster (or paying a managed service like EKS/GKE to do it), writing Dockerfiles, handling scaling configuration, and thinking about resource allocation.
The Real Trade-offs
Cold Starts
Serverless functions have cold starts — when a function has not been invoked recently, the provider needs to initialise a new execution environment. On Lambda this is typically 100ms–2s depending on runtime and memory size.
For user-facing APIs where latency matters, this is a real problem. Workarounds exist (provisioned concurrency, scheduled warm-up pings), but they cost money and erode the pay-per-use pricing benefit.
Containers running in Kubernetes have no cold start problem — your pod is either running or it is not (though scaling up from zero replicas has its own startup latency).
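A common way to observe cold starts from inside a function is the module-scope trick: code at module level runs once per execution environment, so a global flag tells you whether a given invocation is cold or warm. A minimal sketch:

```python
import time

# Module scope runs once per execution environment, i.e. on a cold start.
_COLD_START = True
_INIT_TS = time.monotonic()

def handler(event, context):
    global _COLD_START
    cold = _COLD_START
    _COLD_START = False  # every later invocation in this environment is warm
    # Real code would emit these as metrics/log fields rather than return them.
    return {"cold_start": cold, "env_age_s": time.monotonic() - _INIT_TS}
```

Logging this field over a day of traffic gives you the actual cold-start rate for your workload, which is a better basis for decisions than the headline numbers.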
Cost Model
Serverless charges per invocation and duration. For spiky, unpredictable workloads with long idle periods — a nightly ETL job, a webhook handler — it is dramatically cheaper than running a container 24/7.
With containers you pay for the reserved compute whether or not requests are coming in. But at sustained high throughput, serverless per-invocation pricing overtakes the flat container cost.
A rough mental model:
- Under ~3 million requests/month or very spiky traffic → serverless often wins on cost
- Sustained high throughput or always-on services → containers usually win
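That mental model can be sanity-checked with back-of-the-envelope arithmetic. The prices below are illustrative placeholders, not current quotes, and the workload parameters (100 ms average duration, 512 MB memory) are assumptions — plug in your provider's actual rates:

```python
# Illustrative break-even: serverless pay-per-use vs. an always-on container.
LAMBDA_PER_M_REQUESTS = 0.20      # $ per million invocations (example figure)
LAMBDA_PER_GB_SECOND = 0.0000167  # $ per GB-second of execution (example figure)
CONTAINER_PER_MONTH = 30.0        # $ for a small always-on instance (example figure)

def serverless_cost(requests_per_month, avg_ms=100, memory_gb=0.5):
    """Monthly serverless bill: request charge plus compute-duration charge."""
    gb_seconds = requests_per_month * (avg_ms / 1000) * memory_gb
    return (requests_per_month / 1e6) * LAMBDA_PER_M_REQUESTS \
        + gb_seconds * LAMBDA_PER_GB_SECOND

def cheaper_option(requests_per_month):
    cost = serverless_cost(requests_per_month)
    return ("serverless" if cost < CONTAINER_PER_MONTH else "container",
            round(cost, 2))

print(cheaper_option(1_000_000))   # low volume: serverless wins
print(cheaper_option(50_000_000))  # sustained throughput: container wins
```

The exact crossover depends on duration and memory as much as request count — a 5-second, 2 GB function crosses over far earlier than a 50 ms, 128 MB one.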
Operational Complexity
This is where the marketing gets dishonest. Serverless is pitched as "no ops". In reality, you still need to manage:
- IAM roles and permissions (often the most painful part of Lambda)
- Secrets and environment variables
- Cold start tuning
- Observability (distributed traces across dozens of functions are brutal without tooling)
- Vendor lock-in (the function code itself may be portable, but the event formats, IAM wiring, and deployment tooling are not)
Kubernetes has its own operational weight, but the tooling ecosystem (Helm, ArgoCD, Prometheus, Grafana) is mature and the skills transfer across clouds.
Developer Experience
For a small team shipping fast, a single serverless function deployed with `serverless deploy` or `aws lambda update-function-code` is hard to beat. No Dockerfiles, no manifests, no cluster to think about.
For a larger team with multiple services, Kubernetes provides a consistent model. Every service looks the same — same deployment manifest, same health check pattern, same observability setup.
When I Reach for Serverless
- Event-driven processing — S3 upload triggers an image resizer, Stripe webhook triggers an invoice mailer. These are perfect: short-lived, triggered by events, no persistent state needed.
- Scheduled jobs — cron-style tasks that run for a few seconds. EventBridge + Lambda beats a Kubernetes CronJob for pure simplicity.
- Edge logic — Cloudflare Workers for request routing, auth token validation, A/B testing at the CDN level. Workers run as V8 isolates rather than full containers, so startup overhead is negligible — effectively no cold start.
- Prototyping — getting something live fast without thinking about infrastructure.
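The event-driven shape is worth seeing concretely. Here is a sketch of an S3-triggered handler that extracts bucket and key from the event records — the record structure follows S3's notification format, while the actual resize step is left as a hypothetical call:

```python
import urllib.parse

def handle_s3_upload(event, context):
    """Sketch of an S3-triggered function: pull bucket/key from each
    event record and hand off to the (hypothetical) processing step."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))  # real code: resize_image(bucket, key)
    return results
```

Note the shape: no loop waiting for work, no connection pool, no state carried between invocations. The platform does the fan-out; the function does one small thing and exits.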
When I Reach for Containers
- Long-running services — APIs, WebSocket servers, background queue workers. Anything that needs to hold a database connection pool open.
- Stateful workloads — Postgres, Redis, anything with persistent data.
- Strict latency requirements — no cold start, predictable performance.
- Complex runtimes — custom binaries, specific OS dependencies, GPU workloads.
- Team scale — once you have more than a handful of services, the consistency of Kubernetes pays for the overhead.
The Hybrid Reality
Most production systems use both. A common pattern:
```
User request
  → Cloudflare Worker    (edge auth, rate limiting)
  → Kubernetes service   (core API, database queries)
  → Lambda function      (async: send email, resize image, run ML inference)
```
The mistake is treating it as an either/or decision. Use serverless where it is genuinely the right tool — event-driven, short-lived, spiky. Use containers where you need control, performance, or long-running processes.
A Decision Checklist
Before picking one:
- Is this triggered by events or driven by continuous traffic? → Events point to serverless.
- Does the function need to run longer than 15 minutes? → Must use containers.
- Is P99 latency critical? → Cold starts may rule out serverless.
- How spiky is traffic? → High variance favours serverless pricing.
- Does it need to hold a persistent connection? → Use containers.
- Is the team small and shipping fast? → Serverless lowers the ops floor.
- Do you have multiple services that need to talk to each other? → Kubernetes service mesh may simplify things.
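For illustration, the checklist collapses into a toy scoring function. The hard constraints mirror the bullets above; the voting and thresholds are an assumption for the sketch, not a rule:

```python
def suggest_platform(*, event_driven, max_runtime_min, p99_critical,
                     spiky_traffic, needs_persistent_conn, small_team):
    """Toy encoding of the checklist. Hard constraints first, then a
    simple majority vote among the softer signals."""
    # Hard constraints: runtime limit and persistent connections.
    if max_runtime_min > 15 or needs_persistent_conn:
        return "containers"
    # Cold starts may rule out serverless for strict P99 targets.
    if p99_critical:
        return "containers"
    # Soft signals that each favour serverless.
    score = sum([event_driven, spiky_traffic, small_team])
    return "serverless" if score >= 2 else "containers"
```

A webhook handler (event-driven, spiky, small team, sub-second runtime) lands on serverless; a WebSocket API (persistent connections) lands on containers before any voting happens — which is the point of checking the hard constraints first.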
Neither is universally better. The engineers who get the most out of both are the ones who understand the trade-offs deeply enough to choose deliberately.