
Optimizing Containers for Scalability: From Alpine Images to Daemonless Deployments

Optimize container scalability using lightweight images, multi-stage builds, and Docker Swarm for efficient deployment and scaling.


Prerequisites

Before we start with optimizations, it helps to be familiar with the basics of containers, Linux, virtual machines, and especially the Docker CLI. Done? Let’s go!

Daemon-less container engines

A daemon, in Linux terminology, is a background service. Docker (the most widely used container tool) manages its engine through a dedicated daemon. Podman, by contrast, is daemonless: it operates without a central, long-running background process managing containers, whereas Docker relies on a daemon (dockerd) to handle the container lifecycle and orchestration. Below is the process ID and tree for dockerd:

$ pidof dockerd
1234

$ pstree -p 1234
dockerd(1234)─┬─containerd(1300)─┬─runc(1400)───nginx(1500)
              │                  └─runc(1401)───another_app(1501)
              └─other_dockerd_threads

Podman does not have a daemon running in the background. Instead, it uses a fork/exec model to create and manage containers directly as child processes of the Podman command itself. Each container is managed as an individual process, not as a child of a daemon. This reduces baseline memory and CPU usage and gives a lighter resource footprint, which matters especially in resource-constrained environments (like edge computing or CI/CD runners).

Podman interacts directly with the Linux kernel and the container runtime (such as runc) to manage containers, rather than communicating through a daemon API:

$ pidof podman
5678

$ pstree -p 5678
podman(5678)───runc(5700)───nginx(5750)

While Podman adoption is still relatively limited, Docker remains the de facto standard, so we’ll continue this article using Docker.

Lightweight base images

Using lightweight base images is considered a very good practice in DevOps. The goal is to minimize the image size, and with it the attack surface, by using lean, purpose-built base images rather than full-featured, general-purpose OS images like ubuntu or centos, which ship with unnecessary packages and configurations. Here’s a cool comparison:

| Image       | Size (approx.)  | Use Case / Notes                                                                     |
|-------------|-----------------|--------------------------------------------------------------------------------------|
| scratch     | 0 MB            | Minimalist base image (empty); used for statically compiled binaries like Go or Rust |
| alpine      | ~5 MB           | BusyBox-based Linux; common for general-purpose lightweight containers               |
| distroless  | Varies (~20 MB) | From Google; only includes app and runtime, no shell or package manager              |
| busybox     | ~1 MB           | Shell and core Unix utilities; great for small, scriptable containers                |
| debian:slim | ~22 MB          | Stripped-down Debian, good for apps needing glibc or apt                             |
| ubi-micro   | ~7 MB           | Red Hat's minimal, secure base for RHEL-compatible containers                        |
💡 scratch is not an actual image you can pull — it’s an empty placeholder in Dockerfiles meaning “start from nothing.” So it works only if your binary is fully static (e.g., Go, Rust with static linking).
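To make this concrete, here’s a minimal sketch of a scratch-based image, assuming a Rust project whose binary (hypothetically named my_rust_app) is statically linked via the musl target:

```dockerfile
# Builder stage: compile a fully static binary with the musl target
# (assumes the rustup target is available in the rust image)
FROM rust:1.76 AS builder
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /usr/src/myapp
COPY Cargo.toml .
COPY src ./src
RUN cargo build --release --target x86_64-unknown-linux-musl

# Final stage: start from nothing; the image contains only the binary
FROM scratch
COPY --from=builder /usr/src/myapp/target/x86_64-unknown-linux-musl/release/my_rust_app /my_rust_app
ENTRYPOINT ["/my_rust_app"]
```

The resulting image is essentially the size of the binary itself, since scratch contributes nothing.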

Multi-stage builds

Multi-stage builds are a very useful technique that lets us separate the build stage from the run stage. Only the artifacts we actually need (like compiled executables) are copied into the resulting image. Besides a smaller image size, this once again reduces the attack surface.

Here’s a sample Rust project:

my-rust-app/
├── src/
│   └── main.rs
├── Cargo.toml
└── Dockerfile
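The article doesn’t dictate what the app does, so as a stand-in, here’s a minimal hypothetical src/main.rs (the greeting text and function name are made up purely for illustration):

```rust
// Hypothetical contents of src/main.rs; any binary crate works here.
fn greeting() -> String {
    String::from("Hello from my_rust_app!")
}

fn main() {
    // Prints a fixed message so we can verify the container runs.
    println!("{}", greeting());
}
```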

Here’s the build-time portion of the Dockerfile:

# Stage 1: Builder
FROM rust:1.76 AS builder

# Create app directory
WORKDIR /usr/src/myapp

# Copy manifest files and build dependencies
COPY Cargo.toml .
COPY src ./src

# Build the app in release mode
RUN cargo build --release

And here’s the run-time portion; whatever was built in the previous stage gets copied into the runtime container:

# Stage 2: Runtime
FROM debian:bookworm-slim

# Copy the built binary from the builder stage
COPY --from=builder /usr/src/myapp/target/release/my_rust_app /usr/local/bin/my_rust_app

# Run the binary
ENTRYPOINT ["/usr/local/bin/my_rust_app"]

Here, instead of debian, we can use alpine for an even lighter image. Note that Alpine uses musl instead of glibc, so the binary must be compiled against musl (e.g., with the x86_64-unknown-linux-musl target).
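As a sketch, the runtime stage of an alpine variant could look like this, assuming the builder stage compiled for the musl target:

```dockerfile
# Runtime stage on Alpine (musl-based); assumes the builder ran
# `cargo build --release --target x86_64-unknown-linux-musl`
FROM alpine:3.19
COPY --from=builder /usr/src/myapp/target/x86_64-unknown-linux-musl/release/my_rust_app /usr/local/bin/my_rust_app
ENTRYPOINT ["/usr/local/bin/my_rust_app"]
```

The alpine tag and paths here are illustrative; the key point is matching the build target to the musl-based runtime.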

Multi-container setups and scaling with Swarm

When dealing with such cases, Docker Compose helps us evolve the application from a single main container into multiple interconnected, lightweight containers, each with its own dedicated service and purpose.

This allows for scalable development because we can add in more services in the future without rebuilding the entire setup.

Let’s take the Rust app + PostgreSQL example we’re working with, use Compose to manage it, and then scale it for a production-like setup.

my-rust-app/
├── Dockerfile
├── docker-compose.yml
├── .env
├── Cargo.toml
├── src/
│   └── main.rs

We use the same multi-stage Dockerfile, now with the alpine base image. (We’re improvising!)

Now, here’s the Compose file that defines the infrastructure:

version: '3.8'

services:
  rust_app:
    build: .
    environment:
      DATABASE_URL: postgres://user:password@db:5432/mydb
    ports:
      - "8080:8080"
    depends_on:
      - db
    deploy:
      replicas: 3  # Scaling for production
      restart_policy:
        condition: on-failure

  db:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:

To enable scaling, we can use Swarm mode. One caveat: docker stack deploy ignores the build: key, so build and tag the image beforehand (e.g., docker build -t myrustapp:latest .) and reference it with an image: field in the Compose file.

docker swarm init  # Initialize Docker Swarm (only once per machine)
docker stack deploy -c docker-compose.yml ruststack

This will start 3 replicas of your rust_app service (as declared under deploy). Running docker service ls confirms:

ID             NAME                 MODE         REPLICAS   IMAGE              PORTS
x3h7a5y6hk4e   ruststack_rust_app   replicated   3/3        myrustapp:latest   *:8080->8080/tcp
9k8fj4fhs9wo   ruststack_db         replicated   1/1        postgres:16

You can scale up/down anytime:

docker service scale ruststack_rust_app=5

For more advanced features like auto-scaling, we need to shift to platforms like Amazon EKS or Google GKE (both managed Kubernetes services).

Artemis

Part 12 of 12

The May '25 series marks our first public showcase—an inside look at the ideas, experiments, and projects we're building. These blogs are dense, thoughtful, and a signal to the world: NTL is here, and we’re just getting started.
