
Docker

Containerized services - send status without adding a runtime.


Chirp's Docker integration is a Dockerfile fragment that pre-installs the bash helpers into the image. No new runtime, no new package manager dependency - just `curl` plus a static shell script copied to `/usr/local/share/chirp.sh`. Works on any base image with a POSIX shell.

The pattern fits two common shapes: long-running services (e.g. a worker that wraps each job with `chirp_wrap`) and one-shot containers in a CI pipeline (e.g. an integration-test container that reports its own pass/fail). The container reads `CHIRP_API_KEY` from the environment, so the host orchestration (Docker secret, Compose env-file, K8s secret) handles credential delivery.

Prerequisites

  • A Dockerfile you control (or a base image you can extend).
  • Internet access from the build context to fetch chirp.sh during `docker build`.
  • A way to set `CHIRP_API_KEY` at container runtime (secret, env-file, compose).

Setup

  1. Add the install fragment to your Dockerfile

    A single RUN block installs curl (if not already present) and downloads chirp.sh into a system path. Pin the script to a specific revision URL if you want reproducible builds - `/connectors/bash/chirp.sh@<version>` resolves to a frozen version.

    Dockerfile
    # Alpine
    RUN apk add --no-cache curl && \
        curl -fsSL https://chirpapp.dev/connectors/bash/chirp.sh \
          -o /usr/local/share/chirp.sh
    
    # Debian/Ubuntu
    RUN apt-get update && apt-get install -y --no-install-recommends curl && \
        curl -fsSL https://chirpapp.dev/connectors/bash/chirp.sh \
          -o /usr/local/share/chirp.sh && \
        rm -rf /var/lib/apt/lists/*
  2. Source the helpers in your entrypoint

    In whatever script you use as the container entrypoint (often `entrypoint.sh` or `start.sh`), source the helpers and call `chirp_wrap` around the command you actually want to run.

    entrypoint.sh
    #!/usr/bin/env bash
    set -euo pipefail
    
    . /usr/local/share/chirp.sh
    
    # Wrap the real container workload. chirp_wrap propagates the
    # exit code so docker stop / restart policies still see the truth.
    chirp_wrap @worker "$HOSTNAME" -- /app/run.sh "$@"
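
    Before handing this to orchestration, a quick local smoke test catches most wiring mistakes. A sketch - the image name and key value are placeholders for your own:

    shell
    docker build -t my-worker .
    # Pass a throwaway key directly; a card should appear, then close
    # with run.sh's exit code when the container stops.
    docker run --rm -e CHIRP_API_KEY=chirp_sk_... my-worker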
  3. Pass CHIRP_API_KEY at runtime

    The helper reads CHIRP_API_KEY from the environment. How you supply it depends on your orchestration:

    compose.yml
    # docker compose
    services:
      worker:
        image: my-worker
        env_file: .env       # contains CHIRP_API_KEY=chirp_sk_...
        # OR explicit secrets (preferred):
        secrets: [chirp_api_key]
        environment:
          CHIRP_API_KEY_FILE: /run/secrets/chirp_api_key
    
    secrets:
      chirp_api_key:
        file: ./secrets/chirp_api_key.txt
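
    The compose example sets `CHIRP_API_KEY_FILE` on the secrets path. If the helpers only read `CHIRP_API_KEY` directly (a sketch, assuming `chirp.sh` has no built-in `_FILE` support - check the script), a few lines at the top of your entrypoint can bridge the two:

    ```shell
    # Entrypoint snippet: translate the *_FILE secret convention into
    # the plain env var the helpers read. CHIRP_API_KEY_FILE matches
    # the compose example above.
    resolve_chirp_key() {
      if [ -z "${CHIRP_API_KEY:-}" ] && [ -n "${CHIRP_API_KEY_FILE:-}" ]; then
        CHIRP_API_KEY="$(cat "$CHIRP_API_KEY_FILE")"
        export CHIRP_API_KEY
      fi
    }
    resolve_chirp_key
    ```

    Run it before sourcing `/usr/local/share/chirp.sh` so the key is in place when the helpers first read the environment.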
  4. (Optional) Multi-stage build to keep the image lean

    If you don't want curl and its package-manager metadata baked into the final image, fetch chirp.sh in a builder stage and copy just the script forward. Apart from the script itself, the final image stays identical in size to a non-Chirp build.

    Dockerfile
    FROM alpine:3.20 AS chirp-fetch
    RUN apk add --no-cache curl && \
        curl -fsSL https://chirpapp.dev/connectors/bash/chirp.sh \
          -o /chirp.sh
    
    FROM your-base-image
    COPY --from=chirp-fetch /chirp.sh /usr/local/share/chirp.sh
    # rest of your Dockerfile unchanged

What you’ll see

Card header: Docker logo + "Container · WORKING" + container hostname (since `chirp_wrap` uses `$HOSTNAME` in the example). Action line shows the wrapped command. Closes green/red on the entrypoint exit code. For long-running services that don't naturally exit, use `chirp_start` / `chirp_update` / `chirp_end` manually around discrete units of work (per-message in a worker queue, per-job in a job runner).
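
For that manual pattern, a worker loop might look like the sketch below. The argument shapes for `chirp_start` / `chirp_end` are assumptions - check `chirp.sh` for the real signatures - and `next_job` / `process_job` stand in for your own queue logic:

shell
#!/usr/bin/env bash
set -uo pipefail   # no -e: one failed job shouldn't kill the worker

. /usr/local/share/chirp.sh

while job="$(next_job)"; do        # next_job: your queue consumer (hypothetical)
  chirp_start @worker "job $job"   # open one card per unit of work
  if process_job "$job"; then      # process_job: your handler (hypothetical)
    chirp_end 0                    # close green
  else
    chirp_end 1                    # close red, keep the loop alive
  fi
done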

Troubleshooting

`chirp.sh: not found` at runtime.
Confirm the RUN block actually executed during build (run `docker build --no-cache .` and watch the output). Some base images strip `/usr/local` on multi-stage copy - verify with `docker run --rm your-image ls -la /usr/local/share/chirp.sh`.
Container starts but card never appears.
`CHIRP_API_KEY` isn't reaching the container. Inside the container, run `printenv CHIRP_API_KEY` - empty output means the orchestration layer is dropping it. Most common cause: the `env_file:` path is wrong relative to the compose file's directory.
Build fails on `apt-get update` in air-gapped CI.
Pre-bake curl into your base image, or vendor chirp.sh into your repo and `COPY` it instead of `curl`-ing during build.
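
The vendored variant is a one-line swap for the RUN block in step 1 - `vendor/chirp.sh` here is whatever path you commit the script to in your repo:

Dockerfile
# chirp.sh committed to the repo; no network access needed at build time
COPY vendor/chirp.sh /usr/local/share/chirp.sh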