Shipping Astro + Bun on AWS Lambda (and the header trap I hit)

Tags
  • bun

I’ve spent the last few weeks packaging an Astro site that renders through Bun’s server runtime into a Lambda container. The combo is a joy once it is wired up, but I lost a lot of time chasing an issue caused by the fully managed Lambda Adapter image that silently stripped HTTP headers from my app. Here is the exact flow that now works in production and the lessons that kept me from shipping sooner.


1. Start with a Bun-first multi-stage build


`lambda.Dockerfile` is the backbone. The first stage uses `oven/bun:debian` purely as a build box:


```dockerfile
FROM oven/bun:debian AS builder
WORKDIR /var/task
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
COPY tsconfig.json astro.config.mjs ./
COPY src ./src
COPY public ./public
RUN bun run build
```


- Everything happens under `/var/task` so the handoff to the runtime layer is painless.

- `bun run build` emits a server entry (`dist/server/entry.mjs`) plus the static assets Astro needs under `dist`.

- I inject the public and private API URLs at build time; the public value gets baked into the client bundle, while the server-side value can still be overridden at runtime through Lambda environment variables.
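For context, this layout assumes Astro's Node adapter (`@astrojs/node`) in standalone mode, which is what emits `dist/server/entry.mjs` and makes the server honor `HOST`/`PORT`. A minimal config sketch:

```js
// astro.config.mjs -- minimal sketch, assuming the @astrojs/node adapter.
import { defineConfig } from "astro/config";
import node from "@astrojs/node";

export default defineConfig({
  output: "server",
  adapter: node({ mode: "standalone" }), // standalone mode emits dist/server/entry.mjs
});
```

The build-time injection from the last bullet is plain Docker `ARG` → `ENV` plumbing; the variable name matches my convention from the deployment checklist, not anything Astro mandates:

```dockerfile
# In the builder stage, before `RUN bun run build` (name illustrative).
ARG PUBLIC_API_BASE_URL
ENV PUBLIC_API_BASE_URL=$PUBLIC_API_BASE_URL
# Astro inlines PUBLIC_*-prefixed vars into the client bundle at build time.
```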


2. Keep the runtime stage boring and controllable


The second stage pivots to the AWS-provided Node.js 22 base image so Lambda patches, security baselines, and bootstrap expectations are handled for me:


```dockerfile
FROM public.ecr.aws/lambda/nodejs:22 AS base
WORKDIR /var/task
RUN npm install -g bun && bun --version
COPY package.json bun.lock ./
RUN bun install --production --frozen-lockfile
COPY --from=builder /var/task/dist ./dist
COPY lambda-handler.js ./index.js
# Bun's server binds according to these; see takeaway 3 below.
ENV PORT=8080 HOST=0.0.0.0
CMD ["index.handler"]
```


Key takeaways:


1. Install Bun globally in the runtime stage because the AWS base image ships without it. Verifying `bun --version` during the build catches a broken install at image-build time rather than at the first cold start.

2. Copy only `dist/` and the handler shim (`lambda-handler.js`) into the final image; everything else stays behind in the builder layer.

3. Set `PORT=8080` and `HOST=0.0.0.0` (the `ENV` line above) so internal health checks work even though Lambda traffic arrives on the function runtime socket rather than a true network interface.


3. Use a thin Node handler to babysit the Bun server


`lambda-handler.js` is the real hero. Instead of relying on a custom runtime, I spawn Bun once per execution environment and forward every Lambda event to it.


- `ensureServerRunning()` boots `bun dist/server/entry.mjs` exactly once per execution environment. It uses a simple TCP probe to wait until port 8080 accepts connections so cold starts never race incoming requests.

- `forwardRequest()` reconstructs the HTTP request: method, raw path, query string (including multi-value parameters), cookies, and headers. It rewrites `host`, `x-forwarded-for`, and `x-forwarded-proto` so Astro sees the original client context.

- Binary responses (images, fonts, PDFs) are detected via `content-type` and base64-encoded before returning to Lambda. Everything else is sent as UTF-8.


This shim gives me full control over the request/response lifecycle and makes debugging trivial because I can add logging or metrics without touching Astro’s generated server entry.
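For the curious, here is a condensed sketch of that shim. It assumes a Lambda Function URL / API Gateway payload v2 event shape (`rawPath`, `rawQueryString`, `cookies`, `isBase64Encoded`) and Node 22's built-in `fetch`; the helper names match the description above, but the bodies are illustrative rather than the verbatim production file:

```js
// lambda-handler.js -- condensed sketch, not the verbatim production file.
const { spawn } = require("node:child_process");
const net = require("node:net");

const PORT = Number(process.env.PORT || 8080);
let serverReady; // memoized promise: one Bun process per execution environment

// Boot `bun dist/server/entry.mjs` once, then poll with a TCP probe until
// the port accepts connections so cold starts never race incoming requests.
function ensureServerRunning() {
  if (!serverReady) {
    serverReady = new Promise((resolve, reject) => {
      const child = spawn("bun", ["dist/server/entry.mjs"], {
        stdio: "inherit",
        env: { ...process.env, HOST: "0.0.0.0", PORT: String(PORT) },
      });
      child.on("error", reject);
      const probe = () => {
        const socket = net.connect(PORT, "127.0.0.1", () => {
          socket.destroy();
          resolve();
        });
        socket.on("error", () => setTimeout(probe, 50)); // not up yet, retry
      };
      probe();
    });
  }
  return serverReady;
}

// Rebuild the HTTP request and replay it against the local Bun server.
async function forwardRequest(event) {
  const method = event.requestContext?.http?.method ?? "GET";
  const query = event.rawQueryString ? `?${event.rawQueryString}` : "";
  const headers = { ...event.headers };
  if (event.cookies?.length) headers.cookie = event.cookies.join("; ");
  // Preserve the original client context so Astro's SSR logic sees it.
  headers["x-forwarded-for"] = event.requestContext?.http?.sourceIp ?? "";
  headers["x-forwarded-proto"] = "https";

  const body =
    event.body == null
      ? undefined
      : event.isBase64Encoded
        ? Buffer.from(event.body, "base64")
        : event.body;

  const res = await fetch(`http://127.0.0.1:${PORT}${event.rawPath}${query}`, {
    method,
    headers,
    body,
    redirect: "manual", // surface Astro's redirects to the client instead of following them
  });

  // Detect binary payloads via content-type; base64-encode those for Lambda.
  const contentType = res.headers.get("content-type") ?? "";
  const isText = /^(text\/|application\/(json|javascript|xml))/.test(contentType);
  const buf = Buffer.from(await res.arrayBuffer());

  return {
    statusCode: res.status,
    headers: Object.fromEntries(res.headers.entries()),
    cookies: res.headers.getSetCookie(), // multi-value set-cookie support
    body: isText ? buf.toString("utf8") : buf.toString("base64"),
    isBase64Encoded: !isText,
  };
}

exports.handler = async (event) => {
  await ensureServerRunning();
  return forwardRequest(event);
};
```

Memoizing the boot promise (rather than a boolean flag) means concurrent first requests all await the same startup instead of racing to spawn Bun twice.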


4. The managed Lambda Adapter image bit me hard


I originally tried to shortcut all of this by inheriting from `public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1`:


```dockerfile
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1 AS lambda-adapter
```


On paper, the adapter should ingest Lambda events, run my web process, and relay the response. In practice it never propagated request headers to Bun. Every request arrived without `cookie`, `x-forwarded-*`, or even the original `host`, which broke Astro’s SSR logic (redirect loops, missing session info, and broken CSP generation).


What went wrong?


1. The adapter clamps headers down to an allowlist optimized for ALB/CloudFront. Bun expected raw headers, especially `cookie` and custom auth headers, and they were silently dropped.

2. Because the adapter owns the bootstrap layer, I could not patch its nginx/envoy configuration or even inject debug logs without forking the image.

3. The managed image also wraps the web server with an HTTP/1.1 proxy. When it rewrote `x-forwarded-for`, it overwrote my original client IP with the internal adapter address, causing all geo-aware logic to fail.


Rather than fight the black box, I switched back to the plain Node base image and kept the adapter concept inside `lambda-handler.js`, where I can unit-test header behavior. The moment I did that, cookies synced, CSP hashes stabilized, and my observability tools stopped showing anonymous traffic.


5. Deployment checklist


If you want to replicate this setup for your own Astro + Bun project:


1. Build locally with `bun run build` to confirm Astro emits `dist/server/entry.mjs`.

2. Run `docker build -f lambda.Dockerfile -t astro-bun-lambda .` and smoke-test the image with `docker run -p 9000:8080 astro-bun-lambda` (the full command sequence is sketched after this list).

3. Package and push to ECR, then create or update the Lambda function with `--package-type Image` pointing to that URI.

4. Configure `PORT`, `HOST`, `PUBLIC_API_BASE_URL`, and any other environment variables directly on the Lambda function so you can reuse the same image across stages.

5. After every schema or content change, run `bun run astro check` locally to catch mismatches before they surface as runtime errors.
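Roughly, steps 2 through 4 look like this end to end. The account ID, region, repo, and function names are placeholders; the local invoke URL is the standard Runtime Interface Emulator endpoint baked into AWS base images:

```bash
# Steps 1-2: build and smoke-test through the Runtime Interface Emulator.
bun run build
docker build -f lambda.Dockerfile -t astro-bun-lambda .
docker run --rm -d -p 9000:8080 --name astro-smoke astro-bun-lambda
curl -s -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d '{"rawPath":"/","requestContext":{"http":{"method":"GET"}}}'
docker rm -f astro-smoke

# Step 3: push to ECR and point the function at the new image.
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag astro-bun-lambda 123456789012.dkr.ecr.us-east-1.amazonaws.com/astro-bun-lambda:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/astro-bun-lambda:latest
aws lambda update-function-code --function-name astro-bun-site \
  --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/astro-bun-lambda:latest

# Step 4: keep runtime config on the function so one image serves every stage.
aws lambda update-function-configuration --function-name astro-bun-site \
  --environment "Variables={PORT=8080,HOST=0.0.0.0,PUBLIC_API_BASE_URL=https://api.example.com}"
```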


6. Takeaways for LinkedIn readers


- Bun belongs in Lambda as long as you own the bootstrap story. Docker’s multi-stage pattern keeps image sizes lean while still compiling Astro with Bun’s blazing toolchain.

- Treat your handler shim as an adapter you control. Even if AWS offers a managed layer, you want the ability to log, transform headers, and manage connection lifecycles yourself.

- If your web stack depends on precise headers (cookies, signed fetches, origin checks), avoid `public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1` until the project ships a fix for header forwarding. A transparent proxy is critical for observability and compliance, and the current adapter version gets in the way.


Feel free to grab `lambda.Dockerfile` and `lambda-handler.js` as-is, tweak the environment variables for your APIs, and share back what you learn. The more we prove that Bun plays nicely with Lambda, the sooner managed runtimes will catch up. Until then, owning the headers yourself is the safest path to production.