Google Pub/Sub with Bun: From Local Development to Production


January 25, 2026
13 min read

Why Bun + Pub/Sub?

Google Pub/Sub seems simple: create a topic, publish, subscribe, done.

In reality—especially when layering Bun, Docker, and multiple authentication methods—there are sharp edges that can burn a day or two of debugging.

This post is experience-driven. It covers what actually happens, not what the docs say should happen.

The Case for Bun + Pub/Sub

Bun works exceptionally well with the official Google Cloud Node.js SDKs:

  • No hacks or polyfills needed
  • Async/await support is clean
  • Performs well under load
  • Startup time is fast (important for serverless)

Use Bun + Pub/Sub when building:

  • Log pipelines
  • Background workers
  • Event-driven microservices
  • Internal tooling
  • Message brokers

If you’re doing any of this in GCP, Bun + Pub/Sub is a solid, pragmatic choice.


Part 1: Installation

Add the Dependency

bun add @google-cloud/pubsub

That’s it. No Bun-specific adapters. No polyfills. The official SDK just works.

Package Structure

{
  "dependencies": {
    "@google-cloud/pubsub": "^4.x.x"
  }
}

Version 4.x is stable and actively maintained. Use it.


Part 2: The Mental Model (Critical)

Before writing code, understand responsibility boundaries.

| Task | Your App? | Infra? |
| --- | --- | --- |
| Create topics | ❌ No | ✅ Yes |
| Create subscriptions | ❌ No | ✅ Yes |
| Publish messages | ✅ Yes | ❌ No |
| Consume messages | ✅ Yes | ❌ No |
| Discover resources at runtime | ❌ No | ✅ Yes |

Golden Rule:

Runtime services should only publish and consume, never create or discover infrastructure.

This single rule avoids 90% of Pub/Sub pain.

Why This Matters

// Anti-pattern
await pubsub.topic('logs').get({ autoCreate: true });

This requires:

  • pubsub.topics.get
  • pubsub.topics.create
  • Admin-level permissions

Your app shouldn’t need admin permissions.

// Correct pattern
const topic = pubsub.topic('logs');
// Publishing will fail clearly if topic doesn't exist
await topic.publishMessage({ json: { ... } });

If the topic doesn’t exist, publishing fails with a clear error. That’s intentional. Your infra team knows about it.

Note

The separation is a feature, not a limitation.

It forces your team to:

  • Provision infrastructure upfront
  • Separate concerns clearly
  • Avoid runtime permission surprises
  • Catch misconfiguration at deploy time, not 3am

Part 3: Basic Publisher (Bun)

Simple Publish

import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
});

const topic = pubsub.topic(process.env.PUBSUB_TOPIC_NAME!);

// Publish a message
await topic.publishMessage({
  json: { message: 'hello world', timestamp: Date.now() },
});

console.log('Message published');

If this fails, it will fail clearly—assuming authentication is set up correctly.
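Since a missing or empty environment variable is one of the most common causes of confusing failures, it's worth failing fast at startup. Here's a minimal sketch; the `requireEnv` helper is my own, not part of the SDK:

```typescript
// Fail fast at startup on missing configuration, instead of letting
// the Pub/Sub client surface a cryptic error at publish time.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: resolve config once, before constructing the client.
// const projectId = requireEnv('GCP_PROJECT_ID');
// const topicName = requireEnv('PUBSUB_TOPIC_NAME');
```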

Publish with Attributes

await topic.publishMessage({
  json: { order_id: '12345', amount: 99.99 },
  attributes: {
    priority: 'high',
    source: 'checkout',
    attempt: '1',
  },
});

Attributes are key-value pairs for filtering and routing downstream.
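Since attributes always arrive as strings on the subscriber side, routing on them is just a lookup. A small illustrative sketch (the queue names are hypothetical; this is application logic, not an SDK feature). Pub/Sub can also filter on attributes server-side via subscription filters, configured on the subscription itself:

```typescript
type Attributes = Record<string, string>;

// Route on the 'priority' attribute set by the publisher above.
// Attribute values are always strings, even numeric ones like 'attempt'.
function pickQueue(attributes: Attributes): 'fast-lane' | 'default' {
  return attributes.priority === 'high' ? 'fast-lane' : 'default';
}
```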

Batch Publishing

const messages = [];
for (let i = 0; i < 1000; i++) {
  messages.push({
    json: { event: 'user_login', userId: `user_${i}` },
  });
}

await Promise.all(
  messages.map(msg => topic.publishMessage(msg))
);

Or let the client batch automatically:

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
  batching: {
    maxMilliseconds: 100,
    maxBytes: 10 * 1024 * 1024,
    maxMessages: 100,
  },
});

The client waits up to 100ms, accumulates up to 100 messages or 10MB, then sends a batch.

Error Handling

try {
  await topic.publishMessage({ json: { data: 'important' } });
} catch (error) {
  console.error('Failed to publish:', error.message);
  // Implement backoff/retry logic
  // Or send to a dead-letter topic
}
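The client already retries transient gRPC errors internally, so application-level retries should be conservative. Here's a hedged sketch of the backoff wrapper hinted at above; the attempt count and delays are arbitrary defaults, not recommendations:

```typescript
// Generic retry with exponential backoff. `fn` is any async
// operation, e.g. () => topic.publishMessage({ json: payload }).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        // 100ms, 200ms, 400ms, ... for the default base delay
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}
```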

Part 4: Basic Subscriber (Bun)

Consume Messages

import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
});

const subscription = pubsub.subscription(
  process.env.PUBSUB_SUBSCRIPTION_NAME!
);

// Listen for messages
subscription.on('message', (message) => {
  console.log('Received message:', message.data.toString());
  console.log('Attributes:', message.attributes);

  // Process the message
  // ...

  // Acknowledge after processing
  message.ack();
});

// Error handling
subscription.on('error', (error) => {
  console.error('Subscription error:', error);
});

Acknowledge vs Nack

subscription.on('message', (message) => {
  try {
    // Process message
    const data = JSON.parse(message.data.toString());
    processData(data);

    // Success: acknowledge
    message.ack();
  } catch (error) {
    // Failure: nack (message returns to queue)
    message.nack();
  }
});

If you nack(), the message returns to Pub/Sub and will be retried (delivered again).
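One caveat worth a sketch: a message that can never be processed (e.g. malformed JSON) will loop forever if you always nack it, unless a dead-letter policy catches it. Distinguishing permanent from transient failures is application logic; the classification rule below is my own illustration, not an SDK feature:

```typescript
// A JSON parse error will never succeed on redelivery, so nacking
// it just redelivers a poison message forever. Ack it and route the
// payload elsewhere (log it, or publish to a dead-letter topic).
function isPermanentFailure(error: unknown): boolean {
  return error instanceof SyntaxError; // JSON.parse throws SyntaxError
}

function decide(error: unknown): 'ack' | 'nack' {
  return isPermanentFailure(error) ? 'ack' : 'nack';
}
```

In the handler, call `message.ack()` or `message.nack()` based on this decision. Pub/Sub dead-letter topics, configured on the subscription, are the managed way to cap redeliveries.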

Graceful Shutdown

subscription.on('message', (message) => {
  // Process...
  message.ack();
});

// When shutting down
process.on('SIGTERM', async () => {
  console.log('Shutting down...');
  await subscription.close();
  await pubsub.close();
});

Part 5: Authentication — The Three Methods

This is where most pain lives. Understand all three.

Method 1: Application Default Credentials (ADC)

ADC is how Google Cloud wants you to authenticate. No secrets in code.

Local Setup:

gcloud auth application-default login
gcloud config set project YOUR_PROJECT_ID

This creates:

~/.config/gcloud/application_default_credentials.json

Your Bun app picks it up automatically:

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
  // ADC is used automatically, no keyFilename needed
});

Advantages:

  • No secrets in code
  • Works locally and in production
  • Easy credential rotation
  • Same code everywhere

Disadvantages:

  • Must configure locally (one-time)
  • Docker requires special handling (more on that later)

Method 2: Service Account JSON File

Sometimes unavoidable (CI systems, legacy setups).

const pubsub = new PubSub({
  projectId: 'my-project',
  keyFilename: '/path/to/service-account.json',
});

Advantages:

  • Works in constrained environments
  • Explicit about which SA is used

Disadvantages:

  • Secrets on disk (rotation pain)
  • Easy to commit to git
  • Docker complexity increases
  • Production risk (stale keys)

When to use: Only if ADC is impossible.

Method 3: ADC via Environment Variable

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/sa.json

Still ADC, but file-backed.

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
  // Uses GOOGLE_APPLICATION_CREDENTIALS automatically
});

Good for: CI/CD, containers, when file path is dynamic.

Note

Recommendation:

  1. Locally: Use ADC with gcloud auth application-default login
  2. Docker: Mount ADC or use service account
  3. Production (Cloud Run/GKE): ADC is automatic, no action needed
  4. CI/CD: Use GOOGLE_APPLICATION_CREDENTIALS with a service account key

Avoid committing service account keys to git. Use secrets management (GitHub Secrets, GitLab CI Variables, etc.).
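For intuition, ADC's documented lookup order can be mirrored in a few lines. This is a simplified sketch of the order only; the real logic lives in google-auth-library and covers more cases:

```typescript
import { existsSync } from 'node:fs';
import { join } from 'node:path';

// Simplified mirror of ADC's documented lookup order:
// 1. the file named by GOOGLE_APPLICATION_CREDENTIALS
// 2. gcloud's well-known file under $HOME
// 3. the metadata server (when running on GCP)
function adcSource(env: Record<string, string | undefined>): string {
  if (env.GOOGLE_APPLICATION_CREDENTIALS) {
    return `key file: ${env.GOOGLE_APPLICATION_CREDENTIALS}`;
  }
  const wellKnown = join(
    env.HOME ?? '',
    '.config/gcloud/application_default_credentials.json',
  );
  if (env.HOME && existsSync(wellKnown)) {
    return `gcloud ADC file: ${wellKnown}`;
  }
  return 'metadata server (or no credentials found)';
}
```

This also explains the Docker pain later in this post: if HOME inside the container doesn't point at the mounted credentials, step 2 finds nothing.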


Part 6: IAM Permissions (Minimum)

Over-permissioning is the fastest way to hide bugs.

For Publishers Only

roles/pubsub.publisher

That’s it. Nothing else.

For Subscribers Only

roles/pubsub.subscriber

For Both

- roles/pubsub.publisher
- roles/pubsub.subscriber

What You DO NOT Need

roles/pubsub.admin # Way too much
roles/editor # Way too much
roles/owner # Way too much
pubsub.topics.get # Unless discovering at runtime
pubsub.topics.create # Unless creating at runtime

Assign only what’s needed. If your app crashes due to missing permissions, that’s good—it means your config is under-scoped, which is safer than over-scoped.


Part 7: The Infamous Error: “undefined undefined: undefined”

You’ll see this error eventually:

Error: undefined undefined: undefined

It’s cryptic. It’s useless. But it has a meaning.

What It Really Means

One of these:

  1. Missing IAM permission (often pubsub.topics.get)
  2. Calling topic.exists() or topic.get() with insufficient rights
  3. Empty or undefined GCP_PROJECT_ID
  4. Empty or undefined topic name
  5. Topic doesn’t exist (and you’re trying to auto-create)

Why the Error Is So Bad

The Node.js gRPC client sometimes loses metadata during retries and surfaces this generic message. The actual error is swallowed.

How to Debug It

// Add logging
console.log('Project ID:', process.env.GCP_PROJECT_ID);
console.log('Topic name:', process.env.PUBSUB_TOPIC_NAME);

try {
  const topic = pubsub.topic(process.env.PUBSUB_TOPIC_NAME!);
  // Don't call exists() or get() — just publish
  await topic.publishMessage({ json: { test: true } });
  console.log('Success');
} catch (error) {
  console.error('Full error:', JSON.stringify(error, null, 2));
  console.error('Message:', error.message);
  console.error('Code:', error.code);
}

The error codes matter:

  • PERMISSION_DENIED → IAM issue
  • NOT_FOUND → Topic doesn’t exist
  • UNAUTHENTICATED → Auth not set up
  • INVALID_ARGUMENT → Bad project ID or topic name
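These names correspond to standard gRPC status codes, which the client attaches to errors as a numeric `error.code`. A small lookup helper can turn them into actionable hints; the hint wording is mine, but the numeric values are the standard gRPC codes:

```typescript
// Standard gRPC status codes relevant to Pub/Sub debugging.
const GRPC_HINTS: Record<number, string> = {
  3: 'INVALID_ARGUMENT: check GCP_PROJECT_ID and the topic/subscription name',
  5: 'NOT_FOUND: the topic or subscription does not exist',
  7: 'PERMISSION_DENIED: the identity lacks the required IAM role',
  16: 'UNAUTHENTICATED: ADC not set up or credentials expired',
};

function diagnose(code: number | undefined): string {
  if (code !== undefined && GRPC_HINTS[code]) return GRPC_HINTS[code];
  return 'Unrecognized code: log the full error object';
}
```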

Part 8: Docker — Where Things Break

Docker introduces two critical differences.

Problem 1: ADC Is Not Automatically Available

Your host machine having ADC does NOT mean your container does.

This fails:

docker run my-app

Your app can’t find the credentials.

Correct approach:

Mount the ADC credentials:

services:
  app:
    build: .
    volumes:
      - ~/.config/gcloud:/home/app/.config/gcloud:ro
    environment:
      HOME: /home/app
      GCP_PROJECT_ID: my-project
      PUBSUB_TOPIC_NAME: logs

Key points:

  • Mount to /home/app/.config/gcloud (the app user’s home)
  • Set HOME=/home/app so the client finds it
  • Use :ro (read-only) for security

Problem 2: Non-root Users Matter

If your Dockerfile has:

USER app

Then ADC must live under /home/app/.config/gcloud, not /root/.config/gcloud.

If you mount to /root and the container runs as user app, the client won’t find it—and will silently fail.

Complete Docker Compose Example (Correct)

version: '3.8'
services:
  pubsub-worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      # Mount your local ADC
      - ~/.config/gcloud:/home/app/.config/gcloud:ro
    environment:
      # Critical
      HOME: /home/app
      # Your config
      GCP_PROJECT_ID: my-gcp-project
      PUBSUB_TOPIC_NAME: user-events
      PUBSUB_SUBSCRIPTION_NAME: user-events-worker
    # Optional: ensure container stops gracefully
    stop_signal: SIGTERM
    stop_grace_period: 10s

Complete Dockerfile (Correct)

FROM oven/bun:1-alpine
WORKDIR /app
# Copy code
COPY . .
# Install dependencies
RUN bun install --frozen-lockfile
# Create non-root user
RUN addgroup -S app && adduser -S app -G app
USER app
# Run
CMD ["bun", "run", "src/worker.ts"]

Now when you run:

docker compose up

Your container will:

  1. Mount your local ADC to /home/app/.config/gcloud
  2. Set HOME=/home/app so the client finds it
  3. Run as user app (non-root)
  4. Access Pub/Sub with your credentials

Part 9: The Most Common Mistake

This pattern is everywhere, and it’s wrong:

Don’t do this:

// Auto-creating topics at runtime
const [topic] = await pubsub.topic('logs').get({
  autoCreate: true,
});

await topic.publishMessage({ json: { message: 'hello' } });

Why it’s wrong:

  1. Requires pubsub.topics.create permission (too much)
  2. Hides infrastructure problems until production
  3. Creates unpredictable topics in other environments
  4. Your app shouldn’t know how to create topics

Do this instead:

const topic = pubsub.topic('logs');
// Just publish. If topic doesn't exist, publishing fails clearly.
await topic.publishMessage({ json: { message: 'hello' } });

If the topic doesn’t exist:

  • Publishing fails with NOT_FOUND
  • Your infra team gets alerted
  • You fix the infrastructure
  • Deploy again

This is the correct pattern.


Part 10: Docker Debugging Checklist

When Pub/Sub fails inside Docker:

  1. Does a minimal publish script work on your host? (e.g. bun run test-pubsub.ts, a few lines that just publish to the topic)

    • If no, your local setup is wrong. Fix it before Docker.
  2. Is GCP_PROJECT_ID actually defined inside the container?

    docker compose exec pubsub-worker env | grep GCP_PROJECT_ID
  3. Is the topic name exactly correct?

    # Check if topic exists in GCP
    gcloud pubsub topics list
  4. Are you calling exists(), get(), or autoCreate?

    • Don’t do this. Just use the topic object directly.
  5. Does the runtime identity (service account) have the right IAM role?

    gcloud projects get-iam-policy YOUR_PROJECT \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:*"
  6. Is ADC mounted to the correct HOME path?

    docker compose exec pubsub-worker ls -la ~/.config/gcloud/

If #1 works on your host and the answer to #4 is yes, the runtime exists()/get() call is usually the bug.


Part 11: Production (Cloud Run / GKE)

In managed GCP environments (Cloud Run, GKE, Compute Engine), everything simplifies.

Cloud Run

FROM oven/bun:1-alpine
WORKDIR /app
COPY . .
RUN bun install --frozen-lockfile
CMD ["bun", "run", "src/worker.ts"]

Deploy:

gcloud run deploy my-worker \
  --source . \
  --region YOUR_REGION

There's no runtime flag to set: Cloud Run runs the container you built, so the Bun base image is the runtime.

Grant the Pub/Sub role to the service's runtime service account (at the project level):

gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-service-account@my-project.iam.gserviceaccount.com \
  --role=roles/pubsub.publisher

Your Bun code doesn’t change:

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
  // ADC is automatic in Cloud Run
});

Why it works:

  • Cloud Run provides a service account automatically
  • ADC works with that service account
  • No secrets, no mounting, no complexity

GKE

Create a Google service account for Workload Identity and grant it the Pub/Sub role at the project level:

gcloud iam service-accounts create my-bun-app

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-bun-app@my-project.iam.gserviceaccount.com" \
  --role=roles/pubsub.publisher

Link to your Kubernetes service account:

gcloud iam service-accounts add-iam-policy-binding \
  my-bun-app@my-project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-bun-app]"

Deploy:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-bun-app
  annotations:
    iam.gke.io/gcp-service-account: my-bun-app@my-project.iam.gserviceaccount.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bun-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-bun-app
  template:
    metadata:
      labels:
        app: my-bun-app
    spec:
      serviceAccountName: my-bun-app
      containers:
        - name: worker
          image: my-app:latest
          env:
            - name: GCP_PROJECT_ID
              value: my-project
            - name: PUBSUB_TOPIC_NAME
              value: user-events

Again, your Bun code doesn’t change. ADC handles it.


Part 12: A Complete Example (End-to-End)

Publisher Service

src/publisher.ts
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
});

const topic = pubsub.topic(process.env.PUBSUB_TOPIC_NAME!);

// Simulate events
setInterval(async () => {
  const event = {
    userId: `user_${Math.floor(Math.random() * 1000)}`,
    action: ['login', 'logout', 'purchase'][Math.floor(Math.random() * 3)],
    timestamp: Date.now(),
  };

  try {
    await topic.publishMessage({
      json: event,
      attributes: {
        source: 'user-service',
        version: '1.0',
      },
    });
    console.log(`Published: ${event.action}`);
  } catch (error) {
    console.error('Failed to publish:', error.message);
  }
}, 5000);

Subscriber Service

src/subscriber.ts
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub({
  projectId: process.env.GCP_PROJECT_ID,
});

const subscription = pubsub.subscription(
  process.env.PUBSUB_SUBSCRIPTION_NAME!
);

subscription.on('message', (message) => {
  try {
    const event = JSON.parse(message.data.toString());
    console.log(`Received: ${event.action} from ${event.userId}`);

    // Process the event (e.g., update database)
    // ...

    message.ack();
  } catch (error) {
    console.error('Failed to process message:', error.message);
    message.nack();
  }
});

subscription.on('error', (error) => {
  console.error('Subscription error:', error);
});

process.on('SIGTERM', async () => {
  console.log('Shutting down...');
  await subscription.close();
  await pubsub.close();
});

Docker Compose

version: '3.8'
services:
  publisher:
    build: .
    volumes:
      - ~/.config/gcloud:/home/app/.config/gcloud:ro
    environment:
      HOME: /home/app
      GCP_PROJECT_ID: my-project
      PUBSUB_TOPIC_NAME: user-events
    command: bun run src/publisher.ts
  subscriber:
    build: .
    volumes:
      - ~/.config/gcloud:/home/app/.config/gcloud:ro
    environment:
      HOME: /home/app
      GCP_PROJECT_ID: my-project
      PUBSUB_SUBSCRIPTION_NAME: user-events-worker
    command: bun run src/subscriber.ts

Run locally:

docker compose up

Part 13: Troubleshooting Guide

| Error | Cause | Fix |
| --- | --- | --- |
| undefined undefined: undefined | Missing IAM, bad project ID, or topic doesn’t exist | Check IAM roles and environment variables |
| PERMISSION_DENIED | Service account lacks permissions | Grant roles/pubsub.publisher or roles/pubsub.subscriber |
| NOT_FOUND | Topic or subscription doesn’t exist | Create infrastructure via gcloud or console |
| UNAUTHENTICATED | ADC not found or credentials expired | Run gcloud auth application-default login |
| The specified credentials were not found | Wrong HOME path in Docker | Mount ADC and set HOME correctly |
| Operation timed out | Network or quota issue | Check VPC, firewall, quotas |

Conclusion: Keep It Simple

Pub/Sub itself is solid. Most pain comes from:

  1. Mixing responsibilities — Apps shouldn’t create infrastructure
  2. Over-permissioning — Give only what’s needed
  3. Misunderstanding ADC — Especially in Docker
  4. Calling exists() or get() at runtime — Just use the topic/subscription object

Once you embrace these patterns, Pub/Sub becomes boring—in the best way.

Your checklist:

  • Only publish and consume at runtime
  • Provision topics/subscriptions upfront
  • Use ADC locally
  • Mount ADC in Docker
  • ADC is automatic in Cloud Run/GKE
  • Assign minimal IAM roles
  • Log environment variables when debugging

If your Pub/Sub setup feels fragile, it’s usually a design issue, not a tooling issue.

Happy shipping 🚀

