We Rebuilt Our Dev Environment on GCP in a Weekend

Our internal dev environment had been slowly falling behind.

It wasn’t broken, but it was clunky. Spinning up new services took longer than it should have. Local environments weren’t consistent. Some apps ran in Docker, others used hardcoded scripts. Logs were scattered. Secrets management relied too much on tribal knowledge.

None of this stopped us from building good software. But the cracks were starting to show. And the team was spending more time debugging infrastructure than delivering features.

We had talked about migrating to a more stable, cloud-based setup before. Then one Friday afternoon, someone said, "What if we just did it this weekend?"

So we did.

The Problem

Our previous environment had grown organically.

What started as a few Docker Compose files and shared Postman collections had turned into an ecosystem of scripts, local overrides, and unclear conventions.

Key issues included:

  • Devs running slightly different versions of services and dependencies

  • Long feedback loops when onboarding new engineers

  • Local-only services that didn’t reflect production behavior

  • Secrets and credentials shared too casually

It worked when the team was small and colocated. But now, with a distributed setup and multiple concurrent projects, the environment itself had become a blocker.

The Task

Build a cloud-hosted dev environment that was:

  • Consistent across machines

  • Easy to spin up or tear down

  • Secure by default

  • Close enough to production for realistic testing

And do it without breaking the pace of active projects.

The Actions We Took

1. Standardized on GCP with Terraform modules
We chose GCP because it matched the production infra of most of our active client work. The familiarity helped.

Using Terraform, we created reusable modules for:

  • VPC setup and service networking

  • Cloud Run for stateless services

  • Cloud SQL for managed Postgres

  • Secret Manager for configuration and tokens

  • Pub/Sub for async event handling

This gave us infrastructure that was codified, portable, and versioned.
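To give a flavor of what a module call looked like (the paths, variables, and the "api" service name here are illustrative, not our actual repo layout), an environment was assembled from the shared modules roughly like this:

```hcl
# Hypothetical sketch — module sources and variable names are examples only.
module "network" {
  source     = "./modules/vpc"
  project_id = var.project_id
}

module "api" {
  source       = "./modules/cloud-run"
  project_id   = var.project_id
  service_name = "api"
  image        = "gcr.io/${var.project_id}/api:latest"
}
```

Because every environment was built from the same modules, configuration drift between setups mostly disappeared on its own.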

2. Moved to per-branch sandbox environments
Instead of maintaining one shared staging environment, we built ephemeral dev environments tied to Git branches.

When a PR was opened, GitHub Actions spun up a new Cloud Run service, seeded a temporary database, and granted access to the relevant secrets through scoped GCP IAM roles. This meant:

  • Engineers could test changes in isolation

  • PMs and designers could view work without pulling code

  • Bugs were easier to trace to specific changes

Environments auto-expired after 72 hours, keeping costs down.
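Most of the deploy step is gcloud plumbing, but the naming convention is worth sketching. Cloud Run service names must be lowercase alphanumerics and hyphens, so the branch name gets sanitized first. A minimal sketch (the branch value and the `pr-env-` prefix are stand-ins, not our exact convention):

```shell
# Sanitize a Git branch name into a valid Cloud Run service name.
# The branch here is a made-up example.
BRANCH="feature/Login-Form"
SANDBOX=$(echo "$BRANCH" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9' '-' | cut -c1-40)
SANDBOX=${SANDBOX%-}   # drop the trailing hyphen left over from the newline
echo "pr-env-$SANDBOX"
```

In a setup like this, a scheduled cleanup job can list services by that prefix and delete anything past the 72-hour TTL.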

3. Created a CLI tool for local workflows
To make it easy for the team, we wrote a simple CLI that wrapped gcloud commands:

dev init --project x  
dev logs --service y  
dev deploy --branch z

It wasn’t fancy. But it was predictable. And it worked on every machine the same way.
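A stripped-down sketch of the wrapper shows the idea — each subcommand maps onto one gcloud invocation. The real tool had more flags and validation; the `DRY_RUN` switch is added here purely so the mapping is visible without touching a project:

```shell
# Simplified sketch of the dev CLI. DRY_RUN=1 echoes the underlying
# gcloud command instead of running it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

dev() {
  cmd=$1; shift
  case "$cmd" in
    # dev init --project x : point gcloud at the sandbox project
    init)   run gcloud config set project "$2" ;;
    # dev logs --service y : pull recent logs for one service
    logs)   run gcloud logging read "resource.labels.service_name=$2" --limit 50 ;;
    # dev deploy --branch z : deploy the branch's sandbox service
    deploy) run gcloud run deploy "pr-env-$2" --source . ;;
    *)      echo "usage: dev {init|logs|deploy}" >&2; return 1 ;;
  esac
}

dev logs --service api
```

Wrapping gcloud rather than replacing it kept the tool thin: anything the CLI didn't cover, engineers could still do with gcloud directly.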

4. Tuned cost controls and observability
We used budget alerts, usage quotas, and lightweight logging to make sure the weekend setup didn’t balloon into a money pit.

Cloud Logging piped into our internal dashboard so we could monitor errors, performance, and deployment issues without jumping into GCP manually.
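GCP's native budget alerts did the real enforcement, but the threshold logic behind them is simple enough to show. A toy version (the budget and spend figures are made up for illustration):

```shell
# Toy guardrail: flag when month-to-date spend crosses 90% of the
# sandbox budget. Real alerting came from GCP budget notifications.
BUDGET_USD=200
SPEND_USD=185
PCT=$(( SPEND_USD * 100 / BUDGET_USD ))
if [ "$PCT" -ge 90 ]; then
  echo "ALERT: sandbox spend at ${PCT}% of budget"
fi
```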

The Results

  • Engineers were able to spin up environments in under 2 minutes

  • Onboarding time for new devs dropped significantly

  • Bugs were caught earlier, closer to where they were introduced

  • Collaboration improved, especially across design and QA

  • Secrets handling became more secure and auditable

We didn’t eliminate every edge case. But we built a foundation the team could trust.

The most surprising part? We kept using it.

Takeaway

Dev environments don’t have to be painful.

If your team is constantly debugging inconsistent setups, chasing environment bugs, or sharing credentials in Slack, it’s time to step back and rethink the system.

You don’t need a full platform team. You just need:

  • Infrastructure as code

  • Consistent branch-based deploys

  • Simple tools for common tasks

  • Enough observability to catch what matters

We didn’t set out to build perfection. Just something better. And we did it in a weekend.
