As infrastructure engineers, we are often seduced by the elegance of our own diagrams. We design for “Perfect World” metrics: global scalability, maximum reuse, and zero waste. But professional maturity isn’t measured by how closely we follow a textbook—it’s measured by how quickly we recognize when we are fighting the platform’s gravity.
The Vision: The Trap of Logical Reuse
The design began with a cost-conscious premise: Why build a new front door when we already have a magnificent one?
- The Goal: Secure a backend service behind HTTPS.
- The Strategy: Leverage an existing Load Balancer to consolidate costs.
- The Constraint: Maintain strict isolation with no public IP exposure.
On paper, it was an industry-standard win: Load Balancer → Reverse Proxy → Application.

The Friction: The Hidden Physics of the Cloud
Standard designs often ignore the specific “physics” of the cloud provider. In an enterprise environment with Shared VPCs and multi-project structures, we hit the unwritten rules:
- Administrative Inertia: The “Classic” Load Balancer—a reliable workhorse—turned out to be a rigid inhabitant of its host project. It didn’t “see” cross-project backends with the ease the marketing documentation suggested.
- Contextual Blindness: Routing through a Shared VPC isn’t just a network path; it’s a gauntlet of IAM handshakes and protocol translations that don’t appear on an architectural drawing.
The Health Check Trap
We often treat health checks as a simple “ping,” but in a complex cloud environment, a health check is a sophisticated negotiation.
Our application was, by every internal metric, healthy: it responded to local curl requests, and its logs were clean. Yet the Load Balancer saw only a graveyard of "unhealthy" nodes.
The Lesson: A Load Balancer doesn’t care if your code is running; it cares if your code is running within its specific, narrow expectations:
- Does the Host header align with the probe?
- Is the SNI (Server Name Indication) breaking the handshake?
- Is the probe originating from an IP range the Shared VPC actually trusts?
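That last condition is where many health checks quietly die: the probe never even reaches the application because a firewall rule doesn't trust the probe's source range. As a minimal sketch (assuming Google Cloud, whose documented probe source ranges are 35.191.0.0/16 and 130.211.0.0/22; the host name `app.internal` is purely illustrative), the negotiation can be modeled like this:

```python
import ipaddress

# Documented source ranges for Google Cloud health-check probes.
# Firewall rules must allow ingress from these, or every node reads "unhealthy".
GCP_HEALTH_CHECK_RANGES = [
    ipaddress.ip_network("35.191.0.0/16"),
    ipaddress.ip_network("130.211.0.0/22"),
]

def is_health_check_probe(source_ip: str) -> bool:
    """Return True if the packet's source falls inside a known probe range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in GCP_HEALTH_CHECK_RANGES)

def probe_will_pass(source_ip: str, host_header: str, expected_host: str) -> bool:
    """A probe only succeeds when BOTH the source range and the Host header
    match the backend's narrow expectations - a locally healthy app fails
    if either condition is off."""
    return is_health_check_probe(source_ip) and host_header == expected_host
```

The point of the sketch is the conjunction: an application that answers perfectly on localhost still fails the probe if any single condition in the chain is wrong.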
The Tipping Point: Engineering vs. Litigation
As we piled on workarounds—Bridge Services, Proxy VMs, and Network Endpoint Groups—the architecture began to resemble a Rube Goldberg machine. Every workaround introduced a new failure domain.
We realized we were no longer engineering a solution; we were litigating with the platform. If your architecture requires ten exceptions to function, you have chosen the wrong architecture.
The Simple Truth: Maturity over Dogma
We eventually asked the most painful question in engineering: Does this actually need a Load Balancer?
Load balancers are not just traffic managers; they are protocol enforcers. They introduce strict semantics that a single-node or protocol-sensitive service might not actually require.
Simplicity is not a compromise. Sometimes, the most professional architecture is the one that looks “lesser” on paper but performs better in reality:
- A direct, hardened HTTPS endpoint.
- Strict, identity-aware firewall rules.
- Properly managed TLS at the edge.
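"Properly managed TLS at the edge" doesn't require a managed product; it can be as small as a conservative server-side TLS configuration. Here is a minimal sketch using Python's standard `ssl` module (the cipher string and certificate paths are illustrative assumptions, not prescriptions):

```python
import ssl

def hardened_context(certfile: str = "", keyfile: str = "") -> ssl.SSLContext:
    """Build a server-side TLS context with conservative defaults:
    TLS 1.2 as the floor and forward-secret AEAD cipher suites only."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # forward secrecy, AEAD only
    if certfile:
        ctx.load_cert_chain(certfile, keyfile or None)
    return ctx
```

Paired with firewall rules that admit only known source ranges, this is the "lesser-looking" architecture: one process, one certificate, no intermediate hops to debug.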
Final Thoughts
Good architecture isn’t about strictly avoiding public IPs or centralizing every ingress point. It’s about matching the complexity of the solution to the reality of the platform. When the cost of integration outweighs the benefit of the feature, the most mature thing an engineer can do is step back.
Design for the cloud you have, not the cloud you wish you had.