GCP Native Networking and Security Constraints

GCP Global VPC as Hub for Transit Routing

The GCP Global VPC is a great and genuinely powerful concept, but it is not a good design choice for enterprise networks. That is a loaded statement, so first let us understand what a GCP Global VPC is.

What is Global VPC?

Imagine you have a big basket (a VPC) that spans the planet, and you put all your eggs (workloads) in that basket. These different colored eggs (workloads in different subnets, in different regions) can magically talk to each other across the planet, unless you manually create firewall rules to stop them. It is indeed powerful, yet at the same time scary.

GCP's Global VPC is different from other clouds such as AWS, where all VPCs are regional. In GCP, every VPC is global in nature, and there is no knob to turn this behavior on or off.

Huge Failure Domain or Blast Radius

A Global VPC has a huge blast radius, or failure domain. Lateral movement by bad actors within the VPC becomes a huge issue: if a VM is compromised, it can potentially harm other VMs as well. You need more VPCs to contain the blast radius. From a routing perspective, a VM in Singapore can talk to a VM in the Oregon region without any restriction. It is true that you can create GCP L4 firewall rules to block the traffic, but they are painful to manage, and you still need an NGFW inserted in front of your important production VMs, Kubernetes clusters, and so on.
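As a sketch of what that firewall management looks like, the rule below denies ingress from an Oregon subnet range to tagged Singapore workloads inside one global VPC; the network name, tag, and CIDR are placeholders, not from any real deployment. You would need many such rules to carve up a global VPC.

```shell
# Hypothetical example: block an Oregon subnet (10.10.0.0/16) from reaching
# VMs tagged "sg-workloads" inside a single global VPC named "global-vpc".
# All names and ranges are placeholders.
gcloud compute firewall-rules create deny-oregon-to-singapore \
    --network=global-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.10.0.0/16 \
    --target-tags=sg-workloads \
    --priority=900
```

Note that every new subnet or workload class multiplies rules like this, which is the management pain described above.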

Service Insertion – Not Possible

In a Global VPC it is not possible to insert third-party services, such as a Next-Generation Firewall or load balancer, into traffic flows. This is because your source and destination VMs now sit in the same giant global VPC, and if you try to steer their traffic through a service VM in the same or a different VPC, you cannot: the native VPC routing takes precedence and simply forwards the traffic directly across the GCP fabric.

Air Gapping, DMZ or Network Segmentation – Not Possible

The same technical reason mentioned above applies here too. With a Global VPC it is not possible to create a DMZ or network segments inside the VPC itself. You need more VPCs in order to create segments or air gaps between environments such as Prod, Dev, and SRE.

Compliance and Governance

This is a major headache with Global VPC for large enterprises. You can alleviate some of it by using the GCP Shared VPC model, but what I have seen is that customers again use a single Global VPC as the Shared VPC and treat it as a Transit/Hub replacement.

What are the Best Practices of creating GCP VPCs?

GCP recommends a balanced approach for enterprises. For a small company with a few workloads, a single Global VPC might be acceptable, but for enterprises it is not a good design choice.

Single Global VPC is Not a Good Design for Enterprises

GCP itself recommends creating more VPCs for network security, scale, cost optimization, and other purposes:


Factors that might lead you to create additional VPC networks include scale, network security, financial considerations, operational requirements, and identity and access management (IAM).

Create Regional VPC

When you create a GCP VPC, it is global by default, and there is no option to turn this off. So how would you create a regional VPC?

When you create the VPC, make sure its subnets stay within the same region. For example, for a VPC meant to serve the US West region, make sure you are not adding subnets from the Singapore region.
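As a hedged sketch of this approach (the network name, subnet name, and CIDR are made up), a custom-mode VPC whose only subnets live in us-west1 behaves as a de facto regional VPC:

```shell
# Create a custom-mode VPC (no auto-created subnets), then add subnets only
# in us-west1 so the VPC stays effectively regional.
# Names and ranges are placeholders.
gcloud compute networks create usw1-vpc --subnet-mode=custom

gcloud compute networks subnets create usw1-app \
    --network=usw1-vpc \
    --region=us-west1 \
    --range=10.20.1.0/24
```

The key is `--subnet-mode=custom`: auto mode would create a subnet in every region, defeating the purpose.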

Avoid VPC Sprawl

VPC sprawl is not a good design practice either. We have seen enterprise customers allocate, for example, one VPC per account in AWS. This creates management, compliance, and operational overhead, and it can lead to an expensive design due to cross-VPC and cross-region charges.

At the same time, it is not a good design to use one single giant global VPC for all types of workloads across all lines of business and teams. Avoid VPC sprawl and strike a balance.

Use Multiple VPCs for Segmentation and Air Gapping

The best practice is to use additional VPCs for proper segmentation, security, and air gapping. GCP also recommends a hub-and-spoke model, similar to what you would see in other clouds:
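A minimal hub-and-spoke sketch using native VPC peering might look like the following (VPC names are hypothetical). One caveat worth knowing: native GCP peering is non-transitive, so spoke-to-spoke traffic does not flow through the hub on its own, which is why transit hubs usually need something more than plain peering.

```shell
# Peer a spoke VPC with a hub VPC. The peering must be created from both
# sides before it becomes ACTIVE. Network names are placeholders.
gcloud compute networks peerings create spoke1-to-hub \
    --network=spoke1-vpc \
    --peer-network=hub-vpc

gcloud compute networks peerings create hub-to-spoke1 \
    --network=hub-vpc \
    --peer-network=spoke1-vpc
```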


Some large enterprise deployments involve autonomous teams that each require full control over their respective VPC networks. You can meet this requirement by creating a VPC network for each business unit, with shared services in a common VPC network (for example, analytic tools, CI/CD pipeline and build machines, DNS/Directory services).

Encryption is Best Effort inside the VPC

GCP does not guarantee encryption for all traffic inside a VPC. Within Google's physical security boundaries, VM-to-VM traffic is authenticated but not necessarily encrypted by default. This means that with one single global VPC, workloads could be talking to each other unencrypted.

This is very risky. Just imagine a dev workload sniffing the unencrypted data. Your application traffic should, of course, be TLS encrypted, but if a breach happens, who is responsible? It is the NetSec architect. When it comes to security, more is better with a Zero-Trust mindset.

Source: https://cloud.google.com/security/encryption-in-transit

Data in transit within these physical boundaries is generally authenticated, but may not be encrypted by default – you can choose which additional security measures to apply based on your threat model.

Aviatrix follows the Zero-Trust model and a defence-in-depth approach, and this applies to encryption as well. Hence Aviatrix cloud networks are encrypted by default.

VPC Peering Quota

The maximum peering limit is 25 peering connections per VPC network. Aviatrix encrypted peering is a better choice.


Routes Per Project

Be mindful of the GCP route limit, which is 500 routes per project by default. It can be increased, although GCP has not published a maximum; my request for 1,000 routes per project was approved within minutes. In any case, be careful about these limits.
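To keep an eye on that quota, something like the following (the project ID is a placeholder) shows the current ROUTES limit and usage for a project:

```shell
# Show the ROUTES quota (limit and current usage) for a project.
# Replace my-project with your own project ID.
gcloud compute project-info describe \
    --project=my-project \
    --flatten="quotas[]" \
    --filter="quotas.metric=ROUTES" \
    --format="table(quotas.metric,quotas.limit,quotas.usage)"
```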

GCP HA VPN IPSec Limitation

Google Cloud does not support the creation of tunnel connections between an HA VPN gateway and any non-HA VPN gateway hosted in Google Cloud. This restriction includes Classic VPN gateways and third-party VPN gateways that are running on Compute Engine VMs.

This limitation is documented in the GCP HA VPN documentation:

You cannot provide an interface with an IP address owned by Google Cloud.
  You can only create tunnels from an HA gateway to an HA gateway
  or create tunnels from an HA gateway to an ExternalVpnGateway.
Error: Error creating ExternalVpnGateway: googleapi: Error 400: Invalid value for field 'resource.interfaces[0]': '{  "id": 0,  "ipAddress": ""}'. You cannot provide interface with IP address owned by Google Cloud., invalid
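In other words, an HA VPN gateway can only terminate tunnels toward another HA VPN gateway or toward an ExternalVpnGateway resource whose IP address is not owned by Google Cloud. As a hedged sketch (all resource names, the IP, and the secret variable are placeholders), pairing an HA VPN gateway with a non-Google peer looks like this:

```shell
# Register the third-party peer (a non-Google-owned public IP) as an
# ExternalVpnGateway, then create an HA VPN tunnel toward it.
# This is exactly what fails with the error above if the IP belongs
# to Google Cloud (e.g., a VPN appliance VM running on Compute Engine).
gcloud compute external-vpn-gateways create partner-gw \
    --interfaces=0=203.0.113.10

gcloud compute vpn-tunnels create tunnel0 \
    --region=us-west1 \
    --vpn-gateway=ha-gw \
    --peer-external-gateway=partner-gw \
    --peer-external-gateway-interface=0 \
    --interface=0 \
    --router=ha-router \
    --ike-version=2 \
    --shared-secret="$SHARED_SECRET"
```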
