GCP Networking Constraints and Limits


Like any other CSP (Cloud Service Provider), GCP (Google Cloud Platform) has done an awesome job educating customers about building cloud networks with the right design and right architecture. In this post, we will discuss some of the Google Cloud networking-related limits and constraints that one should be aware of. The post will provide necessary references to Google Cloud documents as well.

GCP Global VPC as HUB for Transit Routing

GCP Global VPC is a great technology and a really powerful concept, but it might not be the best design choice for enterprises. That is a loaded statement, so first, let us understand GCP Global VPC.

What is Global VPC?

Imagine you have a big basket (VPC) that spans the planet. You put all your eggs (workloads) in that basket. Now, these different colored eggs (workloads in different subnets in different regions) are magically allowed to talk to each other across the planet (unless you manually create firewall rules). It is indeed powerful yet at the same time scary.

GCP Global VPC is different from other clouds such as AWS. In AWS, all VPCs are regional. In GCP, all VPCs are global in nature, and there is no knob to turn the feature ON or OFF.
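For illustration, here is how a VPC is created with the gcloud CLI (the network name is hypothetical). Notice that the command takes no region flag: the VPC object itself is always global, and only its subnets are regional.

```shell
# A VPC is always global -- there is no --region flag on this command.
# Only subnets (created separately) are tied to a region.
gcloud compute networks create demo-vpc \
    --subnet-mode=custom \
    --bgp-routing-mode=regional
```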

Huge Failure Domain or Blast Radius

A Global VPC has a huge blast radius or failure domain. Lateral movement by bad actors within the VPC could be a huge issue: if a VM is compromised, it can potentially harm other VMs as well, and you need more VPCs to contain the blast radius. A VM in Singapore can talk to another VM in the Oregon region without any restriction from a routing perspective. One can indeed create GCP L4 firewall rules to block the traffic, but they are painful to manage, and you still need an NGFW inserted in front of your important and production workloads (VMs, Kubernetes, etc.).
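As a sketch, such an L4 firewall rule might look like the following (the network name and CIDR ranges are hypothetical; a real design would need one such rule per region pair, which is exactly the management pain just described):

```shell
# Deny all ingress from the (hypothetical) Singapore subnet range,
# overriding lower-priority allow rules (lower number = higher priority).
gcloud compute firewall-rules create deny-sin-ingress \
    --network=demo-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.10.0.0/16 \
    --priority=900
```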

Service Insertion – Not Possible

In a Global VPC, it is not possible to insert 3rd-party services, such as a Next-Generation Firewall or Load Balancer, into the traffic flows. This is because your source and destination VMs are both sitting in the same giant global VPC. If you try to punt the traffic over to a service VM in a different VPC or the same VPC, you will not be able to do so: the native VPC router takes precedence and simply routes the traffic over the GCP fabric.

Air Gapping, DMZ, or Network Segmentation – Not Possible

The same technical reason mentioned above applies here too. With a Global VPC, it is impossible to create a DMZ or network segments inside the VPC itself. You would need more VPCs in order to create segments or air gaps between your applications, such as Prod/Dev/SRE/etc.

Compliance and Governance

This is a major headache with Global VPC for large enterprises. You can alleviate some of it by using the GCP Shared VPC model, but what I have seen is that customers again use a single Global VPC as the Shared VPC and treat it as a Transit/Hub replacement.

Non-Standard Architecture and Operating Model

For a lot of enterprise customers, Google Cloud is the second or even third cloud after AWS or Azure. Both AWS and Azure follow a regional model for their VPC and VNET. Customers looking for consistency do not opt for Global VPC and segment workloads based on regional VPCs.

Potential Latency Issues

One cannot defy the laws of physics. If workloads are sitting in the same global VPC, one in the Seattle region and the other in the Middle East, the traffic has to cross undersea cables anyway. This could potentially lead to higher latencies as compared to a model where VPC boundaries are defined per region.

What are the Best Practices for creating GCP VPCs?

GCP recommends a balanced approach for enterprises. For small companies with a few workloads, it might be ok to use a single Global VPC, but for enterprises, it is not a good design choice.

Single Global VPC is Not a Good Design for Enterprises

GCP also recommends creating more VPCs for network security, scale, cost optimization, and similar purposes.


Factors that might lead you to create additional VPC networks include scale, network security, financial considerations, operational requirements, and identity and access management (IAM).

Create Regional VPC

When you create a GCP VPC, it is global by default, and there is no knob that turns this off. So how would you create a regional VPC?

When you create the VPC, you need to make sure the subnets stay within the same region. For example, for a VPC in the US West region, you need to make sure you are not adding subnets from the Singapore region.
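A minimal sketch with the gcloud CLI (names and CIDR are hypothetical). The "regional" boundary exists purely by convention, enforced only by which subnets you choose to create:

```shell
# Create the VPC in custom subnet mode so no subnets are auto-created.
gcloud compute networks create us-west-vpc --subnet-mode=custom

# Add subnets ONLY from one region to keep the VPC effectively regional.
gcloud compute networks subnets create us-west-subnet-a \
    --network=us-west-vpc \
    --region=us-west1 \
    --range=10.1.0.0/24
```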

Avoid VPC Sprawl

VPC sprawl is not a good design practice. We have seen enterprise customers that allocate one VPC per account in AWS, as an example. This creates management, compliance, and operational overhead, and it can also lead to an expensive design due to cross-VPC and cross-region charges.

At the same time, it is not a good design to just use one single giant global VPC for all types of workloads for all LOB and teams. Avoid the VPC sprawl and strike a balance.

Use Multiple VPCs for Segmentation and Air Gapping

The best practice is to use additional VPCs for proper segmentation, security, and air gapping. GCP also recommends using a hub-and-spoke model, like what you would see in other clouds as well.


Some large enterprise deployments involve autonomous teams that each require full control over their respective VPC networks. You can meet this requirement by creating a VPC network for each business unit, with shared services in a common VPC network (for example, analytic tools, CI/CD pipeline and build machines, DNS/Directory services).

Encryption is Best Effort inside the VPC

GCP does not guarantee encryption for all traffic inside the VPC. This means that if you have one single global VPC, workloads could be talking to each other without encryption. Per Google's own documentation (quoted below), this applies to traffic that stays within a physical boundary controlled by or on behalf of Google.

This is very risky and dangerous. Just imagine some dev workload sniffing unencrypted data. I understand that your application should be TLS-encrypted, but if a breach happens, who will be responsible? It is the NetSec architect. When it comes to security, more is better with a Zero-Trust mindset.

Source: https://cloud.google.com/security/encryption-in-transit

Data in transit within these physical boundaries is generally authenticated, but may not be encrypted by default – you can choose which additional security measures to apply based on your threat model.

Aviatrix follows the Zero-Trust model and defense-in-depth approach. This is applicable for encryption as well. Hence Aviatrix Cloud Network is encrypted by default.

VPC Peering Quota

The max peering limit is 25. Aviatrix encrypted peering is a better choice.
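For reference, this is roughly how native VPC peering is configured and how you can count a network's existing peerings against the quota (project and network names are hypothetical). Note that peering must be created from both sides before it becomes active:

```shell
# Create one side of the peering (repeat from spoke1-vpc back to hub-vpc).
gcloud compute networks peerings create hub-to-spoke1 \
    --network=hub-vpc \
    --peer-project=my-project \
    --peer-network=spoke1-vpc

# List existing peerings to see how close you are to the limit.
gcloud compute networks peerings list --network=hub-vpc
```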


Routes Per Project

Be mindful of the GCP route limit, which is 500 routes per project by default. It can be increased, although GCP does not publish the maximum. My request for 1,000 routes per project was approved within minutes. In any case, be careful about those limits.
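You can check your current usage against the per-project ROUTES quota with the gcloud CLI, for example:

```shell
# Count the routes currently defined in the project.
gcloud compute routes list --format="value(name)" | wc -l

# Show the ROUTES quota limit and current usage for the project.
gcloud compute project-info describe \
    --flatten="quotas[]" \
    --filter="quotas.metric=ROUTES" \
    --format="table(quotas.metric,quotas.limit,quotas.usage)"
```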

GCP HA VPN IPSec Limitation

Google Cloud does not support the creation of tunnel connections between an HA VPN gateway and any non-HA VPN gateway hosted in Google Cloud. This restriction includes Classic VPN gateways and third-party VPN gateways that are running on Compute Engine VMs.

This limitation is documented in Google's HA VPN documentation:

You cannot provide an interface with an IP address owned by Google Cloud. You can only create tunnels from an HA gateway to an HA gateway, or from an HA gateway to an ExternalVpnGateway.

Error: Error creating ExternalVpnGateway: googleapi: Error 400: Invalid value for field 'resource.interfaces[0]': '{ "id": 0, "ipAddress": "" }'. You cannot provide interface with IP address owned by Google Cloud., invalid


Google Cloud Router

A GCP Cloud Router dynamically exchanges routes between a VPC and on-premises networks using Border Gateway Protocol (BGP). The Cloud Router itself is not in the data path: whether it serves Cloud Interconnect, HA VPN, or Cloud NAT, it provides only the BGP control plane, while the data plane is handled by the underlying Interconnect attachment, VPN tunnels, or NAT gateway.

Cloud Router provides BGP services for the following:

  • Google Interconnect (both Dedicated and Partner)
  • HA VPN
  • GCP NCC Router Appliance
  • Cloud NAT (allows VMs with private IP to connect to the Internet)

GCP Cloud Router is restricted to a single VPC, and it is also restricted to a single region of that VPC.
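These two restrictions show up directly in the gcloud command: a Cloud Router must be created with exactly one network and one region (the name and ASN here are hypothetical), so a VPC spanning multiple regions needs a separate Cloud Router per region:

```shell
# One Cloud Router per (VPC, region) pair -- both flags are required.
gcloud compute routers create west-router \
    --network=demo-vpc \
    --region=us-west1 \
    --asn=65001
```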

Google Managed Instance Group (MIG)

Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including autoscaling, auto-healing, regional (multiple zones) deployment, and automatic updating.

A managed instance group (MIG) is a group of virtual machine (VM) instances that you treat as a single entity. Each VM in a MIG is based on an instance template.


  • You cannot create a MIG with multiple subnets. Once created, you cannot change the network or subnetwork in a MIG.
  • You cannot use autoscaling if your MIG has a stateful configuration.
  • A MIG and its autoscaler are zonal or regional; a single MIG cannot span multiple regions, even inside a Google Global VPC
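A rough sketch of these constraints (template, group, and subnet names are hypothetical): the subnet is baked into the instance template and cannot be changed afterward, and a regional MIG is pinned to the zones of one region:

```shell
# The subnet is fixed in the template; a MIG cannot mix subnets.
gcloud compute instance-templates create web-template \
    --machine-type=e2-small \
    --region=us-west1 \
    --subnet=us-west-subnet-a

# A regional MIG spreads VMs across zones of ONE region only.
gcloud compute instance-groups managed create web-mig \
    --template=web-template \
    --size=3 \
    --region=us-west1
```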
