Aviatrix Transit Network for GCP Shared VPC

What is GCP Shared VPC?

GCP Shared VPC allows an organization to share or extend its VPC network (specifically, its subnets) from one project (called the host project) to other projects (called service or tenant projects).

When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network.

Is Shared VPC a replacement for a Transit (Hub-Spoke) Network?

Shared VPC is not meant for transit networking; it does not provide any enterprise-grade routing or traffic engineering capabilities. Instead, Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances, to Service Project Admins while maintaining centralized control over network resources like subnets, routes, and firewalls.

Aviatrix Transit Network Design Patterns with GCP Shared VPC

Aviatrix supports the GCP Shared VPC model and builds the cloud and multi-cloud transit networking architecture to provide enterprise-grade routing, service insertion, hybrid connectivity, and traffic engineering for the workload VMs. There are a number of possible deployment models, but we will focus on two designs with the GCP Shared VPC network.

  1. Aviatrix spoke GW and workload VMs in the same shared VPC network
    • Suited for small, PoC or lab deployments where the networking is kept very simple
  2. Aviatrix spoke GW and workload VMs in different shared VPC networks
    • Recommended model for enterprises
    • The Aviatrix Transit and Spoke GWs are deployed inside the host project in their respective VPC networks
    • These VPC networks are not shared but stay local to the host project
    • The workload VPC networks are created inside the host project and shared with the service/tenant projects using the GCP Shared VPC model

1- Aviatrix spoke GW and workload VMs in the same shared VPC network

This pattern is better suited for small deployments, PoCs, or lab setups where the networking is kept very simple. The Aviatrix transit GW is deployed inside the host project. The Aviatrix spokes are also deployed inside the host project, but their VPC network (or subnet) is shared with the service/tenant project. This same shared VPC network (subnet) is then used by the workload VMs as well. Using the same subnet for the Aviatrix spoke GW and the workload VMs might not be desirable for some organizations.

This article discusses the deployment aspects of the first design pattern. The second design pattern will be discussed in a separate blog post.

Deployment Details

Create VPC networks under Host Project
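
If you prefer the gcloud CLI to the console, the commands below sketch the creation of the transit and prod VPC networks with custom subnets (the dev network is analogous). The project ID, the network/subnet names, and the transit CIDR are illustrative assumptions; the prod and dev ranges match the 10.21.11.0/24 and 10.21.12.0/24 subnets seen later in this article.

# Assumed project ID, names, and CIDRs; adjust to your environment
gcloud compute networks create host-project-vpcnet-transit \
    --project=netjoints-host-project --subnet-mode=custom
gcloud compute networks subnets create transit-subnet \
    --project=netjoints-host-project --network=host-project-vpcnet-transit \
    --region=us-east4 --range=10.21.10.0/24

gcloud compute networks create host-project-vpcnet-prod \
    --project=netjoints-host-project --subnet-mode=custom
gcloud compute networks subnets create prod-subnet \
    --project=netjoints-host-project --network=host-project-vpcnet-prod \
    --region=us-east4 --range=10.21.11.0/24

# Repeat for the dev VPC network (e.g. host-project-vpcnet-dev, 10.21.12.0/24)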


Attach the service project to the Host Project under the Shared VPC option

While attaching the project, you need to select the respective subnets as well. The following screen shows the subnets shared with the service project.
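
The same attachment can be scripted with gcloud. The sketch below assumes illustrative service project IDs and an example admin group, and it requires the Shared VPC Admin role at the organization or folder level.

# Enable the host project for Shared VPC
gcloud compute shared-vpc enable netjoints-host-project

# Attach the prod service project (assumed ID) to the host project
gcloud compute shared-vpc associated-projects add service-project-prod \
    --host-project=netjoints-host-project

# Share only the prod subnet by granting compute.networkUser on it
gcloud compute networks subnets add-iam-policy-binding prod-subnet \
    --project=netjoints-host-project --region=us-east4 \
    --member="group:prod-admins@example.com" \
    --role="roles/compute.networkUser"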

You can also list all the “Shared VPC networks” and their attached “Service Projects” as shown in the following screen
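
The same information is available from the CLI; the project IDs below are the illustrative ones used above.

# Which host project is this service project attached to?
gcloud compute shared-vpc get-host-project service-project-prod

# Which service projects are attached to the host project?
gcloud compute shared-vpc list-associated-resources netjoints-host-project

# Which shared subnets is the caller allowed to use?
gcloud compute networks subnets list-usable --project=netjoints-host-project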

Deploy Aviatrix Transit Gateway

Now deploy the Aviatrix transit gateway in the host project’s “transit VPC network”.

Enable Connected Transit as shown in the following screen. This allows all spoke VPC networks to communicate with each other. By default it is disabled.

Also assign a BGP AS number to the newly deployed Aviatrix gateway. This step is a best practice and not mandatory, but it becomes critical for traffic engineering or if one wants to connect to an on-prem router/firewall over eBGP.

Deploy Aviatrix Spoke Gateways

Deploy the Aviatrix spoke gateways in the prod and dev shared VPC networks. These networks were created inside the central IT host project but were shared with the prod and dev service projects. For instance, “gcp-spk-gw-host-project-vpcnet-prod” will be deployed as an Aviatrix gateway inside the host project, and it will consume compute resources from the central IT host project. This way central IT has complete control over the transit and spoke gateways and their networking aspects. The service/tenant projects deploy their VMs with their own compute resources and are responsible for paying for them, but they have no control over the networking aspects.

The following screenshot shows the spoke gateway deployment. Notice that the Account Name / Project Name selected in the drop-down menu is “netjoints-host-project”.

Similarly, the spoke gateway for the dev service project was deployed. The following list shows the resulting transit and spoke gateways.

Attach Spoke Gateways to Transit Gateway

Now attach both spokes to the transit gateway to build the complete hub-spoke topology

After the attachment, the spoke list will show the connected transit as follows
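
Another way to confirm the attachments is from the GCP side: the Aviatrix Controller programs routes for the remote spoke and transit CIDRs into each spoke VPC network. The network name in the filter below is an assumption; replace it with your prod shared VPC network.

# List the routes programmed into the prod shared VPC network
gcloud compute routes list --project=netjoints-host-project \
    --filter="network:host-project-vpcnet-prod"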

Testing and Verification

CoPilot View

If you have Aviatrix CoPilot, you can visualize the topology that was built at run time.

GCP View

GCP host project shows the Aviatrix transit and spoke gateways
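
The same can be checked with gcloud; the name filter below simply assumes the gateways follow the naming convention shown earlier.

# Aviatrix transit and spoke gateways run as VMs inside the host project
gcloud compute instances list --project=netjoints-host-project \
    --filter="name~gw" \
    --format="table(name,zone.basename(),networkInterfaces[0].networkIP,status)"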

The GCP Prod and Dev projects show the Prod and Dev VMs that will be used for testing.
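
The test VMs can also be listed per service project; the service project IDs here are illustrative.

# Test VMs live in the service projects but sit on the shared subnets
gcloud compute instances list --project=service-project-prod \
    --format="table(name,zone.basename(),networkInterfaces[0].subnetwork.basename(),networkInterfaces[0].networkIP)"
gcloud compute instances list --project=service-project-dev \
    --format="table(name,zone.basename(),networkInterfaces[0].subnetwork.basename(),networkInterfaces[0].networkIP)"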

shahzad_netjoints_com@service-project-prod-vm1:~$ ping 10.21.12.3
PING 10.21.12.3 (10.21.12.3) 56(84) bytes of data.
64 bytes from 10.21.12.3: icmp_seq=1 ttl=61 time=4.65 ms
64 bytes from 10.21.12.3: icmp_seq=2 ttl=61 time=2.66 ms
64 bytes from 10.21.12.3: icmp_seq=3 ttl=61 time=2.99 ms
64 bytes from 10.21.12.3: icmp_seq=4 ttl=61 time=2.79 ms
^C
--- 10.21.12.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 2.658/3.270/4.647/0.805 ms
shahzad_netjoints_com@service-project-prod-vm1:~$

shahzad_netjoints_com@service-project-prod-vm1:~$ traceroute 10.21.12.3
traceroute to 10.21.12.3 (10.21.12.3), 30 hops max, 60 byte packets
1 gcp-spk-gw-host-project-vpcnet-prod.us-east4-a.c.shahzad-host-project-11.internal (10.21.11.2) 1.648 ms 1.632 ms 1.612 ms
2 * * *
3 * * *
4 * * *
5 10.21.12.3 (10.21.12.3) 5.521 ms 5.499 ms 5.751 ms
shahzad_netjoints_com@service-project-prod-vm1:~$
