Aviatrix Intrusion Prevention System (IPS) Solution for AWS FAQs

Aviatrix provides a solution to protect public-facing applications and services with its IPS capabilities. The Aviatrix IPS solution is also known as the “Public Subnet Filtering” (PSF) solution.

Please watch this ~2-minute video as a refresher on this topic.

I also created a lab showcasing the configuration, routing, and forwarding details for a real enterprise use case.

This post answers some frequently asked questions.

Q: Where do I go to see the GuardDuty findings in Aviatrix Controller?

Controller –> Security –> AWS GuardDuty –> Highlight the Region Name –> Actions –> Show Findings

In the screenshot above, you can see that GuardDuty is informing the Aviatrix Controller about the malicious IP addresses.

Q: What are the criteria for the Controller to block an IP address on the PSF-GW?

It is based on the AWS GuardDuty findings.

Q: Where can I see those IPs being blocked in Aviatrix Controller?

Aviatrix has an L4 stateful firewall that is enabled on the PSF-GW when the IPS feature is enabled. This L4 stateful firewall blocks the malicious IP addresses. The feature does not depend on EC2 “Security Groups”, which allow only a limited number of rules to be programmed.

Controller –> Security –> Stateful Firewall –> Select the PSF GW

Now click Edit Policy to see the IPs being blocked by Aviatrix, as shown in the screenshot below.

Q: Can I see what sessions are established through the PSF-GW?

Controller –> Security –> Stateful Firewall –> Session View

Note that is the IP address of the PSF-GW itself.

Q: Some IP addresses are well-known and not malicious. Why is AWS GuardDuty marking them as malicious, and how do I fix this?

It is a well-known observation that AWS GuardDuty can occasionally mark legitimate IPs as malicious. The fix is simple: exclude those IPs from the Aviatrix GuardDuty database, and Aviatrix will no longer block them.

For example we excluded from the finding list.

After the exclusion list was created, Aviatrix removed the block rule for this IP address from its PSF-GW. You can observe this change in the following screenshot as well.

Aviatrix Security Features

Aviatrix is a modern security platform, born in the cloud for the cloud. Unlike legacy vendors that bolt security on, Aviatrix builds security into the platform itself. It is a pervasive security platform that also provides a framework for other vendors to integrate with, which in turn delivers the best possible security posture for enterprises.

This post highlights some of the most important security features Aviatrix offers. This is by no means an exhaustive list. Visit https://aviatrix.com and https://docs.aviatrix.com for a comprehensive list.

Aviatrix Security Feature List

  • Encryption
    • Standard IPSec
    • Patented High-Performance IPSec
  • Egress Security
    • Secure Egress with FQDN Filtering
  • Ingress Security
    • Provided by integrating AWS GuardDuty
  • L4 Stateful Firewall
  • Cloud Security Framework (FireNet)
    The framework covers many security features by partnering with 3rd-party vendors such as
    • Check Point
    • Cisco Firepower
    • Fortinet
    • Palo Alto Networks
  • Secure Network Segmentation
    • Multi-Cloud with Aviatrix Transit
    • AWS TGW
  • Secure Private Service Access
    • Private S3 Access
  • Secure Cloud Access
    On-prem users, branches, and data centers all need to access cloud resources; the center of gravity is in the cloud. Aviatrix Secure Cloud Access provides features such as Secure Cloud User Access and Secure Site Access. Secure Site Access also covers connecting SD-WAN branches and sites to the cloud.
  • Compliance and Visibility
  • CoPilot visualization and flow analysis are critical for securing the infrastructure. CoPilot helps enterprises identify rogue VPCs and detect DDoS attacks and anomalies

Security Certifications

Cisco CSR Configuration for Packet Fabric Network


Cisco-CSR-DC———->Packet Fabric Router ———>DX/ER/GCI—–>

The Packet Fabric side is already set up with

  • Pre-shared key
  • IKEv2 IPSec
  • Route Based VPN

cisco-dc-router#sh run 
Building configuration...

Current configuration : 8013 bytes
! Last configuration change at 14:06:03 UTC Fri Jun 4 2021 by shahzad
version 17.3
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
service call-home
platform qfp utilization monitor load 80
platform punt-keepalive disable-kernel-core
platform console virtual
hostname cisco-dc-router
vrf definition GS
rd 100:100
address-family ipv4
logging persistent size 1000000 filesize 8192 immediate
no aaa new-model
login on-success log
subscriber templating
multilink bundle-name authenticated
license udi pid CSR1000V sn 97Y1K8PCDUC
diagnostic bootup level minimal
memory free low-watermark processor 71497
spanning-tree extend system-id
username ec2-user privilege 15
username shahzad privilege 15 password 7 0337530A0E1520481F5B4A44
username admin privilege 15 password 7 03254D02071B334556584B5656
crypto ikev2 proposal PF
encryption aes-cbc-256
integrity sha256
group 14
crypto ikev2 policy PF
proposal PF
crypto ikev2 profile PF-profile
match identity remote address
authentication remote pre-share key Shahzad123!
authentication local pre-share key Shahzad123!
crypto ipsec transform-set PF esp-aes 256 esp-sha256-hmac
mode transport
crypto ipsec profile FP
set security-association lifetime seconds 28800
set transform-set PF
set pfs group14
set ikev2-profile PF-profile
crypto ipsec profile PF
set security-association lifetime seconds 28800
set transform-set PF
set pfs group14
set ikev2-profile PF-profile
interface Tunnel1
ip address
ip tcp adjust-mss 1379
tunnel source
tunnel mode ipsec ipv4
tunnel destination
tunnel path-mtu-discovery
tunnel protection ipsec profile PF
ip virtual-reassembly
interface VirtualPortGroup0
vrf forwarding GS
ip address
ip nat inside
no mop enabled
no mop sysid
interface GigabitEthernet1
ip address dhcp
ip nat outside
negotiation auto
no mop enabled
no mop sysid
ip forward-protocol nd
ip tcp window-size 8192
ip http server
ip http authentication local
ip http secure-server
ip nat inside source list GS_NAT_ACL interface GigabitEthernet1 vrf GS overload
ip route GigabitEthernet1
ip route vrf GS GigabitEthernet1 global
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip ssh pubkey-chain
username ec2-user
key-hash ssh-rsa 5E874AE74054420DF7B81D6C422A33E2 ec2-user
ip ssh server algorithm publickey ecdsa-sha2-nistp256 ecdsa-sha2-nistp384 ecdsa-sha2-nistp521 ssh-rsa x509v3-ecdsa-sha2-nistp256 x509v3-ecdsa-sha2-nistp384 x509v3-ecdsa-sha2-nistp521
ip scp server enable
ip access-list standard GS_NAT_ACL
10 permit
line con 0
stopbits 1
line vty 0 4
privilege level 15
login local
transport input ssh
line vty 5 20
privilege level 15
login local
transport input ssh
! If contact email address in call-home is configured as sch-smart-licensing@cisco.com
! the email address configured in Cisco Smart License Portal will be used as contact email address to send SCH notifications.
contact-email-addr sch-smart-licensing@cisco.com
profile "CiscoTAC-1"
destination transport-method http
app-hosting appid guestshell
app-vnic gateway1 virtualportgroup 0 guest-interface 0
guest-ipaddress netmask
app-default-gateway guest-interface 0


Cloud, SaaS and SD-WAN Lunch

Cloud and SaaS have eaten SD-WAN's lunch, and SD-WAN did not even complain 🙂

SD-WAN submitted to the CSPs

SD-WAN party is over too soon

Cloud is the Network and Cloud is the WAN

NCC, vWAN, and AWS TGW Connect are all examples of the CSPs taking over and “killing” SD-WAN.

With SaaS, why would someone need an expensive physical SD-WAN branch router? Applications are accessed over the Internet in the SaaS model; there is no need for PCoIP, RDP, VDI, etc. Look at the shrinking Citrix and VMware VDI market.

The branch has already moved into the cloud, so why would one need SD-WAN?

The Internet is more reliable than ever before.

Look around: the SD-WAN vendors are gone, and their talent has moved on to the multi-cloud networking bandwagon. Hardware vendors have no clue where to go or what to do. Acquisition season is coming for half-baked multi-cloud startups, just as it came four years ago for half-baked SD-WAN vendors.

One name emerges and stands out: Aviatrix, with a full-stack, multi-cloud networking and security solution.

GCP Native Networking and Security Constraints

GCP Global VPC as HUB for Transit Routing

GCP Global VPC is a great technology and a really powerful concept, but it is not a good design choice for enterprise networks. That is a loaded statement, so first let us understand GCP Global VPC.

What is Global VPC?

Imagine you have a big basket (VPC) that spans across the planet. You put all your eggs (workload) in that basket. Now, these different colored eggs (workloads in different subnets in different regions) are magically allowed to talk to each other (unless you manually create the firewall rules) across the planet. It is indeed powerful yet at the same time scary.

GCP Global VPC is different from other clouds such as AWS. In AWS, all VPCs are regional; in GCP, all VPCs are global in nature, and there is no knob to turn the feature on or off.

Huge Failure Domain or Blast Radius

Global VPC has a huge blast radius (failure domain). Lateral movement by bad actors within the VPC becomes a major issue: if a VM is compromised, it can potentially harm other VMs as well. You need more VPCs to contain the blast radius. A VM in Singapore can talk to a VM in the Oregon region without any restriction from a routing perspective. It is true that one can create GCP L4 firewall rules to block the traffic, but they are painful to manage, and you still need an NGFW inserted for your important production VMs, Kubernetes clusters, etc.
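To make the pain concrete, here is a minimal sketch of the kind of native L4 guardrail you would have to maintain per region pair; the network name, CIDR, and priority are assumptions for illustration:

```shell
# Hypothetical rule denying ingress from another region's subnet range
# (network name, source CIDR, and priority are placeholders)
gcloud compute firewall-rules create deny-cross-region-ingress \
    --network=global-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.20.0.0/16 \
    --priority=900
```

Multiply this by every region pair, team, and environment, and the management overhead of relying on L4 rules alone becomes clear.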

Service Insertion – Not Possible

In a Global VPC it is not possible to insert 3rd-party services, such as a next-generation firewall or load balancer, into the traffic flows. This is because the source and destination VMs sit in this giant global VPC; if you try to steer the traffic to an appliance VM in the same or a different VPC, you will not be able to. The native VPC router takes precedence at that point and simply routes the traffic over the GCP fabric.

Air Gapping, DMZ, or Network Segmentation – Not Possible

The same technical reason mentioned above applies here too. With a Global VPC it is not possible to create a DMZ or network segments inside the VPC itself. You need more VPCs to create segments or air gaps between applications such as Prod/Dev/SRE/etc.

Compliance and Governance

This is a major headache with Global VPC for large enterprises. You can alleviate some of it by using the GCP Shared VPC model, but what I have seen is that customers again use a single Global VPC as the Shared VPC and treat it as a Transit/Hub replacement.

What are the Best Practices?

GCP recommends a balanced approach for enterprises. For small companies with a few workloads, it might be ok to use a single Global VPC, but for enterprises, it is not a good design choice.

Single Global VPC is Not a Good Design for Enterprises

GCP also recommends creating more VPCs for network security, scale, cost optimization, and other purposes.


Factors that might lead you to create additional VPC networks include scale, network security, financial considerations, operational requirements, and identity and access management (IAM).

Create Regional VPC

When you create a GCP VPC, it is global by default, and there is no knob to turn this off. So how would you create a regional VPC?

When you create the VPC, make sure its subnets stay within the same region. For example, for a VPC in the US West region, make sure you are not adding subnets from the Singapore region.
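A sketch of this approach with the gcloud CLI, using a custom-mode VPC and region-pinned subnets (the network and subnet names and CIDRs are assumptions for illustration):

```shell
# Create a custom-mode VPC: no subnets are auto-created in every region
gcloud compute networks create us-west-vpc --subnet-mode=custom

# Add subnets only in a single region to keep the VPC effectively regional
gcloud compute networks subnets create web-subnet \
    --network=us-west-vpc --region=us-west1 --range=10.10.1.0/24
gcloud compute networks subnets create app-subnet \
    --network=us-west-vpc --region=us-west1 --range=10.10.2.0/24
```

Custom subnet mode is the key: auto mode would create a subnet in every region, which defeats the purpose.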

Avoid VPC Sprawl

VPC sprawl is not a good design practice. We have seen enterprise customers allocate, for example, one VPC per account in AWS. This creates management, compliance, and operational overhead, and it can also lead to an expensive design due to cross-VPC and cross-region charges.

At the same time, it is not a good design to use one single giant global VPC for all types of workloads across all LOBs and teams. Avoid VPC sprawl and strike a balance.

Use Multiple VPCs for Segmentation and Air Gapping

The best practice is to use additional VPCs for proper segmentation, security, and air gapping. GCP also recommends a hub-and-spoke model, like what you would see in other clouds as well.


Some large enterprise deployments involve autonomous teams that each require full control over their respective VPC networks. You can meet this requirement by creating a VPC network for each business unit, with shared services in a common VPC network (for example, analytic tools, CI/CD pipeline and build machines, DNS/Directory services).

Encryption is Best Effort inside the VPC

GCP does not guarantee encryption for traffic inside the VPC. This means that with one single global VPC, all the workloads could be talking to each other without encryption. This is true for traffic between subnets of different regions inside a global VPC.

This is very risky. Just imagine some dev workload sniffing the unencrypted data. Your applications should be TLS-encrypted, but if a breach happens, who will be responsible? The NetSec architect. When it comes to security, more is better with a Zero-Trust mindset.

Source: https://cloud.google.com/security/encryption-in-transit

Data in transit within these physical boundaries is generally authenticated, but may not be encrypted by default – you can choose which additional security measures to apply based on your threat model.

Aviatrix follows the Zero-Trust model and a defense-in-depth approach. This applies to encryption as well; hence Aviatrix cloud networks are encrypted by default.

VPC Peering Quota

The maximum peering limit is 25 per VPC network. Aviatrix encrypted peering is a better choice.
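To see how quickly a 25-peering limit bites, note that in a full mesh each of n VPCs needs n-1 peerings, so the mesh caps out at 26 VPCs before any single network exceeds the quota. A quick sanity check in shell arithmetic (ignoring other quotas):

```shell
LIMIT=25                 # default peering connections per VPC network
echo $((30 - 1))         # peerings each VPC needs in a 30-VPC full mesh: 29 (over the limit)
echo $((LIMIT + 1))      # largest full mesh where no VPC exceeds the limit: 26
```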


Routes Per Project

Be mindful of the GCP route limit, which is 500 per project by default. It can be increased; GCP has not published the maximum. My request for 1,000 routes per project was approved within minutes. In any case, be careful about these limits.

Cisco CSR Sample Configuration for IPSec

Configuration for two tunnels from TransitAGWs
!Username admin privilege level 15 password ave
crypto keyring mykey
  pre-shared-key address key aviatrix
  pre-shared-key address key aviatrix
! is the public IP address of NV-TransitAGW1
! is the public IP address of NV-TransitAGW2
crypto isakmp policy 1
 encryption aes 256
 authentication pre-share
 hash sha256
 group 14
 lifetime 28800
crypto isakmp keepalive 10 3 periodic
crypto isakmp profile myprofile
  keyring mykey
  self-identity address
  match identity address 
  match identity address
crypto ipsec transform-set myset esp-aes 256 esp-sha256-hmac 
 mode tunnel
crypto ipsec df-bit clear
crypto ipsec profile ipsec_profile
 set security-association lifetime seconds 3600
 set transform-set myset 
 set pfs group14
 set isakmp-profile myprofile
interface Tunnel0
 ip address
 ip tcp adjust-mss 1387
 tunnel source g1
!!! the local IP of this CSR
 tunnel mode ipsec ipv4
 tunnel destination
!!! is the public IP of the NV-TransitAGW1
 tunnel protection ipsec profile ipsec_profile
interface Tunnel1
 ip address
 ip tcp adjust-mss 1387
 tunnel source g1
!!! the local IP of this CSR
 tunnel mode ipsec ipv4
 tunnel destination
!!! is the public IP of the NV-TransitAGW2
 tunnel protection ipsec profile ipsec_profile
router bgp 65014
 bgp log-neighbor-changes
 neighbor remote-as 65013
 neighbor timers 10 30 30
 neighbor remote-as 65013
 neighbor timers 10 30 30
 address-family ipv4
 redistribute connected
 neighbor activate
 neighbor route-map ORDC2CSR1-TO-ORTransit out
 neighbor activate
 neighbor route-map ORDC2CSR1-TO-ORTransit out
ip access-list standard 1
 10 permit
route-map ORDC2CSR1-TO-ORTransit permit 10 
 match ip address 1
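The `ip tcp adjust-mss 1387` values in the tunnel interfaces above come from subtracting the IPsec tunnel overhead and the TCP/IP headers from the physical MTU. A rough sketch of the arithmetic; the 73-byte overhead figure is an assumption that varies with cipher suite, mode, and NAT-T:

```shell
# MSS = physical MTU - IPsec overhead - IP header (20) - TCP header (20)
# The 73-byte IPsec overhead is an assumed value for this cipher suite.
MTU=1500
IPSEC_OVERHEAD=73
echo $((MTU - IPSEC_OVERHEAD - 20 - 20))   # 1387
```

Clamping MSS at the tunnel interface avoids fragmentation of TCP traffic inside the IPsec tunnel.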

Google Cloud Interconnect High Performance Encryption

Cloud Security – High Performance Encryption (HPE)

Aviatrix builds high-performance encrypted networks by default. The solution is called Aviatrix HPE or Aviatrix Insane mode.

GCP HPE across Google Cloud Interconnect

Google Cloud Interconnect (GCI) is a great service for providing hybrid connectivity back to on-premises DCs. For example, enterprises can get 10 Gbps GCI connections for their on-prem-to-GCP migration needs. The challenge is that GCI is not encrypted, and native IPSec encryption only provides 1.25 Gbps.

Aviatrix provides 20 times more IPSec throughput than standard cloud encryption.

GCP HPE inside Google Cloud (GCP)

HPE is also available for a Multi-cloud Transit solution. For performance benchmarks, refer to GCP Insane Mode performance test results. Insane Mode is enabled when launching a new Aviatrix Transit Gateway or Spoke gateway in GCP.

  • Support for N2 and C2 instance types on GCP gateways improves Insane Mode (HPE) performance in GCP. For the new network throughput with these instance types, refer to GCP Insane Mode Performance.

Palo Alto VM-Series Design and Deployment in Google Cloud

Cloud Security – Service Insertion and Chaining

Automated and Policy-based service insertion and chaining have always been a real pain point for enterprise GCP customers. Native solutions are not adequate and 3rd party vendors’ solutions leave it up to the enterprise architect to figure out the end-to-end solution architecture.

Aviatrix FireNet for GCP solves all of those challenges and limitations and provides a best-practice way to service-chain NGFWs and other services into a cohesive architecture.

Aviatrix Solution Advantages

  • Policy-Based Inspection – decide what traffic is being inspected
  • Traffic Engineering – customize traffic flow
  • NGFW Life Cycle management – deployment of VMs
  • Automated Route Propagation – the controller manages the firewall’s route table
  • HA with Automated Failover
  • Flexible Design Options
    • Single Centralized Security VPC
    • Dual Centralized Security VPCs with Dedicated E/W + N/S VPC and Dedicated Egress VPC
  • Full visibility on E-W/on-prem to cloud traffic flows
    • No BGP/ECMP
    • No SNAT required when Symmetric Hashing is enabled

More design and deployment details here

We want to highlight two large enterprises utilizing Aviatrix FireNet to solve NGFW service insertion pain points.

1- Hospitality Chain: Ingress Traffic Inspection for GKE Workload

For this customer, inspecting ingress traffic for the GKE ingress controller was a challenge based on their compliance policies. The following explains their ingress web traffic requirements:

  • Route ingress traffic to a dedicated Ingress Spoke VPC first
  • Ingress VPC has an Nginx LB that would receive the traffic.
  • The application LB policies were then evaluated, and based on the ingress service, the traffic had to be routed to a centralized VPC where the NGFWs were deployed

Without Aviatrix, they were forced to terminate the ingress traffic directly on the NGFW, which did not abide by their ingress traffic requirement. They were also forced to figure out the routing themselves and manually adjust routes for each new service they spun up.

Aviatrix FireNet's policy-based, flexible model allows them to achieve the requirement with the traffic engineering demanded by the GKE workload.

2- HealthCare Provider: Highly Available and Policy-Based Egress Traffic Inspection  

This provider processes a large number of image and video scans for their clients. The egress traffic sent to clients after image/video processing must be secured and inspected by an NGFW due to HIPAA compliance needs. They were forced to use Active/Standby NGFWs due to various limitations.

The Aviatrix FireNet solution allowed them to deploy the NGFWs in an Active/Active fashion without any manual route updates or GCP network tag management. In case of a failure, Aviatrix FireNet automatically redirected traffic to the available NGFW. This reduced deployment complexity and manual intervention.

Multi-Cloud Network Security and Compliance

In the cloud era, the role of network security architect has become more critical than it has ever been. Complexity and human errors have always been the banes of security professionals. As enterprise cloud computing scales, it is expected to be 10x larger in size and complexity and will be deployed 1,000x faster than data center computing.

In this TechTalk, we focused on how enterprises are leveraging the Aviatrix cloud network platform to improve overall security posture, ensure corporate and regulatory compliance and embrace the exponential growth in enterprise cloud networking complexity by building on the combination of cloud security best practices and multi-cloud network architecture.

Use this link https://pages.aviatrix.com/eBook_Security-Architects-Guide-to-Multi-Cloud.html to download the “The Security Architect’s Guide to Multi-Cloud Networking” eBook.

Protect Ingress Traffic with AWS GuardDuty and Aviatrix IPS Security Appliance

AWS VPC Ingress Routing allows customers to insert (or service chain) a security appliance/gateway or firewall for the traffic flows coming from the Internet and going towards the public-facing applications such as a web server. With Amazon VPC Ingress Routing, customers can define routing rules at the Internet Gateway (IGW) to redirect ingress traffic to third-party appliances, before it reaches the final destination.
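Under the hood, an edge association plus an appliance route is what makes this work. A minimal sketch of those primitives with the AWS CLI (all rtb-/igw-/eni- IDs and the CIDR below are placeholders; Aviatrix automates these steps when deploying the PSF gateway):

```shell
# Edge association: attach a route table to the Internet Gateway
# (all resource IDs here are placeholders for illustration)
aws ec2 associate-route-table \
    --route-table-id rtb-0123456789abcdef0 \
    --gateway-id igw-0123456789abcdef0

# Steer ingress traffic destined for the public subnet to the appliance ENI
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.0.1.0/24 \
    --network-interface-id eni-0123456789abcdef0
```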

Aviatrix takes full advantage of the Amazon VPC Ingress Routing Enhancement by combining it with

  • Aviatrix Security Gateway’s policy-based FQDN Filtering capabilities and
  • Amazon GuardDuty’s continuous threat intelligence feed

What is AWS GuardDuty?

AWS GuardDuty is a threat and intrusion detection (IDS) service, but it does not provide intrusion prevention (IPS) capabilities. The Aviatrix Controller programs an inline Aviatrix Security Gateway (called the Public Subnet Filtering Gateway) to dynamically filter traffic and prevent unauthorized and malicious traffic.

You can find more information on this topic here

Cloud Networking and Security Predictions For 2021

Recently I joined an expert panel and shared my opinion on cloud compliance and governance in 2021. Take a listen.

Cloud Security Predictions for 2021

What I talked about can be summarized in the following bullet points:

  • The role of network security architect has become more critical than ever before
  • Attacks are on the rise and won’t go down
  • Complexity and Human errors have gone up
  • Cloud is not immune to these attacks
  • Cloud is expected to be 10x larger in size and complexity and will be deployed 1,000x faster than data center computing
  • Cyber hygiene is a must: improve overall security posture and ensure corporate and regulatory compliance

Security and compliance is NOT a shared responsibility. It is YOUR responsibility.


  1. Cloud and Multi-Cloud Security spending will increase dramatically
  2. More and more enterprises will adopt centralized security models
  3. Cloud-based User VPN or Client VPN solutions with SAML-based MFA will gain a lot of traction
  4. Policy-based, Zero-Trust Multi-Cloud Networking solution will dominate the market

GCP FireNet



Aviatrix Firewall Network Services (FireNet) simplify next-generation firewall insertion and operations. FireNet is the simplest, highest-performance, best scale-out architecture for next-generation firewalls in the cloud.

Following are some of the highlights

  • Simple deployment, autoroute propagation to firewalls
  • Advanced egress, IDS, IPS, and ingress security
  • Maximize performance, scale, and visibility
  • Simplified operations – no IPSec tunneling or SNAT required
  • Integration with Check Point, Fortinet, and Palo Alto Networks Firewalls

FireNet for Google Cloud (GCP)

FireNet for GCP follows the same principles and provides the same benefits as in the other clouds. It is the same architecture, consistent across multiple clouds.

As in any other cloud, GCP's native networking and security services are different from those of AWS and Azure, so there are some unique requirements engineers should be aware of. From a design, deployment, and operations perspective, however, it is transparent to the enterprise security and networking teams.

Before we take a look at FireNet in GCP, we must understand the design requirements imposed by GCP; these will help us understand the solution better later in the document.

GCP Networking Behavior

In this section, we discuss the unique GCP networking behaviors that dictate some of the design choices in a GCP FireNet solution.

GCP Does Not Support Multiple Network Interfaces in the same VPC

GCP does not allow multi-NIC VMs to be deployed in the same VPC network. This restriction forces Aviatrix FireNet customers to plan for additional VPCs, as we will discuss later in this section.

GCP Supports ECMP on the Routing Table

GCP has a great feature that allows ECMP on its routing table.

GCP Supports an Internal TCP/UDP LB IP as a Next Hop

GCP networking is also ahead of the other clouds in supporting an internal TCP/UDP LB IP as a next hop.

GCP FireNet Design Notes

The GCP FireNet design is applicable to GCP Shared VPC or Standard VPC designs.

The GCP FireNet solution requires at least three interfaces on the 3rd-party security appliance or firewall. Some security vendors do allow firewall-on-a-stick designs, where a single NIC handles ingress, egress, east/west, and management traffic. This type of solution might be fine for small deployments but poses scale and segmentation challenges for enterprise-grade security solutions.

The multi-NIC FireNet design is also more flexible and scalable, and it works really well with the GCP Shared VPC design.

GCP Best Practices for Service Insertion

If you look at the GCP service insertion or NGFW insertion best practices, you will notice they also recommend a multi-NIC design.

managing traffic with native firewall rules

GCP FireNet Topology

The following shows a logical GCP FireNet topology:

  • Transit FireNet GW VM
    • This GW is deployed by Aviatrix controller with two interfaces.
    • NIC0 sits in Transit VPC (or Transit FireNet VPC). This side will be connected to Spoke GWs in their respective VPC
    • NIC1 connects to LAN VPC facing the Firewall
  • Firewall / Security Appliance VM
    • The Firewall is deployed by the Aviatrix Controller with 3 interfaces
    • NIC0 has the public IP address for the egress traffic. If egress traffic is not required, one can ignore it. The Controller will build this interface regardless
    • NIC1 is the management interface. Usually, a public IP address is assigned to this interface. In some cases, customers might want to access management via a private network (over Google Cloud Interconnect etc.) and can assign a private IP as well
    • NIC2 is connected to LAN VPC. This will be used to communicate with the Aviatrix Transit FireNet GW

Design Recommendations and Best Practices

  • Because VPC resource quotas are set at the project level, make sure to aggregate VPC resource needs across all VPCs
  • Do not deploy any workload in the Transit VPC
  • Do not deploy any workload in the LAN VPC
  • Do not deploy any workload in the Egress VPC

Aviatrix Transit FireNet Design with Standard GCP VPC

The following design shows the Aviatrix Transit FireNet design with a standard GCP VPC setup, with the Aviatrix Spokes also connected to the Aviatrix Transit GW.

Transit FireNet Deployment

[07:49:12] Starting to create GW aviatrix-transit-fnet-gw.
[07:49:12] Connected to GCE.
[07:49:14] Need gateway image. Copying now..
[07:53:28] Project check complete.
[07:53:29] License check is complete.
[07:53:53] Updating IGW for new gateway…
[07:54:08] Launching compute instance in GCE….
[07:54:44] GCE compute instance created successfully.
[07:54:44] Updating DB.
[07:54:44] Added GW info to Database.
[07:54:45] AVX SQS Queue created.
[07:54:45] Initializing GW…..
[07:55:02] Copy configuration to GW aviatrix-transit-fnet-gw done.
[07:55:02] Copy new software to GW aviatrix-transit-fnet-gw done.
[07:55:03] Copy misc new software to GW aviatrix-transit-fnet-gw done.
[07:55:03] Copy scripts to GW aviatrix-transit-fnet-gw done.
[07:55:03] Copy sdk to GW aviatrix-transit-fnet-gw done.
[07:55:03] Copy libraries to GW aviatrix-transit-fnet-gw done.
[07:55:03] Copy gateway system data files is done.
[07:55:03] Installing software ….
[07:55:07] Issuing certificates …
[07:55:22] Issue certificates done
[07:55:52] GW software started.
[07:57:06] Software Installation done.
[07:57:08] Run self diagnostics done.
[07:57:11] Creating Firenet iLB
[07:57:23] initializing security rule
[07:57:24] GW security policy configured.
[07:57:38] Enable FireNet function.

Aviatrix Kickstart – Spin up Cloud Networks in Minutes – UI Mode

Kickstart deploys cloud and multi-cloud networks in minutes without any effort. Once the hub/spoke transit network is built in the cloud, it will act as a core networking layer on which one can add more use-cases as needed later.

The lightweight automation script deploys an Aviatrix Controller and an Aviatrix transit architecture in AWS (and optionally in Azure). Everything is self-contained in a Docker image; you do not need to install anything besides the Docker runtime on your laptop/desktop/VM/instance.

Important Note

  • This tool works best for brand-new Aviatrix Controller deployments
  • It is not recommended to launch the Controller if one is already deployed
  • If you have previously deployed an Aviatrix Controller under the AWS account, you will receive the following errors. You need to manually remove those roles and policies before moving forward
Error: Error creating IAM Role aviatrix-role-ec2: EntityAlreadyExists: Role with name aviatrix-role-ec2 already exists.
Error: Error creating IAM Role aviatrix-role-app: EntityAlreadyExists: Role with name aviatrix-role-app already exists.
Error: Error creating IAM policy aviatrix-assume-role-policy: EntityAlreadyExists: A policy called aviatrix-assume-role-policy already exists. Duplicate names are not allowed.
Error: Error creating IAM policy aviatrix-app-policy: EntityAlreadyExists: A policy called aviatrix-app-policy already exists. Duplicate names are not allowed.
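A cleanup sketch with the AWS CLI; the account ID in the policy ARNs is a placeholder, and the role-to-policy attachments shown are assumptions, so list the actual attachments first:

```shell
# Inspect what is attached before deleting anything
aws iam list-attached-role-policies --role-name aviatrix-role-ec2

# If aviatrix-role-ec2 sits in an instance profile, remove it from the profile first:
# aws iam remove-role-from-instance-profile \
#     --instance-profile-name aviatrix-role-ec2 --role-name aviatrix-role-ec2

# Detach policies, then delete the roles (ARNs below are placeholders)
aws iam detach-role-policy --role-name aviatrix-role-ec2 \
    --policy-arn arn:aws:iam::123456789012:policy/aviatrix-assume-role-policy
aws iam delete-role --role-name aviatrix-role-ec2

aws iam detach-role-policy --role-name aviatrix-role-app \
    --policy-arn arn:aws:iam::123456789012:policy/aviatrix-app-policy
aws iam delete-role --role-name aviatrix-role-app

# Finally delete the customer-managed policies themselves
aws iam delete-policy --policy-arn arn:aws:iam::123456789012:policy/aviatrix-assume-role-policy
aws iam delete-policy --policy-arn arn:aws:iam::123456789012:policy/aviatrix-app-policy
```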

Brief Deployment Instructions

Before you start the deployment process, you need to do/have following

  1. AWS accounts with root access
  2. AWS Access Key ID
  3. AWS Secret Access Key
  4. AWS Keypair name (for the test EC2 instances) in the region (default region in standard mode is us-east-2) where you are planning to deploy the Spoke VPCs
  5. Subscribe to Aviatrix Controller software in the AWS marketplace
  6. Install Docker and make sure Docker Desktop is running on your Mac, Linux, or Windows laptop, desktop, VM, or EC2 instance during the deployment process
  7. Run the following CLI commands on your laptop, desktop, VM, or EC2
    1. % docker volume create TF
    2. % docker run -v TF:/root -p 5000:5000 -d aviatrix/kickstart-gui

It should show you progress as follows

shahzadali@shahzad-ali ~ % docker run -v TF:/root -p 5000:5000 -d aviatrix/kickstart-gui
Unable to find image 'aviatrix/kickstart-gui:latest' locally
latest: Pulling from aviatrix/kickstart-gui
188c0c94c7c5: Pull complete 
269daf956265: Pull complete 
932e18d0c55d: Pull complete 
3eebf109acbf: Pull complete 
704fc01fe5c0: Pull complete 
1fff100b1be8: Pull complete 
b6b3de2ab177: Pull complete 
d760a63fff4d: Pull complete 
8eeeebe7f48f: Pull complete 
f5d70a31f2a8: Pull complete 
aa15b35f27d5: Pull complete 
98bf76eb1939: Pull complete 
51cbf27a70a8: Pull complete 
f1497051644d: Pull complete 
9c5a173e5d5c: Pull complete 
39acd96b15aa: Pull complete 
e4c46d457a8c: Pull complete 
Digest: sha256:1b510889995425b3f628a38ac81c056b52be6848d94944ddef94db3f00f3f628
Status: Downloaded newer image for aviatrix/kickstart-gui:latest
shahzadali@shahzad-ali ~ % 

In your web browser, open the Kickstart UI (published on local port 5000 by the docker run command above) and follow the Standard UI workflow

Detailed Deployment Instructions

The workflow is very simple to follow. The Azure step is optional.

After that, skip steps 6 and 7. This will complete the deployment


### Find the container process name using the docker ps command ###
shahzadali@shahzad-ali ~ % docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED        STATUS        PORTS                    NAMES
e773a53d34ca   aviatrix/kickstart-gui   "/bin/sh -c 'python3…"   17 hours ago   Up 17 hours>5000/tcp   awesome_hofstadter

### To log in to the container itself, use the docker exec command ###
shahzadali@shahzad-ali /Users % docker exec -it awesome_hofstadter bash

### To check the docker logs without logging into the container, use the docker logs command ###
shahzadali@shahzad-ali ~ % docker logs -f awesome_hofstadter

shahzadali@shahzad-ali ~ % docker logs -f awesome_hofstadter | more
[2021-01-11T22:13:32+0000] INFO __main__:24 Aviatrix kickstart has been started.
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
[2021-01-11T22:13:32+0000] INFO werkzeug:113  * Running on (Press CTRL+C to quit)
[2021-01-11T22:13:32+0000] INFO werkzeug:113  * Restarting with stat
[2021-01-11T22:13:32+0000] INFO __main__:24 Aviatrix kickstart has been started.
[2021-01-11T22:13:32+0000] WARNING werkzeug:113  * Debugger is active!
[2021-01-11T22:13:32+0000] INFO werkzeug:113  * Debugger PIN: 320-893-258
[2021-01-11T22:13:50+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:50] "ESC[37mGET / HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:50+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:50] "ESC[37mGET /static/css/main.16b366d6.chunk.css HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:50+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:50] "ESC[37mGET /static/js/main.c622dd7f.chunk.js HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:50+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:50] "ESC[37mGET /static/js/2.746a07c0.chunk.js HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:50+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:50] "ESC[37mGET /static/media/Roboto-Regular.03523cf5.ttf HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:51+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:51] "ESC[37mGET /static/media/Roboto-Light.0cea3982.ttf HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:51+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:51] "ESC[37mGET /static/media/Roboto-Bold.4f39c579.ttf HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:51+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:51] "ESC[37mGET /api/v1.0/get-statestatus HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:51+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:51] "ESC[37mGET /static/media/Roboto-Medium.13a29228.ttf HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:51+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:51] "ESC[37mGET /favicon.ico HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:51+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:51] "ESC[37mGET /logo192.png HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:54+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:54] "ESC[37mPOST /api/v1.0/mode-selection HTTP/1.1ESC[0m" 200 -
[2021-01-11T22:13:54+0000] INFO werkzeug:113 - - [11/Jan/2021 22:13:54] "ESC[37mGET /api/v1.0/get-statestatus HTTP/1.1ESC[0m" 200 -

shahzadali@shahzad-ali ~ % docker logs -f awesome_hofstadter | tail
/root/kickstart_web.sh: line 743: export: `': not a valid identifier

To delete the docker volume

To delete the docker volume, try the following

shahzadali@shahzad-ali ~ % docker volume remove TF
Error response from daemon: remove TF: volume is in use - [4a75b428ff5badf368f1dc9761c51b903652d8cfa4da70b2bdd543be3d352fea, 7f54de5c900d28d23ea61965423394534fe40dd769b20ff78f3a31c1fa98987d]
shahzadali@shahzad-ali ~ %

If you get this error, then try the following

shahzadali@shahzad-ali ~ % docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N] y
Deleted Containers:
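The long IDs in the “volume is in use” error are the containers still referencing the volume. As a more targeted alternative to a full prune, you can remove just those containers first (a sketch; `-f` force-removes running containers, so only do this if the Kickstart container is disposable):

```shell
# Find all containers (running or stopped) that mount the TF volume,
# force-remove them, then remove the volume itself
docker ps -a --filter volume=TF -q | xargs docker rm -f
docker volume remove TF
```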

LAB6 – GCP Remote User VPN / Client VPN

Business Objective

An important security requirement for GCP VPCs is to effectively control remote user access in a policy-based manner. The cloud and the COVID-19 pandemic have made most users “remote.” The “remote” label applies not only to employees who are out of the office, but also to developers, contractors, and partners, whether they are in the office or around the globe.

Note: User VPN, Client VPN or OpenVPN are interchangeable terms.

Remote User VPN / Client VPN Overview

While a bastion host using an SSH tunnel is an easy way to encrypt network traffic and provide direct access, most companies looking for more robust networking will want to invest in a SAML-based VPN solution, because:

  • Single-instance VPN servers in each VPC result in tedious certificate management
  • The lack of centralized enforcement gives rise to questions like “who can access what VPC?”
  • With more than a dozen users and more than a few VPCs, management and auditing of user access can become a major challenge

What’s needed is an easily managed, secure, cost-effective solution. Aviatrix provides a cloud-native and feature-rich client VPN solution.

  • The solution is based on OpenVPN® and is compatible with all OpenVPN® clients
  • In addition, Aviatrix provides its own client that supports SAML authentication directly from the client
  • Each VPN user can be assigned to a profile with access privileges – down to hosts, protocols and ports
  • Authentication via any identity provider: LDAP/AD, Duo, Okta, Centrify, MFA, client SAML, and other integrations
  • Centralized visibility of all users, connection history, and all certificates across your network

LAB Topology and Objective

  • This LAB is not dependent on any previous labs
    • This LAB will build on the topology we have already deployed in the previous LABs. The following is what has been deployed already.
  • A GCP Spoke gateway (gcp-spoke-vpn) is already deployed in the gcp-spoke-vpn-vpc
    • This is needed to make sure remote users, employees, developers, or partners have a clear demarcation point (called the Cloud Access layer in the MCNA architecture) before they access enterprise or corporate resources/workloads/VMs/etc.


  • Students will use their laptops to connect to this lab topology using the Aviatrix SAML VPN client and will become part of the topology. This will allow them to access any resource using its private IP address

Deploy Smart SAML Remote User VPN Solution

Deploy User VPN

Controller –> Gateway –> Create New (with the following information)

While creating this gateway, you must select the “VPN Access” checkbox. This makes the gateway an OpenVPN gateway for the Aviatrix User VPN solution

Common Mistake

The process can take up to ~10 minutes to complete. The deployment time is hard to predict, even when you deploy in the same region and same cloud every time.

Once the gateway is deployed, you can see the status and the GCP LB address that was created as part of the automation.

After the gateway deployment, the topology looks like the following

GCP TCP LB Configuration (Reference Only)

The following screenshots show the details of the TCP LB in GCP that was created by Aviatrix automation. The LB helps scale out the solution without any disruption to user profiles or certificates.

Notice: Students do not have access to these details. They are shared here for reference purposes only.

Profile Based Zero-Trust Access Control

Each VPN user can be assigned to a profile that is defined by access privileges to network, host, protocol and ports. The access control is dynamically enforced when a VPN user connects to the public cloud via an Aviatrix VPN gateway.

Create a new profile: Controller –> OpenVPN –> Profile

Create a policy to allow users access only to VMs in gcp-spoke-vpc-1.
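Conceptually, a profile's policy is a small ordered rule list with a base (default) action. The fragment below is illustrative only: the field layout loosely mirrors the Controller UI, and the CIDR is a placeholder for the actual gcp-spoke-vpc-1 range, not a value from this lab:

```
Base Policy: deny all                       # default: block everything
Rule 1: Action=Allow  Protocol=all  Target=<gcp-spoke-vpc-1 CIDR>  Port=0:65535
```

With a deny-all base policy, remote users can only reach what the allow rules explicitly grant, which is what makes the access model “zero trust.”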

Now add a remote user and assign the profile to it. Make sure to provide a correct email address here.

Add a New VPN User

Controller –> OpenVPN –> Add a New VPN User

Download the .ovpn profile file from the Aviatrix Controller

Now download the Aviatrix OpenVPN Client: https://docs.aviatrix.com/Downloads/samlclient.html

MAC: https://s3-us-west-2.amazonaws.com/aviatrix-download/AviatrixVPNClient/AVPNC_mac.pkg
Windows: https://s3-us-west-2.amazonaws.com/aviatrix-download/AviatrixVPNClient/AVPNC_win_x64.exe
Linux: Check the Download link here
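Because the solution is compatible with standard OpenVPN® clients (see the overview above), Linux users can also connect with the stock openvpn CLI instead of the Aviatrix client. A minimal sketch, where `myuser.ovpn` is a placeholder for the profile file downloaded from the Controller:

```shell
# Connect with the stock OpenVPN client using the downloaded profile
# (myuser.ovpn is a placeholder filename; requires root privileges)
sudo openvpn --config myuser.ovpn
```

Note that the stock client does not perform SAML authentication from the client side; that capability is specific to the Aviatrix client.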

Once the VPN client is connected, your laptop is part of the network.

Testing and Verification

  • Ping the VM in gcp-spoke-vpc-2
    • This should not ping because, per the zero-trust profile, remote users are not allowed to access any resources except those in gcp-spoke-vpc-1
  • Ping the VM in gcp-spoke-vpc-1
    • Most likely it will not ping either, because the “gcp-spoke-vpn” VPC is not assigned to any MCNS domain yet
shahzadali@shahzad-ali ~ % ping
PING ( 56 data bytes
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal ( Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 7c18   0 0000  01  01 544d 

Request timeout for icmp_seq 0
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal ( Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 b6da   0 0000  01  01 198b 

Request timeout for icmp_seq 1
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal ( Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 fcc1   0 0000  01  01 d3a3 

--- ping statistics ---
3 packets transmitted, 0 packets received, 100.0% packet loss
shahzadali@shahzad-ali ~ % 

Now change the MCNS settings and assign gcp-spoke-vpn to the Green domain.

The new topology will look like the following

Connectivity is now established and ping starts working, as the following output shows.

Request timeout for icmp_seq 7
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal ( Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 2b8e   0 0000  01  01 a4d7 

Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
Request timeout for icmp_seq 10
64 bytes from icmp_seq=11 ttl=60 time=70.931 ms
64 bytes from icmp_seq=12 ttl=60 time=63.498 ms
64 bytes from icmp_seq=13 ttl=60 time=62.943 ms
64 bytes from icmp_seq=14 ttl=60 time=69.129 ms
64 bytes from icmp_seq=15 ttl=60 time=62.002 ms
64 bytes from icmp_seq=16 ttl=60 time=68.655 ms
--- ping statistics ---
17 packets transmitted, 6 packets received, 64.7% packet loss
round-trip min/avg/max/stddev = 62.002/66.193/70.931/3.477 ms
shahzadali@shahzad-ali ~ % 


  • Aviatrix User VPN is a powerful solution
  • MCNS provides additional security beyond the profile-based User VPN

LAB5 – Overlapping Subnet / IP (Duplicate IP) Solution in GCP


ACE Enterprise in GCP wants to connect to different partners to consume SaaS services. These partners could be in a physical DC or branch, or in a VPC/VNET in a cloud such as GCP/AWS/Azure. ACE cannot dictate or control the IPs/subnets/CIDRs those partners have configured and must support “Bring Your Own IP,” which might overlap with what is already configured in GCP.

In our topology, the GCP Spoke3 VPC subnet overlaps with the AWS Spoke2 VPC subnet. We need to make sure that the VM in the GCP Spoke3 VPC is able to communicate with the EC2 instance in the AWS Spoke2 VPC.

Topology Modifications

In order for this lab to work, we simulate the AWS Spoke2 VPC as a remote site/branch.

  • Step#1: Detach the aws-spoke2 gateway from transit
  • Step#2: Delete the aws-spoke2-gw
  • Step#3: Deploy a standard Aviatrix Gateway (s2c-gw) in aws-spoke2-vpc


Controller –> Multi-Cloud Transit –> Setup –> Detach AWS VPC2


Controller –> Gateway –> Highlight AWS VPC2 Spoke GW –> Delete


Controller –> Gateway –> Add New (use the following screen)

This gateway could be any router, firewall, or VPN device in an on-prem or cloud location.


  • The following diagram shows the topology after the modifications are done
  • In the topology, notice that there is no path between gcp-spoke3-vpc and the s2c-gw in aws-spoke2-vpc
  • Also notice the local and remote virtual IPs allocated for the overlapping VPCs/sites. These are needed so that the overlapping VPCs/sites can communicate with each other using those virtual IPs
    • This is possible using Aviatrix patented technology: as an enterprise you do not need to worry about programming advanced NAT; instead, the Aviatrix intent-based “Mapped NAT” policy automatically takes care of all the background VPC/VNET route programming, gateway routing, secure tunnel creation, certificate exchange, SNAT, DNAT, etc.


A bi-directional setup is needed for this to work. We will create two connections in the coming configuration steps.

Connection From GCP to Partner Site (AWS) – Leg1

Controller –> SITE2CLOUD –> Add a new connection (with following details)

VPC ID / VNet Name: vpc-gcp-spoke-3
Connection Type: Mapped
Connection Name: partner1-s2c-leg1
Remote Gateway Type: Aviatrix
Tunnel Type: Policy-based
Primary Cloud Gateway: gcp-spoke-3
Remote Gateway IP Address: Check Controller’s Gateway section to find the public ip address
Remote Subnet (Real):
Remote Subnet (Virtual):
Local Subnet (Real):
Local Subnet (Virtual):

This will create the first leg of the connection from the Cloud (GCP) to the Partner Site (AWS). The tunnel will stay down until the other end is configured.

Download the Configuration

Aviatrix Controller provides a template that can be used to configure the remote router/firewall. Click on the Site-to-Cloud connection that you have just created. Click “EDIT” and then download the configuration for Aviatrix.

The contents of the downloaded file (vpc-vpc-gcp-spoke-3~-~cne-pod24-partner1-s2c-leg1.txt) are shown below for reference purposes. We will import this file in the next step.

{
  "ike_ver": "1",
  "name": "partner1-s2c-leg1",
  "type": "mapped",
  "tunnel_type": "policy",
  "peer_type": "avx",
  "ha_status": "disabled",
  "null_enc": "no",
  "private_route_enc": "no",
  "PSK": "Password123!",
  "ph1_enc": "AES-256-CBC",
  "ph2_enc": "AES-256-CBC",
  "ph1_auth": "SHA-256",
  "ph2_auth": "HMAC-SHA-256",
  "ph1_dh_group": "14",
  "ph2_dh_group": "14",
  "ph2_lifetime": "3600",
  "remote_peer": "",
  "remote_peer_private_ip": "",
  "local_peer": "",
  "remote_subnet_real": "",
  "local_subnet_real": "",
  "remote_subnet": "",
  "local_subnet": "",

  "tun_name": "",
  "highperf": "false",
  "ovpn": "",
  "enable_bgp": "false",
  "bgp_local_ip" : "",
  "bgp_remote_ip" : "",
  "bgp_local_asn_number": "0",
  "bgp_remote_as_num": "0",
  "bgp_neighbor_ip_addr": "",
  "bgp_neighbor_as_num": "0",
  "tunnel_addr_local": "",
  "tunnel_addr_remote": "",
  "activemesh": "yes"
}

Close the dialogue box now.

Connection From Partner Site (AWS) to GCP – Leg2

Now we need to create a new connection and import the file we just downloaded.

Controller –> SITE2CLOUD –> Add a new connection –> Select “aws-west-2-spoke-2” VPC –> Import

VPC ID / VNet Name: aws-west-2-spoke-2
Connection Type: auto-populated (Mapped)
Connection Name: partner1-s2c-leg2
Remote Gateway Type: Aviatrix
Tunnel Type: auto-populated (Policy-based)
Algorithms: auto-populated (do not change these settings)
Primary Cloud Gateway: auto-populated (aws-spoke2-vpc-s2c-gw)
Remote Gateway IP Address: auto-populated
Remote Subnet (Real): auto-populated
Remote Subnet (Virtual): auto-populated
Local Subnet (Real): auto-populated
Local Subnet (Virtual): auto-populated

Now it will take about a minute for both tunnels to come up.

This makes the topology look like the following


Now you can ping the VM on the overlapping AWS subnet from GCP. Make sure to use the virtual subnet for AWS, keeping the last octet of the VM the same.

ubuntu@vm-gcp-spoke-3:~$ ifconfig
ens4 Link encap:Ethernet HWaddr 42:01:0a:2a:00:82
inet addr: Bcast: Mask:
inet6 addr: fe80::4001:aff:fe2a:82/64 Scope:Link
RX packets:1492666 errors:0 dropped:0 overruns:0 frame:0
TX packets:871540 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:518583431 (518.5 MB) TX bytes:111566034 (111.5 MB)

ubuntu@vm-gcp-spoke-3:~$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=62 time=83.8 ms
64 bytes from icmp_seq=2 ttl=62 time=83.5 ms
64 bytes from icmp_seq=3 ttl=62 time=83.6 ms
64 bytes from icmp_seq=4 ttl=62 time=83.6 ms
64 bytes from icmp_seq=5 ttl=62 time=84.2 ms
--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 83.554/83.784/84.203/0.391 ms

This concludes the lab. The final topology looks like the following

Note: This lab does not depend on previous labs.