LAB4 – GCP FQDN Based Egress Security

This lab demonstrates how to provide Fully Qualified Domain Name (FQDN) based egress filtering security using Aviatrix. Only the FQDNs permitted in the configured policy are allowed.

Egress FQDN Filtering Overview

Aviatrix FQDN Egress is a highly available security service designed specifically for workloads and applications in
the public clouds.

Aviatrix Egress FQDN Filtering is centrally managed by the Aviatrix Controller and executed on Aviatrix FQDN gateways
in the VNet/VPC/VCN, in either a distributed or a centralized architecture. All internet-bound traffic (TCP/UDP,
including HTTP/HTTPS/SFTP) is first discovered; based on the results, the admin can create egress filters using a
whitelist or blacklist model.

Egress FQDN filtering allows organizations to achieve PCI compliance by limiting applications' access to
approved FQDNs. It is a common replacement for manual Squid-proxy-style solutions. There are several
ways to deploy Egress FQDN filtering depending on requirements.

This lab will use the existing GCP Spoke GWs to provide filtering that protects instances which sit on a private subnet but require egress security. For a more scalable solution, enterprises opt for a dedicated Egress FQDN GW rather than reusing the existing Spoke GW for this function.

Topology

  • The workload in gcp-spoke2-vpc will follow a zero trust security model
    • The workload/VM in gcp-spoke2-vpc will only have access to the https://*.ubuntu.com and https://*.google.com FQDNs.
    • The rest of the traffic will be blocked by the base zero trust policy
  • We will configure the gcp-spoke2-gw as the Egress FQDN GW as well, to enforce this security policy
  • We will use the VM in gcp-spoke3-vpc as a “Jump Host” for this testing

Enable Egress FQDN Filtering

Controller –> Security –> Egress Control –> New TAG

Controller –> Security –> Egress Control –> Egress FQDN Filter –> Edit “Allow-List” TAG –> Add New

Now click “SAVE”, then “UPDATE” and CLOSE.

Now make sure that the base policy is “White”, which stands for “Zero Trust”. This ensures that only the FQDNs in the “Allowed List” are accessible and the rest of the FQDNs are blocked.
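
The zero-trust intent above can be sketched in a few lines. The following is an illustrative Python model of whitelist matching only, not the gateway's actual engine; the gateway's exact wildcard semantics (for example, whether "*.ubuntu.com" also covers the bare "ubuntu.com") may differ:

```python
from fnmatch import fnmatch

# FQDN patterns from this lab's Allow-List TAG; everything else is
# dropped under the "White" (zero trust) base policy.
ALLOW_LIST = ["*.ubuntu.com", "*.google.com"]

def is_allowed(hostname: str) -> bool:
    """Return True if the hostname matches any allowed pattern."""
    return any(fnmatch(hostname, pattern) for pattern in ALLOW_LIST)

print(is_allowed("www.ubuntu.com"))    # True
print(is_allowed("cloud.google.com"))  # True
print(is_allowed("www.espn.com"))      # False
```

These three hostnames are exactly the ones exercised with wget later in this lab.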

Now we will attach this filter policy to gcp-spoke-2-gw and then enable it.

It will look like the following:

The status is still disabled; now we need to enable it.

Testing and Verification

We have completed the following topology

  • ssh into the gcp-spoke3 VM using its public IP address (vm_gcp_public_ip_spoke3)
    • User: ubuntu / pass: Password123!
  • Then, from there, ssh into the gcp-spoke2 VM using its private IP address (vm_gcp_private_ip_spoke2)
    • User: ubuntu / pass: Password123!
  • gcp-spoke2 is where we have enforced the Egress FQDN policy
    • Since both spoke2 and spoke3 are in the Blue segment, they can communicate with each other. If you were to try to ssh into the gcp-spoke2 VM from the gcp-spoke1 VM, it would not work
shahzadali@shahzad-ali ~ % ssh ubuntu@34.86.180.56

ubuntu@34.86.180.56's password:
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1087-gcp x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
3 packages can be updated.
0 updates are security updates.
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Sat Jan 2 16:41:56 2021 from 172.124.233.126
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@vm-gcp-spoke-3:~$ ifconfig

ens4 Link encap:Ethernet HWaddr 42:01:0a:2a:00:82
inet addr:10.42.0.130 Bcast:10.42.0.130 Mask:255.255.255.255
inet6 addr: fe80::4001:aff:fe2a:82/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
RX packets:1461966 errors:0 dropped:0 overruns:0 frame:0
TX packets:846760 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:510003616 (510.0 MB) TX bytes:107570824 (107.5 MB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:126 errors:0 dropped:0 overruns:0 frame:0
TX packets:126 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14552 (14.5 KB) TX bytes:14552 (14.5 KB)
ubuntu@vm-gcp-spoke-3:~$


ubuntu@vm-gcp-spoke-3:~$ ssh ubuntu@10.20.12.130

ubuntu@10.20.12.130's password:
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1087-gcp x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
3 packages can be updated.
0 updates are security updates.
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Sat Jan 2 16:42:30 2021 from 10.20.12.2
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.
ubuntu@vm-gcp-spoke-2:~$
ubuntu@vm-gcp-spoke-2:~$

ubuntu@vm-gcp-spoke-2:~$ wget https://www.google.com
--2021-01-02 17:46:12-- https://www.google.com/
Resolving www.google.com (www.google.com)… 74.125.197.147, 74.125.197.103, 74.125.197.104, …
Connecting to www.google.com (www.google.com)|74.125.197.147|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.21’
index.html.21 [ <=> ] 12.54K --.-KB/s in 0s
2021-01-02 17:46:12 (29.4 MB/s) - ‘index.html.21’ saved [12844]

ubuntu@vm-gcp-spoke-2:~$ wget https://cloud.google.com
--2021-01-02 17:46:59-- https://cloud.google.com/
Resolving cloud.google.com (cloud.google.com)… 74.125.20.113, 74.125.20.102, 74.125.20.100, …
Connecting to cloud.google.com (cloud.google.com)|74.125.20.113|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 706920 (690K) [text/html]
Saving to: ‘index.html.22’
index.html.22 100%[==============================================================>] 690.35K 3.44MB/s in 0.2s
2021-01-02 17:47:00 (3.44 MB/s) - ‘index.html.22’ saved [706920/706920]
ubuntu@vm-gcp-spoke-2:~$

ubuntu@vm-gcp-spoke-2:~$ wget https://www.ubuntu.com
--2021-01-02 17:48:34-- https://www.ubuntu.com/
Resolving www.ubuntu.com (www.ubuntu.com)… 91.189.88.180, 91.189.88.181, 91.189.91.45, …
Connecting to www.ubuntu.com (www.ubuntu.com)|91.189.88.180|:443… connected.
HTTP request sent, awaiting response… 301 Moved Permanently
Location: https://ubuntu.com/ [following]
--2021-01-02 17:48:35-- https://ubuntu.com/
Resolving ubuntu.com (ubuntu.com)… 91.189.88.180, 91.189.91.44, 91.189.91.45, …
Connecting to ubuntu.com (ubuntu.com)|91.189.88.180|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 121017 (118K) [text/html]
Saving to: ‘index.html.23’
index.html.23 100%[==============================================================>] 118.18K 319KB/s in 0.4s
2021-01-02 17:48:36 (319 KB/s) - ‘index.html.23’ saved [121017/121017]
ubuntu@vm-gcp-spoke-2:~$

Now if we try to access any other FQDN, it should fail. Note that wget hangs after “connected.”: the TCP handshake completes, but the gateway drops the session once it inspects the hostname in the TLS Client Hello, as the FQDN logs further below show.

ubuntu@vm-gcp-spoke-2:~$ wget https://www.espn.com
--2021-01-02 17:49:27-- https://www.espn.com/
Resolving www.espn.com (www.espn.com)… 13.224.10.82, 13.224.10.114, 13.224.10.88, …
Connecting to www.espn.com (www.espn.com)|13.224.10.82|:443… connected.


ubuntu@vm-gcp-spoke-2:~$ wget https://www.cnn.com
--2021-01-02 17:51:00-- https://www.cnn.com/
Resolving www.cnn.com (www.cnn.com)… 151.101.1.67, 151.101.65.67, 151.101.129.67, …
Connecting to www.cnn.com (www.cnn.com)|151.101.1.67|:443… connected.

Egress FQDN Stats on Controller

Controller –> Security –> FQDN Stats

Per Gateway Stats

Egress FQDN Search

==============================
Search results on Gateway gcp-spoke-2
==============================
2021-01-02T16:42:54.990606+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule3[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.88 hostname=www.espn.com state=MATCHED drop_reason=BLACKLISTED Rule=*.espn.com,SourceIP:IGNORE;0;0;443
2021-01-02T16:43:12.620897+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=MATCHED drop_reason=BLACKLISTED Rule=*.espn.com,SourceIP:IGNORE;0;0;443
2021-01-02T17:49:27.437085+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED

2021-01-02T17:49:41.679243+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: message repeated 7 times: [ AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED]
2021-01-02T17:49:55.759092+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED
2021-01-02T17:50:02.669462+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: message repeated 6 times: [ AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED]
2021-01-02T17:50:05.926066+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED
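
The drop logs above follow a key=value layout, so they are easy to post-process outside the Controller. Below is a hedged Python sketch; the regex is ours for illustration, not an official Aviatrix parser:

```python
import re

# One of the gateway log lines from the search output above.
LOG_LINE = (
    "2021-01-02T17:49:27.437085+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: "
    "AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  "
    "Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 "
    "hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED"
)

# Collect every key=value pair into a dict for easy filtering/reporting.
fields = dict(re.findall(r"(\w+)=(\S+)", LOG_LINE))
print(fields["hostname"], fields["state"], fields["drop_reason"])
# www.espn.com NO_MATCH NOT_WHITELISTED
```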

CoPilot Egress FQDN Stats

https://copilot-pod24.mcna.cc/#/login   user: copilot / pass: Copilot123!  (Read-Only on Controller)

CoPilot –> Security –> Egress

CoPilot Live Status

CoPilot Search

LAB3 – GCP Multi-Cloud Network Segmentation (MCNS)

Meeting security compliance and audit requirements takes a combination of methods, and network segmentation is one of them: providing network security segmentation is a critical business requirement. Aviatrix MCNS has helped many customers achieve it.

So far we have built the following topology

Our objective in this lab is to segment the VPCs in GCP based on workload. Here are the business requirements:

  • There are two types of workloads present in GCP, called Green and Blue
  • The workloads in Blue and Green must not be allowed to communicate with each other
  • Workloads within the Blue and Green segments must be allowed to communicate with each other
  • These segments must also extend to AWS
  • These segments should also extend to the on-prem data centers, to provide segmentation for hybrid connectivity and the respective workloads deployed in the on-premises DC locations

The following is how the final topology will look after all the business objectives are met

Enable MCNS on Transit gateways

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Enable for “gcp-transit”

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Enable for “aws-transit-gw-us-west-2”

Create Multi-Cloud Security Domains (MCSD)

Create two MCSDs, Green and Blue. These two domains are not connected to each other by default.

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Create MCSD

Repeat for Blue

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Create MCSD

MCNS Connection Policy

The following screen shows that Green and Blue are not connected as per their Security or “Connection Policy”.

Assign GCP and AWS VPC to MCSD

In order to enforce the intent/policy we just created, we need to assign the VPCs to their respective security domains based on the business policy.

  • gcp-spoke-1 :: Green
  • gcp-spoke-2 :: Blue
  • gcp-spoke-3 :: Blue
  • gcp-to-dc-route-1 :: Green
  • aws-spoke-1 :: Green
  • aws-spoke-2 :: Blue
  • aws-to-dc-router-2 :: Green

Controller –> Multi-Cloud Transit –> Segmentation –> Build –> Associate Aviatrix Spoke to MCSD

Repeat this step as per the business requirement

Controller –> Multi-Cloud Transit –> Segmentation –> List –> Domains to verify the configuration
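
The segmentation intent can be summarized in a few lines of Python. This is only an illustrative model of the connection-policy logic (the domain assignments are taken from the list above; the Controller, not this code, does the enforcement):

```python
# Domain assignments from the lab's business policy.
DOMAIN = {
    "gcp-spoke-1": "Green",
    "gcp-spoke-2": "Blue",
    "gcp-spoke-3": "Blue",
    "gcp-to-dc-route-1": "Green",
    "aws-spoke-1": "Green",
    "aws-spoke-2": "Blue",
    "aws-to-dc-router-2": "Green",
}

# No Green<->Blue connection policy exists, so this set is empty.
CONNECTED = set()

def can_talk(a: str, b: str) -> bool:
    """Same domain always talks; different domains need a connection policy."""
    da, db = DOMAIN[a], DOMAIN[b]
    return da == db or (da, db) in CONNECTED or (db, da) in CONNECTED

print(can_talk("gcp-spoke-2", "gcp-spoke-3"))  # True  (Blue <-> Blue)
print(can_talk("gcp-spoke-1", "gcp-spoke-2"))  # False (Green <-> Blue)
print(can_talk("gcp-spoke-1", "aws-spoke-1"))  # True  (Green across clouds)
```

Adding a Green/Blue connection policy in the Controller corresponds to adding ("Green", "Blue") to CONNECTED, which is exactly the toggle exercised at the end of this lab.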

Verify the Connectivity

Now ping from vm_gcp_private_ip_spoke1 (Green segment) to the other test machines (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 Blue Segment (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 Blue Segment (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 Green Segment (10.101.0.84) – should work
ubuntu@vm-gcp-spoke-1:~$ ping 10.20.12.130
PING 10.20.12.130 (10.20.12.130) 56(84) bytes of data.
From 10.20.11.2 icmp_seq=1 Time to live exceeded
From 10.20.11.2 icmp_seq=2 Time to live exceeded
From 10.20.11.2 icmp_seq=3 Time to live exceeded
^C
--- 10.20.12.130 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms

ubuntu@vm-gcp-spoke-1:~$ ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.20.11.2 icmp_seq=1 Time to live exceeded
From 10.20.11.2 icmp_seq=2 Time to live exceeded
From 10.20.11.2 icmp_seq=3 Time to live exceeded
From 10.20.11.2 icmp_seq=4 Time to live exceeded
^C
--- 10.42.0.130 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3003ms

ubuntu@vm-gcp-spoke-1:~$ ping 10.101.0.84
PING 10.101.0.84 (10.101.0.84) 56(84) bytes of data.
64 bytes from 10.101.0.84: icmp_seq=1 ttl=60 time=63.3 ms
64 bytes from 10.101.0.84: icmp_seq=2 ttl=60 time=61.3 ms
64 bytes from 10.101.0.84: icmp_seq=3 ttl=60 time=61.5 ms
64 bytes from 10.101.0.84: icmp_seq=4 ttl=60 time=61.3 ms
^C
--- 10.101.0.84 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 61.376/61.914/63.303/0.806 ms
ubuntu@vm-gcp-spoke-1:~$ 

Now keep the ping running from the gcp-spoke-1 VM to 10.20.12.130 and change the policy to connect Green and Blue. Notice that the ping starts working.

Now change the policy back to how it was before, so that Blue is not allowed to talk to Green

LAB2 – GCP Multi-Cloud Network Transit / Hub-Spoke

In this lab, we will build the hub-and-spoke network in GCP. All the GCP VPCs and their respective subnets are already created for you to save time.

GCP Spoke-2 GW VPC Network/Subnet Creation

This step is already done for you, so please do not attempt it.

The Controller can create those subnets using the API/Terraform as well, which makes it easy to include in your CI/CD pipeline.

Controller –> Useful Tools –> Create A VPC –> Add New
Controller –> Useful Tools –> Create A VPC –> Add New Row

You should see the VPCs already created for you.

Controller –> Useful Tools –> VPC Tracker

Now click “VIEW GCLOUD SUBNET” to see further details

Deploy GCP Spoke-2 Gateway

Since we have all the subnets created properly, it is time to deploy the GCP Spoke-2 GW using the Controller.

Controller –> Multi-Cloud Transit –> Setup

Connect GCP Spokes to GCP Transit

At this stage we have 4 spokes in GCP

  • gcp-spoke1 –> connected to gcp transit gw
  • gcp-spoke2 –> not connected to gcp transit gw
  • gcp-spoke3 –> connected to gcp transit gw (this will be used for user-vpn later in the lab)
  • gcp-spoke4 –> not connected to gcp transit gw

Log in to CoPilot (user: copilot / pass: Copilot123!) now to see the connectivity in real time

In this part of the lab, we will connect the remaining spoke gateways to the Aviatrix transit gw

Controller –> Multi-Cloud Transit –> Setup –> Attach Spoke Gateway to Transit Network

Spoke#2

Spoke#3

Now notice that the topology has changed and all GCP spokes are connected

Verify GCP Spoke Connectivity

Now ping from vm_gcp_private_ip_spoke1 (user: ubuntu / pass: Password123!) to the other test machines (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should work
    • using Aviatrix hub/spoke
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should work
    • using Aviatrix hub/spoke

Connect AWS Spoke to AWS-Transit

If we look at the AWS side, we have aws-spoke1 connected to aws-transit, but spoke2 is not yet connected, as shown in the topology below

Now we will use the Aviatrix Controller to establish this connection. The aws-spoke2 gateway is already deployed for you

Controller –> Multi-Cloud Transit –> Setup –> Attach Spoke Gateway to Transit Network

After this setup, the following is the topology

Verify AWS Spoke Connectivity

Now Ping from vm_gcp_private_ip_spoke1 (user: ubuntu / pass: Password123!) to AWS test EC2s (as listed below) and check the connectivity

  • vm_aws_private_ip_spoke1 (10.101.0.84) – should work
    • This works because the traffic is routed via the dc-router-1. There is a private link between dc-router-1 and dc-router-2 connecting GCP and AWS using services like Equinix/Megaport/Pureport/Epsilon/etc.
    • Use tracepath or traceroute ($sudo apt install traceroute) to confirm
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work
    • Because this subnet overlaps with the gcp spoke-vpc3 subnet
    • We will fix this issue later in the lab
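
The overlap called out above can be confirmed with Python's ipaddress module. The /24 CIDRs below are an assumption inferred from the lab's test IPs (10.42.0.130 and 10.42.0.84), not taken from the pod's actual VPC definitions:

```python
from ipaddress import ip_network

gcp_spoke3_vpc = ip_network("10.42.0.0/24")  # assumed CIDR for gcp-spoke3-vpc
aws_spoke2_vpc = ip_network("10.42.0.0/24")  # assumed CIDR for the aws-spoke2 VPC

# Overlapping CIDRs cannot both be routed through the transit,
# which is why vm_aws_private_ip_spoke2 is unreachable for now.
print(gcp_spoke3_vpc.overlaps(aws_spoke2_vpc))  # True
```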

The following output shows the traffic path from the gcp-spoke-1 test VM to the AWS test EC2 in aws-spoke1

ubuntu@vm-gcp-spoke-1:~$ tracepath 10.101.0.84
 1?: [LOCALHOST]                                         pmtu 1460
 1:  gcp-spoke-1.c.cne-pod24.internal                      1.563ms 
 1:  gcp-spoke-1.c.cne-pod24.internal                      0.260ms 
 2:  gcp-spoke-1.c.cne-pod24.internal                      0.256ms pmtu 1396
 2:  10.20.3.2                                             1.846ms 
 3:  10.20.3.2                                             0.775ms pmtu 1390
 3:  169.254.100.2                                        51.410ms 
 4:  172.16.0.2                                           55.184ms 
 5:  10.10.0.93                                           75.906ms 
 6:  10.101.0.70                                          77.206ms 

Multi-Cloud Transit Peering (Connecting GCP and AWS)

Now that we have set up all the gateways and built the hub-spoke (transit) in GCP, it is time to connect GCP with AWS. This peering is secure and encrypted, and it takes care of a number of configuration options and best practices that are needed for such complex connectivity. You will appreciate the simplicity of doing that.

Select Multi-Cloud Transit –> Transit Peering –> ADD NEW

Note that the order of cloud selection does not matter here. After a few seconds, the status will turn green as shown in the screenshot below.

Verify Transit Connectivity

Ping from vm_gcp_private_ip_spoke1 (user: ubuntu / pass: Password123!) to the GCP and AWS VMs (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should work
    • using Aviatrix hub/spoke
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should work
    • using Aviatrix hub/spoke
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work
    • Because this subnet overlaps with the gcp spoke-vpc3 subnet
    • We will fix this issue later in the lab
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should work
    • This time the packet will use gcp to aws aviatrix transit gateway route for connectivity
    • Use tracepath or traceroute ($sudo apt install traceroute) to confirm
    • In case this link goes down, the connectivity will be provided via the DC router DCI link as a backup. This setup is important for Business Continuity and Disaster Recovery
ubuntu@vm-gcp-spoke-1:~$ traceroute 10.101.0.84
traceroute to 10.101.0.84 (10.101.0.84), 30 hops max, 60 byte packets
1 gcp-spoke-1.c.cne-pod24.internal (10.20.11.2) 1.322 ms 1.354 ms 1.413 ms
2 10.20.3.2 (10.20.3.2) 2.684 ms 2.708 ms 2.710 ms
3 10.10.0.93 (10.10.0.93) 62.072 ms 62.032 ms 62.032 ms
4 10.101.0.70 (10.101.0.70) 61.975 ms 61.981 ms 61.895 ms

Controller –> Multi-Cloud Transit –> List –> Transit Gateway –> gcp-transit –> Show Details

The following screen shows the best path available to reach 10.101.0.84

Controller –> Multi-Cloud Transit –> List –> Transit Gateway –> gcp-transit –> Show Details –> Route Info DB Details

The following screen shows that 10.101.0.0/24 was received via two paths and that transit peering was selected as the best path

This completes the transit topology and testing. You can verify it in CoPilot as well.

LAB1 – Google Cloud and Aviatrix Testing/POC LAB Guide

Introduction

This document is the lab guide for the GCP Test Flight Project. The intended audience is anyone with basic GCP knowledge. It is GCP focused, with the connection to AWS as an optional component of the cloud.

Topology

The following is the starting topology. Some components are pre-built to save time.

Once you finish all the lab steps, the following is what it will look like

Main Use-Cases Covered

  • Cloud and Multi-Cloud Transit (Hub and Spoke) connectivity
  • Multi-Cloud Network Segmentation (MCNS)
  • Egress FQDN
  • User-VPN
  • Multi-Cloud Transit with AWS
  • Policy Based Remote User SAML/SSL VPN
  • Hybrid / On-Premise Connectivity (S2C)
  • Traffic Engineering with SD and BGP advanced knobs
  • Day2 Operations, troubleshooting and monitoring (Aviatrix CoPilot)

Warning / Pre-Requisite / Notes

  • Do not change the password of any device or server in the lab pod
  • Do not change controller password
  • In most of the places:
    • The Aviatrix Controller is referred to as “Controller”
    • The Aviatrix Gateway is referred to as “Gateway”

LAB1 – Verify Connectivity

Make sure the lab is in good standing. Verify the following tasks by logging into the Aviatrix Controller UI. Make sure you log in to your own pod; the pod name is displayed at the top.

Make sure you have the resources deployed and that they match the following


The GCP Project is already on-boarded in the Aviatrix Controller under Accounts –> Access Accounts, to save time


Now change the email address to your corporate email address under Accounts –> Account Users –> admin (do not change the Controller password)

Aviatrix gateways are pre-deployed to save time. Make sure all gateways are up and running.

Check the transit gateway under Multi-Cloud Transit –> List –> Transit

Check the spoke gateway under Multi-Cloud Transit –> List –> Spoke

Verify GCP VM SSH Connectivity

GCP VMs only require a password. The password is Password123!
There is no .pem file needed to log in to them. Log in to vm_gcp_public_ip_spoke1. This IP address is provided in the LAB POD file

shahzadali@shahzad-ali Pem Files % ssh ubuntu@35.224.13.215
ubuntu@35.224.13.215's password:
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1087-gcp x86_64)
 
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
 
11 packages can be updated.
0 updates are security updates.
 
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
 
 
Last login: Sat Nov 28 16:42:18 2020 from 172.124.233.126
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
 
ubuntu@vm-gcp-test-spoke-1:~$

ubuntu@vm-gcp-spoke-1:~$ ifconfig

ens4 
Link encap:Ethernet HWaddr 42:01:0a:14:0b:82
inet addr:10.20.11.130 Bcast:10.20.11.130 Mask:255.255.255.255
inet6 addr: fe80::4001:aff:fe14:b82/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
RX packets:1025064 errors:0 dropped:0 overruns:0 frame:0
TX packets:663466 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:511233248 (511.2 MB) TX bytes:81897766 (81.8 MB)

lo 
Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ubuntu@vm-gcp-spoke-1:~$

Now ping from vm_gcp_private_ip_spoke1 to the other test machines (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

Can you guess why it worked or did not work?

Verify AWS EC2 SSH Connectivity

ssh into the AWS VM in Spoke1 using its public IP address and .pem file (the address is provided in the lab pod file you received). If you get the following error, please fix your .pem file permissions first.

shahzadali@shahzad-ali Pem Files % ssh ubuntu@35.163.104.122 -i instance_priv_key.pem
ubuntu@35.163.104.122: Permission denied (publickey).
 
shahzadali@shahzad-ali Pem Files % chmod 400 instance_priv_key.pem

ssh using the username and .pem file again, and ping the second AWS instance

shahzadali@shahzad-ali Desktop % ssh ubuntu@34.217.68.104 -i instance_priv_key_gcp_pod24.pem
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-1072-aws x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
79 packages can be updated.
0 updates are security updates.
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Thu Dec 31 20:36:52 2020 from 172.124.233.126
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.
ubuntu@ip-10-101-0-84:~$

Ping from vm_aws_private_ip_spoke1 to …

  • vm_gcp_private_ip_spoke1 (10.20.11.130) – should work
  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

Verify On-Prem Router SSH Connectivity

ssh into the on-prem dc-router-1 (we are using a Cisco CSR to simulate it). User is admin and the password is “Password123”

shahzadali@shahzad-ali Desktop % ssh admin@54.219.225.218
Password: 


dc-router-1#show ip int brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       192.168.20.162  YES DHCP   up                    up      
Loopback0              10.20.11.254    YES TFTP   up                    up      
Tunnel1                169.254.100.2   YES TFTP   up                    up      
Tunnel42               172.16.0.1      YES TFTP   up                    up      
VirtualPortGroup0      192.168.35.101  YES TFTP   up                    up      
dc-router-1#

From dc-router-1, ping the GCP and AWS instances' private IP addresses

  • vm_gcp_private_ip_spoke1 (10.20.11.130) – should not work
  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should not work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

ssh into the on-prem dc-router-2 (we are using a Cisco CSR to simulate it). User is admin and the password is “Password123”

shahzadali@shahzad-ali Desktop % ssh admin@54.193.196.247
The authenticity of host '54.193.196.247 (54.193.196.247)' can't be established.
RSA key fingerprint is SHA256:fi8bbpJc8LCE32dn9RL1EIDzznl+mgQ5V5u5vR/hxFo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '54.193.196.247' (RSA) to the list of known hosts.
Password: 
dc-router-2#
dc-router-2#show ip int brief 
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       192.168.10.120  YES DHCP   up                    up      
Tunnel1                169.254.101.2   YES TFTP   up                    up      
Tunnel42               172.16.0.2      YES TFTP   up                    up      
VirtualPortGroup0      192.168.35.101  YES TFTP   up                    up      
dc-router-2#

From dc-router-2, ping the GCP and AWS instances' private IP addresses

  • vm_gcp_private_ip_spoke1 (10.20.11.130) – should not work
  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should not work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

This completes the verification in LAB1. We will now move on to the other use-cases.

Aviatrix’s Check Point CloudGuard Related Features

Aviatrix has developed many features for our firewall partners to help achieve compliance, lower TCO, and meet enhanced application security needs.

The following list covers some of the important features for a Check Point CloudGuard deployment. Some are very specific to Check Point, and some are applicable to other firewall vendors as well.

FeatureBusiness Outcome / Use-CaseApplicable Cloud/Transit
Support existing or private offer security gateway (BYOL). Some customer comes with the private offer and deploys the security gateway themselves or their own automation process. For such customers, Aviatrix allows ingesting the existing security gateways.Cost optimization, compliance, and auditAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
CloudGuard Metered OptionTime-to-market, CI/CD integrationAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
Policy-Based Service Insertion, Threat Prevention, and Deep Packet InspectionSingle click and intent based automatic policy creation to provide complianceAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
Active/Active Centralized DeploymentIncreased availability, cost-optimization, simplified operations and enhanced visibilityAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
Scale-out and scale-up Security Gateway deployment support Cost optimization, enhanced security posture, reduces riskAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
Egress Traffic inspection supportCost optimization and enhanced application security postureAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
Ingress Traffic inspection support. Various deployment models to protect ingress traffic while also preserving the source IPEnhanced visibility and securityAVX-TR-AWS
AVX-TR-AZU
AWS-TGW
Azure-Native
Feature | Benefit | Deployment Models
Fail-open or fail-close operations | Business continuity and quick problem resolution | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Diagnostic capabilities that help find common causes quickly; shows Sec.GW/firewall status, spoke attachments, management access, etc. | Enhanced visibility and reduced MTTR | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
ICMP health check on the LAN interface; detects a failure in under 5 seconds and rebalances/rehashes traffic towards the active firewall/sec.gw | Improved security posture and DDoS prevention | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
TCP health check; uses the Azure native LB to load balance CloudGuard and to health check via TCP probes | Increased availability and security compliance | AVX-TR-AZU, Native-AZU
Check Point CloudGuard Geo Cluster support for East-West traffic | Increased application availability in case of failure | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Support for newer Check Point versions | Enhances security and business agility | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Support for security domains and connection policies with encrypted tunnels and connectivity | Enhanced application security posture and protection | AVX-TR-AWS, AVX-TR-AZU, Native-AZU
Check Point vendor integration with AWS and Azure to propagate and install RFC1918 and BGP routes | Reduces risk and increases time to market with always-on automation | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Exclude a list of CIDRs/IPs from FireNet inspection; customers can create a policy to exclude Check Point Security Manager, Controller, and GW IP addresses | Reduces unnecessary burden on security infrastructure, which in turn can help with cost optimization | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Egress and E-W filtering by different firewall clusters (Dual FireNet); takes the guesswork out of the design by segregating traffic across different sets of CloudGuard security gateways | Meets compliance and audit requirements to segregate traffic; reduces the attack surface; enhanced visibility | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Intra security domain firewall inspection (inspection within the VPC) | Enhanced application security | TGW-AWS
API and Terraform support for CloudGuard; consistent automation and a single entry point for IaC | Time to market, agility, and automated compliance | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Azure Transit FireNet support for Insane Mode; increases throughput in Azure | Cost optimization | AVX-TR-AZU
Check Point bootstrap for automated deployment | Increased compliance and reduced risk | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
2-tuple and 5-tuple hashing choices; the 2-tuple use case supports applications where multiple TCP sessions are used for an egress Internet service, requiring all sessions to go through one firewall with the same source NAT IP address | Compliance and audit | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Single-click CloudGuard enable/disable inspection | Reduces MTTR; enhances operations and support | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Route synchronization; new routes received from on-prem via BGP are programmed automatically in the VPC/VNET and in the security gateway/firewall | Business continuity and improved application protection | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Private communication from on-prem for Sec.GW management access | Improves compliance; reduces the attack surface; improves TCO | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU
Check Point CloudGuard SIC key; the Secure Internal Communication activation key provides ease of deployment | Improves security and automation capabilities | AVX-TR-AWS, AVX-TR-AZU, TGW-AWS, Native-AZU

Legends

  • AVX-TR-AWS: All encrypted Aviatrix Transit FireNet deployment in AWS
  • AVX-TR-AZU: All encrypted Aviatrix Transit FireNet deployment in Azure
  • TGW-AWS: non-encrypted AWS Transit Gateway FireNet deployment
  • Native-AZU: non-encrypted Azure Native Peering FireNet deployment

GCP Networking Concepts and Best Practices

  • Cloud Armor – A service that sits in front of a native load balancer to protect against DDoS attacks. In GKE, the target for the Cloud Armor service is the GKE Ingress LB
  • Load Balancer Options
    • There are different load balancer options in GCP based on specific requirements
Load Balancer | Traffic Type | Global/Regional | External/Internal | Ports
HTTP(S) | HTTP or HTTPS | Global IPv4/v6 | External | HTTP on 80 or 8080; HTTPS on 443
SSL Proxy | TCP with SSL offload | Global IPv4/v6 | External | 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 5333
TCP Proxy | TCP without SSL offload (client IP not preserved) | Global IPv4/v6 | External | 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 5333
NLB (Network) | TCP/UDP without SSL offload (client IP preserved) | Regional IPv4 | External | Any
Internal HTTP(S) | HTTP or HTTPS | Regional IPv4 | Internal | HTTP on 80 or 8080; HTTPS on 443
Internal TCP/UDP | TCP or UDP | Regional IPv4 | Internal | Any
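The table above boils down to a small decision: protocol, scope, and exposure pick the load balancer. A toy chooser (illustrative only; `choose_lb` is not a GCP API):

```python
def choose_lb(protocol, internal=False, ssl_offload=False, preserve_client_ip=False):
    """Map requirements onto the GCP load-balancer families from the table above."""
    if protocol in ("HTTP", "HTTPS"):
        return "Internal HTTP(S) LB" if internal else "Global HTTP(S) LB"
    if internal:
        return "Internal TCP/UDP LB"           # regional, any port
    if protocol == "TCP" and ssl_offload:
        return "SSL Proxy"                     # global, fixed port list
    if protocol == "TCP" and not preserve_client_ip:
        return "TCP Proxy"                     # global, client IP not preserved
    return "Network LB"                        # regional, TCP/UDP, client IP preserved

print(choose_lb("HTTPS"))                         # Global HTTP(S) LB
print(choose_lb("UDP", internal=True))            # Internal TCP/UDP LB
print(choose_lb("TCP", preserve_client_ip=True))  # Network LB
```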
  • As a network and security admin, it is important to know at least the following roles and their permission details
  • roles/compute.admin
  • roles/compute.loadBalancerAdmin
  • roles/compute.networkAdmin
    • Permissions to create, modify, and delete networking resources, except for firewall rules and SSL certificates.
    • The network admin role allows read-only access to firewall rules, SSL certificates, and instances (to view their ephemeral IP addresses).
    • The network admin role does not allow a user to create, start, stop, or delete instances.
    • For example, if your company has a security team that manages firewalls and SSL certificates and a networking team that manages the rest of the networking resources, then grant this role to the networking team’s group.
  • roles/compute.securityAdmin
    • Permissions to create, modify, and delete firewall rules and SSL certificates, and also to configure Shielded VM (Beta) settings.
    • For example, if your company has a security team that manages firewalls and SSL certificates and a networking team that manages the rest of the networking resources, then grant this role to the security team’s group.
  • roles/compute.xpnAdmin
    • Permissions to administer shared VPC host projects, specifically enabling the host projects and associating shared VPC service projects to the host project’s network.
    • At the organization level, this role can only be granted by an organization admin.
    • Google Cloud recommends that the Shared VPC Admin be the owner of the shared VPC host project. The Shared VPC Admin is responsible for granting the Compute Network User role (roles/compute.networkUser) to service owners, and the shared VPC host project owner controls the project itself. Managing the project is easier if a single principal (individual or group) can fulfill both roles.
  • GCP does not log denied requests in the firewall logs unless a firewall rule is actually hit. Best practice is to create a deny-all firewall rule with priority 65500 and enable logging on it, so that all denied traffic is captured
  • The Shared VPC is where you terminate the Cloud Interconnect private connection and enable the VLAN attachment
  • Best practice is to disable the auto subnet creation
    • Auto subnet creation can cause overlapping IP between VPCs
  • The best practice is to create multiple VPCs in GCP. Having just one or a few global VPCs will not allow you to segment traffic and workloads in the future
  • It is also a good idea to stay away from the GCP Shared VPC concept for this purpose. Shared VPC does not create a network boundary or a hub-spoke architecture; it is an administrative construct that lets network and security admins assign subnets and firewall rules to service or client VPCs
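The deny-all logging recommendation above relies on GCP's rule evaluation order: the matching rule with the lowest priority number wins, and only matched rules can be logged. A minimal sketch of that evaluation (the rule structures are made up, not the GCP API):

```python
# Sketch of GCP firewall evaluation: the matching rule with the lowest priority
# number wins. A logged deny-all at 65500 catches (and logs) traffic that no
# higher-priority rule matched, so denied traffic no longer falls through to
# the unlogged implied rules. Rule structs here are hypothetical.
rules = [
    {"priority": 1000,  "match": lambda pkt: pkt["port"] == 443, "action": "allow", "log": True},
    {"priority": 65500, "match": lambda pkt: True,               "action": "deny",  "log": True},
]

def evaluate(pkt):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["match"](pkt):
            return rule["action"], rule["log"]
    return "allow", False  # implied default, never logged

print(evaluate({"port": 443}))  # ('allow', True)
print(evaluate({"port": 23}))   # ('deny', True) -- logged thanks to the 65500 rule
```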

On-Premise to GCP Connectivity (Hybrid) Options

The following table shows the options

 | Dedicated | Shared
Layer 3 | Direct Peering | Carrier Peering
Layer 2 | Dedicated Interconnect | Partner Interconnect
  • For layer 2 connections, traffic just passes through the service provider’s network.
    • Service Provider Network acts as L2 only.
    • It means for layer 2 connections, you must configure and establish a BGP session between your Cloud Routers and on-premises routers.
  • For layer 3 connections, your service provider establishes a BGP session between GCP Cloud Routers and their edge routers.
    • You don’t need to configure BGP on your on-premises router.
    • Google and your service provider automatically set the correct configurations.

GKE Cluster IP Address Scheme

Three different types of IP ranges are required

  • Node
    • Primary CIDR is assigned to the node
  • Pod
    • A secondary (alias) IP range is assigned to the Pods
    • The default per-node Pod range is /24; it cannot be smaller than this
    • Pod IPs are assigned on a per-node basis
    • Pod IP ranges are always double the requirement because Pods are created and destroyed all the time; during an update, the new Pod is deployed first and only then is the old one deleted
  • Services:
    • Secondary or Alias IP is used by the Services as well

Example:

If a GKE cluster has 2 nodes, then at minimum you would need two /24 networks for Pod IP assignment.
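The example above generalizes to a quick back-of-the-envelope calculation. This is a sketch, not an official sizing tool, and `pod_range_mask` is a made-up helper:

```python
import math

def pod_range_mask(nodes: int) -> int:
    """Smallest Pod secondary-range mask for `nodes` nodes, with a /24 (256 IPs) per node.

    The /24 per node is roughly double the default 110-Pods-per-node limit,
    which absorbs Pod churn during rolling updates (the new Pod comes up
    before the old one is torn down).
    """
    needed = nodes * 256                      # 256 Pod IPs reserved per node
    return 32 - math.ceil(math.log2(needed))  # host bits -> prefix length

print(pod_range_mask(2))    # 23 -> a /23 covers two per-node /24 ranges
print(pod_range_mask(100))  # 17
```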

Auto-scaling and VPC Dependencies

An unmanaged instance group (UIG) resides in a single zone, VPC network, and subnet. It is not a good idea to use a UIG; among other limitations, it cannot autoscale.

Autoscaling only works with zonal and regional managed instance groups (MIGs), which means autoscaling cannot cross region boundaries. If you use a single VPC with autoscaling, you are therefore limited to one region, so the VPC is not really global from an application perspective. This is one of many reasons not to restrict your design to a single VPC.

https://cloud.google.com/compute/docs/autoscaler

https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances

Cloud to On Premise Data Center Active/Standby Firewall Design and Deployment

Problem Statement

  • As enterprises move their applications into the cloud, they follow the best practice of deploying their virtual NGFW in the cloud using Aviatrix’s active/active, centralized, uncompromised, cost-optimized, and policy-based Firewall Service Insertion (FireNet) solution, as shown in the following diagram

  • Some enterprises want to keep using their on-premises physical NGFW until they deploy a virtual NGFW in the cloud.
  • If these on-premises firewalls are deployed in an active/active fashion, they plug right into the Aviatrix active/active transit architecture in the cloud for the full benefit and throughput
  • However, these on-premises firewalls are often deployed in an active/standby fashion, where it is important that sessions are symmetric; otherwise the firewall could drop the traffic
  • Enterprises want to provide active/active ECMP functionality in the cloud while still making an active/standby connection to the on-premises physical firewalls, to make sure traffic is not asymmetric when it passes through those firewalls.

Solution

  • With release 6.2, Aviatrix transit gateways provide the best of both worlds: active/active traffic distribution towards the north (cloud) side, with optional active/standby towards the south (on-premises) side to satisfy the firewall’s active/standby needs.
  • This is done at the Aviatrix transit layer through dual routing capabilities on the Aviatrix Transit Gateway. This built-in, single-click dual routing makes the Transit GW act as an ECMP router on the north side and active/standby on the south side.

Enterprise Active/Standby Firewall Design

Design Notes

  • The active Aviatrix Transit GW builds one tunnel towards DC-FW1
  • The standby Aviatrix Transit GW (HA) builds another tunnel towards DC-FW1
  • The IPSec tunnels built from the Aviatrix Transit GWs are route-based IPSec tunnels (VTI)
  • At any time, only one Aviatrix Transit GW can be in active mode
  • Both tunnels are technically up, but only one path is selected based on the MED (metric) value
  • It is the on-premises firewall team’s responsibility to properly configure the firewall in active/standby mode
    • It is the on-premises firewall team’s responsibility to properly configure the BGP attributes (if needed) so that routes received over the Aviatrix Transit GW’s active tunnel are preferred over the secondary tunnel
    • A local BGP attribute such as weight can be used on the firewall to influence the active path selection
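The MED-based selection in the design notes can be sketched as follows. This is illustrative Python, not router code; the route entries mirror the route tables shown later in this section:

```python
# Sketch of the MED-based selection described above: both tunnels are up, but
# the route learned with the lower MED (metric) wins, so traffic always exits
# via the tunnel behind the active gateway.
routes = [
    {"prefix": "192.168.222.0/24", "via": "DC-FW1",                       "med": 100},
    {"prefix": "192.168.222.0/24", "via": "aws-transit-A-gw-uswest-hagw", "med": 200},
]

def best_path(candidates):
    """Pick the candidate route with the lowest MED."""
    return min(candidates, key=lambda r: r["med"])

print(best_path(routes)["via"])       # DC-FW1

# If the preferred route is withdrawn (e.g. the firewall goes down),
# the remaining path takes over.
print(best_path(routes[1:])["via"])   # aws-transit-A-gw-uswest-hagw
```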

Traffic Flow From Cloud to On-Premise Firewall

Traffic Flow #1

  • Traffic from a cloud EC2 instance/VM leaves the Spoke VPC and enters the primary Transit GW
    • The primary transit GW is also the active one
    • The secondary transit GW is on standby
  • From here there are two paths to reach the on-premises firewall
  • Since the MED value from the primary (active) Transit GW is lower than the one received from the HA pair, the primary Transit GW path is preferred and traffic goes directly towards DC-FW1

Traffic Flow #2

  • In the other scenario, traffic from a cloud EC2 instance/VM leaves the Spoke VPC and enters the secondary Transit GW (HA)
    • The secondary transit GW is on standby
    • The primary transit GW is the active one
  • From here again there are two paths to reach the on-premises firewall
  • Since the MED value from the primary (active) transit GW is lower than the one received from the secondary transit GW (standby), the traffic is routed towards the primary transit GW via the HA link between the active and HA Transit GW pair

Deployment Details

  • Create the site-to-cloud tunnel using the Multi-Cloud Transit workflow
  • Do not enable the “Remote Gateway HA” option
    • This means the feature requires only one IP address on the on-premises firewall
  • Build all 4 tunnels from the firewall towards the Aviatrix designated primary and secondary gateways
  • Once the tunnels are up, enable the Active-Standby feature under Multi-Cloud Transit -> Advanced Config

Testing and validation when both firewalls are up

  • The Controller also gives visibility into which gateway is active
  • The Controller also allows switching the active GW to standby and vice versa for troubleshooting and verification purposes

The following picture shows that the Active/Standby feature is enabled, along with the name of the active Aviatrix gateway

Transit Gateway Routing Tables when “transit-A-GW” is Active

  • In the following table “transit-A-gw” is active
  • The table shows that the on-premises route 192.168.222.0/24 is learned from both the active and standby Transit GWs.
  • Since 192.168.222.0/24 is learned via DC-FW-1 with the lower MED value, that route is preferred over the other

Destination | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
192.168.222.0/24 | tun-0D34F649-0 | 13.52.246.73 | DC-FW1 | up | 100
192.168.222.0/24 | tun-0A0A0029-0 | 10.10.0.41 | aws-transit-A-gw-uswest-hagw | up | 200

transit-A-gw route table when transit-A-gw is active

  • The next table shows the routes from the standby transit GW’s (aws-transit-A-gw-uswest-hagw) perspective
  • The standby can reach 192.168.222.0/24 via the active transit GW with the lower MED, hence that route is preferred
Destination | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
192.168.222.0/24 | tun-0A0A002C-0 | 10.10.0.44 | aws-transit-A-gw-uswest | up | 200
192.168.222.0/24 | tun-0D34F649-0 | 13.52.246.73 | DC-FW1 | up | 300

transit-A-gw-ha route table when transit-A-gw is active

Transit Gateway Routing Tables when “transit-A-gw-ha” is Active

We switched the active/standby roles via the Controller

  • Now aws-transit-A-gw-uswest-hagw is active.
  • We will look at the GW route table again. Now notice that “aws-transit-A-gw-uswest-hagw” is preferred over the other based on the lower MED value
Destination | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
192.168.222.0/24 | tun-0A0A0029-0 | 10.10.0.41 | aws-transit-A-gw-uswest-hagw | up | 200
192.168.222.0/24 | tun-0D34F649-0 | 13.52.246.73 | DC-FW-1 | up | 300

aws-transit-A-gw-uswest route table when aws-transit-A-gw-uswest-hagw is active

Destination | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
192.168.222.0/24 | tun-0D34F649-0 | 13.52.246.73 | DC-FW-1 | up | 100
192.168.222.0/24 | tun-0A0A002C-0 | 10.10.0.44 | aws-transit-A-gw-uswest | up | 200

aws-transit-A-gw-uswest-hagw route table when aws-transit-A-gw-uswest-hagw is active

Testing and validation when active firewall is down

  • Now we will shut down the active DC-FW-1
  • DC-FW-1 will then stop advertising the 192.168.222.0/24 route
  • The standby firewall DC-FW-2 becomes active and starts advertising 192.168.222.0/24 to the Aviatrix transit gateways
Destination | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
192.168.222.0/24 | tun-0A0A0029-0 | 10.10.0.41 | aws-transit-A-gw-uswest-hagw | up | 200
192.168.222.0/24 | tun-0D3975AD-0 | 13.57.117.173 | DC-FW-2 | up | 300

aws-transit-A-gw-uswest route table while aws-transit-A-gw-uswest-hagw is active

Destination | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
192.168.222.0/24 | tun-0D3975AD-0 | 13.57.117.173 | DC-FW-2 | up | 100
192.168.222.0/24 | tun-0A0A002C-0 | 10.10.0.44 | aws-transit-A-gw-uswest | up | 200

aws-transit-A-gw-uswest-hagw route table while aws-transit-A-gw-uswest-hagw is active

Additional Configuration

The following diagram shows the second tunnel towards DC-FW2


The Aviatrix Transit GW global routing table shows the on-prem subnet received at both GWs, with the best route installed via the active one only

GCP High Performance Encryption

Aviatrix Gateway VM Type | Throughput
n1-highcpu-4 | 3.12 Gbps
n1-highcpu-8 | 6.54 Gbps
n1-highcpu-16 | 11.58 Gbps
n1-highcpu-32 | 19.97 Gbps
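As a quick sanity check on the table above, throughput per vCPU stays in a fairly narrow band, i.e. HPE throughput scales close to linearly with instance size, tapering slightly at the largest sizes:

```python
# Per-vCPU throughput derived from the table above (Gbps / vCPU count).
table = {4: 3.12, 8: 6.54, 16: 11.58, 32: 19.97}
for vcpus, gbps in table.items():
    print(f"n1-highcpu-{vcpus}: {gbps / vcpus:.2f} Gbps per vCPU")
```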

How does Aviatrix GCP HPE work?

Aviatrix HPE utilizes native VPC peering and multiple tunnels to provide higher throughput.

GCP HPE can also work with a /24 subnet scheme.

The Controller builds the native peering.
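A sketch of how HPE spreads traffic: each flow is hashed onto one of the equal-weight IPIP tunnels (14 in the gateway output that follows), so a single flow stays in order on one tunnel while many concurrent flows fill all of them. The hashing here is illustrative, not the actual Aviatrix algorithm:

```python
import hashlib

# 14 equal-weight tunnels, matching the tunnel count in the gateway output below.
tunnels = [f"tun-{i:02d}" for i in range(14)]

def tunnel_for_flow(src, dst, sport, dport):
    """Pin a flow to one tunnel by hashing its addresses and ports (sketch)."""
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    return tunnels[int(hashlib.md5(key).hexdigest(), 16) % len(tunnels)]

# Many flows between the spoke and transit networks spread across the tunnels,
# aggregating the per-tunnel bandwidth.
flows = [("10.20.11.10", "10.20.1.10", sport, 443) for sport in range(20000, 20200)]
used = {tunnel_for_flow(*f) for f in flows}
print(f"{len(used)} of {len(tunnels)} tunnels carry traffic")
```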

GCP Transit Gateway Details

The following is the output from the Aviatrix Transit GW. Notice the number of tunnel interfaces (14 in this case, due to the size of the VM we selected) created inside the GW.

Name: gcp-transit-gw-uscentral1

eth0: flags=4163 mtu 1460
inet 10.20.1.3 netmask 255.255.255.255 broadcast 10.20.1.3
inet6 fe80::4001:aff:fe14:103 prefixlen 64 scopeid 0x20

ether 42:01:0a:14:01:03 txqueuelen 1000 (Ethernet)
RX packets 185466 bytes 265891040 (265.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28034 bytes 5148269 (5.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 385 bytes 37184 (37.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 385 bytes 37184 (37.1 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

tun-0A140B03-0: flags=209 mtu 8936
inet 1.1.1.19 netmask 255.255.255.255 destination 1.1.1.19
inet6 fe80::5efe:a14:103 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

tun-0A140B41-0: flags=209 mtu 8936
inet 1.1.1.205 netmask 255.255.255.255 destination 1.1.1.205
inet6 fe80::5efe:a14:141 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1344 (1.3 KB)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B42-0: flags=209 mtu 8936
inet 1.1.1.92 netmask 255.255.255.255 destination 1.1.1.92
inet6 fe80::5efe:a14:142 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B43-0: flags=209 mtu 8936
inet 1.1.1.236 netmask 255.255.255.255 destination 1.1.1.236
inet6 fe80::5efe:a14:143 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

tun-0A140B44-0: flags=209 mtu 8936
inet 1.1.1.144 netmask 255.255.255.255 destination 1.1.1.144
inet6 fe80::5efe:a14:144 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 16 bytes 1344 (1.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B45-0: flags=209 mtu 8936
inet 1.1.1.4 netmask 255.255.255.255 destination 1.1.1.4
inet6 fe80::5efe:a14:145 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

tun-0A140B46-0: flags=209 mtu 8936
inet 1.1.1.8 netmask 255.255.255.255 destination 1.1.1.8
inet6 fe80::5efe:a14:146 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B47-0: flags=209 mtu 8936
inet 1.1.1.32 netmask 255.255.255.255 destination 1.1.1.32
inet6 fe80::5efe:a14:147 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

tun-0A140B48-0: flags=209 mtu 8936
inet 1.1.1.71 netmask 255.255.255.255 destination 1.1.1.71
inet6 fe80::5efe:a14:148 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

tun-0A140B49-0: flags=209 mtu 8936
inet 1.1.1.212 netmask 255.255.255.255 destination 1.1.1.212
inet6 fe80::5efe:a14:149 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B4A-0: flags=209 mtu 8936
inet 1.1.1.37 netmask 255.255.255.255 destination 1.1.1.37
inet6 fe80::5efe:a14:14a prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B4B-0: flags=209 mtu 8936
inet 1.1.1.108 netmask 255.255.255.255 destination 1.1.1.108
inet6 fe80::5efe:a14:14b prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 10 dropped 0 overruns 0 carrier 10 collisions 0

tun-0A140B4C-0: flags=209 mtu 8936
inet 1.1.1.194 netmask 255.255.255.255 destination 1.1.1.194
inet6 fe80::5efe:a14:14c prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

tun-0A140B4D-0: flags=209 mtu 8936
inet 1.1.1.56 netmask 255.255.255.255 destination 1.1.1.56
inet6 fe80::5efe:a14:14d prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 9 dropped 0 overruns 0 carrier 9 collisions 0

The following output shows the transit GW route table for the HPE config.

Destination | Via | Dev | Nexthop IP | Nexthop Gateway | Status | Metric | Weight
default | 10.20.1.1 | eth0 | | | up | 0 |
10.20.1.0/24 | 10.20.1.1 | eth0 | | | up | 0 |
 | 10.20.1.1 | eth0 | | | up | 0 |
10.20.11.0/24 | | tun-0A140B03-0 | 10.20.11.3 | gcp-spoke1-gw-uscentral1 | up | 100 | 1
 | | tun-0A140B41-0 | 10.20.11.65 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B42-0 | 10.20.11.66 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B43-0 | 10.20.11.67 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B44-0 | 10.20.11.68 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B45-0 | 10.20.11.69 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B46-0 | 10.20.11.70 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B47-0 | 10.20.11.71 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B48-0 | 10.20.11.72 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B49-0 | 10.20.11.73 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4A-0 | 10.20.11.74 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4B-0 | 10.20.11.75 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4C-0 | 10.20.11.76 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4D-0 | 10.20.11.77 | gcp-spoke1-gw-uscentral1 | up | | 1
10.20.11.78 | | tun-0A140B03-0 | 10.20.11.3 | gcp-spoke1-gw-uscentral1 | up | 0 | 1
 | | tun-0A140B41-0 | 10.20.11.65 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B42-0 | 10.20.11.66 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B43-0 | 10.20.11.67 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B44-0 | 10.20.11.68 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B45-0 | 10.20.11.69 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B46-0 | 10.20.11.70 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B47-0 | 10.20.11.71 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B48-0 | 10.20.11.72 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B49-0 | 10.20.11.73 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4A-0 | 10.20.11.74 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4B-0 | 10.20.11.75 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4C-0 | 10.20.11.76 | gcp-spoke1-gw-uscentral1 | up | | 1
 | | tun-0A140B4D-0 | 10.20.11.77 | gcp-spoke1-gw-uscentral1 | up | | 1
169.254.0.0/16 | | eth0 | | | up | 0 |

Spoke GW Routing and Tunnel Details

Spoke GW interface details: 14 tunnel interfaces in total

eth0: flags=4163 mtu 1460
inet 10.20.11.3 netmask 255.255.255.255 broadcast 10.20.11.3
inet6 fe80::4001:aff:fe14:b03 prefixlen 64 scopeid 0x20
ether 42:01:0a:14:0b:03 txqueuelen 1000 (Ethernet)
RX packets 232462 bytes 278505311 (278.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 73455 bytes 19303335 (19.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 691 bytes 59476 (59.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 691 bytes 59476 (59.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

tun-0A140103-0: flags=209 mtu 8936
inet 1.1.1.171 netmask 255.255.255.255 destination 1.1.1.171
inet6 fe80::5efe:a14:b03 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140141-0: flags=209 mtu 8936
inet 1.1.1.33 netmask 255.255.255.255 destination 1.1.1.33
inet6 fe80::5efe:a14:b41 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 1168 bytes 98112 (98.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140142-0: flags=209 mtu 8936
inet 1.1.1.14 netmask 255.255.255.255 destination 1.1.1.14
inet6 fe80::5efe:a14:b42 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140143-0: flags=209 mtu 8936
inet 1.1.1.191 netmask 255.255.255.255 destination 1.1.1.191
inet6 fe80::5efe:a14:b43 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140144-0: flags=209 mtu 8936
inet 1.1.1.132 netmask 255.255.255.255 destination 1.1.1.132
inet6 fe80::5efe:a14:b44 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 56 bytes 4704 (4.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1168 bytes 98112 (98.1 KB)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140145-0: flags=209 mtu 8936
inet 1.1.1.224 netmask 255.255.255.255 destination 1.1.1.224
inet6 fe80::5efe:a14:b45 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140146-0: flags=209 mtu 8936
inet 1.1.1.241 netmask 255.255.255.255 destination 1.1.1.241
inet6 fe80::5efe:a14:b46 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 13 dropped 0 overruns 0 carrier 13 collisions 0

tun-0A140147-0: flags=209 mtu 8936
inet 1.1.1.77 netmask 255.255.255.255 destination 1.1.1.77
inet6 fe80::5efe:a14:b47 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140148-0: flags=209 mtu 8936
inet 1.1.1.222 netmask 255.255.255.255 destination 1.1.1.222
inet6 fe80::5efe:a14:b48 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A140149-0: flags=209 mtu 8936
inet 1.1.1.157 netmask 255.255.255.255 destination 1.1.1.157
inet6 fe80::5efe:a14:b49 prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A14014A-0: flags=209 mtu 8936
inet 1.1.1.5 netmask 255.255.255.255 destination 1.1.1.5
inet6 fe80::5efe:a14:b4a prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A14014B-0: flags=209 mtu 8936
inet 1.1.1.189 netmask 255.255.255.255 destination 1.1.1.189
inet6 fe80::5efe:a14:b4b prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A14014C-0: flags=209 mtu 8936
inet 1.1.1.251 netmask 255.255.255.255 destination 1.1.1.251
inet6 fe80::5efe:a14:b4c prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

tun-0A14014D-0: flags=209 mtu 8936
inet 1.1.1.75 netmask 255.255.255.255 destination 1.1.1.75
inet6 fe80::5efe:a14:b4d prefixlen 64 scopeid 0x20
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 56 bytes 4704 (4.7 KB)
TX errors 15 dropped 0 overruns 0 carrier 15 collisions 0

gcp-spoke1-gw-uscentral1 Route Table

Destination | Via | Dev | Nexthop IP | Nexthop Gateway | Status | Metric | Weight
default | 10.20.11.1 | eth0 | | | up | 0 |
10.20.1.0/24 | | tun-0A140103-0 | 10.20.1.3 | gcp-transit-gw-uscentral1 | up | 100 | 1
 | | tun-0A140141-0 | 10.20.1.65 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140142-0 | 10.20.1.66 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140143-0 | 10.20.1.67 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140144-0 | 10.20.1.68 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140145-0 | 10.20.1.69 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140146-0 | 10.20.1.70 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140147-0 | 10.20.1.71 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140148-0 | 10.20.1.72 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140149-0 | 10.20.1.73 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014A-0 | 10.20.1.74 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014B-0 | 10.20.1.75 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014C-0 | 10.20.1.76 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014D-0 | 10.20.1.77 | gcp-transit-gw-uscentral1 | up | | 1
10.20.1.78 | | tun-0A140103-0 | 10.20.1.3 | gcp-transit-gw-uscentral1 | up | 0 | 1
 | | tun-0A140141-0 | 10.20.1.65 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140142-0 | 10.20.1.66 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140143-0 | 10.20.1.67 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140144-0 | 10.20.1.68 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140145-0 | 10.20.1.69 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140146-0 | 10.20.1.70 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140147-0 | 10.20.1.71 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140148-0 | 10.20.1.72 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A140149-0 | 10.20.1.73 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014A-0 | 10.20.1.74 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014B-0 | 10.20.1.75 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014C-0 | 10.20.1.76 | gcp-transit-gw-uscentral1 | up | | 1
 | | tun-0A14014D-0 | 10.20.1.77 | gcp-transit-gw-uscentral1 | up | | 1
10.20.11.0/24 | 10.20.11.1 | eth0 | | | up | 0 |
 | 10.20.11.1 | eth0 | | | up | 0 |
169.254.0.0/16 | | eth0 | | | up | 0 |
Name | Route | Target | Gateway | Priority | Tags | Status
avx-1456e7d7d4354e2a894d403929d25074 | 10.0.0.0/8 | Instance gcp-spoke1-gw-uscentral1 (zone us-central1-c) | gcp-spoke1-gw-uscentral1 | 1000 | | active
avx-1d793218436145f482e42b25a7090174 | 172.16.0.0/12 | Instance gcp-spoke1-gw-uscentral1 (zone us-central1-c) | gcp-spoke1-gw-uscentral1 | 1000 | | active
avx-99394b1d0aae456997324de23596881b | 192.168.0.0/16 | Instance gcp-spoke1-gw-uscentral1 (zone us-central1-c) | gcp-spoke1-gw-uscentral1 | 1000 | | active
avx-9aeb27276c524a3a97ca59cf26fab8a9 | 0.0.0.0/0 | default-internet-gateway | | 1000 | avx-gcp-spoke1-vpc-uscentral1-gbl | active
default-route-1e4799e08daf4d0b | 10.20.11.0/24 | Virtual network gcp-spoke1-vpc-uscentral1 | | 0 | | active
default-route-7430e4c4273b0d5d | 0.0.0.0/0 | default-internet-gateway | | 1000 | | active
peering-route-becf46cc6fbeaf76 | 10.20.1.0/24 | | | 0 | | active
GCP Spoke VPC Route Table

Install Blockchain Quorum Node on an AWS EC2 Instance

Prerequisites

Install GoQuorum

[ec2-user@ip-10-101-91-122 ~]$ sudo yum update
[ec2-user@ip-10-101-91-122 ~]$ sudo yum install git
[ec2-user@ip-10-101-91-122 ~]$ sudo yum install go

[ec2-user@ip-10-101-91-122 ~]$ sudo git clone https://github.com/ConsenSys/quorum.git

Cloning into 'quorum'…
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 99524 (delta 4), reused 10 (delta 4), pack-reused 99513
Receiving objects: 100% (99524/99524), 156.21 MiB | 23.84 MiB/s, done.
Resolving deltas: 100% (68839/68839), done.
[ec2-user@ip-10-101-87-46 ~]$

[ec2-user@ip-10-101-91-122 quorum]$ sudo make all

build/env.sh go run build/ci.go install
/usr/lib/golang/bin/go install -ldflags -X main.gitCommit=0f15cad38a673d471a6471f4bd0be08445959fcd -X main.gitDate=20201023 -v ./…
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/resolver
github.com/ethereum/go-ethereum/vendor/github.com/oklog/run
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/curve25519
github.com/ethereum/go-ethereum/vendor/github.com/go-stack/stack
github.com/ethereum/go-ethereum/p2p/enr
github.com/ethereum/go-ethereum/vendor/github.com/aristanetworks/goarista/monotime
github.com/ethereum/go-ethereum/log
github.com/ethereum/go-ethereum/common/mclock
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/util
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/comparer
github.com/ethereum/go-ethereum/p2p/netutil
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/storage
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/cache
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/filter
github.com/ethereum/go-ethereum/vendor/github.com/golang/snappy
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/opt
github.com/ethereum/go-ethereum/vendor/github.com/BurntSushi/toml
github.com/ethereum/go-ethereum/vendor/github.com/allegro/bigcache/queue
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/errors
github.com/ethereum/go-ethereum/common/prque
github.com/ethereum/go-ethereum/vendor/github.com/allegro/bigcache
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/iterator
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/journal
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/memdb
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/table
github.com/ethereum/go-ethereum/ethdb
github.com/ethereum/go-ethereum/vendor/golang.org/x/sys/unix
github.com/ethereum/go-ethereum/vendor/github.com/steakknife/hamming
github.com/ethereum/go-ethereum/private/engine
github.com/ethereum/go-ethereum/vendor/github.com/hashicorp/golang-lru/simplelru
github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb
github.com/ethereum/go-ethereum/vendor/github.com/hashicorp/golang-lru
github.com/ethereum/go-ethereum/vendor/github.com/steakknife/bloomfilter
github.com/ethereum/go-ethereum/event
github.com/ethereum/go-ethereum/accounts/abi
github.com/ethereum/go-ethereum/vendor/github.com/davecgh/go-spew/spew
github.com/ethereum/go-ethereum/vendor/github.com/elastic/gosigar
github.com/ethereum/go-ethereum/vendor/github.com/deckarep/golang-set
github.com/ethereum/go-ethereum/vendor/github.com/pborman/uuid
github.com/ethereum/go-ethereum/vendor/github.com/rjeczalik/notify
github.com/ethereum/go-ethereum/metrics
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/pbkdf2
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/scrypt
github.com/ethereum/go-ethereum/vendor/github.com/hashicorp/go-hclog
github.com/ethereum/go-ethereum/vendor/github.com/golang/protobuf/proto
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/context
github.com/ethereum/go-ethereum/p2p/enode
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/internal/timeseries
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/trace
github.com/ethereum/go-ethereum/trie
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/grpclog
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/connectivity
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/credentials/internal
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/metadata
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/codes
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/encoding
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/grpcrand
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/envconfig
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/backoff
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/grpcsync
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/transform
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/unicode/bidi
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/unicode/norm
github.com/ethereum/go-ethereum/core/types
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/secure/bidirule
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/http2/hpack
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/credentials
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/balancer
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/encoding/proto
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/balancer/base
github.com/ethereum/go-ethereum
github.com/ethereum/go-ethereum/vendor/github.com/golang/protobuf/ptypes/any
github.com/ethereum/go-ethereum/vendor/github.com/golang/protobuf/ptypes/duration
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/balancer/roundrobin
github.com/ethereum/go-ethereum/vendor/github.com/golang/protobuf/ptypes/timestamp
github.com/ethereum/go-ethereum/accounts
github.com/ethereum/go-ethereum/vendor/google.golang.org/genproto/googleapis/rpc/status
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/channelz
github.com/ethereum/go-ethereum/vendor/github.com/golang/protobuf/ptypes
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/binarylog/grpc_binarylog_v1
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/status
github.com/ethereum/go-ethereum/accounts/keystore
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/idna
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/binarylog
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/syscall
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/keepalive
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/peer
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/stats
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/tap
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/naming
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/resolver/dns
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/resolver/passthrough
github.com/ethereum/go-ethereum/vendor/github.com/hashicorp/yamux
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/http/httpguts
github.com/ethereum/go-ethereum/vendor/github.com/mitchellh/go-testing-interface
github.com/ethereum/go-ethereum/vendor/github.com/gballet/go-libpcsclite
github.com/ethereum/go-ethereum/vendor/github.com/status-im/keycard-go/derivationpath
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/http2
github.com/ethereum/go-ethereum/vendor/github.com/wsddn/go-ecdh
github.com/ethereum/go-ethereum/ethdb/leveldb
github.com/ethereum/go-ethereum/ethdb/memorydb
github.com/ethereum/go-ethereum/accounts/scwallet
github.com/ethereum/go-ethereum/vendor/github.com/mattn/go-runewidth
github.com/ethereum/go-ethereum/vendor/github.com/pkg/errors
github.com/ethereum/go-ethereum/vendor/github.com/olekukonko/tablewriter
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/tsdb/fileutil
github.com/ethereum/go-ethereum/common/bitutil
github.com/ethereum/go-ethereum/crypto/ecies
github.com/ethereum/go-ethereum/p2p/discover
github.com/ethereum/go-ethereum/core/rawdb
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp/httpu
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp/scpd
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp/soap
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp/ssdp
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/html/atom
github.com/ethereum/go-ethereum/core/state
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/html
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/internal/identifier
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/internal/transport
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/internal
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/charmap
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/japanese
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/korean
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/simplifiedchinese
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/traditionalchinese
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/internal/utf8internal
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/runes
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/internal/tag
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/unicode
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/internal/language
github.com/ethereum/go-ethereum/vendor/github.com/jackpal/go-nat-pmp
github.com/ethereum/go-ethereum/vendor/github.com/gorilla/websocket
github.com/ethereum/go-ethereum/vendor/github.com/rs/xhandler
github.com/ethereum/go-ethereum/vendor/github.com/rs/cors
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/internal/language/compact
github.com/ethereum/go-ethereum/consensus/misc
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/language
github.com/ethereum/go-ethereum/vendor/github.com/edsrzf/mmap-go
github.com/ethereum/go-ethereum/vendor/golang.org/x/sys/cpu
github.com/ethereum/go-ethereum/vendor/github.com/hashicorp/go-plugin/internal/plugin
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/health/grpc_health_v1
github.com/ethereum/go-ethereum/vendor/github.com/jpmorganchase/quorum-account-plugin-sdk-go/proto
github.com/ethereum/go-ethereum/vendor/github.com/jpmorganchase/quorum-security-plugin-sdk-go/proto
github.com/ethereum/go-ethereum/vendor/google.golang.org/grpc/health
github.com/ethereum/go-ethereum/vendor/golang.org/x/text/encoding/htmlindex
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/html/charset
github.com/ethereum/go-ethereum/crypto/blake2b
github.com/ethereum/go-ethereum/vendor/github.com/hashicorp/go-plugin
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp
github.com/ethereum/go-ethereum/crypto/bn256/cloudflare
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp/dcps/internetgateway1
github.com/ethereum/go-ethereum/vendor/github.com/huin/goupnp/dcps/internetgateway2
github.com/ethereum/go-ethereum/internal/plugin
github.com/ethereum/go-ethereum/crypto/bn256
github.com/ethereum/go-ethereum/plugin/account
github.com/ethereum/go-ethereum/plugin/security
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/ripemd160
github.com/ethereum/go-ethereum/core/vm
github.com/ethereum/go-ethereum/rpc
github.com/ethereum/go-ethereum/accounts/pluggable
github.com/ethereum/go-ethereum/p2p/nat
github.com/ethereum/go-ethereum/vendor/github.com/patrickmn/go-cache
github.com/ethereum/go-ethereum/p2p/discv5
github.com/ethereum/go-ethereum/private/cache
github.com/ethereum/go-ethereum/private/engine/constellation
github.com/ethereum/go-ethereum/private/engine/notinuse
github.com/ethereum/go-ethereum/private/engine/tessera
github.com/ethereum/go-ethereum/vendor/github.com/tv42/httpunix
github.com/ethereum/go-ethereum/p2p
github.com/ethereum/go-ethereum/private
github.com/ethereum/go-ethereum/core/bloombits
github.com/ethereum/go-ethereum/vendor/github.com/tyler-smith/go-bip39/wordlists
github.com/ethereum/go-ethereum/vendor/github.com/tyler-smith/go-bip39
github.com/ethereum/go-ethereum/vendor/github.com/golang/protobuf/protoc-gen-go/descriptor
github.com/ethereum/go-ethereum/vendor/github.com/karalabe/usb
github.com/ethereum/go-ethereum/vendor/github.com/jpmorganchase/quorum-hello-world-plugin-sdk-go/proto
github.com/ethereum/go-ethereum/plugin/helloworld
github.com/ethereum/go-ethereum/vendor/github.com/golang/mock/gomock
github.com/ethereum/go-ethereum/accounts/usbwallet/trezor
github.com/ethereum/go-ethereum/consensus
github.com/ethereum/go-ethereum/consensus/clique
github.com/ethereum/go-ethereum/consensus/ethash
github.com/ethereum/go-ethereum/plugin/gen/proto_common
github.com/ethereum/go-ethereum/vendor/github.com/naoina/go-stringutil
github.com/ethereum/go-ethereum/plugin/initializer
github.com/ethereum/go-ethereum/vendor/github.com/naoina/toml/ast
github.com/ethereum/go-ethereum/core
github.com/ethereum/go-ethereum/signer/storage
github.com/ethereum/go-ethereum/vendor/github.com/naoina/toml
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/ssh/terminal
github.com/ethereum/go-ethereum/consensus/istanbul
github.com/ethereum/go-ethereum/vendor/gopkg.in/karalabe/cookiejar.v2/collections/prque
github.com/ethereum/go-ethereum/consensus/istanbul/core
github.com/ethereum/go-ethereum/plugin
github.com/ethereum/go-ethereum/consensus/istanbul/validator
github.com/ethereum/go-ethereum/eth/fetcher
github.com/ethereum/go-ethereum/eth/tracers/internal/tracers
github.com/ethereum/go-ethereum/vendor/gopkg.in/olebedev/go-duktape.v3
github.com/ethereum/go-ethereum/metrics/prometheus
github.com/ethereum/go-ethereum/metrics/exp
github.com/ethereum/go-ethereum/vendor/github.com/fjl/memsize
github.com/ethereum/go-ethereum/eth/downloader
github.com/ethereum/go-ethereum/consensus/istanbul/backend
github.com/ethereum/go-ethereum/core/forkid
github.com/ethereum/go-ethereum/eth/filters
github.com/ethereum/go-ethereum/internal/ethapi
github.com/ethereum/go-ethereum/miner
github.com/ethereum/go-ethereum/eth/gasprice
github.com/ethereum/go-ethereum/vendor/github.com/fjl/memsize/memsizeui
github.com/ethereum/go-ethereum/vendor/github.com/mattn/go-isatty
github.com/ethereum/go-ethereum/vendor/gopkg.in/urfave/cli.v1
github.com/ethereum/go-ethereum/vendor/github.com/mattn/go-colorable
github.com/ethereum/go-ethereum/accounts/pluggable/internal/testutils
github.com/ethereum/go-ethereum/accounts/pluggable/internal/testutils/mock_plugin
github.com/ethereum/go-ethereum/common/fdlimit
github.com/ethereum/go-ethereum/vendor/github.com/howeyc/fsnotify
github.com/ethereum/go-ethereum/vendor/github.com/oschwald/maxminddb-golang
github.com/ethereum/go-ethereum/internal/debug
github.com/ethereum/go-ethereum/vendor/github.com/mohae/deepcopy
github.com/ethereum/go-ethereum/vendor/github.com/apilayer/freegeoip
github.com/ethereum/go-ethereum/vendor/golang.org/x/net/websocket
github.com/ethereum/go-ethereum/les/flowcontrol
github.com/ethereum/go-ethereum/light
github.com/ethereum/go-ethereum/accounts/usbwallet
github.com/ethereum/go-ethereum/dashboard
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/errors
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/common
github.com/ethereum/go-ethereum/signer/core
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/schema
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/exec/packer
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/introspection
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/query
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/exec/resolvable
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/log
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/exec/selected
github.com/ethereum/go-ethereum/vendor/github.com/opentracing/opentracing-go/log
github.com/ethereum/go-ethereum/accounts/external
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/validation
github.com/ethereum/go-ethereum/vendor/github.com/opentracing/opentracing-go
github.com/ethereum/go-ethereum/accounts/abi/bind
github.com/ethereum/go-ethereum/node
github.com/ethereum/go-ethereum/vendor/github.com/opentracing/opentracing-go/ext
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/trace
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/internal/exec
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go
github.com/ethereum/go-ethereum/contracts/checkpointoracle/contract
github.com/ethereum/go-ethereum/ethclient
github.com/ethereum/go-ethereum/contracts/checkpointoracle
github.com/ethereum/go-ethereum/extension/extensionContracts
github.com/ethereum/go-ethereum/vendor/github.com/graph-gophers/graphql-go/relay
github.com/ethereum/go-ethereum/vendor/github.com/influxdata/influxdb/pkg/escape
github.com/ethereum/go-ethereum/vendor/github.com/influxdata/influxdb/models
github.com/ethereum/go-ethereum/graphql
github.com/ethereum/go-ethereum/extension/privacyExtension
github.com/ethereum/go-ethereum/permission/bind
github.com/ethereum/go-ethereum/vendor/github.com/gogo/protobuf/proto
github.com/ethereum/go-ethereum/vendor/github.com/influxdata/influxdb/client
github.com/ethereum/go-ethereum/metrics/influxdb
github.com/ethereum/go-ethereum/vendor/github.com/eapache/queue
github.com/ethereum/go-ethereum/vendor/github.com/eapache/channels
github.com/ethereum/go-ethereum/vendor/github.com/coreos/go-systemd/journal
github.com/ethereum/go-ethereum/vendor/github.com/coreos/pkg/capnslog
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/types
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/fileutil
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/httputil
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/ioutil
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/logutil
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/pbutil
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/tlsutil
github.com/ethereum/go-ethereum/vendor/github.com/beorn7/perks/quantile
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/transport
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/client_model/go
github.com/ethereum/go-ethereum/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/common/model
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/procfs
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/common/expfmt
github.com/ethereum/go-ethereum/vendor/github.com/coreos/go-semver/semver
github.com/ethereum/go-ethereum/vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/version
github.com/ethereum/go-ethereum/vendor/github.com/xiang90/probing
github.com/ethereum/go-ethereum/vendor/golang.org/x/time/rate
github.com/ethereum/go-ethereum/vendor/github.com/prometheus/client_golang/prometheus
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/pkg/crc
github.com/ethereum/go-ethereum/vendor/gopkg.in/oleiade/lane.v1
github.com/ethereum/go-ethereum/vendor/golang.org/x/sync/syncmap
github.com/ethereum/go-ethereum/whisper/whisperv6
github.com/ethereum/go-ethereum/vendor/github.com/gogo/protobuf/gogoproto
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/raft/raftpb
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/snap/snappb
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/wal/walpb
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/raft
github.com/ethereum/go-ethereum/common/compiler
github.com/ethereum/go-ethereum/internal/jsre/deps
github.com/ethereum/go-ethereum/vendor/github.com/fatih/color
github.com/ethereum/go-ethereum/vendor/gopkg.in/sourcemap.v1/base64vlq
github.com/ethereum/go-ethereum/vendor/gopkg.in/sourcemap.v1
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto/token
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto/file
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto/dbg
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto/ast
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto/registry
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/etcdserver/stats
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/snap
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/wal
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto/parser
github.com/ethereum/go-ethereum/vendor/github.com/coreos/etcd/rafthttp
github.com/ethereum/go-ethereum/internal/web3ext
github.com/ethereum/go-ethereum/vendor/github.com/peterh/liner
github.com/ethereum/go-ethereum/vendor/github.com/robertkrimen/otto
github.com/ethereum/go-ethereum/signer/rules/deps
github.com/ethereum/go-ethereum/signer/fourbyte
github.com/ethereum/go-ethereum/vendor/github.com/cloudflare/cloudflare-go
github.com/ethereum/go-ethereum/p2p/dnsdisc
github.com/ethereum/go-ethereum/core/asm
github.com/ethereum/go-ethereum/core/vm/runtime
github.com/ethereum/go-ethereum/cmd/evm/internal/compiler
github.com/ethereum/go-ethereum/tests
github.com/ethereum/go-ethereum/vendor/github.com/docker/docker/pkg/reexec
github.com/ethereum/go-ethereum/p2p/simulations/pipes
github.com/ethereum/go-ethereum/p2p/simulations/adapters
github.com/ethereum/go-ethereum/vendor/github.com/julienschmidt/httprouter
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/ed25519/internal/edwards25519
github.com/ethereum/go-ethereum/p2p/simulations
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/ed25519
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/internal/subtle
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/internal/chacha20
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/poly1305
github.com/ethereum/go-ethereum/cmd/p2psim
github.com/ethereum/go-ethereum/vendor/golang.org/x/crypto/ssh
github.com/ethereum/go-ethereum/internal/jsre
github.com/ethereum/go-ethereum/console
github.com/ethereum/go-ethereum/signer/rules
github.com/ethereum/go-ethereum/cmd/puppeth
github.com/ethereum/go-ethereum/cmd/devp2p
github.com/ethereum/go-ethereum/cmd/rlpdump
github.com/ethereum/go-ethereum/whisper/mailserver
github.com/ethereum/go-ethereum/crypto/bn256/google
github.com/ethereum/go-ethereum/ethdb/dbtest
github.com/ethereum/go-ethereum/internal/cmdtest
github.com/ethereum/go-ethereum/internal/guide
github.com/ethereum/go-ethereum/internal/testlog
github.com/ethereum/go-ethereum/metrics/librato
github.com/ethereum/go-ethereum/whisper/shhclient
github.com/ethereum/go-ethereum/p2p/simulations/examples
github.com/ethereum/go-ethereum/p2p/testing
github.com/ethereum/go-ethereum/permission/contract/gen
github.com/ethereum/go-ethereum/plugin/account/internal/testutils
github.com/ethereum/go-ethereum/plugin/gen
github.com/ethereum/go-ethereum/eth/tracers
github.com/ethereum/go-ethereum/eth
github.com/ethereum/go-ethereum/accounts/abi/bind/backends
github.com/ethereum/go-ethereum/raft
github.com/ethereum/go-ethereum/extension
github.com/ethereum/go-ethereum/les
github.com/ethereum/go-ethereum/permission
github.com/ethereum/go-ethereum/ethstats
github.com/ethereum/go-ethereum/cmd/utils
github.com/ethereum/go-ethereum/cmd/faucet
github.com/ethereum/go-ethereum/mobile
github.com/ethereum/go-ethereum/cmd/abigen
github.com/ethereum/go-ethereum/cmd/bootnode
github.com/ethereum/go-ethereum/cmd/checkpoint-admin
github.com/ethereum/go-ethereum/cmd/clef
github.com/ethereum/go-ethereum/cmd/ethkey
github.com/ethereum/go-ethereum/cmd/evm
github.com/ethereum/go-ethereum/cmd/geth
github.com/ethereum/go-ethereum/cmd/wnode
[ec2-user@ip-10-101-87-46 quorum]$

[ec2-user@ip-10-101-87-46 quorum]$ make test (optional step)

Install Tessera

Tessera is an open-source private transaction manager, written in Java and released under the Apache 2.0 license. Its primary application is serving as the private transaction manager for GoQuorum.

[ec2-user@ip-10-101-91-122 ~]$ sudo yum install maven

[ec2-user@ip-10-101-91-122 ~]$ sudo git clone https://github.com/ConsenSys/tessera

[ec2-user@ip-10-101-91-122 ~]$ cd tessera/
[ec2-user@ip-10-101-91-122 tessera]$ mvn install
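Once mvn install completes, the runnable application jar can be located under the build tree. The search root (~/tessera) and the artifact-name pattern are assumptions based on the naming seen in the wizard download later in this article (tessera-app-<version>-app.jar); adjust them for your release:

```shell
# Locate the Tessera app jar produced by the Maven build; the search
# root and the artifact-name pattern are assumptions that may vary
# between Tessera releases.
TESSERA_JAR=$(find ~/tessera -name 'tessera-app-*-app.jar' 2>/dev/null | head -1)
echo "Tessera jar: ${TESSERA_JAR:-not found}"
```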





Start From Scratch

git clone https://github.com/ConsenSys/quorum.git
cd quorum
make all


[ec2-user@ip-10-101-91-122 ~]$ export PATH=/home/ec2-user/quorum/build/bin:$PATH
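The export above lasts only for the current shell session. To keep the quorum binaries on the PATH across logins, the same line can be appended to the login profile (the path assumes the repository was cloned into the home directory as shown earlier):

```shell
# Persist the quorum binaries on PATH across logins; assumes the
# repository was cloned to $HOME/quorum as in the steps above.
echo 'export PATH=$HOME/quorum/build/bin:$PATH' >> ~/.bashrc
```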

[ec2-user@ip-10-101-91-122 ~]$ mkdir fromscratch
[ec2-user@ip-10-101-91-122 ~]$ cd fromscratch/
[ec2-user@ip-10-101-91-122 fromscratch]$ mkdir node4
[ec2-user@ip-10-101-91-122 fromscratch]$ geth --datadir node4 account new

[ec2-user@ip-10-101-91-122 fromscratch]$ cat genesis.json
{
  "alloc": {
    "0x679fed8f4f3ea421689136b25073c6da7973418f": {
      "balance": "1000000000000000000000000000"
    }
  },
  "coinbase": "0x0000000000000000000000000000000000000000",
  "config": {
    "homesteadBlock": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "chainId": 10,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "eip158Block": 0,
    "isQuorum": true
  },
  "difficulty": "0x0",
  "extraData": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "gasLimit": "0xE0000000",
  "mixhash": "0x00000000000000000000000000000000000000647572616c65787365646c6578",
  "nonce": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "timestamp": "0x00"
}
[ec2-user@ip-10-101-91-122 fromscratch]$
INFO [10-26|14:52:07.533] Maximum peer count ETH=50 LES=0 total=50
INFO [10-26|14:52:07.533] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
Your new account is locked with a password. Please give a password. Do not forget this password.
Password:
Repeat password:
Your new key was generated
Public address of the key: 0x20355BD031Be7a73ba4F2DEaa36d8C906D196997
Path of the secret key file: node4/keystore/UTC--2020-10-26T14-52-43.327226532Z--20355bd031be7a73ba4f2deaa36d8c906d196997
You can share your public address with anyone. Others need it to interact with you.
You must NEVER share the secret key with anyone! The key controls access to your funds!
You must BACKUP your key file! Without the key, it's impossible to access account funds!
You must REMEMBER your password! Without the password, it's impossible to decrypt the key!
[ec2-user@ip-10-101-91-122 fromscratch]$

Edit the genesis.json file and then execute the following commands:

[ec2-user@ip-10-101-91-122 fromscratch]$ bootnode --genkey=nodekey
[ec2-user@ip-10-101-91-122 fromscratch]$ cp nodekey node4
[ec2-user@ip-10-101-91-122 fromscratch]$ bootnode --nodekey=node4/nodekey --writeaddress > node4/enode
[ec2-user@ip-10-101-91-122 fromscratch]$ cat node4/enode
5608478a4ae2251b5a4b077a810a8dc466b23fbab678d42d4f861d5ee4241a640d9dfb417f71515738d68a4a5ae2b3c237c10cade12dc7994072e4f583ea4475
[ec2-user@ip-10-101-91-122 fromscratch]$
Create a file called static-nodes.json
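For example, using the enode ID generated above. The IP address and the 21000/50000 ports are placeholders; they must match the --port and --raftport values the node is later started with:

```shell
# Sketch of static-nodes.json for a single-node raft network; the
# host and ports are assumptions that must agree with your geth flags.
cat > static-nodes.json <<'EOF'
[
  "enode://5608478a4ae2251b5a4b077a810a8dc466b23fbab678d42d4f861d5ee4241a640d9dfb417f71515738d68a4a5ae2b3c237c10cade12dc7994072e4f583ea4475@127.0.0.1:21000?discport=0&raftport=50000"
]
EOF
```

Copy the file into the node's data directory (e.g. node4/) so geth picks it up at startup.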

[ec2-user@ip-10-101-91-122 fromscratch]$ geth --datadir new-node-1 init genesis.json
INFO [10-26|15:10:37.664] Maximum peer count ETH=50 LES=0 total=50
INFO [10-26|15:10:37.664] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
ERROR[10-26|15:10:37.665] Failed to enumerate USB devices hub=ledger vendor=11415 failcount=1 err="failed to initialize libusb: libusb: unknown error [code -99]"
ERROR[10-26|15:10:37.667] Failed to enumerate USB devices hub=trezor vendor=21324 failcount=1 err="failed to initialize libusb: libusb: unknown error [code -99]"
ERROR[10-26|15:10:37.667] Failed to enumerate USB devices hub=trezor vendor=4617 failcount=1 err="failed to initialize libusb: libusb: unknown error [code -99]"
ERROR[10-26|15:10:37.667] Failed to enumerate USB devices hub=ledger vendor=11415 failcount=2 err="failed to initialize libusb: libusb: unknown error [code -99]"
ERROR[10-26|15:10:37.667] Failed to enumerate USB devices hub=trezor vendor=21324 failcount=2 err="failed to initialize libusb: libusb: unknown error [code -99]"
ERROR[10-26|15:10:37.667] Failed to enumerate USB devices hub=trezor vendor=4617 failcount=2 err="failed to initialize libusb: libusb: unknown error [code -99]"
INFO [10-26|15:10:37.667] Allocated cache and file handles database=/home/ec2-user/fromscratch/new-node-1/geth/chaindata cache=16.00MiB handles=16
INFO [10-26|15:10:37.675] Writing custom genesis block
INFO [10-26|15:10:37.675] Persisted trie from memory database nodes=1 size=152.00B time=41.819µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [10-26|15:10:37.676] Successfully wrote genesis state database=chaindata hash=ec0542…9665bf
INFO [10-26|15:10:37.676] Allocated cache and file handles database=/home/ec2-user/fromscratch/new-node-1/geth/lightchaindata cache=16.00MiB handles=16
INFO [10-26|15:10:37.683] Writing custom genesis block
INFO [10-26|15:10:37.683] Persisted trie from memory database nodes=1 size=152.00B time=40.001µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [10-26|15:10:37.683] Successfully wrote genesis state database=lightchaindata hash=ec0542…9665bf
[ec2-user@ip-10-101-91-122 fromscratch]$
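With the genesis state initialized, the node can be started. Below is a sketch of a start script; the flags follow the GoQuorum "Creating a Network From Scratch" tutorial referenced at the end of this section, and the network ID and ports are placeholders that must agree with static-nodes.json. Flag names can differ between Quorum releases:

```shell
# Sketch of a raft start script; networkid and ports are placeholders,
# and the flag set follows the GoQuorum from-scratch tutorial (it may
# differ on newer releases).
cat > startnode.sh <<'EOF'
#!/bin/bash
PRIVATE_CONFIG=ignore geth --datadir new-node-1 \
  --nodiscover --verbosity 5 --networkid 31337 \
  --raft --raftport 50000 \
  --rpc --rpcaddr 0.0.0.0 --rpcport 22000 \
  --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft \
  --emitcheckpoints --port 21000
EOF
chmod +x startnode.sh
```

Run it in the background (./startnode.sh &) and attach with geth attach new-node-1/geth.ipc.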


Reference

Detailed instructions are listed here

https://docs.goquorum.consensys.net/en/latest/Tutorials/Creating-A-Network-From-Scratch/

https://docs.goquorum.consensys.net/en/latest/HowTo/Use/adding_nodes/

https://docs.tessera.consensys.net/en/stable/

https://github.com/ConsenSys/tessera

Deploying BlockChain Quorum on AWS EC2 Instance

Introduction

Quorum is an enterprise blockchain platform. Quorum is a privacy-centric fork of the Ethereum client "geth" with several protocol-level enhancements to support enterprise business needs. Quorum is an open-source project. The very nature of a blockchain or distributed ledger provides a secure, shared platform for decentralized applications (DApps) and data. It is cryptographically secure, auditable, and immutable.

Quorum Architecture

Quorum provides several enterprise features, such as:

  • Transaction privacy
  • Multiple pluggable consensus mechanisms suitable for enterprise use cases
  • Enterprise-grade permission management (access control) for network nodes and participants
  • Enterprise-grade performance
Quorum - A blockchain Platform for the Enterprise

A Quorum node is a lightweight fork of geth. For more details, check the resources provided by the JPMorgan Blockchain Center of Excellence and ConsenSys. Note that JPMorgan's blockchain platform Quorum has since been acquired by ConsenSys.

This article assumes that readers have a basic understanding of how blockchain works. The main purpose of this article is to show how to deploy Quorum on an AWS EC2 instance for testing purposes.

Technical Prerequisites

  • Quorum needs Java version 11
  • Requires an Amazon Linux 2 VM, at least t3.xlarge (the blockchain stack requires this much)

Java Version Mapping

The numbers below are Java class-file major versions; they are what errors such as UnsupportedClassVersionError report when a jar was built for a newer JDK than the one installed.

49 = Java 5
50 = Java 6
51 = Java 7
52 = Java 8
53 = Java 9
54 = Java 10
55 = Java 11
56 = Java 12
57 = Java 13
58 = Java 14
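As a quick sanity check of the mapping above, the major-version field can be read directly from a class file header (bytes 6 and 7). Here it is demonstrated against a hand-crafted 8-byte header rather than a real compiled class, where 0x37 = 55 = Java 11:

```shell
# Every .class file starts with the magic number CAFEBABE, followed by
# minor and major version as big-endian 16-bit fields. Write a fake
# header with major version 0x37 (55 = Java 11) and read it back.
printf '\xca\xfe\xba\xbe\x00\x00\x00\x37' > /tmp/Demo.class
od -An -j6 -N2 -tu1 /tmp/Demo.class   # second byte printed is the major version: 55
```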

Block Chain Installation Steps on AWS EC2 Instance

  1. Deploy AWS EC2 instance
    1. Must use Amazon Linux 2 (to get amazon-linux-extras)
    2. Size must be at least t3.xlarge for proper blockchain operation
  2. Install Node.js on the EC2 instance
    1. $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
    2. $ . ~/.nvm/nvm.sh
    3. $ nvm install node
    4. $ node -e "console.log('Running Node.js ' + process.version)"
  3. $ sudo yum update
  4. $ sudo amazon-linux-extras install java-openjdk11
  5. $ curl -o- -L https://yarnpkg.com/install.sh | bash (optional)
  6. Install BlockChain Quorum using the quorum-wizard npm package
    1. $ npm install -g quorum-wizard
    2. $ quorum-wizard -q -v

BlockChain Quorum Installation Output

The Quorum wizard automatically created a 3-node raft network with Tessera and Cakeshop on a single EC2 instance. The following is the output:

[ec2-user@ip-10-101-76-122 ~]$ quorum-wizard -v
debug: Showing debug logs
?
Welcome to Quorum Wizard!
This tool allows you to easily create bash, docker, and kubernetes files to start up a quorum network.
You can control consensus, privacy, network details and more for a customized setup.
Additionally you can choose to deploy our chain explorer, Cakeshop, to easily view and monitor your network.
We have 3 options to help you start exploring Quorum:
Quickstart - our 1 click option to create a 3 node raft network with tessera and cakeshop
Simple Network - using pregenerated keys from quorum 7nodes example,
this option allows you to choose the number of nodes (7 max), consensus mechanism, transaction manager, and the option to deploy cakeshop
Custom Network - In addition to the options available in #2, this selection allows for further customization of your network.
Choose to generate keys, customize ports for both bash and docker, or change the network id
Quorum Wizard will generate your startup files and everything required to bring up your network.
All you need to do is go to the specified location and run start.sh
❯ Quickstart (3-node raft network with tessera and cakeshop)
Simple Network
Custom Network
Exit

I used the command with the -q parameter, which created the Quickstart setup in one go.

[ec2-user@ip-10-101-76-122 ~]$ quorum-wizard -q -v
debug: Showing debug logs
Downloading dependencies…
Downloading quorum 2.7.0 from https://bintray.com/quorumengineering/quorum/download_file?file_path=v2.7.0/geth_v2.7.0_linux_amd64.tar.gz…
Downloading tessera 0.10.5 from https://oss.sonatype.org/service/local/repositories/releases/content/com/jpmorgan/quorum/tessera-app/0.10.5/tessera-app-0.10.5-app.jar…
Downloading cakeshop 0.11.0 from https://github.com/jpmorganchase/cakeshop/releases/download/v0.11.0/cakeshop-0.11.0.war…
Unpacking to /home/ec2-user/.quorum-wizard/bin/quorum/2.7.0/geth
Unpacking to /home/ec2-user/.quorum-wizard/bin/tessera/0.10.5/tessera-app.jar
Unpacking to /home/ec2-user/.quorum-wizard/bin/cakeshop/0.11.0/cakeshop.war
Saved to /home/ec2-user/.quorum-wizard/bin/quorum/2.7.0/geth
Saved to /home/ec2-user/.quorum-wizard/bin/tessera/0.10.5/tessera-app.jar
Saved to /home/ec2-user/.quorum-wizard/bin/cakeshop/0.11.0/cakeshop.war
Building network directory…
Generating network resources locally…
Building qdata directory…
Initializing quorum…
Done
Tessera Node 1 public key:
BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=
Tessera Node 2 public key:
QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=
Tessera Node 3 public key:
1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=

Quorum network created
Run the following commands to start your network:
cd network/3-nodes-quickstart
./start.sh
A sample simpleStorage contract is provided to deploy to your network
To use run ./runscript.sh public_contract.js from the network folder
A private simpleStorage contract was created with privateFor set to use Node 2's public key: QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=
To use run ./runscript.sh private_contract.js from the network folder
After starting, Cakeshop will be accessible here: http://localhost:8999
[ec2-user@ip-10-101-76-122 ~]$

Start Quorum Now

[ec2-user@ip-10-101-76-122 ~]$ cd network/3-nodes-quickstart/
[ec2-user@ip-10-101-76-122 3-nodes-quickstart]$
[ec2-user@ip-10-101-76-122 3-nodes-quickstart]$ pwd
/home/ec2-user/network/3-nodes-quickstart

[ec2-user@ip-10-101-76-122 3-nodes-quickstart]$ ./start.sh
Starting Quorum network…
Waiting until all Tessera nodes are running…
Node 1 is not yet listening on tm.ipc
Node 2 is not yet listening on tm.ipc
Node 3 is not yet listening on tm.ipc
Node 1 is not yet listening on http
Node 2 is not yet listening on http
Node 3 is not yet listening on http
Waiting until all Tessera nodes are running…
Node 1 is not yet listening on tm.ipc
Node 2 is not yet listening on tm.ipc
Node 3 is not yet listening on tm.ipc
Node 1 is not yet listening on http
Node 2 is not yet listening on http
Node 3 is not yet listening on http
Waiting until all Tessera nodes are running…
Waiting until all Tessera nodes are running…
All Tessera nodes started
Starting Quorum nodes
Starting Cakeshop
Waiting until Cakeshop is running…
Cakeshop is not yet listening on http
Waiting until Cakeshop is running…
Cakeshop is not yet listening on http
Waiting until Cakeshop is running…
Cakeshop is not yet listening on http

Cakeshop started at http://localhost:8999
Successfully started Quorum network.
Tessera Node 1 public key:
BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=
Tessera Node 2 public key:
QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=
Tessera Node 3 public key:
1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=

[ec2-user@ip-10-101-76-122 3-nodes-quickstart]$

Cakeshop Validation

I installed the CLI browser Lynx with the following command on the same EC2 instance for quick validation:

[ec2-user@ip-10-101-76-122 ~]$ sudo yum install lynx
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amzn2-core | 3.7 kB 00:00:00
amzn2extra-docker | 3.0 kB 00:00:00
amzn2extra-java-openjdk11 | 3.0 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package lynx.x86_64 0:2.8.8-0.3.dev15.amzn2.0.2 will be installed
--> Processing Dependency: redhat-indexhtml for package: lynx-2.8.8-0.3.dev15.amzn2.0.2.x86_64
--> Running transaction check
---> Package amazonlinux-indexhtml.noarch 0:1-1.amzn2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================================================================================
Package Arch Version Repository Size
Installing:
lynx x86_64 2.8.8-0.3.dev15.amzn2.0.2 amzn2-core 1.4 M
Installing for dependencies:
amazonlinux-indexhtml noarch 1-1.amzn2 amzn2-core 4.1 k
Transaction Summary
Install 1 Package (+1 Dependent package)
Total download size: 1.5 M
Installed size: 5.4 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): amazonlinux-indexhtml-1-1.amzn2.noarch.rpm | 4.1 kB 00:00:00
(2/2): lynx-2.8.8-0.3.dev15.amzn2.0.2.x86_64.rpm | 1.4 MB 00:00:00
Total 10 MB/s | 1.5 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : amazonlinux-indexhtml-1-1.amzn2.noarch 1/2
Installing : lynx-2.8.8-0.3.dev15.amzn2.0.2.x86_64 2/2
Verifying : amazonlinux-indexhtml-1-1.amzn2.noarch 1/2
Verifying : lynx-2.8.8-0.3.dev15.amzn2.0.2.x86_64 2/2
Installed:
lynx.x86_64 0:2.8.8-0.3.dev15.amzn2.0.2
Dependency Installed:
amazonlinux-indexhtml.noarch 0:1-1.amzn2
Complete!
[ec2-user@ip-10-101-76-122 ~]$
[ec2-user@ip-10-101-76-122 ~]$
[ec2-user@ip-10-101-76-122 ~]$
[ec2-user@ip-10-101-76-122 ~]$ lynx http://localhost:8999

Following is the output from the successful Cakeshop installation

Quorum Screen Shots

Interacting with BlockChain Quorum Network

After following the instructions here, you should have a fully generated local Quorum network. Here are some ways you can interact with the network to try out the features of Quorum.

Instructions are in the official Getting Started guide.

Troubleshooting

JNI Error

During this process I encountered various errors; some were very obvious. The following message was not very clear. If you are getting this error, it means the JRE or Java version is not correct and you need to upgrade it. In my case the error went away once I upgraded to Java version 11.

Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/quorum/tessera/launcher/Main has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
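Using the class-file mapping from earlier (Java release = major version − 44), the versions in this error can be decoded mechanically. A hedged sketch that parses the message text itself:

```shell
# Decode the class-file versions mentioned in the error message:
# "need" is what the jar was compiled for, "have" is the installed JRE's limit.
err='class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0'
need=$(echo "$err" | grep -o 'class file version [0-9]*' | grep -o '[0-9]*$')
have=$(echo "$err" | grep -o 'up to [0-9]*' | grep -o '[0-9]*$')
echo "jar needs Java $((need - 44)); installed JRE is Java $((have - 44))"
```

For the error shown above this reports that the jar needs Java 11 while the JRE is Java 8, which matches the fix of upgrading to OpenJDK 11.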

start.sh Killed Error

If you are receiving the following error, it is most likely because the instance size is too small. Increase the instance size.

Cakeshop is not yet listening on http
./start.sh: line 84: 14715 Killed java -Xms128M -Xmx128M -jar $BIN_TESSERA -configfile qdata/c2/tessera-config-09-2.json >> qdata/logs/tessera2.log 2>&1
./start.sh: line 84: 14898 Killed PRIVATE_CONFIG=qdata/c1/tm.ipc nohup $BIN_GETH --datadir qdata/dd1 --nodiscover --rpc --rpccorsdomain=* --rpcvhosts=* --rpcaddr 0.0.0.0 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft,quorumPermission --ws --wsaddr 0.0.0.0 --wsorigins=* --emitcheckpoints --unlock 0 --password qdata/dd1/keystore/password.txt --allow-insecure-unlock --graphql --graphql.port 24000 --graphql.corsdomain=* --graphql.addr 0.0.0.0 --raft --raftport 50401 --permissioned --verbosity 5 --networkid 10 --rpcport 22000 --wsport 23000 --port 21000 2>> qdata/logs/1.log
Waiting until Cakeshop is running…

Cakeshop is taking a long time to start. Look at logs
Waiting until Cakeshop is running…
Cakeshop is not yet listening on http
Cakeshop is taking a long time to start. Look at logs
Waiting until Cakeshop is running…
Cakeshop is not yet listening on http
Cakeshop is taking a long time to start. Look at logs
Waiting until Cakeshop is running…
Cakeshop is not yet listening on http
Cakeshop is taking a long time to start. Look at logs
Waiting until Cakeshop is running…
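These “Killed” messages are typically the Linux OOM killer terminating the Java processes on an undersized instance. A toy pre-flight check along the lines of the sizing note above (the 4 vCPU / 16 GiB thresholds are my assumptions based on t3.xlarge, not official requirements):

```shell
# Return "ok" if the given vCPU/RAM figures meet the assumed minimum
# (t3.xlarge = 4 vCPU / 16 GiB), else "too small".
check_size() {
  vcpus=$1; ram_gib=$2
  if [ "$vcpus" -ge 4 ] && [ "$ram_gib" -ge 16 ]; then
    echo "ok"
  else
    echo "too small"
  fi
}
check_size 4 16   # t3.xlarge -> ok
check_size 2 8    # t3.large  -> too small
```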

Selective DNS Traffic Forwarding From On-Prem to Cloud

In some situations it might be desirable to send a subset of DNS traffic from on-prem to the cloud. One example is sending S3 bucket traffic from on-prem to the cloud when one has deployed a Direct Connect circuit without needing a public VIF. The most important consideration is that your organization should not become the authoritative DNS, or you could draw Internet traffic toward your DNS server.

Following are some techniques

Create on-prem DNS Private Zone

Create a private zone on your on-prem DNS server so that all S3 bucket names resolve to the PrivateS3 private IP address. Note that this IP address must be reachable from on-prem, either over Direct Connect or VPN over the Internet.

Note that depending on how the application invokes S3 (for example, via “wget”, “curl”, “aws s3”, or “aws2 s3”), the generated FQDN for the S3 object access may differ. There are three formats:

  1. bucket-name.s3.region.amazonaws.com
  2. bucket-name.s3-region.amazonaws.com
  3. bucket-name.s3.amazonaws.com (applies to the us-east-1 region)
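For scripting, the three formats can be generated from a bucket name and region; the bucket and region values below are placeholders:

```shell
bucket=my-example-bucket
region=us-west-2
# The three FQDN formats an application may generate for the same bucket:
printf '%s\n' \
  "$bucket.s3.$region.amazonaws.com" \
  "$bucket.s3-$region.amazonaws.com" \
  "$bucket.s3.amazonaws.com"          # us-east-1 style
```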

You may need to create a private zone for each region and domain name format. For example, create a zone with domain name s3.us-west-2.amazonaws.com, another zone with domain name s3-us-west-2.amazonaws.com.

https://docs.aviatrix.com/HowTos/privateS3_workflow.html#additional-configuration-create-on-prem-dns-private-zone

CON: The concern is that you end up creating private zones for domains other than your company’s own DNS name.

DNS Conditional Forwarder

Use a DNS wildcard record. For example, use *.s3.us-west-2.amazonaws.com resolving to an A record that is the private IP address of the PrivateS3 internal NLB.
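As an illustration, on a dnsmasq-based resolver this kind of wildcard override is one line; the IP address and file path below are assumptions, not from the original setup:

```
# /etc/dnsmasq.d/privates3.conf (illustrative)
# Answer every name under s3.us-west-2.amazonaws.com with the
# private IP of the internal NLB in front of PrivateS3.
address=/s3.us-west-2.amazonaws.com/10.21.4.100
```

A conditional-forwarder variant would instead use `server=/s3.us-west-2.amazonaws.com/<dns-server-ip>` to forward only that subdomain to another resolver rather than answering it locally.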

CON: This is a very broad catch-all. With the * wildcard forwarding *.s3.us-west-2.amazonaws.com to the NLB, ALL S3 bucket traffic for us-west-2 will come to the NLB. You won’t be authoritative, but it is a lot of unnecessary traffic.

GCP Shared VPC Transit Design and Deploy For Enterprises

Introduction

GCP Shared VPC allows an organization to share or extend its VPC network (you can also call it a subnet) from one project (called the host) to another project (called the service/tenant).

When you enable Shared VPC in a project called “X”, you automatically designate project “X” as a host project. You can then attach one or more service projects to host project “X”. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets you create in the Shared VPC network.

Is Shared VPC a replacement of Transit (Hub-Spoke) Network?

Shared VPC is not a replacement for transit network. Shared VPC is a management and subnet allocation concept. It does not provide enterprise grade routing or traffic engineering capabilities. Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances, to Service Project Admins while maintaining centralized control over network resources like subnets, routes, and firewalls.

Aviatrix Transit Network Design Patterns with GCP Shared VPC

Aviatrix supports the GCP Shared VPC model and builds single-cloud and multi-cloud transit networking architectures to provide enterprise-grade routing, service insertion, hybrid connectivity, and traffic engineering for the workload VMs. There are a number of possible deployment models, but we will focus on one design with a GCP Shared VPC network that is very popular among enterprises. Here the Aviatrix Spoke GW is deployed in a non-shared VPC subnet while the workload VMs are deployed in a shared VPC subnet.

For the other design options please check my previous blog.

Design Highlights

  • Selected subnets (Prod and Dev) are shared with the service project
  • The local vpcnet-transit is a VPC with only one subnet, called transit-gw-subnet (10.21.4.0/24).
    • This is the subnet we will use to deploy the Aviatrix Transit GW
    • As a best practice, there should not be any workload deployed inside this VPC besides the Aviatrix Transit GW
  • Shared VPC Prod contains two subnets in our example
    • 10.21.5.0/24 is for the Aviatrix Spoke GW. This subnet is not shared
    • 10.21.51.0/24 is the subnet shared with the “Prod Service Project”. This is where the workload/app VMs are deployed
  • Similarly a Shared VPC Dev is created with two subnets
    • 10.21.6.0/24 is for Aviatrix Spoke GW. This subnet is not shared
    • 10.21.61.0/24 is the subnet shared with the “Dev Service Project”. This is where the workload/app VMs are deployed
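To sanity-check which subnet in the plan above a given workload IP belongs to, a small POSIX-shell helper can compare /24 prefixes; this is purely illustrative and the sample IPs are taken from the CIDRs listed above:

```shell
# Convert a dotted-quad IP to an integer, then compare /24 prefixes.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in24() {  # usage: in24 <ip> <network>/24  -> prints "yes" or "no"
  net=${2%/24}
  if [ $(( $(ip2int "$1") >> 8 )) -eq $(( $(ip2int "$net") >> 8 )) ]; then
    echo yes
  else
    echo no
  fi
}
in24 10.21.51.7 10.21.51.0/24   # Prod shared workload subnet -> yes
in24 10.21.51.7 10.21.6.0/24    # Dev spoke GW subnet         -> no
```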

Following diagram shows what we described above

Now let us take a look at the deployment aspects

GCP Shared VPC Subnet Settings

In GCP we are only sharing two subnets from the Shared VPC to the service project. This is the best practice as it gives centralized IT more control over what is shared and what is not. Another option is to share the entire VPC and all its subnets to the service project. These shared subnets are the ones where workload VMs will be deployed.

Transit Gateway

Transit GW is deployed on the 10.21.4.0/24 subnet in the host-project VPC. This subnet is not shared and stays local to the host-project.

Following diagram shows the GCP VPC route table where this Transit GW is being deployed

Deploy Production Spoke Gateway

Use the Multi-Cloud Transit workflow to deploy spoke gateway in production VPC. This subnet is not shared with the service project.

Deploy Development Spoke Gateway

Use the same workflow as before to deploy the spoke GW in development VPC. This subnet is not shared with the service project.

Attach Spoke GWs to Transit GW

Now attach Prod and Dev spoke GWs to Transit GW

Now the spokes are attached to the transit gw and can be seen in the following diagram

Production VPC Routing Table

Name: Global
Route Table ID: Global

Name | Route | Target | Gateway | Priority | Tags | Status
avx-12a554b6a0fc446e9562c03aff9f2ff2 | 0.0.0.0/0 | default-internet-gateway | | 1000 | avx-host-project-shared-vpcnet-prod-gbl | active
avx-3e231ce7099947bca4f6f68a645731d0 | 10.0.0.0/8 | Instance gcp-spk-gw-host-project-prod-spk-subnet (zone us-east4-c) | gcp-spk-gw-host-project-prod-spk-subnet | 1000 | | active
avx-95ef9a0df3f24863a6cec71e51f2732b | 172.16.0.0/12 | Instance gcp-spk-gw-host-project-prod-spk-subnet (zone us-east4-c) | gcp-spk-gw-host-project-prod-spk-subnet | 1000 | | active
avx-cf9d16a2e4ec4c98908aaabd92ff7208 | 192.168.0.0/16 | Instance gcp-spk-gw-host-project-prod-spk-subnet (zone us-east4-c) | gcp-spk-gw-host-project-prod-spk-subnet | 1000 | | active
default-route-29758d3f2ac8689f | 10.21.51.0/24 | Virtual network host-project-shared-vpcnet-prod | | 0 | | active
default-route-3bfe90157ff7a563 | 0.0.0.0/0 | default-internet-gateway | | 1000 | | active
default-route-fc0595ff4c43a6dc | 10.21.5.0/24 | Virtual network host-project-shared-vpcnet-prod | | 0 | | active

Production Spoke GW Routing Table

Name: gcp-spk-gw-host-project-prod-spk-subnet

Destination | Via | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
default | 10.21.5.1 | eth0 | | | up | 0
10.21.4.0/24 | | tun-23F57229-0 | 35.245.114.41 | gcp-transit-gw-host-project-local-vpcnet | up | 100
10.21.4.3 | | tun-23F57229-0 | 35.245.114.41 | gcp-transit-gw-host-project-local-vpcnet | up | 100
10.21.5.0/24 | 10.21.5.1 | eth0 | | | up | 0
10.21.5.1 | | eth0 | | | up | 0
10.21.6.0/24 | | tun-23F57229-0 | 35.245.114.41 | gcp-transit-gw-host-project-local-vpcnet | up | 100
10.21.51.0/24 | 10.21.5.1 | eth0 | | | up | 0
10.21.61.0/24 | | tun-23F57229-0 | 35.245.114.41 | gcp-transit-gw-host-project-local-vpcnet | up | 100
169.254.0.0/16 | | eth0 | | | up | 0

Transit Gateway Routing Table

Name: gcp-transit-gw-host-project-local-vpcnet

Destination | Via | Dev | Nexthop IP | Nexthop Gateway | Status | Metric
default | 10.21.4.1 | eth0 | | | up | 0
10.21.4.0/24 | 10.21.4.1 | eth0 | | | up | 0
10.21.4.1 | | eth0 | | | up | 0
10.21.5.0/24 | | tun-23ECF407-0 | 35.236.244.7 | gcp-spk-gw-host-project-prod-spk-subnet | up | 100
10.21.5.2 | | tun-23ECF407-0 | 35.236.244.7 | gcp-spk-gw-host-project-prod-spk-subnet | up | 100
10.21.6.0/24 | | tun-2256383A-0 | 34.86.56.58 | gcp-spk-gw-host-project-dev-spk-subnet | up | 100
10.21.6.2 | | tun-2256383A-0 | 34.86.56.58 | gcp-spk-gw-host-project-dev-spk-subnet | up | 100
10.21.51.0/24 | | tun-23ECF407-0 | 35.236.244.7 | gcp-spk-gw-host-project-prod-spk-subnet | up | 100
10.21.61.0/24 | | tun-2256383A-0 | 34.86.56.58 | gcp-spk-gw-host-project-dev-spk-subnet | up | 100
169.254.0.0/16 | | eth0 | | | up | 0

Transit GW Route Info DB Details

{
  "gateway name": "gcp-transit-gw-host-project-local-vpcnet",
  "segmentation": "disabled",
  "main": {
    "Duplicated CIDRs": [],
    "Best Route DB": {
      "10.21.6.0/24": "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '34.86.56.58', 'name': 'gcp-spk-gw-host-project-dev-spk-subnet', 'cidr': '10.21.6.0/24', 'locprf': '0', 'community': ''}",
      "10.21.61.0/24": "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '34.86.56.58', 'name': 'gcp-spk-gw-host-project-dev-spk-subnet', 'cidr': '10.21.61.0/24', 'locprf': '0', 'community': ''}",
      "10.21.5.0/24": "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '35.236.244.7', 'name': 'gcp-spk-gw-host-project-prod-spk-subnet', 'cidr': '10.21.5.0/24', 'locprf': '0', 'community': ''}",
      "10.21.51.0/24": "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '35.236.244.7', 'name': 'gcp-spk-gw-host-project-prod-spk-subnet', 'cidr': '10.21.51.0/24', 'locprf': '0', 'community': ''}"
    },
    "Route Info DB": {
      "10.21.6.0/24": [
        "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '34.86.56.58', 'name': 'gcp-spk-gw-host-project-dev-spk-subnet', 'cidr': '10.21.6.0/24', 'locprf': '0', 'community': ''}"
      ],
      "10.21.61.0/24": [
        "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '34.86.56.58', 'name': 'gcp-spk-gw-host-project-dev-spk-subnet', 'cidr': '10.21.61.0/24', 'locprf': '0', 'community': ''}"
      ],
      "10.21.5.0/24": [
        "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '35.236.244.7', 'name': 'gcp-spk-gw-host-project-prod-spk-subnet', 'cidr': '10.21.5.0/24', 'locprf': '0', 'community': ''}"
      ],
      "10.21.51.0/24": [
        "{'type': 'vpc', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '50', 'nexthop': '35.236.244.7', 'name': 'gcp-spk-gw-host-project-prod-spk-subnet', 'cidr': '10.21.51.0/24', 'locprf': '0', 'community': ''}"
      ]
    },
    "Best Route DB (linklocal)": {
      "10.21.4.0/24": "{'type': 'linklocal', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '0', 'nexthop': '', 'name': 'gcp-transit-gw-host-project-local-vpcnet', 'cidr': '10.21.4.0/24', 'locprf': '0', 'community': ''}",
      "10.21.4.3/32": "{'type': 'linklocal', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '0', 'nexthop': '', 'name': 'gcp-transit-gw-host-project-local-vpcnet', 'cidr': '10.21.4.3/32', 'locprf': '0', 'community': ''}",
      "10.21.6.2/32": "{'type': 'linklocal', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '0', 'nexthop': '', 'name': 'gcp-spk-gw-host-project-dev-spk-subnet', 'cidr': '10.21.6.2/32', 'locprf': '0', 'community': ''}",
      "10.21.5.2/32": "{'type': 'linklocal', 'table_id': 'main', 'as_path': '', 'as_path_len': '0', 'metric': '0', 'nexthop': '', 'name': 'gcp-spk-gw-host-project-prod-spk-subnet', 'cidr': '10.21.5.2/32', 'locprf': '0', 'community': ''}"
    }
  }
}

Note: Aviatrix UserConnect-6.2.1528 was used to validate this design.

Validation

Transit and spoke VMs deployed successfully in the host project

Workload VMs were deployed in their respective service projects on their respective shared subnets

For validation ssh was enabled on the dev workload VM and ICMP was also enabled. From Prod VM both ICMP and ssh were successful

ssh was also successful as we can see from the following

Aviatrix Spoke GW and Workload VMs in Same GCP Shared VPC Subnets

This pattern is better suited for small deployments, PoCs, or lab setups where the networking is kept very simple. The Aviatrix transit GW is deployed inside the host-project. The Aviatrix spokes are also deployed inside the host-project, but the VPC network (or subnet) is shared with the service/tenant project. This same shared VPC network (subnet) is then used by the workload VMs as well. Using the same subnet for the Aviatrix spoke GW and workload VMs might not be desirable for some organizations.

This article discusses the deployment aspects of the first design pattern.
The second design pattern will be discussed in a different blog.

Deployment Details

Create VPC networks under Host Project

Attach service project with Host Project under Shared VPC option

While attaching the project, you need to select the respective subnet as well. Following screen shows the subnets shared with the service project

You can also list all the “Shared VPC networks” and their attached “Service Projects” as shown in the following screen

Deploy Aviatrix Transit Gateway

Now deploy Aviatrix transit gateway in the host project’s “transit vpc network”

Enable connected transit as shown in the following screen. This allows all spoke VPC networks to communicate with each other. By default it is disabled.

Also assign a BGP AS number to the newly deployed Aviatrix gateway. This step is a best practice, not mandatory. It becomes critical for traffic engineering or if one wants to connect to an on-prem router/firewall over eBGP.

Deploy Aviatrix Spoke Gateways

Deploy Aviatrix spoke gateways in the prod and dev shared VPC networks. These networks were created inside the central IT host project but are shared with the prod and dev service projects. For instance, “gcp-spk-gw-host-project-vpcnet-prod” will be deployed as an Aviatrix gateway inside the host project. It will consume compute resources from the central IT host project. This way, central IT has complete control over the transit and spoke gateways and their networking aspects. The service/tenant projects deploy their VMs in their own compute and are responsible for paying for it, but have no control over the networking aspects.

Following screen shot shows the spoke gateway deployment. Notice that the Account Name / Project name selected in the drop down menu is “netjoints-host-project”

Similarly the spoke gateway for dev service project was deployed. The following list shows the outcome of transit and spoke gateways

Attach Spoke Gateways to Transit Gateway

Now attach both spokes to the transit gateway to build the complete hub-spoke topology

After the attachment, the spoke list will show the connected transit as follows

Testing and Verification

CoPilot View

If you have Aviatrix CoPilot, you can visualize the topology that was built at run-time.

GCP View

GCP host project shows the Aviatrix transit and spoke gateways

GCP Prod and Dev projects would show the Prod and Dev VMs that will be used for testing

shahzad_netjoints_com@service-project-prod-vm1:~$ ping 10.21.12.3
PING 10.21.12.3 (10.21.12.3) 56(84) bytes of data.
64 bytes from 10.21.12.3: icmp_seq=1 ttl=61 time=4.65 ms
64 bytes from 10.21.12.3: icmp_seq=2 ttl=61 time=2.66 ms
64 bytes from 10.21.12.3: icmp_seq=3 ttl=61 time=2.99 ms
64 bytes from 10.21.12.3: icmp_seq=4 ttl=61 time=2.79 ms
^C
--- 10.21.12.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 2.658/3.270/4.647/0.805 ms
shahzad_netjoints_com@service-project-prod-vm1:~$
shahzad_netjoints_com@service-project-prod-vm1:~$ traceroute 10.21.12.3
traceroute to 10.21.12.3 (10.21.12.3), 30 hops max, 60 byte packets
1 gcp-spk-gw-host-project-vpcnet-prod.us-east4-a.c.shahzad-host-project-11.internal (10.21.11.2) 1.648 ms 1.632 ms 1.612
ms
2 * * *
3 * * *
4 * * *
5 10.21.12.3 (10.21.12.3) 5.521 ms 5.499 ms 5.751 ms
shahzad_netjoints_com@service-project-prod-vm1:~$
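When scripting this validation instead of eyeballing it, the ping summary line can be checked automatically. A hedged sketch that parses the summary shown above:

```shell
# Parse the loss percentage out of a ping summary line and report.
summary='4 packets transmitted, 4 received, 0% packet loss, time 8ms'
loss=$(echo "$summary" | grep -o '[0-9]*% packet loss' | grep -o '^[0-9]*')
if [ "$loss" -eq 0 ]; then
  echo "connectivity OK"
else
  echo "packet loss: ${loss}%"
fi
```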

Onboarding GCP Project in Aviatrix Controller with Restricted Access

Problem Statement

By default GCP Compute Service Account permissions are wide open with the Editor role. Here is how you can see the problem yourself.

Create a new GCP Project

Look at the “Service Accounts” area and notice that there is no account there yet.

Enable Compute API for PCI Service Project

Default GCP Service Account

A default Compute Engine GCP service account (845482233226-compute@developer.gserviceaccount.com) is created as you can see from the following diagram

More details about this default compute engine service account can be seen in the following diagram

IAM Permissions For Default Compute Account

If you go back to the IAM area, notice that the default compute account is there. By default it comes with the “Editor” permission. Editor is a wide-open permission and should not be used in production for a service account.

How to Fix this Problem?

The fix is not difficult. There is an automated way and a manual way. I will explain the manual method and have provided URL for the automated way as well.

Change the permission on GCP Default Compute Service Account

When a VM (such as Aviatrix Egress FQDN, User-VPN, Transit, Spoke GW, etc.) is deployed in a GCP project, the GCP project automatically associates a default service account with it. This is the default GCP behavior and cannot be changed. Notice that this account is different from the account we created to on-board the GCP project into the Aviatrix Controller.

There are two methods to restrict the permissions of the default service account:

1- Automated Method: Global IAM setting to disable automatic role grant for Default Service Account

2- Manual Method: Change the permission for Default Service Account

1- Automated Method: Global IAM Setting to disable automatic role grant

GCP recommends production customers to disable automatic role grant to default service accounts

https://cloud.google.com/iam/docs/service-accounts

“When a default service account is created, it is automatically granted the Editor role (roles/editor) on your project. This role includes a very large number of permissions. To follow the principle of least privilege, we strongly recommend that you disable the automatic role grant by adding a constraint to your organization policy, or by revoking the Editor role manually.”

https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#disable_service_account_default_grants

Some Google Cloud services automatically create default service accounts. When a default service account is created, it is automatically granted the Editor role (roles/editor) on your project.

To improve security, we strongly recommend that you disable the automatic role grant. Use the iam.automaticIamGrantsForDefaultServiceAccounts boolean constraint to disable the automatic role grant.

2- Manual Method: Change the permission for Default Service Account

In my setup I used the manual method to change the permissions of the default service account. I assigned the least-privileged roles possible to this new member, PCI. Aviatrix at minimum needs:

  • Compute Admin
  • Service Account User
  • Organization Admin
  • Project IAM Admin

Note: The Organization Admin and Project IAM Admin roles are only needed for “Shared VPC”. Skip those roles if you are not planning to use Shared VPC.

The following screen shows an example of editing the permissions for a member

After you save, it should look like the following

Now select/click this “Service Account” area under “IAM & Admin”

Now “Create new key” for this service account.

The type must be JSON

The new key is created and downloaded on your machine now

This JSON key is what we will use to on-board the GCP account in the Aviatrix Controller

The following is a sample of what it looks like

{
  "type": "service_account",
  "project_id": "service-project-pci",
  "private_key_id": "746b9729f7aaeb0762e637e443941a960542124f",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC1aWbZnR+GHCrM\ncL1ETvam4Q2bb7nXbEtT8aOj6SIt3zJMIN0ycJ40W7QqBS57bBYhF/mskkOEbP60\nkfkKp1ygM5eg5q7+bXINGm5rpDu0VvK7BWvA+lKOGpvJSA3SOEMN+kvr5p3H+iRl\ngE802d0OzR981bWGRodqan3+i3u52K5b4c/25e2nxq0WvTGD+/LcIYZKVTx6PM1X\nfuPNaHhKIk8lMt7HOlT/Y+CJ9/qdPtct/sZak6B0BM5AiurIBYV4Pk87PuhIF+W8\nz0WGtDQ50YvrRWOK6qKFOcs6Z1lZDx1bIUDfj80ItIDw64z9f1wHNC1SEuOHqaJ+\nItFrvBCzAgMBAAECggEAMjL7ceZHrd2nf0xjIz7Sg/UsxcFR5Kmj4pOwG5BMk/L6\nQRSrAOUm8ggaP7J1XVPYf8nZngZPRpq+lIr8JhWPzQjZwX10GRWCBNw2h/THTKzu\nuA/U9G6QX6A/UaBtqqlE7N5BGgMT0B2I6slpoY9T21+pgerFM2Xa3Pig6soAL7mt\nbPY7soSVFshf0EcV3ZkYTxuBXaDdJtLftoimsyJBFYomWHg3WTF/Gp2ua5XC27U1\nDhQ6bR2urQcbcusL+xMeS5ZBo1HyBzJKkQSuHixG7OY62VTct8o89vWIg18oSUaA\nqvcD3WIjh1jNDVHx2JgE1b5INvoLHVZoPGDlZOCa6QKBgQDlgsrBygZLEdtvI9IG\nXcRqIgMI+H3XbIZKYYA4G1Qo+ummsMzIWJyOjotEfanh3SNt/nabUyQjvTezsOHS\n2xvU+R+ISi6GhNLrF+zzvktgwOA8IoCn21gWRLsad4ergDAYDZ6B7q2KWLP9O+DE\nyiN5o/l9l+PATUy7XYaHkhhQVwKBgQDKWXRwmRXnZc3Jb/2sTeJp77i0N163mKVL\n0RNONMFJ2W5i7+Fz9hQ4o/bnGdcPtZbncDHlXCkJEBqx0UfKd6IAW16tvF+gS9OV\n9j9JrqCgaT44mOduMSWkyIqOP4L7Wpp49y36xkfXzkbkmaDShrQpc+KHydmBrNM0\nMQpCD6YZBQKBgQCmQRKDQsdARhVA8x/HANGxWCX+r5LpJHI7G1n4SsOyU+BBob0W\nPCpckiGMYcNYHAr4OObOKXH6ea0J+836IkKNClGvNp1xUHJBXrmE74pG8jD9Hrk3\n3wp2Rx+KUp/yug8cvXDfCninyQ3JGUD/DLaZ/RBTzF1tBhHZgCxdtJTsTQKBgHQM\njb0t7uQA/N60PdYd7OY4t8OTpdzBzLsIs3u8wcXqz2YqkTCCRuKdFrM/nJnD2UHu\nlI8oJdiuxcCJeBTkO6LcxBX73RP/qN9ulKlbX3/gG/E1sDUANsikwuIGBsbFFaae\njF4wbW+VPA9LFHLpElZbweWCnB3E0nQyU+HDO81JAoGBAJ+FLYEqEDwHeskO/us7\nR4pG0A55QpGduQaDtJcW2rQZXhwLOZklucUWVtilUiOGQt9Kk3v31n9R4zhfyS2c\n+RfIs1fHgvEzBhrEUxKchvkcVOSwIOqklk1+wjXvIVlb5zLklyV+cIqaDi7wp26g\nOwMtNj72PO+5w28/4b0rnqTW\n-----END PRIVATE KEY-----\n",


  "client_email": "pciserviceaccount@service-project-pci.iam.gserviceaccount.com",
  "client_id": "106577340982754556535",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/pciserviceaccount%40service-project-pci.iam.gserviceaccount.com"
}
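Before onboarding, it can be handy to confirm which service account a downloaded key belongs to. An illustrative extraction with grep/sed; the file name is an assumption:

```shell
# Pull the client_email field out of a downloaded JSON key file.
# A stand-in key file is created here so the example is self-contained.
cat > /tmp/sa-key.json <<'EOF'
{
  "type": "service_account",
  "client_email": "pciserviceaccount@service-project-pci.iam.gserviceaccount.com"
}
EOF
grep -o '"client_email": *"[^"]*"' /tmp/sa-key.json | sed 's/.*: *"\(.*\)"/\1/'
```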

Now under Aviatrix Controller –> Access Account, onboard the GCP project as shown in the following diagram. Leave the Project ID field blank; the Controller will automatically pick the name.

The following diagram shows that onboarding the GCP project was a success

Deploy Aviatrix Gateway For GCP Project

Now when I deploy the Aviatrix GW, it uses the restricted permissions to connect to GCP and deploy the GW.

GW Deployment Progress on GCP

Controller shows the following output

[12:39:07] Starting to create GW gcp-spk-gw1-pci.
[12:39:08] Connected to GCE.
[12:39:13] Project check complete.
[12:39:14] License check is complete.
[12:39:23] Updating IGW for new gateway…
[12:39:31] Launching compute instance in GCE….
[12:40:34] GCE compute instance created successfully.
[12:40:34] Updating DB.
[12:40:34] Added GW info to Database.
[12:40:36] AVX SQS Queue created.
[12:40:36] Creating Keys.
[12:41:03] Initializing GW…..
[12:41:03] Copy configuration to GW gcp-spk-gw1-pci done.
[12:41:04] Copy new software to GW gcp-spk-gw1-pci done.
[12:41:05] Copy misc new software to GW gcp-spk-gw1-pci done.
[12:41:05] Copy scripts to GW gcp-spk-gw1-pci done.
[12:41:05] Copy sdk to GW gcp-spk-gw1-pci done.
[12:41:10] Copy libraries to GW gcp-spk-gw1-pci done.
[12:41:10] Installing software ….
[12:41:11] Issuing certificates …
[12:41:27] Issue certificates done

Instance creation now shows you the following.

On the GCP VM instance, notice that for additional security Aviatrix does not assign any service account or allow access to any GCP Cloud API.

GCP Least Privileged and Restricted Permission for Aviatrix

Aviatrix Controller provides a unified control and management plane for Google Cloud. Aviatrix allows enterprises to onboard hundreds of GCP projects/accounts into the controller. Once these projects are onboarded, the controller can control and manage networking and security across all of them.

On the Aviatrix documentation page, one of the options is to use a service account with the “Editor” role to onboard the project.

https://docs.aviatrix.com/HowTos/CreateGCloudAccount.html

This might not be desirable for many enterprises that want to use least-privileged service account credentials. In that situation, Aviatrix recommends assigning at least the following roles to the service account so that Aviatrix can perform its functions properly, such as managing compute resources, route tables, firewall rules, and shared VPC networks.

  • Compute Admin
  • Service Account User
  • Organization Administrator (optional and required for Shared VPC)
  • Project IAM Admin (optional and required for Shared VPC)
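As a sketch, the project-level role bindings above could be applied with `gcloud projects add-iam-policy-binding`. The loop below only prints the commands (a dry run), so nothing is changed; the project and service-account names are the ones from this lab and are placeholders for your own.

```shell
# Dry run: prints the gcloud commands instead of executing them.
PROJECT="service-project-pci"                                        # placeholder project
SA="pciserviceaccount@service-project-pci.iam.gserviceaccount.com"   # placeholder SA

# Core roles; Organization Administrator and Project IAM Admin are only
# needed for Shared VPC and are granted at a different scope.
for ROLE in roles/compute.admin roles/iam.serviceAccountUser; do
  echo gcloud projects add-iam-policy-binding "$PROJECT" \
    --member="serviceAccount:$SA" --role="$ROLE"
done
```

Remove the `echo` to actually apply the bindings (requires IAM admin rights on the project).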

Compute Admin role

Name: roles/compute.admin
Description: Full control of all Compute Engine resources. If the user will be managing virtual machine instances that are configured to run as a service account, you must also grant the roles/iam.serviceAccountUser role.
Permissions: compute.*, resourcemanager.projects.get, resourcemanager.projects.list, serviceusage.quotas.get, serviceusage.services.get, serviceusage.services.list
https://cloud.google.com/compute/docs/access/iam#compute.admin

Service Account User role

Name: roles/iam.serviceAccountUser
Description: Run operations as the service account.
Permissions: iam.serviceAccounts.actAs, iam.serviceAccounts.get, iam.serviceAccounts.list, resourcemanager.projects.get, resourcemanager.projects.list
https://cloud.google.com/compute/docs/access/iam#iam.serviceAccountUser

Aviatrix Ingress Filtering Deployment with AWS ALB (Application Load Balancer)

  • This setup requires only a single VPC for testing purposes
  • Aviatrix ingress filtering gateway (aka public subnet filter PSF) is deployed in the public subnet
  • External ALB deployed in the same public subnet as AVX-PSF-GW
  • WordPress App was launched in the non-routable private subnet (vpc2-subnet1)
  • Bonus testing with Internal ALB
    • Test Windows EC2 was launched in a private subnet (vpc2-subnet2).
    • A public IP address was assigned so I could RDP into it and test the internal ALB functionality
    • The 0/0 route pointed to the ENI of the PSF-GW, which acts as a NAT-GW that both brings traffic in and sends it out towards the public Internet
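The 0/0-to-ENI step in the last bullet can be expressed with the AWS CLI. The snippet below only echoes the command (a dry run); the route table and ENI IDs are placeholders and would differ in your account.

```shell
# Dry run: echoes the AWS CLI call rather than executing it.
RTB_ID="rtb-xxxxxxxx"   # route table of the private subnet (placeholder)
PSF_ENI="eni-xxxxxxxx"  # ENI of the Aviatrix PSF gateway (placeholder)

# Default route sends the private subnet's traffic to the PSF gateway ENI,
# which NATs it in both directions.
echo aws ec2 create-route \
  --route-table-id "$RTB_ID" \
  --destination-cidr-block 0.0.0.0/0 \
  --network-interface-id "$PSF_ENI"
```

In this lab the Aviatrix Controller programs this route for you; the CLI form is shown only to make the routing explicit.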

Following is the final topology

Deployment Screen Shots with External ALB

Aviatrix PSF GW Deployment

General Info
Gateway name: aws-psfgw1-vpc2-uswest2
Cloud type: 1
Account name: shahzad-aws
Region: us-west-2
Gateway subnet AZ: us-west-2a
Ingress IGW route table ID: rtb-0cc03b16fef17a6d8
Gateway subnet CIDR: 10.102.0.0/26
Gateway route table ID: rtb-0fa7cfd751d1f0381
Guard duty enforced: yes

PSF GW Raw Config

General Information:
  Account Name: shahzad-aws
  Gateway Name: aws-psfgw1-vpc2-uswest2
  Gateway Original Name: aws-psfgw1-vpc2-uswest2
  VPC ID: vpc-0a6933729014dc26e
  Region: us-west-2
  Primary CIDR: 10.102.0.0/16
  CIDRs: 10.102.0.0/16
  Subnet CIDR: 10.102.0.0/26, ID: subnet-026eabb64d25a6bce
  Type: vpc_legacy
  GW Instance Public IP: 44.241.30.250
  GW EBS encryption: True
  GW Instance Private IP: 10.102.0.32
  GW Instance Size: t3.micro
  Direct Internet: yes
  Designated gateway: No
  Extended public CIDRs: None
  Single AZ gateway HA: yes
  monitor subnets: disable
  ActiveMesh mode: no
  Stateful Firewall: Disabled
  Private S3: Disabled
  Egress Control: Disabled
  summarized_cidrs: None
  public_dns_server: 8.8.8.8
  SNAT Enabled: no
  VPN Access: disabled
Subnet Information:
  subnet-01c282c7b4931a641  us-west-2a 10.102.12.0/24
  subnet-08acde6fb33c7dc0b  us-west-2b 10.102.20.0/24
  subnet-026eabb64d25a6bce  us-west-2a 10.102.0.0/26
  subnet-0073a332e7bf9e6a4  us-west-2a 10.102.11.0/24
  subnet-090552a3e46312188  us-west-2a 10.102.19.0/24

AWS Configuration Details

AWS VPC2 was used for this configuration.

VPC2 Route Table is shown as follows

The following screen shows the VPC2 route tables, including an “Ingress Routing” table that was programmed by the Aviatrix Controller. Aviatrix uses the AWS Ingress Routing feature to deliver this functionality.
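Under the hood, AWS Ingress Routing is an “edge association”: a route table attached to the internet gateway that steers inbound traffic toward the PSF gateway’s ENI. A dry-run sketch with the AWS CLI follows; the IGW/ENI IDs and the protected CIDR are placeholders, and in practice the Aviatrix Controller performs these calls for you.

```shell
# Dry run: prints the AWS CLI calls the controller effectively performs.
INGRESS_RTB="rtb-0cc03b16fef17a6d8"   # ingress route table from this lab
IGW_ID="igw-xxxxxxxx"                 # the VPC's internet gateway (placeholder)
PSF_ENI="eni-xxxxxxxx"                # PSF gateway ENI (placeholder)
PROTECTED_CIDR="10.102.0.0/26"        # placeholder; CIDR of the subnet behind the PSF gateway

# Edge association: attach the ingress route table to the IGW ...
echo aws ec2 associate-route-table \
  --route-table-id "$INGRESS_RTB" --gateway-id "$IGW_ID"

# ... and steer inbound traffic for the protected subnet to the PSF ENI.
echo aws ec2 create-route --route-table-id "$INGRESS_RTB" \
  --destination-cidr-block "$PROTECTED_CIDR" --network-interface-id "$PSF_ENI"
```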

External ALB Config

Click here to access the WordPress App using the External ALB. This link will not work after my test lab is destroyed.

Internal ALB Configuration

This step is optional and only required if an internal team wants to access the same farm of web servers behind the ALB.

Testing for Internal ALB

Following WordPress EC2 was deployed for testing

AMI ID  ami-02ddad6f7544a1442
Platform details  Linux/UNIX
AMI name  bitnami-wordpress-5.5.1-0-linux-debian-10-x86_64-hvm-ebs-7d426cb7-9522-4dd7-a56b-55dd8cc1c8d0-ami-06dd595c4559434b3.4
Termination protection Disabled
Launch time  Mon Sep 21 2020 01:03:16 GMT-0700 (Pacific Daylight Time) (about 12 hours)
AMI location  aws-marketplace/bitnami-wordpress-5.5.1-0-linux-debian-10-x86_64-hvm-ebs-7d426cb7-9522-4dd7-a56b-55dd8cc1c8d0-ami-06dd595c4559434b3.4
 

I RDPed into the Windows jump machine and ran a traceroute. It shows that I am routed internally and not going towards the Internet.

From the same jump box I used a browser to access WordPress through the internal ALB.

http://internal-aws-alb-internal-spk2-uswest2-1069035580.us-west-2.elb.amazonaws.com

Aviatrix Public Cloud Sandbox Starter – Spin up Cloud Networks in Minutes – CLI Mode

Kickstart deploys cloud and multi-cloud networks in minutes with minimal effort. Once the hub/spoke transit network is built in the cloud, it acts as the core networking layer on which more use-cases can be added later as needed.

The lightweight automation script deploys the Aviatrix Controller and an Aviatrix transit architecture in AWS (and optionally in Azure). Everything is self-contained in a Docker image; you do not need to install anything besides the Docker runtime on your laptop/desktop/VM/instance.

Cost

Customers/students are responsible for all costs of running the instances in the cloud (AWS/Azure/GCP/OCI/etc.) and the Aviatrix tunnel costs.

The estimated cost for the introductory lab is USD $1 per hour. Additional use-cases/labs incur additional cost depending on the instances deployed and the Aviatrix tunnels built. The Aviatrix cost breakdown is listed on the AWS Marketplace when you subscribe to the Aviatrix Controller.

OpenSource

Code is OpenSource and available to public at https://github.com/AviatrixSystems/terraform-solutions/tree/master/kickstart/

Important Note

  • This procedure works best for a brand new Aviatrix Controller deployment
  • It is not recommended to launch the controller if one is already deployed
  • If you have previously deployed an Aviatrix Controller under the AWS account, you will receive the following errors. You need to manually remove those roles and policies before moving forward
Error: Error creating IAM Role aviatrix-role-ec2: EntityAlreadyExists: Role with name aviatrix-role-ec2 already exists.

Error: Error creating IAM Role aviatrix-role-app: EntityAlreadyExists: Role with name aviatrix-role-app already exists.

Error: Error creating IAM policy aviatrix-assume-role-policy: EntityAlreadyExists: A policy called aviatrix-assume-role-policy already exists. Duplicate names are not allowed.

Error: Error creating IAM policy aviatrix-app-policy: EntityAlreadyExists: A policy called aviatrix-app-policy already exists. Duplicate names are not allowed.
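If you hit the errors above, the leftover roles and policies can be removed with the AWS CLI. The sketch below only prints the commands (a dry run), in dependency order: policies must be detached and the instance profile removed before the roles can be deleted. `<ACCOUNT_ID>` is a placeholder for your account number, and the policy-to-role mapping shown is the typical one; verify it first with `list-attached-role-policies`.

```shell
# Dry run: prints the cleanup commands in dependency order; nothing executes.
ACCOUNT_ID="<ACCOUNT_ID>"   # replace with your AWS account number

# Check the actual attachments first (mapping may differ in your account).
echo aws iam list-attached-role-policies --role-name aviatrix-role-ec2

# 1. Detach the policies from their roles (typical Aviatrix mapping shown).
echo aws iam detach-role-policy --role-name aviatrix-role-ec2 \
  --policy-arn "arn:aws:iam::$ACCOUNT_ID:policy/aviatrix-assume-role-policy"
echo aws iam detach-role-policy --role-name aviatrix-role-app \
  --policy-arn "arn:aws:iam::$ACCOUNT_ID:policy/aviatrix-app-policy"

# 2. The EC2 role lives inside an instance profile; remove and delete it.
echo aws iam remove-role-from-instance-profile \
  --instance-profile-name aviatrix-role-ec2 --role-name aviatrix-role-ec2
echo aws iam delete-instance-profile --instance-profile-name aviatrix-role-ec2

# 3. Now the roles and policies can be deleted.
for ROLE in aviatrix-role-ec2 aviatrix-role-app; do
  echo aws iam delete-role --role-name "$ROLE"
done
for POLICY in aviatrix-assume-role-policy aviatrix-app-policy; do
  echo aws iam delete-policy --policy-arn "arn:aws:iam::$ACCOUNT_ID:policy/$POLICY"
done
```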

Brief Deployment Instructions

Before you start the deployment process, you need the following

  1. AWS accounts with root access
  2. AWS Access Key ID
  3. AWS Secret Access Key
  4. Subscribe to Aviatrix Controller software in AWS marketplace
  5. Install Docker and make sure Docker Desktop is running on your Mac / Linux or Windows / VM / EC2 during the deployment process
  6. Run the CLI command % docker run -it aviatrix/kickstart:latest bash on your laptop/desktop/VM/EC2
  7. Follow the prompt to deploy the Aviatrix Controller and hub-spoke transit network

Detailed Deployment Instructions

Step#1: Install Docker

If you already have Docker running, then skip this step.

Install docker desktop on your laptop/desktop/VM/EC2/etc. 

Step#2: Run the Docker Container

Run this command on your machine % docker run -it aviatrix/kickstart:latest bash

shahzadali@shahzad-ali ~ % docker run -it aviatrix/kickstart:latest bash
Unable to find image 'aviatrix/kickstart:latest' locally
latest: Pulling from aviatrix/kickstart

18ffc243a628: Pull complete
9736576402f3: Pull complete
cb464b6dee45: Pull complete

Digest: sha256:11f3.........
Status: Downloaded newer image for aviatrix/kickstart:latest
   #
  # #    #    #     #      ##     #####  #####      #    #    #
 #   #   #    #     #     #  #      #    #    #     #     #  #
#     #  #    #     #    #    #     #    #    #     #      ##
#######  #    #     #    ######     #    #####      #      ##
#     #   #  #      #    #    #     #    #   #      #     #  #
#     #    ##       #    #    #     #    #    #     #    #    #
#    #
#   #       #     ####   #    #   ####    #####    ##    #####    #####
#  #        #    #    #  #   #   #          #     #  #   #    #     #
###         #    #       ####     ####      #    #    #  #    #     #
#  #        #    #       #  #         #     #    ######  #####      #
#   #       #    #    #  #   #   #    #     #    #    #  #   #      #
#    #      #     ####   #    #   ####      #    #    #  #    #     #
                                                             ___.----.____
                                                     __,--(_,-'       ,-'
                                                 _,-'               ,-'
                                             _,-'                ,-'
                                          ,-'    ()           ,-'
                                       ,-'    ()           ,-'
                                    ,-'  __..--""       ,-'
                                 ,-'.--""   ,-'      ,-'
              |\         __..--""        ,-'      ,-':
              | \__..--""     ______  ,-'     _,-'   :
         __..--""         ,-'\_____/-'    _,-'       :
 __..--""              ,-' ,-'  ,-'   _,-'____/      :
   `---...___       ,-' ,-'  ,-'  _,-'    _,-'       :
             ```-,-' ,-'  ,-' _,-'    _,-'           :
              |--\,-'___,-'--"" ___,-'-...___        :
              |_..--""                       ```---..:
--> Going to get your AWS API access keys. They are required to launch the Aviatrix controller in AWS. They stay local to this container and are not shared. Access keys can be created in AWS console under Account -> My Security Credentials -> Access keys for CLI, SDK, & API access.
--> Enter AWS access key ID: AKIAIWZQNXNAYK2NRV5A
--> Enter AWS secret access key: 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
--> Do you want to launch the controller? (y/n)? y
--> Generating SSH key for the controller...
--> Done.
--> OK.


--> Go to https://aws.amazon.com/marketplace/pp?sku=b03hn7ck7yp392plmk8bet56k and subscribe to the Aviatrix platform. Click on "Continue to subscribe", and accept the terms. Do NOT click on "Continue to Configuration". Press any key once you have subscribed.
--> Now opening the settings file for the controller. You can leave the defaults or change to your preferences. Press any key to continue. In the text editor, press :wq when done.
--> The controller user configuration is now complete. Now going to launch the controller instance in AWS. The public IP of the controller will be shared with Aviatrix for tracking purposes. Press any key to continue. Close the window, or press Ctrl-C to abort.
Initializing modules...
- avtx_controller_instance in aviatrix-controller-build
- avtx_iam_role in aviatrix-controller-iam-roles
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "http" (hashicorp/http) 1.2.0...
- Downloading plugin for provider "aws" (hashicorp/aws) 3.6.0...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 3.6"
* provider.http: version = "~> 1.2"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
module.avtx_iam_role.data.http.iam_policy_assume_role: Refreshing state...
module.avtx_iam_role.data.http.iam_policy_ec2_role: Refreshing state...
module.avtx_controller_instance.data.http.avx_iam_id: Refreshing state...
data.aws_caller_identity.aws_account: Refreshing state...
module.avtx_controller_instance.data.aws_region.current: Refreshing state...
module.avtx_iam_role.data.aws_caller_identity.current: Refreshing state...
module.avtx_iam_role.aws_iam_role.aviatrix-role-ec2: Creating...
module.avtx_iam_role.aws_iam_policy.aviatrix-assume-role-policy: Creating...
module.avtx_iam_role.aws_iam_policy.aviatrix-app-policy: Creating...
aws_key_pair.avtx_ctrl_key: Creating...
aws_vpc.avtx_ctrl_vpc: Creating...
module.avtx_controller_instance.aws_eip.controller_eip[0]: Creating...
module.avtx_iam_role.aws_iam_role.aviatrix-role-app: Creating...
aws_key_pair.avtx_ctrl_key: Creation complete after 1s [id=avtx-ctrl-key]
module.avtx_controller_instance.aws_eip.controller_eip[0]: Creation complete after 2s [id=eipalloc-07d733f600622dbb9]
module.avtx_iam_role.aws_iam_role.aviatrix-role-ec2: Creation complete after 3s [id=aviatrix-role-ec2]
module.avtx_iam_role.aws_iam_instance_profile.aviatrix-role-ec2_profile: Creating...
module.avtx_iam_role.aws_iam_policy.aviatrix-app-policy: Creation complete after 4s [id=arn:aws:iam::972532942650:policy/aviatrix-app-policy]
aws_vpc.avtx_ctrl_vpc: Creation complete after 5s [id=vpc-013d2e5ec7c72d056]
aws_internet_gateway.gw: Creating...
module.avtx_controller_instance.aws_security_group.AviatrixSecurityGroup: Creating...
aws_subnet.avtx_ctrl_subnet: Creating...
module.avtx_iam_role.aws_iam_role.aviatrix-role-app: Creation complete after 5s [id=aviatrix-role-app]
module.avtx_iam_role.aws_iam_role_policy_attachment.aviatrix-role-app-attach: Creating...
aws_subnet.avtx_ctrl_subnet: Creation complete after 2s [id=subnet-05fd444e1ae8befad]
module.avtx_iam_role.aws_iam_policy.aviatrix-assume-role-policy: Creation complete after 7s [id=arn:aws:iam::972532942650:policy/aviatrix-assume-role-policy]
module.avtx_iam_role.aws_iam_role_policy_attachment.aviatrix-role-ec2-attach: Creating...
module.avtx_iam_role.aws_iam_role_policy_attachment.aviatrix-role-app-attach: Creation complete after 2s [id=aviatrix-role-app-20200913020110879300000001]
module.avtx_iam_role.aws_iam_instance_profile.aviatrix-role-ec2_profile: Creation complete after 4s [id=aviatrix-role-ec2]
aws_internet_gateway.gw: Creation complete after 3s [id=igw-01c58c44031894c8e]
aws_default_route_table.default: Creating...
module.avtx_controller_instance.aws_security_group.AviatrixSecurityGroup: Creation complete after 3s [id=sg-050f2a2e9ed5df58b]
module.avtx_controller_instance.aws_security_group_rule.ingress_rule: Creating...
module.avtx_controller_instance.aws_security_group_rule.egress_rule: Creating...
module.avtx_controller_instance.aws_network_interface.eni-controller[0]: Creating...
module.avtx_iam_role.aws_iam_role_policy_attachment.aviatrix-role-ec2-attach: Creation complete after 1s [id=aviatrix-role-ec2-20200913020112299200000002]
module.avtx_controller_instance.aws_security_group_rule.ingress_rule: Creation complete after 1s [id=sgrule-2634938991]
aws_default_route_table.default: Creation complete after 1s [id=rtb-02df83a8f605d6690]
module.avtx_controller_instance.aws_security_group_rule.egress_rule: Creation complete after 2s [id=sgrule-2822486500]
module.avtx_controller_instance.aws_network_interface.eni-controller[0]: Creation complete after 2s [id=eni-0498ad1bd3a12ebc3]
module.avtx_controller_instance.aws_instance.aviatrixcontroller[0]: Creating...
module.avtx_controller_instance.aws_instance.aviatrixcontroller[0]: Still creating... [9s elapsed]
module.avtx_controller_instance.aws_instance.aviatrixcontroller[0]: Still creating... [19s elapsed]
module.avtx_controller_instance.aws_instance.aviatrixcontroller[0]: Still creating... [29s elapsed]
module.avtx_controller_instance.aws_instance.aviatrixcontroller[0]: Still creating... [39s elapsed]
module.avtx_controller_instance.aws_instance.aviatrixcontroller[0]: Creation complete after 47s [id=i-080e2edfe63019a4d]
module.avtx_controller_instance.aws_eip_association.eip_assoc[0]: Creating...
module.avtx_controller_instance.aws_eip_association.eip_assoc[0]: Creation complete after 3s [id=eipassoc-071ace306b7a1c0bd]
Apply complete! Resources: 19 added, 0 changed, 0 destroyed.

Outputs:
aws_account = 912345678912
controller_private_ip = 10.255.0.10
controller_public_ip = 3.231.68.241
--> Controller successfully launched.
AWS_ACCOUNT: 912345678912
CONTROLLER_PRIVATE_IP: 10.255.0.10
CONTROLLER_PUBLIC_IP: 3.231.68.241
{"controllerIP":"3.231.68.241"}
{}
--> Waiting 5 minutes for the controller to come up... Do not access the controller yet.
--> Enter recovery email: shahzad@aviatrix.com
--> Enter new password:
--> Confirm new password:
{'results': 'User login:admin in account:admin has been authorized successfully on controller 3.231.68.241. - Please check email confirmation.', 'return': True, 'CID': 'zWUI1XBEEF7iUy5BRINr'}
Connecting to Controller
b'{"return":true,"results":"User login:admin password has been changed successfully on controller 3.231.68.241."}'
{'results': 'User login:admin in account:admin has been authorized successfully on controller 3.231.68.241. - Please check email confirmation.', 'return': True, 'CID': '6VywJCnncq7zdhG5TeZO'}
b'{"return":true,"results":"admin email address has been successfully added"}'
b'{"return":true,"results":"true"}'
Created AWS Access Account:  b'{"return":true,"results":"An email confirmation has been sent to shahzad@aviatrix.com"}'
Upgrading controller. It can take several minutes
b'{"return":true,"results":"userConnect has been upgraded to version UserConnect-6.0.2483. Please log out and login again for the new changes to take effect."}'
--> Controller is ready. Do not manually change the controller version while Kickstart is running.
--> Do you want to launch the Aviatrix transit in AWS? (y/n)?
--> Now opening the settings file for the AWS deployment. You can leave the defaults or change to your preferences. You only need to complete the AWS settings. Go to https://raw.githubusercontent.com/AviatrixSystems/terraform-solutions/master/solutions/img/kickstart.png to view what is going to be launched. In the text editor, press :wq when done.

--> Now going to launch gateways in AWS. Press any key to continue.
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 3.6.0...
- Downloading plugin for provider "aviatrix" (terraform-providers/aviatrix) 2.15.1...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 3.6"
Warning: registry.terraform.io: For users on Terraform 0.13 or greater, this provider has moved to AviatrixSystems/aviatrix. Please update your source in required_providers.
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
data.aws_availability_zones.az_available: Refreshing state...
aviatrix_vpc.aws_transit_vpcs["aws_transit_vpc"]: Creating...
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Creating...
aviatrix_vpc.aws_spoke_vpcs["aws_spoke1_vpc"]: Creating...
aviatrix_vpc.aws_transit_vpcs["aws_transit_vpc"]: Still creating... [10s elapsed]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Still creating... [10s elapsed]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke1_vpc"]: Still creating... [10s elapsed]
aviatrix_vpc.aws_transit_vpcs["aws_transit_vpc"]: Creation complete after 12s [id=AWS-EW1-Transit-VPC]
aviatrix_transit_gateway.aws_transit_gw: Creating...
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Still creating... [20s elapsed]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke1_vpc"]: Still creating... [20s elapsed]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke1_vpc"]: Creation complete after 22s [id=AWS-EW1-Spoke1-VPC]
aviatrix_transit_gateway.aws_transit_gw: Still creating... [10s elapsed]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Still creating... [30s elapsed]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Creation complete after 32s [id=AWS-EW1-Spoke2-VPC]
aviatrix_transit_gateway.aws_transit_gw: Still creating... [20s elapsed]
aviatrix_transit_gateway.aws_transit_gw: Creation complete after 2m22s [id=AWS-EW1-Transit-GW]
aviatrix_spoke_gateway.aws_spoke_gws["spoke1"]: Creating...
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Creating...
aviatrix_spoke_gateway.aws_spoke_gws["spoke1"]: Creation complete after 4m13s [id=AWS-EW1-Spoke1-GW]
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Still creating... [4m20s elapsed]
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Creation complete after 4m22s [id=AWS-EW1-Spoke2-GW]
Warning: Resource targeting is in effect

You are creating a plan with the -target option, which means that the result
of this plan may not represent all of the changes requested by the current
configuration.

The -target option is not for routine use, and is provided only for
exceptional situations such as recovering from errors or mistakes, or when
Terraform specifically suggests to use it as part of an error message.
Warning: Applied changes may be incomplete
The plan was created with the -target option in effect, so some changes
requested in the configuration may have been ignored and the output values may
not be fully updated. Run the following command to verify that no other
changes are pending:
    terraform plan

Note that the -target option is not suitable for routine use, and is provided
only for exceptional situations such as recovering from errors or mistakes, or
when Terraform specifically suggests to use it as part of an error message.
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
--> Do you want to launch test EC2 instances in the AWS Spoke VPCs? (y/n)? y
--> Re-opening the settings file. Make sure your key pair name is correct under aws_ec2_key_name. This is your own key pair, not Aviatrix keys for controller or gateways. Also make sure you are in the region where you launched the Spoke gateways. Press any key to continue.
--> Make sure that your AWS quota allows us to have more that 5 Elastic IPs. You can check your quota and request an increase at https://console.aws.amazon.com/servicequotas if needed. Press any key to continue.
--> Launching instances now
data.aws_availability_zones.az_available: Refreshing state...
data.aws_ami.amazon-linux-2: Refreshing state...
aviatrix_vpc.aws_spoke_vpcs["aws_spoke1_vpc"]: Refreshing state... [id=AWS-EW1-Spoke1-VPC]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Refreshing state... [id=AWS-EW1-Spoke2-VPC]
aws_security_group.icmp_ssh["aws_spoke1_vpc"]: Creating...
aws_security_group.icmp_ssh["aws_spoke2_vpc"]: Creating...
aws_security_group.icmp_ssh["aws_spoke1_vpc"]: Creation complete after 7s [id=sg-02313afee80050388]
aws_security_group.icmp_ssh["aws_spoke2_vpc"]: Creation complete after 7s [id=sg-091514a4744f83eaf]
aws_instance.test_instances["spoke2_vm"]: Creating...
aws_instance.test_instances["spoke1_vm"]: Creating...

Warning: Resource targeting is in effect
You are creating a plan with the -target option, which means that the result
of this plan may not represent all of the changes requested by the current
configuration.

The -target option is not for routine use, and is provided only for
exceptional situations such as recovering from errors or mistakes, or when
Terraform specifically suggests to use it as part of an error message.
Warning: Applied changes may be incomplete
The plan was created with the -target option in effect, so some changes
requested in the configuration may have been ignored and the output values may
not be fully updated. Run the following command to verify that no other
changes are pending:
    terraform plan

Note that the -target option is not suitable for routine use, and is provided
only for exceptional situations such as recovering from errors or mistakes, or
when Terraform specifically suggests to use it as part of an error message.

--> Do you want to launch the Aviatrix transit in Azure? (y/n)? n
--> Aviatrix Kickstart is done. Your controller IP is 3.231.68.241.
root@c6de98c3284e:~#

Destroying the AWS Transit LAB

Step#1

Inside the docker image, go into the mcna folder and run terraform destroy

root@fe60ea0b0ed2:~/mcna# terraform destroy


data.aws_availability_zones.az_available: Refreshing state...
data.aws_ami.amazon-linux-2: Refreshing state...
aviatrix_vpc.aws_spoke_vpcs["aws_spoke2_vpc"]: Refreshing state... [id=AWS-EW1-Spoke2-VPC]
aviatrix_vpc.aws_spoke_vpcs["aws_spoke1_vpc"]: Refreshing state... [id=AWS-EW1-Spoke1-VPC]
aviatrix_vpc.aws_transit_vpcs["aws_transit_vpc"]: Refreshing state... [id=AWS-EW1-Transit-VPC]
aws_security_group.icmp_ssh["aws_spoke2_vpc"]: Refreshing state... [id=sg-08cfa3c48064d464d]
aws_security_group.icmp_ssh["aws_spoke1_vpc"]: Refreshing state... [id=sg-02c9ec75bbab8990d]
aviatrix_transit_gateway.aws_transit_gw: Refreshing state... [id=AWS-EW1-Transit-GW]
aws_instance.test_instances["spoke2_vm"]: Refreshing state... [id=i-0c285bede90cd3428]
aws_instance.test_instances["spoke1_vm"]: Refreshing state... [id=i-0fafa7e8959faa1c9]
aviatrix_spoke_gateway.aws_spoke_gws["spoke1"]: Refreshing state... [id=AWS-EW1-Spoke1-GW]
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Refreshing state... [id=AWS-EW1-Spoke2-GW]


Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.
  Enter a value: yes


aviatrix_spoke_gateway.aws_spoke_gws["spoke1"]: Destroying... [id=AWS-EW1-Spoke1-GW]
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Destroying... [id=AWS-EW1-Spoke2-GW]
aws_security_group.icmp_ssh["aws_spoke1_vpc"]: Destroying... [id=sg-02313afee80050388]
aws_security_group.icmp_ssh["aws_spoke2_vpc"]: Destroying... [id=sg-091514a4744f83eaf]
aws_security_group.icmp_ssh["aws_spoke2_vpc"]: Destruction complete after 2s
aws_security_group.icmp_ssh["aws_spoke1_vpc"]: Destruction complete after 2s
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Still destroying... [id=AWS-EW1-Spoke2-GW, 10s elapsed]
aviatrix_spoke_gateway.aws_spoke_gws["spoke1"]: Still destroying... [id=AWS-EW1-Spoke1-GW, 10s elapsed]
aviatrix_spoke_gateway.aws_spoke_gws["spoke2"]: Still destroying... [id=AWS-EW1-Spoke2-GW, 20s elapsed]
aviatrix_spoke_gateway.aws_spoke_gws["spoke1"]: Still destroying... [id=AWS-EW1-Spoke1-GW, 20s elapsed]


Destroy complete! Resources: 8 destroyed.
root@c6de98c3284e:~/mcna#

Step#2


In the controller folder, run terraform destroy

root@c6de98c3284e:~/mcna# cd ../controller/
root@c6de98c3284e:~/controller# terraform destroy
module.avtx_iam_role.data.http.iam_policy_assume_role: Refreshing state...
module.avtx_iam_role.data.http.iam_policy_ec2_role: Refreshing state...
module.avtx_controller_instance.data.http.avx_iam_id: Refreshing state...
module.avtx_iam_role.data.aws_caller_identity.current: Refreshing state...
Plan: 0 to add, 0 to change, 19 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.
  Enter a value: yes

aws_default_route_table.default: Destroying... [id=rtb-02df83a8f605d6690]
aws_default_route_table.default: Destruction complete after 0s
Destroy complete! Resources: 19 destroyed.
root@c6de98c3284e:~/controller#

Aviatrix Transit Network Design Options for GCP Shared VPC

What is GCP Shared VPC?

GCP Shared VPC allows an organization to share or extend its VPC network (in practice, its subnets) from one project (called the host project) to other projects (called service or tenant projects).

When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network.

Is Shared VPC a replacement of Transit (Hub-Spoke) Network?

Shared VPC is not for transit networking. It does not provide any enterprise grade routing or traffic engineering capabilities. Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances, to Service Project Admins while maintaining centralized control over network resources like subnets, routes, and firewalls.
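In gcloud terms, the host/service relationship described above is established with the `shared-vpc` commands. The snippet below only prints the commands (a dry run); both project names are placeholders.

```shell
# Dry run: prints the gcloud commands that establish the host/service link.
HOST="host-project"          # placeholder host project ID
SERVICE="service-project"    # placeholder service project ID

# Designate the host project, then attach a service project to it.
echo gcloud compute shared-vpc enable "$HOST"
echo gcloud compute shared-vpc associated-projects add "$SERVICE" \
  --host-project "$HOST"
```

These are organization-level operations: running them for real requires the Shared VPC Admin role on the organization or folder.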

Aviatrix Transit Network Design Patterns with GCP Shared VPC

Aviatrix supports the GCP Shared VPC model and builds cloud and multi-cloud transit networking architectures to provide enterprise-grade routing, service insertion, hybrid connectivity, and traffic engineering for the workload VMs. A number of different deployment models are possible, but we will focus on two designs with the GCP Shared VPC network.

Design Pattern#1 – Aviatrix Spoke and Workload VMs in the Shared VPC Network

This is suited for small, PoC, or lab deployments where the networking is kept very simple.

Deployment details for this design pattern are discussed here

Design Pattern #2 – Aviatrix Transit in Host Project and Workload VMs in Shared VPC Network

  • This is the recommended model for enterprises
  • The Aviatrix Transit and Spoke GWs are deployed inside the host project in their respective VPC networks
  • These VPC networks are not shared but stay local to the host project
  • The workload VPC networks are created inside the host project and shared with the service/tenant projects using the GCP Shared VPC model
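For this pattern, sharing a workload subnet with a service project typically comes down to an IAM binding on that subnet in the host project. A minimal sketch (subnet name, region, and identity below are placeholders):

```shell
# Grant a service project admin permission to deploy workload VMs
# into a specific shared subnet owned by the host project.
gcloud compute networks subnets add-iam-policy-binding WORKLOAD_SUBNET \
    --project HOST_PROJECT_ID \
    --region us-central1 \
    --member "user:service-admin@example.com" \
    --role "roles/compute.networkUser"
```

Granting `roles/compute.networkUser` at the subnet level (rather than the whole host project) keeps the sharing scoped to only the workload subnets intended for that tenant.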

Deployment details for this design pattern are discussed here

Check Point CloudGuard IaaS in AWS with Quick Failover

Introduction

Aviatrix release 6.0 introduced the Firewall Instances Health Check Enhancement. This enhancement checks a firewall instance’s health by pinging its LAN interface from the connecting Aviatrix FireNet gateway. An alternative option is to check health through the firewall’s management interface. The ICMP health check to the firewall LAN interface improves both failure detection time and detection accuracy.

This enhancement is available for Aviatrix FireNet deployments with Aviatrix Multi-Cloud Global Transit in both AWS and Azure, as well as for AWS TGW-based designs.

In this article we will take a detailed look at this enhancement with the Check Point CloudGuard Firewall.

Aviatrix Transit FireNet Design Pattern and Topology

The following is the Aviatrix Transit FireNet design pattern used to demonstrate the functionality.

  • Aviatrix Controller – version 6.0
  • Check Point CloudGuard Security Manager deployed using a CloudFormation template in AWS – version R80.40
  • Check Point Smart Console – version R80.40
  • Check Point CloudGuard IaaS Firewall with Threat Prevention – R80.40-294.595
    • Check Point Cloud Guard IaaS Security Gateways (Firewalls) were deployed directly from the Aviatrix Controller

Aviatrix Controller Controls and Manages the Life Cycle of Firewall Instances

Aviatrix Controller manages the complete life cycle of the Check Point firewall from an infrastructure perspective. The Aviatrix Controller can:

  • Deploy/Delete firewall instances
  • Inspect and sync routes with the firewall instances
  • Propagate and install the routes in Check Point CloudGuard IaaS firewalls
  • Enable/Disable the Fail Open or Fail Close policy
  • Enable/Disable inspection
  • Enable/Disable Egress traffic via firewall instances
  • Exclude certain subnets/CIDRs from firewall inspection
  • Enable/Disable the LAN side ICMP Health Check

Enable LAN Side ICMP Health Check in Aviatrix Controller

The following screenshots show how to enable the LAN side ICMP Health Check.

Firewall Network –> Advanced –> Click the 3 vertical dots

The expanded view shows the firewalls deployed by the Aviatrix Controller; toward the end of the screenshot, one can enable/disable the LAN side Health Check.

Verify LAN Side ICMP Health Check via Smart Console

From the Check Point Logs & Monitor section, notice that the ICMP health check is initiated every 5 seconds from the Aviatrix Transit FireNet gateways. The 5-second interval is the default and cannot be changed.
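You can also observe the health check from the firewall side by capturing ICMP on the firewall’s LAN interface; echo requests should arrive roughly every 5 seconds. The interface name and FireNet gateway LAN IP below are placeholders for your deployment:

```shell
# On the Check Point gateway CLI (expert mode), watch the LAN interface.
# eth1 is a placeholder for the LAN NIC, 10.10.1.10 for the
# Aviatrix FireNet gateway's LAN IP.
tcpdump -ni eth1 'icmp and src host 10.10.1.10'
```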