LAB6 – GCP Remote User VPN / Client VPN

Business Objective

An important security requirement for GCP VPCs is to control remote user access in a policy-based manner. The cloud and the COVID-19 pandemic have made most users “remote.” The label applies not only to employees who are out of the office: it covers developers, contractors, and partners, whether they are in the office or spread around the globe.

Note: The terms User VPN, Client VPN, and OpenVPN are used interchangeably in this guide.

Remote User VPN / Client VPN Overview

While a bastion host with an SSH tunnel is an easy way to encrypt network traffic and provide direct access, most companies looking for more robust networking will want to invest in a SAML-based VPN solution, because:

  • Single-instance VPN servers in each VPC result in tedious certificate management
  • No centralized enforcement gives rise to questions like “who can access which VPC?”
  • With more than a dozen users and more than a few VPCs, managing and auditing user access can become a major challenge

What’s needed is an easily managed, secure, cost-effective solution. Aviatrix provides a cloud-native and feature-rich client VPN solution.

  • The solution is based on OpenVPN® and is compatible with all OpenVPN® clients
  • In addition, Aviatrix provides its own client that supports SAML authentication directly from the client
  • Each VPN user can be assigned to a profile with access privileges – down to hosts, protocols, and ports
  • Authentication against any identity provider: LDAP/AD, Duo, Okta, Centrify, MFA, client SAML, and other integrations
  • Centralized visibility of all users, connection history, and all certificates across your network

LAB Topology and Objective

  • This LAB does not depend on configuration steps from any previous lab.
    • It does, however, use the topology that has already been deployed in the previous LABs, summarized below.
  • A GCP Spoke gateway (gcp-spoke-vpn) is already deployed in the gcp-spoke-vpn-vpc.
    • This gives remote users, employees, developers, or partners a clear demarcation point (called the Cloud Access layer in the MCNA architecture) before they access enterprise resources/workloads/VMs/etc.

Objective

  • Students will use their laptops to connect to this lab topology using the Aviatrix SAML VPN client and will become part of the topology. This allows them to access any resource using its private IP address.

Deploy Smart SAML Remote User VPN Solution

Deploy User VPN

Controller –> Gateway –> Create New (with the following information)

While creating this gateway, you must select the “VPN Access” checkbox. This makes the gateway an OpenVPN gateway for the Aviatrix User VPN solution.

Common Mistake

The process can take up to ~10 minutes to complete; do not assume it is stuck. Deployment time is hard to predict, even when you deploy in the same region and cloud every time.

Once the gateway is deployed, you can see its status and the GCP LB address that was created as part of the automation.

After the gateway deployment, the topology looks like the following.

GCP TCP LB Configuration (Reference Only)

The following screenshots show the details of the TCP LB in GCP that was created by the Aviatrix automation. The LB helps scale out the solution without any disruption to user profiles or certificates.

Note: Students do not have access to these details. They are shared here for reference purposes only.
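If you have gcloud access to a GCP project in your own environment, the same LB objects can also be inspected from the CLI. A minimal sketch (project and region values are placeholders):

# List the forwarding rule and target pool fronting the VPN gateway
gcloud compute forwarding-rules list --project=<your-project> --regions=<region>
gcloud compute target-pools list --project=<your-project> --regions=<region>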

Profile Based Zero-Trust Access Control

Each VPN user can be assigned to a profile that is defined by access privileges to networks, hosts, protocols, and ports. The access control is dynamically enforced when a VPN user connects to the public cloud via an Aviatrix VPN gateway.

Create a new profile: Controller –> OpenVPN –> Profile

Create a policy that allows users access only to VMs in gcp-spoke-vpc-1, for example as shown below.
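A policy entry for this objective might look like the following (the values are illustrative; use the actual CIDR of gcp-spoke-vpc-1 in your pod):

Action: Allow
Protocol: all
Port: (any)
Target: 10.20.11.0/24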

Now add a remote user and assign the profile to it. Make sure to provide a correct email address here.

Add a New VPN User

Controller –> OPENVPN –> Add a New VPN User


Download the .ovpn profile file from the Aviatrix Controller

Now download the Aviatrix OpenVPN Client: https://docs.aviatrix.com/Downloads/samlclient.html

MAC: https://s3-us-west-2.amazonaws.com/aviatrix-download/AviatrixVPNClient/AVPNC_mac.pkg
Windows: https://s3-us-west-2.amazonaws.com/aviatrix-download/AviatrixVPNClient/AVPNC_win_x64.exe
Linux: Check the Download link here
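If you prefer a standard OpenVPN® client on Linux instead of the Aviatrix client, the downloaded profile can be used directly. A minimal sketch (the profile filename is whatever you downloaded from the Controller; note that SAML authentication requires the Aviatrix client, as mentioned above):

sudo apt install openvpn
sudo openvpn --config <downloaded-profile>.ovpn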

Once the OpenVPN client is connected, your laptop is part of the lab network.

Testing and Verification

  • Ping the VM in gcp-spoke-vpc-2
    • This should not ping because, per the zero-trust profile, remote users are not allowed to access any resources except those in gcp-spoke-vpc-1
  • Ping the VM in gcp-spoke-vpc-1
    • Most likely this will not ping either, because the “gcp-spoke-vpn” VPC is not assigned to any MCNS domain yet
shahzadali@shahzad-ali ~ % ping 10.20.11.130
PING 10.20.11.130 (10.20.11.130): 56 data bytes
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal (10.20.19.2): Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 7c18   0 0000  01  01 544d 192.168.19.6  10.20.11.130 

Request timeout for icmp_seq 0
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal (10.20.19.2): Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 b6da   0 0000  01  01 198b 192.168.19.6  10.20.11.130 

Request timeout for icmp_seq 1
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal (10.20.19.2): Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 fcc1   0 0000  01  01 d3a3 192.168.19.6  10.20.11.130 

^C
--- 10.20.11.130 ping statistics ---
3 packets transmitted, 0 packets received, 100.0% packet loss
shahzadali@shahzad-ali ~ % 

Now change the MCNS settings and assign gcp-spoke-vpn to the Green domain.

The new topology will look like the following.

Connectivity is now established, and the ping starts working, as you can see in the following output.

Request timeout for icmp_seq 7
92 bytes from gcp-spoke-vpn.c.cne-pod24.internal (10.20.19.2): Time to live exceeded
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
 4  5  00 5400 2b8e   0 0000  01  01 a4d7 192.168.19.6  10.20.11.130 

Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
Request timeout for icmp_seq 10
64 bytes from 10.20.11.130: icmp_seq=11 ttl=60 time=70.931 ms
64 bytes from 10.20.11.130: icmp_seq=12 ttl=60 time=63.498 ms
64 bytes from 10.20.11.130: icmp_seq=13 ttl=60 time=62.943 ms
64 bytes from 10.20.11.130: icmp_seq=14 ttl=60 time=69.129 ms
64 bytes from 10.20.11.130: icmp_seq=15 ttl=60 time=62.002 ms
64 bytes from 10.20.11.130: icmp_seq=16 ttl=60 time=68.655 ms
^C
--- 10.20.11.130 ping statistics ---
17 packets transmitted, 6 packets received, 64.7% packet loss
round-trip min/avg/max/stddev = 62.002/66.193/70.931/3.477 ms
shahzadali@shahzad-ali ~ % 

Conclusion

  • Aviatrix User VPN is a powerful solution
  • MCNS provides additional security beyond the profile-based User VPN

LAB5 – Overlapping Subnet / IP (Duplicate IP) Solution in GCP

Objective

ACE Enterprise in GCP wants to connect to different partners to consume SaaS services. These partners could be in a physical DC or branch, or in a VPC/VNet in a cloud such as GCP/AWS/Azure. ACE cannot dictate or control the IPs/subnets/CIDRs those partners have configured, and must support “Bring Your Own IP,” which might overlap with what is already configured in GCP.

In our topology, the GCP Spoke3 VPC subnet overlaps with the AWS Spoke2 VPC subnet (10.42.0.0/24). We need to make sure that the VM in the GCP Spoke3 VPC can communicate with the EC2 instance in the AWS Spoke2 VPC.

Topology Modifications

In order for this lab to work, we simulate the AWS Spoke2 VPC as a remote site/branch:

  • Step#1: Detach the aws-spoke2 gateway from transit
  • Step#2: Delete the aws-spoke2-gw
  • Step#3: Deploy a standard Aviatrix Gateway (s2c-gw) in aws-spoke2-vpc

Step#1

Controller –> Multi-Cloud Transit –> Setup –> Detach AWS VPC2

Step#2

Controller –> Gateway –> Highlight AWS VPC2 Spoke GW –> Delete

Step#3

Controller –> Gateway –> Add New (use the following screen)

This gateway could be any router, firewall, or VPN device in an on-prem or cloud location.

Topology

  • The following diagram shows the topology after the modifications are done
  • In the topology, notice that there is no path between gcp-spoke3-vpc and the s2c-gw in aws-spoke2-vpc
  • Also notice the local and remote virtual IPs allocated for the overlapping VPCs/sites. These are needed so that the overlapping VPCs/sites can communicate with each other using those virtual IPs
    • This is possible using Aviatrix patented technology: as an enterprise, you do not need to program advanced NAT rules; the Aviatrix intent-based “Mapped NAT” policy automatically takes care of all the background VPC/VNet route programming, gateway routing, secure tunnel creation, certificate exchange, SNAT, DNAT, etc.

Configuration

A bi-directional setup is needed for this to work. We will create two connections in the upcoming configuration steps.

Connection From GCP to Partner Site (AWS) – Leg1

Controller –> SITE2CLOUD –> Add a new connection (with following details)

VPC ID / VNet Name: vpc-gcp-spoke-3
Connection Type: Mapped
Connection Name: partner1-s2c-leg1
Remote Gateway Type: Aviatrix
Tunnel Type: Policy-based
Primary Cloud Gateway: gcp-spoke-3
Remote Gateway IP Address: Check the Controller’s Gateway section to find the public IP address
Remote Subnet (Real): 10.42.0.0/24
Remote Subnet (Virtual): 192.168.10.0/24
Local Subnet (Real): 10.42.0.0/24
Local Subnet (Virtual): 192.168.20.0/24

This creates the first leg of the connection, from the cloud (GCP) to the partner site (AWS). The tunnel will stay down until the other end is configured.
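To make the mapping concrete, here is a sketch of how a packet is translated under this configuration (the addresses come from this lab’s topology; the translation itself is handled by the Aviatrix Mapped NAT policy):

  • The GCP VM 10.42.0.130 sends traffic to the AWS VM’s virtual address 192.168.10.84.
  • The destination 192.168.10.84 is DNATed to the real address 10.42.0.84, and the source 10.42.0.130 is SNATed to its virtual address 192.168.20.130.
  • The AWS VM sees the flow as 192.168.20.130 –> 10.42.0.84 and replies to the virtual address, which is mapped back on the return path.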

Download the Configuration

Aviatrix Controller provides a template that can be used to configure the remote router/firewall. Click on the Site-to-Cloud connection that you have just created. Click “EDIT” and then download the configuration for Aviatrix.

The contents of the downloaded file (vpc-vpc-gcp-spoke-3~-~cne-pod24-partner1-s2c-leg1.txt) are shown below for reference. We will import this file in the next step.

{
  "ike_ver": "1",
  "name": "partner1-s2c-leg1",
  "type": "mapped",
  "tunnel_type": "policy",
  "peer_type": "avx",
  "ha_status": "disabled",
  "null_enc": "no",
  "private_route_enc": "no",
  "PSK": "Password123!",
  "ph1_enc": "AES-256-CBC",
  "ph2_enc": "AES-256-CBC",
  "ph1_auth": "SHA-256",
  "ph2_auth": "HMAC-SHA-256",
  "ph1_dh_group": "14",
  "ph2_dh_group": "14",
  "ph2_lifetime": "3600",
  "remote_peer": "34.86.77.74",
  "remote_peer_private_ip": "10.42.0.2",
  "local_peer": "54.71.151.196",
  "remote_subnet_real": "10.42.0.0/24",
  "local_subnet_real": "10.42.0.0/24",
  "remote_subnet": "192.168.20.0/24",
  "local_subnet": "192.168.10.0/24",

  "tun_name": "",
  "highperf": "false",
  "ovpn": "",
  "enable_bgp": "false",
  "bgp_local_ip" : "",
  "bgp_remote_ip" : "",
  "bgp_local_asn_number": "0",
  "bgp_remote_as_num": "0",
  "bgp_neighbor_ip_addr": "",
  "bgp_neighbor_as_num": "0",
  "tunnel_addr_local": "",
  "tunnel_addr_remote": "",
  "activemesh": "yes"
}

Close the dialogue box now.

Connection from Partner Site (AWS) to GCP – Leg2

Now we need to create a new connection and import the file we just downloaded.

Controller –> SITE2CLOUD –> Add a new connection –> Select “aws-west-2-spoke-2” VPC –> Import

VPC ID / VNet Name: aws-west-2-spoke-2
Connection Type: auto-populated (Mapped)
Connection Name: partner1-s2c-leg2
Remote Gateway Type: Aviatrix
Tunnel Type: auto-populated (Policy-based)
Algorithms: auto-populated (do not change these settings)
Primary Cloud Gateway: auto-populated (aws-spoke2-vpc-s2c-gw)
Remote Gateway IP Address: auto-populated
Remote Subnet (Real): auto-populated (10.42.0.0/24)
Remote Subnet (Virtual): auto-populated (192.168.20.0/24)
Local Subnet (Real): auto-populated (10.42.0.0/24)
Local Subnet (Virtual): auto-populated (192.168.10.0/24)

It will take about a minute for both tunnels to come up.

The topology now looks like the following.

Verification

Now you can ping the overlapping-subnet VM on the AWS side from GCP. Make sure to use the AWS virtual subnet, keeping the last octet of the VM the same: the EC2 instance with real IP 10.42.0.84 is reached from GCP as 192.168.10.84.

ubuntu@vm-gcp-spoke-3:~$ ifconfig
ens4 Link encap:Ethernet HWaddr 42:01:0a:2a:00:82
inet addr:10.42.0.130 Bcast:10.42.0.130 Mask:255.255.255.255
inet6 addr: fe80::4001:aff:fe2a:82/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
RX packets:1492666 errors:0 dropped:0 overruns:0 frame:0
TX packets:871540 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:518583431 (518.5 MB) TX bytes:111566034 (111.5 MB)

ubuntu@vm-gcp-spoke-3:~$ ping 192.168.10.84
PING 192.168.10.84 (192.168.10.84) 56(84) bytes of data.
64 bytes from 192.168.10.84: icmp_seq=1 ttl=62 time=83.8 ms
64 bytes from 192.168.10.84: icmp_seq=2 ttl=62 time=83.5 ms
64 bytes from 192.168.10.84: icmp_seq=3 ttl=62 time=83.6 ms
64 bytes from 192.168.10.84: icmp_seq=4 ttl=62 time=83.6 ms
64 bytes from 192.168.10.84: icmp_seq=5 ttl=62 time=84.2 ms
^C
--- 192.168.10.84 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 83.554/83.784/84.203/0.391 ms
ubuntu@vm-gcp-spoke-3:~$

This concludes the lab. The final topology looks like the following.

Note: This lab does not depend on previous labs.

LAB4 – GCP FQDN Based Egress Security

This lab demonstrates how to provide Fully Qualified Domain Name (FQDN) based egress filtering security using Aviatrix. Only FQDNs permitted in the configured policy will be allowed.

Egress FQDN Filtering Overview

Aviatrix FQDN Egress is a highly available security service specifically designed for workloads or applications in
the public clouds.

Aviatrix Egress FQDN Filtering is centrally managed by the Aviatrix Controller and executed on Aviatrix FQDN gateways
in the VNet/VPC/VCN, in either a distributed or a centralized architecture. All internet-bound traffic (TCP/UDP,
including HTTP/HTTPS/SFTP) is first discovered; based on the results, the admin can create egress filters using a
whitelist or blacklist model.

Egress FQDN filtering allows organizations to achieve PCI compliance by limiting applications’ access to
approved FQDNs. It is a common replacement for manual Squid-proxy-style solutions. There are several
ways to deploy Egress FQDN filtering depending on requirements.

This lab will use the existing GCP Spoke GWs to provide filtering that protects instances on private subnets requiring egress security. For a more scalable solution, enterprises opt for a dedicated Egress FQDN GW rather than reusing the existing Spoke GW for this function.

Topology

  • The workload in gcp-spoke2-vpc will follow a zero-trust security model
    • The workload/VM in gcp-spoke2-vpc will only have access to the https://*.ubuntu.com and https://*.google.com FQDNs
    • The rest of the traffic will be blocked by the base zero-trust policy
  • We will configure gcp-spoke2-gw as an Egress FQDN GW as well, to enforce this security policy
  • We will use the VM in gcp-spoke3-vpc as a “Jump Host” for this testing

Enable Egress FQDN Filtering

Controller –> Security –> Egress Control –> New TAG

Controller –> Security –> Egress Control –> Egress FQDN Filter –> Edit “Allow-List” TAG –> Add New
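Per the topology objective above, the allow list needs two entries, roughly like the following (TCP port 443 is assumed here since the policy targets HTTPS):

FQDN: *.ubuntu.com    Protocol: tcp    Port: 443
FQDN: *.google.com    Protocol: tcp    Port: 443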

Now click “SAVE”, then “UPDATE” and CLOSE.

Now make sure that the base policy is “White,” which stands for zero trust. This ensures that only the FQDNs in the allow list are accessible and all other FQDNs are blocked.

Now we will attach this filter policy to gcp-spoke-2-gw and then enable it.

It will look like the following.

The status is still Disabled; we now need to enable it.

Testing and Verification

We have completed the following topology

  • ssh into the gcp-spoke3 VM using its public IP address (vm_gcp_public_ip_spoke3)
    • User: ubuntu / pass: Password123!
  • From there, ssh into the gcp-spoke2 VM using its private IP address (vm_gcp_private_ip_spoke2)
    • User: ubuntu / pass: Password123!
  • gcp-spoke2 is where we have enforced the Egress FQDN policy
    • Since both spoke2 and spoke3 are in the Blue segment, they can communicate with each other. If you tried to ssh into gcp-spoke2-vm from gcp-spoke1-vm, it would not work
shahzadali@shahzad-ali ~ % ssh ubuntu@34.86.180.56

ubuntu@34.86.180.56's password:
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1087-gcp x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
3 packages can be updated.
0 updates are security updates.
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Sat Jan 2 16:41:56 2021 from 172.124.233.126
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@vm-gcp-spoke-3:~$ ifconfig

ens4 Link encap:Ethernet HWaddr 42:01:0a:2a:00:82
inet addr:10.42.0.130 Bcast:10.42.0.130 Mask:255.255.255.255
inet6 addr: fe80::4001:aff:fe2a:82/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
RX packets:1461966 errors:0 dropped:0 overruns:0 frame:0
TX packets:846760 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:510003616 (510.0 MB) TX bytes:107570824 (107.5 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:126 errors:0 dropped:0 overruns:0 frame:0
TX packets:126 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14552 (14.5 KB) TX bytes:14552 (14.5 KB)
ubuntu@vm-gcp-spoke-3:~$


ubuntu@vm-gcp-spoke-3:~$ ssh ubuntu@10.20.12.130

ubuntu@10.20.12.130's password:
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1087-gcp x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
3 packages can be updated.
0 updates are security updates.
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Sat Jan 2 16:42:30 2021 from 10.20.12.2
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.
ubuntu@vm-gcp-spoke-2:~$
ubuntu@vm-gcp-spoke-2:~$

ubuntu@vm-gcp-spoke-2:~$ wget https://www.google.com
--2021-01-02 17:46:12-- https://www.google.com/
Resolving www.google.com (www.google.com)… 74.125.197.147, 74.125.197.103, 74.125.197.104, …
Connecting to www.google.com (www.google.com)|74.125.197.147|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.21’
index.html.21 [ <=> ] 12.54K --.-KB/s in 0s
2021-01-02 17:46:12 (29.4 MB/s) - ‘index.html.21’ saved [12844]

ubuntu@vm-gcp-spoke-2:~$ wget https://cloud.google.com
--2021-01-02 17:46:59-- https://cloud.google.com/
Resolving cloud.google.com (cloud.google.com)… 74.125.20.113, 74.125.20.102, 74.125.20.100, …
Connecting to cloud.google.com (cloud.google.com)|74.125.20.113|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 706920 (690K) [text/html]
Saving to: ‘index.html.22’
index.html.22 100%[==============================================================>] 690.35K 3.44MB/s in 0.2s
2021-01-02 17:47:00 (3.44 MB/s) - ‘index.html.22’ saved [706920/706920]
ubuntu@vm-gcp-spoke-2:~$

ubuntu@vm-gcp-spoke-2:~$ wget https://www.ubuntu.com
--2021-01-02 17:48:34-- https://www.ubuntu.com/
Resolving www.ubuntu.com (www.ubuntu.com)… 91.189.88.180, 91.189.88.181, 91.189.91.45, …
Connecting to www.ubuntu.com (www.ubuntu.com)|91.189.88.180|:443… connected.
HTTP request sent, awaiting response… 301 Moved Permanently
Location: https://ubuntu.com/ [following]
--2021-01-02 17:48:35-- https://ubuntu.com/
Resolving ubuntu.com (ubuntu.com)… 91.189.88.180, 91.189.91.44, 91.189.91.45, …
Connecting to ubuntu.com (ubuntu.com)|91.189.88.180|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 121017 (118K) [text/html]
Saving to: ‘index.html.23’
index.html.23 100%[==============================================================>] 118.18K 319KB/s in 0.4s
2021-01-02 17:48:36 (319 KB/s) - ‘index.html.23’ saved [121017/121017]
ubuntu@vm-gcp-spoke-2:~$

Now, if we try to access any other FQDN, it should fail (the connection hangs at “connected.” and never completes):

ubuntu@vm-gcp-spoke-2:~$ wget https://www.espn.com
--2021-01-02 17:49:27-- https://www.espn.com/
Resolving www.espn.com (www.espn.com)… 13.224.10.82, 13.224.10.114, 13.224.10.88, …
Connecting to www.espn.com (www.espn.com)|13.224.10.82|:443… connected.


ubuntu@vm-gcp-spoke-2:~$ wget https://www.cnn.com
--2021-01-02 17:51:00-- https://www.cnn.com/
Resolving www.cnn.com (www.cnn.com)… 151.101.1.67, 151.101.65.67, 151.101.129.67, …
Connecting to www.cnn.com (www.cnn.com)|151.101.1.67|:443… connected.
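A quick way to exercise several FQDNs in one pass from the protected VM is a small shell loop. A sketch (the hostnames are just examples; the timeout keeps blocked requests from hanging the shell):

for host in www.google.com www.ubuntu.com www.espn.com www.cnn.com; do
  if timeout 5 wget -q -O /dev/null "https://$host"; then
    echo "$host: allowed"
  else
    echo "$host: blocked"
  fi
done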

Egress FQDN Stats on Controller

Controller –> Security –> FQDN Stats

Per Gateway Stats

Egress FQDN Search

==============================
Search results on Gateway gcp-spoke-2
==============================
2021-01-02T16:42:54.990606+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule3[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.88 hostname=www.espn.com state=MATCHED drop_reason=BLACKLISTED Rule=*.espn.com,SourceIP:IGNORE;0;0;443
2021-01-02T16:43:12.620897+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=MATCHED drop_reason=BLACKLISTED Rule=*.espn.com,SourceIP:IGNORE;0;0;443
2021-01-02T17:49:27.437085+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED

2021-01-02T17:49:41.679243+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: message repeated 7 times: [ AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED]
2021-01-02T17:49:55.759092+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED
2021-01-02T17:50:02.669462+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: message repeated 6 times: [ AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED]
2021-01-02T17:50:05.926066+00:00 GW-gcp-spoke-2-34.83.204.207 avx-nfq: AviatrixFQDNRule0[CRIT]nfq_ssl_handle_client_hello() L#291  Gateway=gcp-spoke-2 S_IP=10.20.12.130 D_IP=13.224.10.82 hostname=www.espn.com state=NO_MATCH drop_reason=NOT_WHITELISTED

CoPilot Egress FQDN Stats

https://copilot-pod24.mcna.cc/#/login   user: copilot / pass: Copilot123!  (Read-Only on Controller)

CoPilot –> Security –> Egress

CoPilot Live Status

CoPilot Search

LAB3 – GCP Multi-Cloud Network Segmentation (MCNS)

It is important to provide security compliance and fulfill audit requirements, and network segmentation is one of the methods for doing so. Network security segmentation is a critical business requirement, and Aviatrix MCNS has helped many customers meet it.

So far we have built the following topology.

Our objective in this lab is to segment the VPCs in GCP based on workload. Here are the business requirements:

  • There are two types of workload present in GCP, called Green and Blue
  • Workloads in Blue and workloads in Green must not be allowed to communicate with each other
  • Workloads within the same segment (Blue or Green) must be allowed to communicate with each other
  • These segments must extend to AWS as well
  • These segments should also extend to the on-prem data centers, providing segmentation for hybrid connectivity and for the respective workloads deployed in the on-premise DC locations

The following is how the final topology will look after all the business objectives are met.

Enable MCNS on Transit gateways

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Enable for “gcp-transit”

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Enable for “aws-transit-gw-us-west-2”

Create Multi-Cloud Security Domains (MCSD)

Create two MCSDs, Green and Blue. These two domains are not connected to each other by default.

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Create MCSD

Repeat it again for Blue

Controller –> Multi-Cloud Transit –> Segmentation –> Plan –> Create MCSD

MCNS Connection Policy

The following screen shows that Green and Blue are not connected, as per their security or “Connection Policy.”

Assign GCP and AWS VPC to MCSD

In order to enforce the intent/policy we just created, we need to assign VPCs to their respective Security Domain based on the business policy.

  • gcp-spoke-1 :: Green
  • gcp-spoke-2 :: Blue
  • gcp-spoke-3 :: Blue
  • gcp-to-dc-router-1 :: Green
  • aws-spoke-1 :: Green
  • aws-spoke-2 :: Blue
  • aws-to-dc-router-2 :: Green

Controller –> Multi-Cloud Transit –> Segmentation –> Build –> Associate Aviatrix Spoke to MCSD

Repeat this step as per the business requirement

Controller –> Multi-Cloud Transit –> Segmentation –> List –> Domains to verify the configuration

Verify the Connectivity

Now Ping from vm_gcp_private_ip_spoke1 (Green Segment) to other test machines (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 Blue Segment (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 Blue Segment (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 Green Segment (10.101.0.84) – should work
ubuntu@vm-gcp-spoke-1:~$ ping 10.20.12.130
PING 10.20.12.130 (10.20.12.130) 56(84) bytes of data.
From 10.20.11.2 icmp_seq=1 Time to live exceeded
From 10.20.11.2 icmp_seq=2 Time to live exceeded
From 10.20.11.2 icmp_seq=3 Time to live exceeded
^C
--- 10.20.12.130 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms

ubuntu@vm-gcp-spoke-1:~$ ping 10.42.0.130
PING 10.42.0.130 (10.42.0.130) 56(84) bytes of data.
From 10.20.11.2 icmp_seq=1 Time to live exceeded
From 10.20.11.2 icmp_seq=2 Time to live exceeded
From 10.20.11.2 icmp_seq=3 Time to live exceeded
From 10.20.11.2 icmp_seq=4 Time to live exceeded
^C
--- 10.42.0.130 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3003ms

ubuntu@vm-gcp-spoke-1:~$ ping 10.101.0.84
PING 10.101.0.84 (10.101.0.84) 56(84) bytes of data.
64 bytes from 10.101.0.84: icmp_seq=1 ttl=60 time=63.3 ms
64 bytes from 10.101.0.84: icmp_seq=2 ttl=60 time=61.3 ms
64 bytes from 10.101.0.84: icmp_seq=3 ttl=60 time=61.5 ms
64 bytes from 10.101.0.84: icmp_seq=4 ttl=60 time=61.3 ms
^C
--- 10.101.0.84 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 61.376/61.914/63.303/0.806 ms
ubuntu@vm-gcp-spoke-1:~$ 

Now keep the ping running from gcp-spoke-1 VM to 10.20.12.130 and change the policy to connect Green and Blue. Notice that ping starts working.

Now change the policy back to what it was before, so that Blue is not allowed to reach Green.

LAB2 – GCP Multi-Cloud Network Transit / Hub-Spoke

In this lab, we will build the hub and spoke network in GCP. All the GCP VPCs and their respective subnets are already created for you to save time.

GCP Spoke-2 GW VPC Network/Subnet Creation

This step has already been done for you, so please do not attempt it.

The Controller can create these subnets via API/Terraform as well, which makes it easy to include this step in your CI/CD pipeline.
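As an illustration, a script would first log in to the Controller’s REST API and then invoke actions such as VPC creation with the returned session token. A minimal sketch (the endpoint path and parameter names are assumptions based on Aviatrix’s public API documentation; verify against your Controller version):

# Log in and capture the CID session token for subsequent API calls
curl -sk "https://<controller-ip>/v1/api" \
  --data-urlencode "action=login" \
  --data-urlencode "username=admin" \
  --data-urlencode "password=<password>"
# The returned CID is then passed as a parameter to later actions (e.g., VPC creation).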

Controller –> Useful Tools –> Create A VPC –> Add New
Controller –> Useful Tools –> Create A VPC –> Add New Row

You can view the VPCs that have already been created for you.

Controller –> Useful Tools –> VPC Tracker

Now click “VIEW GCLOUD SUBNET” to see further details
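If you have gcloud access to a project in your own environment, the same subnet details can be pulled from the CLI. A sketch (project and network names are placeholders):

gcloud compute networks list --project=<your-project>
gcloud compute networks subnets list --project=<your-project> --network=<vpc-name>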

Deploy GCP Spoke-2 Gateway

Since we have all the subnets created properly, it is time to deploy the GCP Spoke-2 GW using the Controller.

Controller –> Multi-Cloud Transit –> Setup

Connect GCP Spokes to GCP Transit

At this stage we have 4 spokes in GCP

  • gcp-spoke1 –> connected to gcp transit gw
  • gcp-spoke2 –> not connected to gcp transit gw
  • gcp-spoke3 –> not connected to gcp transit gw
  • gcp-spoke4 –> connected to gcp transit gw (this will be used for user-vpn later in the lab)

Login to CoPilot (user: Copilot / pass: Copilot123!) now to see the connectivity in real-time

In this part of the lab, we will connect the remaining spoke gateways to the Aviatrix transit gw.

Controller –> Multi-Cloud Transit –> Setup –> Attach Spoke Gateway to Transit Network

Spoke#2

Spoke#3

Notice that the topology has changed and all GCP spokes are now connected.

Verify GCP Spoke Connectivity

Now Ping from vm_gcp_private_ip_spoke1 (user: ubuntu / pass: Password123!) to other test machines (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should work
    • using Aviatrix hub/spoke
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should work
    • using Aviatrix hub/spoke

Connect AWS Spoke to AWS-Transit

If we look at the AWS side, aws-spoke1 is connected to aws-transit but spoke2 is not yet connected, as shown in the topology below.

Now we will use the Aviatrix Controller to establish this connection. The aws-spoke2 gateway is already deployed for you.

Controller –> Multi-Cloud Transit –> Setup –> Attach Spoke Gateway to Transit Network

After this setup, the topology is as follows.

Verify AWS Spoke Connectivity

Now Ping from vm_gcp_private_ip_spoke1 (user: ubuntu / pass: Password123!) to AWS test EC2s (as listed below) and check the connectivity

  • vm_aws_private_ip_spoke1 (10.101.0.84) – should work
    • This works because the traffic is routed via the dc-router-1. There is a private link between dc-router-1 and dc-router-2 connecting GCP and AWS using services like Equinix/Megaport/Pureport/Epsilon/etc.
    • Use tracepath or traceroute ($sudo apt install traceroute) to confirm
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work
    • Because this subnet overlaps with the gcp spoke-vpc3 subnet
    • We will fix this issue later in the lab

The following output shows the traffic path from the gcp-spoke-1 test VM to the AWS test EC2 in aws-spoke1.

ubuntu@vm-gcp-spoke-1:~$ tracepath 10.101.0.84
 1?: [LOCALHOST]                                         pmtu 1460
 1:  gcp-spoke-1.c.cne-pod24.internal                      1.563ms 
 1:  gcp-spoke-1.c.cne-pod24.internal                      0.260ms 
 2:  gcp-spoke-1.c.cne-pod24.internal                      0.256ms pmtu 1396
 2:  10.20.3.2                                             1.846ms 
 3:  10.20.3.2                                             0.775ms pmtu 1390
 3:  169.254.100.2                                        51.410ms 
 4:  172.16.0.2                                           55.184ms 
 5:  10.10.0.93                                           75.906ms 
 6:  10.101.0.70                                          77.206ms 

Multi-Cloud Transit Peering (Connecting GCP and AWS)

Now that we have set up all the gateways and built the hub-and-spoke (transit) in GCP, it is time to connect GCP with AWS. This peering is secure and encrypted, and it takes care of the many configuration options and best practices needed for such complex connectivity. You will appreciate the simplicity of doing it.

Select Multi-Cloud Transit –> Transit Peering –> ADD NEW

Note that the order of cloud selection does not matter here. After a few seconds, the status will turn green as shown in the screenshot below.

Verify Transit Connectivity

Ping from vm_gcp_private_ip_spoke1 (user: ubuntu / pass: Password123!) to GCP and AWS VMs (as listed below) and make sure connectivity works

  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should work
    • using Aviatrix hub/spoke
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should work
    • using Aviatrix hub/spoke
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work
    • Because this subnet overlaps with the gcp spoke-vpc3 subnet
    • We will fix this issue later in the lab
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should work
    • This time the packet will use the GCP-to-AWS Aviatrix transit gateway route for connectivity
    • Use tracepath or traceroute ($sudo apt install traceroute) to confirm
    • In case this link goes down, connectivity will be provided over the DC-router DCI link as a backup. This setup is important for Business Continuity and Disaster Recovery
ubuntu@vm-gcp-spoke-1:~$ traceroute 10.101.0.84
traceroute to 10.101.0.84 (10.101.0.84), 30 hops max, 60 byte packets
1 gcp-spoke-1.c.cne-pod24.internal (10.20.11.2) 1.322 ms 1.354 ms 1.413 ms
2 10.20.3.2 (10.20.3.2) 2.684 ms 2.708 ms 2.710 ms
3 10.10.0.93 (10.10.0.93) 62.072 ms 62.032 ms 62.032 ms
4 10.101.0.70 (10.101.0.70) 61.975 ms 61.981 ms 61.895 ms

Controller –> Multi-Cloud Transit –> List –> Transit Gateway –> gcp-transit –> Show Details

The following screen shows the best path available to reach 10.101.0.84.

Controller –> Multi-Cloud Transit –> List –> Transit Gateway –> gcp-transit –> Show Details –> Route Info DB Details

The following screen shows that 10.101.0.0/24 was received via two paths and that the transit peering was selected as the best path.

This completes the transit topology and testing. You can verify it in CoPilot as well.

LAB1 – Google Cloud and Aviatrix Testing/POC LAB Guide

Introduction

This document is the lab guide for the GCP Test Flight Project. The audience is anyone with basic GCP knowledge. It is GCP-focused, with the connection to AWS as an optional component.

Topology

The following is the starting topology. Some components are pre-built to save time.

Once you finish all the lab steps, the following is what it will look like.

Main Use-Cases Covered

  • Cloud and Multi-Cloud Transit (Hub and Spoke) connectivity
  • Multi-Cloud Network Segmentation (MCNS)
  • Egress FQDN
  • User-VPN
  • Multi-Cloud Transit with AWS
  • Policy Based Remote User SAML/SSL VPN
  • Hybrid / On-Premise Connectivity (S2C)
  • Traffic Engineering with SD and advanced BGP knobs
  • Day2 Operations, troubleshooting and monitoring (Aviatrix CoPilot)

Warning / Pre-Requisite / Notes

  • Do not change the password of any device or server in the lab pod
  • Do not change controller password
  • In most of the places:
    • The Aviatrix Controller is referred to as “Controller”
    • The Aviatrix Gateway is referred to as “Gateway”

LAB1 – Verify Connectivity

Make sure the lab is in good standing. Verify the following tasks by logging into the Aviatrix Controller UI. Make sure you log in to your own pod. Pod name is displayed on top.

Make sure the resources deployed match the following.


The GCP project is already onboarded in the Aviatrix Controller (under Accounts –> Access Accounts) to save time.


Now change the email address to your corporate email address under Accounts –> Account Users –> admin (do not change the Controller password)

Aviatrix gateways are pre-deployed to save time. Make sure all gateways are up and running.

Check the transit gateway under Multi-Cloud Transit –> List –> Transit

Check the spoke gateway under Multi-Cloud Transit –> List –> Spoke

Verify GCP VM SSH Connectivity

GCP VMs only require a password; the password is Password123!
No .pem file is needed to log in to them. Log in to vm_gcp_public_ip_spoke1; this IP address is provided in the LAB POD file.

shahzadali@shahzad-ali Pem Files % ssh ubuntu@35.224.13.215
ubuntu@35.224.13.215's password:
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.15.0-1087-gcp x86_64)
 
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
 
11 packages can be updated.
0 updates are security updates.
 
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
 
 
Last login: Sat Nov 28 16:42:18 2020 from 172.124.233.126
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
 
ubuntu@vm-gcp-test-spoke-1:~$

ubuntu@vm-gcp-spoke-1:~$ ifconfig

ens4 
Link encap:Ethernet HWaddr 42:01:0a:14:0b:82
inet addr:10.20.11.130 Bcast:10.20.11.130 Mask:255.255.255.255
inet6 addr: fe80::4001:aff:fe14:b82/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
RX packets:1025064 errors:0 dropped:0 overruns:0 frame:0
TX packets:663466 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:511233248 (511.2 MB) TX bytes:81897766 (81.8 MB)

lo 
Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ubuntu@vm-gcp-spoke-1:~$

Now Ping from vm_gcp_private_ip_spoke1 to other test machines (as listed below) and check the connectivity

  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

Can you guess why it worked or did not work?

Verify AWS EC2 SSH Connectivity

ssh into the AWS VM in Spoke1 using its public IP address and .pem file (the address is provided in the lab pod file you received). If you get the following error, fix your .pem file permissions first.

shahzadali@shahzad-ali Pem Files % ssh ubuntu@35.163.104.122 -i instance_priv_key.pem
ubuntu@35.163.104.122: Permission denied (publickey).
 
shahzadali@shahzad-ali Pem Files % chmod 400 instance_priv_key.pem

ssh again using the username and the .pem file, then ping the second AWS instance.

shahzadali@shahzad-ali Desktop % ssh ubuntu@34.217.68.104 -i instance_priv_key_gcp_pod24.pem
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-1072-aws x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
79 packages can be updated.
0 updates are security updates.
New release '18.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Thu Dec 31 20:36:52 2020 from 172.124.233.126
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.
ubuntu@ip-10-101-0-84:~$

Ping from vm_aws_private_ip_spoke1 to …

  • vm_gcp_private_ip_spoke1 (10.20.11.130) – should work
  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

Verify On-Prem Router SSH Connectivity

ssh into the on-prem dc-router-1 (we use a Cisco CSR to simulate it). The user is admin and the password is “Password123”.

shahzadali@shahzad-ali Desktop % ssh admin@54.219.225.218
Password: 


dc-router-1#show ip int brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       192.168.20.162  YES DHCP   up                    up      
Loopback0              10.20.11.254    YES TFTP   up                    up      
Tunnel1                169.254.100.2   YES TFTP   up                    up      
Tunnel42               172.16.0.1      YES TFTP   up                    up      
VirtualPortGroup0      192.168.35.101  YES TFTP   up                    up      
dc-router-1#

From dc-router-1, ping the GCP and AWS instances’ private IP addresses.

  • vm_gcp_private_ip_spoke1 (10.20.11.130) – should not work
  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should not work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work
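For reference, a failing ping from the router CLI should look roughly like the standard IOS output below (address taken from the list above):

dc-router-1#ping 10.20.11.130
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.20.11.130, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)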

Now ssh into the on-prem dc-router-2 (also a Cisco CSR). The user is admin and the password is “Password123”.

shahzadali@shahzad-ali Desktop % ssh admin@54.193.196.247
The authenticity of host '54.193.196.247 (54.193.196.247)' can't be established.
RSA key fingerprint is SHA256:fi8bbpJc8LCE32dn9RL1EIDzznl+mgQ5V5u5vR/hxFo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '54.193.196.247' (RSA) to the list of known hosts.
Password: 
dc-router-2#
dc-router-2#show ip int brief 
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       192.168.10.120  YES DHCP   up                    up      
Tunnel1                169.254.101.2   YES TFTP   up                    up      
Tunnel42               172.16.0.2      YES TFTP   up                    up      
VirtualPortGroup0      192.168.35.101  YES TFTP   up                    up      
dc-router-2#

From dc-router-2, ping the GCP and AWS instances’ private IP addresses.

  • vm_gcp_private_ip_spoke1 (10.20.11.130) – should not work
  • vm_gcp_private_ip_spoke2 (10.20.12.130) – should not work
  • vm_gcp_private_ip_spoke3 (10.42.0.130) – should not work
  • vm_aws_private_ip_spoke1 (10.101.0.84) – should not work
  • vm_aws_private_ip_spoke2 (10.42.0.84) – should not work

This completes verification LAB1. We will now move on to the other use cases.