Secure S3 Bucket Access Over Direct Connect Private VIF

Problem Statement

  • The AWS-recommended, native way to access S3 buckets over Direct Connect (DX) from an on-prem location is to use a Public VIF
  • Accessing an S3 bucket over a DX Public VIF can pose serious security threats to enterprises
  • AWS advertises the entire S3 public subnet range (for one or all regions) to on-prem, which implies that …
    • All on-prem users can upload to any S3 bucket, including personal S3 buckets in their own personal accounts, leading to confidential data leakage
    • Potentially higher utilization of the DX circuit (non-compliant data) that could choke the DX and may incur higher charges ($$$)


The solution proposed here not only works for traffic coming from an on-prem location but also allows secure S3 connectivity for traffic coming from other AWS VPCs or other public clouds (multi-cloud scenario).

Following are the high-level steps to implement this solution:

  • Create a dedicated S3 VPC (as a spoke)
  • Create an S3 endpoint in the S3 VPC
  • Deploy an Aviatrix Global Transit and attach the S3 VPC (as a spoke) to it
  • Deploy Aviatrix S3 Gateways in the dedicated S3 VPC
  • Enable the private S3 feature in the Aviatrix Controller
    • The Controller automatically configures an AWS NLB and load balances across multiple Aviatrix S3 gateways for high availability, redundancy, and performance
  • Apply the security policy to allow S3 access from specific CIDRs
    • The Controller enforces a zero-trust S3 security policy
    • Only the CIDRs specified in the policy are allowed to access S3 (the rest are blocked by default)
  • Create an on-prem DNS private zone to point the S3 bucket FQDN to the private IP
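The source-CIDR policy in the steps above amounts to a simple allow-list check. The following is a minimal sketch of that logic (the CIDRs and IPs are hypothetical; in the real deployment the Aviatrix S3 gateways enforce this, not your own code):

```python
import ipaddress

# Hypothetical on-prem CIDRs allowed to reach the private S3 endpoint
ALLOWED_CIDRS = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("172.16.5.0/24"),
]

def is_allowed(source_ip: str) -> bool:
    """Zero-trust check: permit only sources inside an allowed CIDR."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in ALLOWED_CIDRS)

print(is_allowed("10.10.3.7"))    # source inside 10.10.0.0/16 -> True
print(is_allowed("192.168.1.9"))  # source not in any allowed CIDR -> False
```

Everything not explicitly allowed is denied, which is the zero-trust posture described above.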

Production Topology

Following is the enterprise topology that solves this business challenge

Traffic Flow

  • The business requirement is that on-prem corporate resources, such as laptops and desktops, can access the S3 bucket in AWS
    • This access must be encrypted over the DX link at line rate
    • The encryption should allow maximum utilization of the 10G DX circuit
  • There is an optional Aviatrix CloudN hardware appliance in the topology
    • The CloudN HPE (High Performance Encryption) appliance provides end-to-end, line-rate encryption of all the traffic crossing the AWS DX link
    • Traffic over the DX link is not encrypted by default, which is why it is important to use the CloudN appliance without compromising throughput (cloud-native IPsec encryption is limited to only 1.25 Gbps)
  • S3 bucket traffic goes from the on-prem DX link to the Aviatrix Transit GW (AVX-TR-GW)
    • This is possible because the on-prem DNS server is configured to send S3 bucket traffic towards the S3 VPC
  • The S3 VPC (AVX-SPK-GW) is attached to AVX-TR-GW as a spoke
  • S3 bucket traffic goes to AVX-S3-GW (AVX-SPK-GW)
  • AVX-S3-GW inspects the traffic and, if the policy allows it, forwards the traffic to the S3 endpoint configured inside the S3 VPC
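The DNS redirection in the flow above keys off the bucket's virtual-hosted-style FQDN. As a small reference sketch (the bucket name is hypothetical), the FQDN your on-prem DNS must override has this shape:

```python
# Virtual-hosted-style S3 FQDN: <bucket>.s3.<region>.amazonaws.com
def bucket_fqdn(bucket: str, region: str) -> str:
    return f"{bucket}.s3.{region}.amazonaws.com"

print(bucket_fqdn("example-bucket", "us-east-1"))
# -> example-bucket.s3.us-east-1.amazonaws.com
```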

Deployment Details

In this deployment:

  • Aviatrix Transit and Spoke gateways are already deployed in their respective VPCs
  • Aviatrix Transit and Spoke peering is already configured
  • On-prem to Direct Connect connectivity is already established

Tip: You can test this solution even if you do not have a DX circuit. Deploy another Aviatrix Transit GW in another cloud or region and treat this second Transit as the on-prem location. Then peer both Aviatrix Transit gateways together to establish connectivity.

1 – Create AWS S3 Endpoint

S3 endpoint created in AWS

AWS S3 Endpoint details

2 – Deploy Aviatrix S3 Gateway

Deploy two Aviatrix generic or standard gateways from the “Gateway” left navigation tab in the S3-Spoke-VPC.

S3 Gateways
Aviatrix GWs designated for S3 traffic

3 – Configure S3 Bucket Access Parameters

Under Security –> Private S3, configure the S3 bucket access parameters.

  • Select the gateway name first
  • Then select the source CIDR
    • This creates an access policy such that only the CIDRs mentioned here are allowed to access certain S3 buckets
  • Now specify the bucket name. In my case I am using “ as my bucket name
    • You can find this name by logging into your AWS console
Aviatrix S3 private access parameters

Repeat the same steps for the second S3 gateway (this second gateway is optional, for high availability). You can specify more than two gateways as well, depending on your load requirements.

aws console to find out s3 bucket name
Tip: The S3 object URL is only visible after uploading a file to the S3 bucket

4 – AWS NLB Created

In the background, an AWS ELB of type NLB is created and the two S3 gateways are load balanced behind it. The following screens show the NLB configuration in the AWS console, done by the Aviatrix Controller

NLB created by the Aviatrix Controller

The listener is listening on HTTPS:443 for the S3 bucket

The NLB is load balancing between the two S3 gateways

Testing and Validation

The following screen is taken from the on-prem laptop. Notice that a ping to my S3 bucket resolves to the private IP address of the AWS NLB (the echo requests themselves time out, but the name resolution shows the NLB's private address), which takes the request to one of the Aviatrix S3 gateways.

Pinging [] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.

Ping showing that the public S3 FQDN resolves to the private NLB IP address

The "S3 Bucket FQDN Name Resolution IP" is actually the IP address of the private interface of the AWS NLB. This is the IP address that the S3 bucket FQDN should resolve to in your on-prem DNS (i.e")
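As an illustration, an on-prem BIND-style private zone record for a hypothetical bucket `example-bucket` in `us-east-1`, pointing at a hypothetical NLB private IP `10.20.30.40`, could look like this (your bucket name, region, and NLB IP will differ):

```
; on-prem private zone override for the S3 bucket FQDN
example-bucket.s3.us-east-1.amazonaws.com.  300  IN  A  10.20.30.40
```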

Now type the S3 bucket FQDN into the browser on your on-prem laptop/desktop/server.

The following screen shows that we can successfully access the S3 public FQDN using the private VIF.
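To double-check the redirection from an on-prem host, you can also confirm that the bucket FQDN now resolves to a private (RFC 1918) address rather than a public S3 address. A minimal sketch, where the live lookup uses your on-prem resolver (the sample address shown is hypothetical):

```python
import ipaddress
import socket

def resolves_privately(fqdn: str) -> bool:
    """True if the FQDN resolves to an RFC 1918 address (i.e. the NLB's private IP)."""
    ip = socket.gethostbyname(fqdn)  # uses the on-prem DNS resolver
    return ipaddress.ip_address(ip).is_private

# Offline check of a hypothetical resolved address, without a live lookup:
print(ipaddress.ip_address("10.20.30.40").is_private)  # -> True
```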

S3 Bucket FQDN Access Visibility

You can also check the stats of the FQDN access because, in the background, the S3 private access feature uses the Egress FQDN feature.

s3 fqdn stats for entire deployment

s3 fqdn stats for one specific gateway

Design Notes

  • The Aviatrix Controller automatically deploys one AWS NLB per region
  • There is no limit to the number of Aviatrix S3 gateways you can deploy. These gateways are active/active and load balanced by the AWS NLB
  • Even if you deploy only one S3-GW, you will notice an AWS NLB deployed. This is default behavior and cannot be changed, because for production our recommendation is to deploy at least two S3-GWs in two different AZs within the same region
  • It is the customer's responsibility to pick the right size of S3-GW depending on traffic needs
    • The Aviatrix Controller does not create or manage an auto-scaling group
    • Adding a new gateway of a different size is just a matter of one click in the Controller
  • Under the source-CIDR security policy model, all other access to the S3 FQDN is denied by default (zero trust)
    • Different on-prem teams could segment the traffic based on the VLAN or the AWS account they own
  • Aviatrix strongly recommends using the additional S3 security controls provided by AWS
    • Remember, it is a shared security model

Multi-Cloud Network and Security (MCN) Architecture: Ownership vs SaaS Approach

When it comes to building and running cloud networks, there are predominantly two distinct approaches available to network/security engineers, architects, and decision-makers.

  1. Owning the Network and Security Architecture Approach
  2. As a Service (SaaS) Approach

Let’s look at the following points to understand both approaches. In the end, an enterprise must pick the approach that solves its business challenges and satisfies its compliance, governance, and audit requirements.

Owning the Architecture Approach

  • Enterprises should own the architecture end-to-end. Do not fall into the traps of the early days of cloud adoption, where shadow IT and DevOps teams took control and started building networking on their own
  • Almost all the enterprises I talk to want to own and control the management, data, and operations planes, similar to the way they owned on-prem networking and security
  • How would you get deep monitoring, logs, and visibility from a SaaS platform? What I have seen is that if an enterprise does not own the platform, it is at the mercy (SLA) of the SaaS provider

Trust Factors

  • How much do you trust a SaaS-based multi-cloud networking and security provider?
  • You have to trust your CSP (AWS/Azure/GCP/etc.), I get that. But should you add an extra layer of trust as an enterprise?
    • It is trust (hardware) over trust (cloud hyperplane) over trust (cloud provider security model) over trust (multi-cloud provider SaaS platform).

Competition Factor

  • Are you OK sitting next to multiple tenants on the same SaaS platform? One of them might be your competitor
    • Again, this is something you have to decide as an enterprise.
    • There is a reason that some retail customers are not hosting their applications on AWS and are going to Azure instead. You could apply the same logic here as well.
    • If this SaaS goes down, you and your competitor both go down. Not good, because where is your competitive advantage then?

Pace of Innovation

  • The pace of innovation might be slow
    • If there is a feature an enterprise needs, then in the SaaS model it will be hard for the enterprise to get that feature added to the product.
    • Typically, SaaS providers need to support and enable a large number of tenants, and it is not easy for them to quickly build and release new features


  • In a SaaS offering, someone else dictates their own terms and conditions. It is hard for you as an enterprise to dictate and create your own policies, governance, and operational model.


The following people helped me review and write this blog:
Hammad Alam

GCP Networking Best Practices

Delete Default VPC and Subnets

You should delete the default VPC and subnets created automatically by GCP. Once you have deleted the default VPC and subnets, the following is how it looks. In the screenshot below, notice that all the default subnets are gone and you can only see the ones I created manually with “Mode = Custom”

The reason to delete the defaults is that they will most likely overlap either with the on-prem deployment or with other clouds (such as AWS/Azure). As an architect, you should maintain good IP address assignment hygiene.

For example, if you look at my multi-cloud networking IP scheme, you will notice there is a theme there, pretty much the same as we used to do in the on-prem networking world. This helps us troubleshoot and manage easily.

The lab I have built is at a very small scale. You might want to adjust these ranges based on your future growth plans and strategy.

Private Google Access

When you create a VPC, you should also enable your subnets to access Google services using private IP addressing, so that services like Cloud Storage buckets or BigQuery can be reached privately.
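When routing Google service traffic privately, it helps to know the special VIP ranges involved. As a small sketch, the ranges below are the ones Google documents for private.googleapis.com and restricted.googleapis.com at the time of writing (verify against current GCP documentation):

```python
import ipaddress

# Documented Private Google Access VIP ranges (verify against current GCP docs)
PRIVATE_GOOGLEAPIS = ipaddress.ip_network("199.36.153.8/30")     # private.googleapis.com
RESTRICTED_GOOGLEAPIS = ipaddress.ip_network("199.36.153.4/30")  # restricted.googleapis.com

def vip_kind(ip: str) -> str:
    """Classify an address against the Private Google Access VIP ranges."""
    addr = ipaddress.ip_address(ip)
    if addr in PRIVATE_GOOGLEAPIS:
        return "private.googleapis.com"
    if addr in RESTRICTED_GOOGLEAPIS:
        return "restricted.googleapis.com"
    return "not a PGA VIP"

print(vip_kind("199.36.153.9"))  # -> private.googleapis.com
```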

GCP Shared VPC Concepts

Warning: Setting up GCP Shared VPC is not easy. It requires various API rights and roles before you are allowed to create it from the UI. I had to do a lot of trial and error. The GCP documentation is not clear or straightforward. I am the super admin for my GCP organization, and still I had to enable many roles in different places to create a shared-services VPC

There are two types of projects in GCP (host and service)

  • Host Project(s)
    • Host projects are created using the Shared VPC option
    • Simply put, a host project shares subnets with the service projects
    • In the majority of cases, one host project is enough
  • Service Projects
    • This is where the actual VMs are deployed
    • The VMs are deployed in a subnet that is shared by the “Shared VPC Host Project”

The following diagram illustrates the concept and the relationship between a Shared VPC host project and a service project

Service project to multiple VPCs

Setting up Shared VPC Host Project

The first step is to create a standard GCP project that will be treated as the host project. It is good practice to name it “Host Project”.

Then enable GCP Shared VPC

Select “Setup Shared VPC”

“Save & Continue” will take you to the following screen.

In the above screenshot, I am sharing all my subnets. Depending on your org policy, you might want to share a few but not all.

Following are some scenarios where a shared-services VPC (aka host project) is needed

  • Use Shared VPC for administration of multiple working groups
  • Use multiple host projects if resource requirements exceed the quota of a single project
  • Use multiple host projects if you need separate administration policies for each VPC network
  • Create a shared services VPC if multiple VPC networks need access to common resources but not each other

Now create the VPC in the shared-services (host) project

Now that the host project owner has created the subnets, he/she can share them with the service projects as necessary. The following screen shows members being added to the Shared VPC, with their roles assigned according to your org policy.

Service projects need to have the Compute Engine API enabled in order to be configured as service projects.

In the next step, we will attach the service project called “Service Project A” to the Shared VPC.

Select the name of the service project from the Shared VPC UI as shown below

Finally, you can see in the following screen that the service project is successfully attached to the Shared VPC and all subnets are being shared

The following screenshot shows the view from the Service Project-A side of the house. You can see the networks shared with this service project. Now compute resources can consume these subnets.