
Overlapping IP – Customized NAT

You may be asked to connect two different VPCs / VNETs that were created by someone else. They are important to the production environment, but… they have overlapping CIDRs. In the legacy world the solution is relatively simple: NAT. In the cloud, that's not so easy to do. Not all of the native constructs support NAT when connecting different resources, and native peering is simply not allowed when the ranges overlap.

Here comes the rescue – the Aviatrix NAT capability. There are many NAT features (these will be covered in a different post):

 
It is also quite flexible when it comes to a standalone AVX GW. But standalone is not our goal here, right? We want a proper MCNA (Multi Cloud Network Architecture).
 

With MCNA we have the concepts of the Transit Gateway (TrGW) and the Spoke Gateway (SpGW). The architecture is straightforward and simplifies everything, but there is one gotcha:
what is supported on the TrGW might not be supported on the SpGW, and vice versa. The Spoke Gateway is where you should consider doing NAT. Today, to keep it simple and easy to simulate, we will focus on a standalone gateway.

Customized NAT – packet flow

It took me a while to understand how the Aviatrix GW processes a packet 🙂
Lesson learned: once you understand it, it gets relatively simple. For Linux folks it might be pretty easy and straightforward, but here you go…

When a packet arrives on the ingress interface (eth0):

  1. DESTINATION NAT rules are evaluated – the DST IP is changed accordingly
  2. ROUTING DECISION – the route for the new DST address points to the tunnel interface
  3. POST ROUTING – Source NAT takes place
 

Configuration

Let's assume the following:
  • we want a standalone AVX GW
  • our far end (router) is also a standalone AVX GW – but of course it doesn't have to be Aviatrix

STEP 1 – create a VPC – repeat it for both VPCs
STEP 2 – create AVX GW

STEP 3 - create VPN connections

Here are a few gotchas (connection for GW1):

  • Remote Subnet – we need to split it into two smaller subnets, as the /24 is considered our LOCAL range and would redirect the traffic back into the VPC / VNET
  • use Route-based – with Policy-based, the SAs (crypto ACLs) would not match that easily
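Both gotchas can also be expressed in the Aviatrix Terraform provider. Treat this as a hedged sketch only – the resource is aviatrix_site2cloud, but double-check the attribute names against the provider documentation, and note that every name and IP below is an assumption for this lab:

```hcl
# Hedged sketch of the GW1 connection. All names / IPs are lab assumptions.
resource "aviatrix_site2cloud" "gw1_to_gw2" {
  vpc_id                     = "vpc1-id"        # placeholder VPC / VNET ID
  connection_name            = "gw1-to-gw2"
  connection_type            = "unmapped"
  remote_gateway_type        = "generic"
  tunnel_type                = "route"          # Route-based, not Policy-based
  primary_cloud_gateway_name = "avx-gw1"
  remote_gateway_ip          = "203.0.113.10"   # GW2 public IP (placeholder)
  local_subnet_cidr          = "192.168.10.0/24"
  # Two /25s instead of the /24 – the /24 itself is considered LOCAL
  remote_subnet_cidr         = "10.10.10.0/25,10.10.10.128/25"
  pre_shared_key             = "changeme"
}
```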

For connection 2 – from GW2 – we see the traffic NATed.

As you can see below, the connections come UP – but traffic will not work yet.


STEP 4 - define NAT rules (SRC and DST)

Getting back to the order of operations: Destination NAT happens first – that is why we have the INTERFACE (eth0) selected. After this translation the routing decision is made, and we know the packet should leave via the “connection”. That is why we do not specify any interface for Source NAT – we need to use the CONNECTION there.

For Destination NAT, don't forget to check APPLY ROUTE ENTRY (it should be checked by default). If your AVX GW is a spoke (connected to a TrGW), all RFC 1918 routes are injected by default. This testing environment is built with a standalone GW, so we need it to populate the UDR (Azure) – that checkbox does it automatically 🙂
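The same pair of rules can be defined via Terraform. Again a hedged sketch – the resources are aviatrix_gateway_dnat and aviatrix_gateway_snat, but verify the attribute names in the provider documentation; the CIDRs and IPs are assumptions for this lab:

```hcl
# Hedged sketch – all addresses are this lab's assumptions.
resource "aviatrix_gateway_dnat" "gw1" {
  gw_name = "avx-gw1"
  dnat_policy {
    dst_cidr          = "192.168.10.5/32"  # NAT alias the local VM talks to
    interface         = "eth0"             # DNAT is matched on ingress
    dnat_ips          = "10.10.10.5"       # real remote IP after translation
    apply_route_entry = true               # let the GW populate the UDR
  }
}

resource "aviatrix_gateway_snat" "gw1" {
  gw_name   = "avx-gw1"
  snat_mode = "customized_snat"
  snat_policy {
    src_cidr   = "10.10.10.0/24"
    connection = "gw1-to-gw2"              # SNAT matches the CONNECTION,
    snat_ips   = "192.168.10.111"          # not an interface
  }
}
```

Note how the two resources mirror the order of operations: the DNAT rule hangs off the ingress interface, while the SNAT rule hangs off the connection chosen by the routing decision.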


TESTING

My VMs are in public subnets, so I'm able to log in there with their public IPs. Let's check connectivity… from VPC1 to VPC2.

Success !!! What's more, we can see it directly on GW1 (doing a packet capture). Pretty cool, as not every CSP allows you to do such troubleshooting – capture, ping and traceroute are all right there on the AVX GW.

Now the other way round…

Failure… 


STEP 5 - NAT for other direction

We can apply the same logic the other way round:

Here is one issue to resolve, and also a very interesting lesson on how Azure behaves with traffic filtering. Communication is not working, but that is expected, as we don't have routing configured yet on the UDR (192.168.10.X -> 10.10.10.4 [AVX GW]). However, when the packet was leaving the AVX GW (dst: 10.10.10.5, src: 192.168.10.111), it was not reaching VPC1-VM1 at all…

It took me a while to understand this and find a workaround / solution:

Adding an “ALLOW ICMP” entry on VPC1-VM1 helped. Traffic was getting to the VM (I could see it in tcpdump), but of course there was no return path yet. Were the default rules not allowing inbound ICMP? Why was it working on VPC2-VM1 then? The same default NSG rules are applied there, and communication was working from LEFT to RIGHT.

I removed the ICMP entry and added the final ROUTE instead – that also helped. I could finally see traffic getting to VPC1-VM1 (and the response was working at the same time). For testing I added this route manually on the UDR. How do we enforce it from a STANDALONE AVX GW?

  1. add more DNAT rules with “APPLY ROUTE ENTRY” checked
  2. edit the CONNECTION's “REMOTE SUBNETS” – the AVX GW injects routes into the VNET for all REMOTE subnets (except 10.10.10.0/25 and 10.10.10.128/25, as those are considered LOCAL)

 

My conclusion is that at some point Azure does a kind of uRPF check and starts treating this traffic as “VirtualNetwork”, where the second NSG rule applies.

Final TEST:
SUCCESS !!!


TERRAFORM - Let's SCALE OUT our configuration

We can easily make our tested config bigger with the help of an automation tool like Terraform. All documentation for the Aviatrix NAT definitions can be found HERE.
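As a hedged sketch of how such NAT definitions can be scaled out with for_each (resource and attribute names as in aviatrix_gateway_dnat – check the provider docs; all addresses are assumptions for this lab):

```hcl
# Hedged sketch: one DNAT rule per remote VM, driven from a map.
# All addresses are this lab's assumptions.
locals {
  # NAT alias -> real remote IP
  dnat_map = {
    "192.168.10.5" = "10.10.10.5"
    "192.168.10.6" = "10.10.10.6"
  }
}

resource "aviatrix_gateway_dnat" "gw1" {
  gw_name = "avx-gw1"

  dynamic "dnat_policy" {
    for_each = local.dnat_map
    content {
      dst_cidr          = "${dnat_policy.key}/32"
      interface         = "eth0"
      dnat_ips          = dnat_policy.value
      apply_route_entry = true
    }
  }
}
```

Adding another VM pair is then just one more line in the map.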

The above code produces the following NAT definitions. We achieved one-directional NAT this way, but no problem – we can do it the other way round if needed. I also encourage reading about and testing the MARK function – it simplifies things a little, but that's maybe for another post.


Summary

We have achieved NAT in both directions. As you can see, customized NAT is very flexible on the Aviatrix GW – that's for sure. It is not that easy at the beginning – I believe there is still some room for improvement, but I'm confident it will get better. The cool thing is that you don't have to click it all in – that doesn't scale well. Terraform helps a lot – copy and paste, that simple 🙂

In another post I will cover different scenarios – MAPPED NAT… stay tuned.
