In a prior blog post, we discussed how Big Network extended AWS' Virtual Private Cloud (VPC) to Digital Ocean. Today, we are going to explore strategies to extend an AWS VPC on-premise using Edge Lite. Organizations looking to centralize workloads for Internet of Things (IoT) use cases can leverage this solution to modernize their core infrastructure while maintaining an Ethernet handoff to devices in the field.
For example, consider the case of a remote video surveillance network. There are three major components involved:
The surveillance cameras - deployed on-premise at the “edge” of the network. Surveillance cameras are deployed everywhere - retail shops, commercial office buildings, roadways, etc. These devices are typically powered with Power over Ethernet (PoE) and queue their video stream data to be picked up by a Network Video Recorder (NVR).
The Network Video Recorder (NVR) - traditionally deployed on-premise with the surveillance cameras, the NVR is now moving to a fully cloud-deployed and managed service. Some NVR systems rely on the Public Cloud while others use Private Cloud, Bare Metal as a Service, or traditional colocated / enterprise-hosted datacenter environments.
The Network from Surveillance Camera to Network Video Recorder - there are numerous variants of network deployment from the surveillance camera to the NVR, ranging from fixed leased lines (such as Frame Relay, ATM, and MPLS services), to modern dynamic bandwidth services such as Ethernet, VPLS, or Network as a Service (NaaS), to tunnels such as IPsec over the Internet.
This article will focus on the network from the surveillance camera to the network video recorder, with the assumption that the NVR platform is hosted in an AWS VPC. For the sake of this article, we will assume the following requirements:
The NVRs are deployed in a private VPC; therefore, there is no native external Internet access to the VPC other than via various network gateways.
Surveillance cameras are deployed to locations with one or more Internet connections, which can be a mix of dedicated internet access (DIA), broadband (DSL, cable, FTTP), or mobile (LTE / 5G), so long as the underlying bandwidth capacity is sufficient to meet the video streaming needs plus overhead (a rough sizing example follows this list).
The networking methods will be tunnels over the internet with full encryption.
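As a rough, illustrative sizing example (the camera count and bitrates here are assumptions, not figures from this article): a site with 16 cameras streaming at 4 Mbps each requires roughly 64 Mbps of sustained upstream capacity; allowing about 20% for protocol and tunnel overhead brings the requirement to approximately 77 Mbps, which should sit comfortably within the capacity of the chosen access circuit.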
There are three methodologies for deploying an AWS VPC extension to Edge Lite:
Non-redundant Layer 2 Connection: Useful for simple applications where there is flexibility in adjusting existing network IP numbering.
Non-redundant Layer 3 Connection: Useful for simple applications where network IP numbering cannot be changed.
Redundant Layer 3 Connection: Useful for high availability applications where uptime is critical.
For the non-redundant Layer 2 connection, we assume an AWS VPC numbered from 10.10.0.0/16 hosting the NVR services. First and foremost, a Cloud Network must be created. This Cloud Network will be assigned the IP range 172.16.0.0/24.
Next, gateway and aggregation services are provided at AWS using an EC2 instance running our Headless Linux Client (bn-cli). Instructions for deploying the Headless Linux Client are available in our knowledge base. The EC2 instance must be configured at the AWS level with “source_destination_check=false” so that packets from/to the 172.16.0.0/24 subnet can traverse the instance and VPC. In addition to this AWS-level parameter, IP forwarding must be enabled in the Linux kernel via sysctl - “sysctl -w net.ipv4.ip_forward=1”. Once deployed, it is important to adjust the Cloud Network configuration so that the EC2-based gateway receives a static IP assignment, as this will become the default gateway for devices deployed on-premise (assume 172.16.0.1 for this example).
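As a minimal sketch of these two settings (the instance ID below is a placeholder), the source/destination check can be disabled with the AWS CLI and IP forwarding enabled persistently on the instance:

aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ip-forward.conf

The first command is equivalent to setting the Source/destination check attribute to false in the EC2 console; the last line simply persists the sysctl setting across reboots.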
At the premise, Edge Lite is deployed using our standard on-boarding procedure. Edge Lite should be configured with a Local Network to bridge the Cloud Network to the LAN interface. Services such as Local Breakout are not needed. The DHCP service may be used to provide addressing to devices on-premise, but we must ensure that there is only a single DHCP server instance per Cloud Network (unless a DHCP blocking rule is deployed in the Cloud Network); a quick check is shown below.
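As an optional sanity check (assuming nmap is available on a host attached to the LAN segment), a DHCP discover broadcast will list every server answering on the segment, making a duplicate DHCP server easy to spot:

sudo nmap --script broadcast-dhcp-discover

Only a single responding server should appear per Cloud Network.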
Devices on-premise can now be deployed and connected to the Edge Lite LAN port via a Layer 2 switch. Devices should be numbered with IPs inside 172.16.0.0/24, excluding the addresses assigned to the Headless Linux Client or Edge Lite Local Services. For this example, we can assume that 172.16.0.10-172.16.0.254 are available. Devices should be configured with 172.16.0.1 as their default gateway.
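IP cameras are typically configured through their own web UI; as an illustrative equivalent for a Linux-based device on this network, static addressing would look like the following (172.16.0.20 is just an example address drawn from the available range above):

sudo ip addr add 172.16.0.20/24 dev eth0
sudo ip route add default via 172.16.0.1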
Finally, at the VPC level, a static route must be installed so that the VPC knows how to reach 172.16.0.0/24 via the EC2-deployed Headless Linux Gateway. For this, we use AWS VPC routes: the VPC route table should direct traffic destined to 172.16.0.0/24 via the EC2 instance created above.
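As a hedged sketch (the route table and network interface IDs are placeholders), the route can be installed with the AWS CLI by pointing the camera prefix at the gateway instance's elastic network interface:

aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 172.16.0.0/24 --network-interface-id eni-0123456789abcdef0

The same route can of course be added through the VPC console's route table editor.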
With these steps completed, connectivity between on-premise cameras numbered in 172.16.0.0/24 and NVR services in the VPC at 10.10.0.0/16 is available.
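A quick end-to-end check (10.10.1.10 is an assumed NVR address, not one defined in this article) is to ping and trace the path from a camera-side host:

ping -c 4 10.10.1.10
traceroute 10.10.1.10

The first hop should be the Headless Linux gateway at 172.16.0.1.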
For the non-redundant Layer 3 connection, we again assume an AWS VPC numbered from 10.10.0.0/16 and an existing on-premise network numbered from 172.16.50.0/24 that cannot be renumbered. First and foremost, a Cloud Network must be created. This Cloud Network will be assigned the IP range 172.16.0.0/24 and will serve as a transit network between the premise and the VPC.
Next, gateway and aggregation services are provided at AWS using an EC2 instance running our Headless Linux Client (bn-cli). Instructions for deploying the Headless Linux Client are available in our knowledge base. The EC2 instance must be configured at the AWS level with “source_destination_check=false” so that packets from/to the 172.16.0.0/24 and 172.16.50.0/24 subnets can traverse the instance and VPC. In addition to this AWS-level parameter, IP forwarding must be enabled in the Linux kernel via sysctl - “sysctl -w net.ipv4.ip_forward=1”. Once deployed, it is important to adjust the Cloud Network configuration so that the EC2-based gateway receives a static IP assignment, as this will become the next hop for the on-premise Layer 3 switch (assume 172.16.0.1 for this example).
At the premise, Edge Lite is deployed using our standard on-boarding procedure. Edge Lite should be configured with a Local Network to bridge the Cloud Network to the LAN interface. Services such as Local Breakout and DHCP are not needed. An Edge Dashboard can be deployed to provide a convenient point of ICMP monitoring for the Edge device.
Assuming an existing on-premise Layer 3 switch, the uplink for this device should be a routed switch port connected to the LAN side of the Edge Lite. The Layer 3 switch should be given an interface address in the range of the cloud network, such as 172.16.0.10 with a default route to the Headless Linux client at 172.16.0.1. Multiple locations via multiple Edge Lites can be joined to the same Cloud Network for scale, each receiving IP addresses from this cloud network. For this example, we can assume that 172.16.0.10-172.16.0.254 are available.
At the VPC level, static routes must be installed so that the VPC knows how to reach 172.16.0.0/24 and 172.16.50.0/24 via the EC2-deployed Headless Linux Gateway. For this, we use AWS VPC routes: the VPC route table should direct traffic destined to 172.16.0.0/24 and 172.16.50.0/24 via the EC2 instance created above.
Finally, at the EC2 instance, a static route is required to forward traffic for 172.16.50.0/24 via the WAN interface IP on the Layer 3 switch (172.16.0.10 in this example). For multiple sites, a static route per location would be required.
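On the gateway instance itself, a minimal sketch of this route (using the Layer 3 switch WAN address of 172.16.0.10 from above) is:

sudo ip route add 172.16.50.0/24 via 172.16.0.10

This route should also be made persistent using the distribution's normal network configuration so it survives a reboot.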
With these steps completed, connectivity between on-premise cameras numbered in 172.16.50.0/24 and NVR services in the VPC at 10.10.0.0/16 is available.
For the redundant Layer 3 connection, we assume a default AWS VPC numbered from 10.10.0.0/16. Similar to the prior case, Cloud Networks will be used to provide point-to-point or point-to-multipoint connections to a Layer 3 switch deployed at the customer premise. However, we will use 2x Cloud Networks, 2x EC2 instances running the Headless Linux Client, and 2x Edge Lites per site to build full redundancy.
In this example, the 1st Cloud Network is numbered from 172.16.0.0/24, solely providing a “transit” network from the existing on-premise IP range (assumed to be 172.16.50.0/24) to the AWS VPC. All Edge Lites considered PRIMARY will be associated with this Cloud Network.
A 2nd cloud network is numbered from 172.16.1.0/24, providing a second “transit” network. All Edge Lites considered BACKUP will be associated with this Cloud Network.
Next, gateway and aggregation services are provided at AWS using an EC2 instance running our Headless Linux Client (bn-cli) and attached to the PRIMARY Cloud Network. Instructions for deploying the Headless Linux Client are available in our knowledge base. The EC2 instance must be configured at the AWS level with “source_destination_check=false” so that packets from/to the 172.16.0.0/24 and 172.16.50.0/24 subnets can traverse the instance and VPC. In addition to this AWS-level parameter, IP forwarding must be enabled in the Linux kernel via sysctl - “sysctl -w net.ipv4.ip_forward=1”. Once deployed, it is important to adjust the Cloud Network configuration so that the EC2-based gateway receives a static IP assignment from the PRIMARY Cloud Network, as this will become the next hop for the on-premise Layer 3 switch's PRIMARY path (assume 172.16.0.1 for this example).
A second gateway and aggregation instance is deployed at AWS, except it will be connected to the BACKUP Cloud Network and should be assigned a static IP from the 2nd Cloud Network, such as 172.16.1.1.
At the premise, 2x Edge Lites are deployed using our standard on-boarding procedure. Edge Lite #1 should be configured with a Local Network to bridge the PRIMARY Cloud Network to the LAN interface. Edge Lite #2 should be configured with a Local Network to bridge the BACKUP Cloud Network to the LAN interface. Services such as Local Breakout and DHCP are not needed. An Edge Dashboard can be deployed to provide a convenient point of ICMP monitoring for the Edge device.
At the existing on-premise Layer 3 switch, 2x uplink WAN ports are defined, each of which is a routed switch port connected to the LAN side of an Edge Lite. The PRIMARY WAN port should be given an interface address in the range of the PRIMARY cloud network, such as 172.16.0.10 with a default route to the Headless Linux client at 172.16.0.1. The BACKUP WAN port should be given an interface address in the range of the BACKUP cloud network, such as 172.16.1.10 with a default route to the Headless Linux client at 172.16.1.1.
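Switch syntax is vendor-specific and this article does not prescribe a particular failover mechanism; as one hedged illustration using Linux-style routing, the two uplinks could be addressed and the PRIMARY path preferred with a lower route metric:

ip addr add 172.16.0.10/24 dev wan0    # PRIMARY uplink toward Edge Lite #1
ip addr add 172.16.1.10/24 dev wan1    # BACKUP uplink toward Edge Lite #2
ip route add default via 172.16.0.1 metric 100    # preferred PRIMARY next hop
ip route add default via 172.16.1.1 metric 200    # BACKUP next hop

Note that static metrics alone only fail over on loss of link; in practice, some form of next-hop health checking or a dynamic routing protocol is usually layered on top to detect a failed path whose physical link remains up.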
Multiple locations via multiple Edge Lites can be joined to the same Cloud Networks for scale, each receiving IP addresses from their relevant cloud networks.
At the VPC level, a set of static routes must be installed so that the VPC knows how to reach 172.16.0.0/24 and 172.16.50.0/24 via the PRIMARY EC2-deployed Headless Linux Gateway. Similarly, a second set of static routes must be installed so that the VPC knows how to reach 172.16.1.0/24 and 172.16.50.0/24 via the BACKUP EC2-deployed Headless Linux Gateway. Depending upon your VPC, this BACKUP path may deliver the return-path traffic to the site in question.
Finally, at the PRIMARY EC2 instance, a static route is required to forward traffic for 172.16.50.0/24 via the PRIMARY WAN interface IP on the Layer 3 switch. Similarly, at the BACKUP EC2 instance, a static route is required to forward traffic for 172.16.50.0/24 via the BACKUP WAN interface IP on the Layer 3 switch.
With these steps completed, connectivity between on-premise cameras numbered in 172.16.50.0/24 and NVR services in the VPC at 10.10.0.0/16 is available.
Big Network does not provide any guidance or guarantees for related AWS cost considerations. As with any AWS related solution, egress transfer pricing applies.
However, the above deployment methodology, including EC2 instances, is highly cost-competitive with native AWS solutions such as AWS Site-to-Site VPN. According to AWS documentation on March 13, 2023, Site-to-Site VPN connection fees alone are $36/mo!
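For reference, that figure lines up with AWS's published rate of $0.05 per VPN connection-hour: at roughly 730 hours in a month, a single Site-to-Site VPN connection works out to about $36 before any data transfer charges.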
In addition, the Big Network solution above DOES NOT require the use of IPsec, which additionally removes the need for static IP addresses at the customer premise site. Connections from the AWS VPC to the premise are automatically discovered via Big Network, even when deployed behind CGNAT (Carrier Grade NAT) and NAT (Network Address Translation). Removing the need for static IP addresses from ISP connections will often reduce the associated costs of the entire solution.
In this article, we have provided the framework for three methods to extend an AWS VPC to the premise using Big Network, the Headless Linux Client, and Edge Lite. For more information about Edge Lite, please see our website. For technical support or more information, drop us a line via our contact form.