
Reference Architecture Guide

for AWS
RELEASE 5 | AUGUST 2019


Table of Contents
Preface
    What's New in This Release

Purpose of This Guide
    Audience
    Related Documentation

Introduction

Amazon Web Services Concepts and Services
    AWS Top Level Concepts
    Virtual Network
    Virtual Compute
    AWS Virtual Network Access Methods
    AWS VPC Peering
    AWS Resiliency

Design Details: VM-Series Firewall on AWS
    VM-Series on AWS Models
    VM-Series Licensing
    VM-Series on AWS Integration
    Managing AWS Deployments with Panorama
    Security Policy Automation with VM Monitoring
    Prisma Public Cloud Infrastructure Protection for AWS
    Networking in the VPC
    AWS Traffic Flows

Design Models
    Choosing a Design Model
    Single VPC Model
    Transit Gateway Model
    Transit VPC Model

Summary


Preface

GUIDE TYPES
Overview guides provide high-level introductions to technologies or concepts.

Reference architecture guides provide an architectural overview for using Palo Alto Networks® technologies to provide
visibility, control, and protection to applications built in a specific environment. These guides are required reading prior
to using their companion deployment guides.

Deployment guides provide decision criteria for deployment scenarios, as well as procedures for combining Palo Alto
Networks technologies with third-party technologies in an integrated design.

DOCUMENT CONVENTIONS

Notes provide additional information.

Cautions warn about possible data loss, hardware damage, or compromise of security.

Blue text indicates a configuration variable for which you need to substitute the correct value for your environment.

In the IP box, enter 10.5.0.4/24, and then click OK.

Bold text denotes:

• Command-line commands;

# show device-group branch-offices

• User-interface elements.

In the Interface Type list, choose Layer 3.

• Navigational paths.

Navigate to Network > Virtual Routers.

• A value to be entered.

Enter the password admin.


Italic text denotes the introduction of important terminology.


An external dynamic list is a file hosted on an external web server so that the firewall can import
objects.

Highlighted text denotes emphasis.


Total valid entries: 755

ABOUT PROCEDURES
These guides sometimes describe other companies' products. Although steps and screenshots were up to date at the time of publication, those companies might have since changed their user interface, processes, or requirements.

GETTING THE LATEST VERSION OF GUIDES


We continually update reference architecture and deployment guides. You can access the latest version of this and all
guides at this location:

https://www.paloaltonetworks.com/referencearchitectures

WHAT'S NEW IN THIS RELEASE


Palo Alto Networks® made the following changes since the last version of this guide:

• Changed logging service to Cortex™ Data Lake

• Changed RedLock® to Prisma™ Public Cloud

• Modified the table for VM-Series mapping to AWS virtual machine sizes

• Added design model for AWS Transit Gateway

Comprehensive revision history for this guide


Purpose of This Guide


This guide describes reference architectures for securing network infrastructure using Palo Alto Networks VM-Series
virtualized next-generation firewalls running PAN-OS® 9.0 within the Amazon Web Services (AWS) public cloud.

This guide:

• Provides an architectural overview for using VM-Series firewalls to provide visibility, control, and protection to
your applications built in an AWS Virtual Private Cloud (VPC) environment.

• Links the technical design aspects of AWS and the Palo Alto Networks solutions and then explores technical
design models. The design models begin with a single VPC for organizations getting started and scale to meet enterprise-level operational environments.

• Provides a framework for architectural discussions between Palo Alto Networks and your organization.

• Is required reading prior to using the VM-Series on AWS Deployment Guides. The deployment guides provide
decision criteria for deployment scenarios, as well as procedures for enabling features of AWS and the Palo Alto Networks VM-Series firewalls in order to achieve an integrated design.

AUDIENCE
This guide is written for technical readers, including system architects and design engineers, who want to deploy the
Palo Alto Networks VM-Series firewalls and Panorama™ within a public cloud datacenter infrastructure. It assumes the
reader is familiar with the basic concepts of applications, networking, virtualization, security, and high availability and has a basic understanding of network and data center architectures.

A working knowledge of networking and policy in PAN-OS is required in order to be successful.


RELATED DOCUMENTATION
The following documents support this guide:

• Palo Alto Networks Security Operating Platform Overview—Introduces the various components of the Security
Operating Platform and describes the roles they can serve in various designs

• Deployment Guide for AWS Single VPC—Details deployment scenarios for the Single Virtual Private Cloud design
model, which is well-suited for initial deployments and proof of concepts of Palo Alto Networks VM-Series on
AWS. This guide describes deploying VM-Series firewalls to provide visibility and protection for inbound and
outbound traffic for the VPC.

• Deployment Guide for AWS Transit Gateway—Details the deployment steps of a Transit Gateway design model.
This model leverages the Transit Gateway and three security dedicated VPCs with dual firewalls per security
VPC to support inbound, outbound, and east-west traffic flows.

• Deployment Guide for AWS Transit VPC—Details the deployment of a central Transit VPC design model. This
model provides a hub-and-spoke design for centralized and scalable firewall services for resilient outbound and
east-west traffic flows. This guide describes deploying the VM-Series firewalls to provide protection and visibili-
ty for traffic flowing through the Transit VPC.

• Deployment Guide for Panorama on AWS—Details the deployment of the Palo Alto Networks Panorama management node in an AWS VPC. The guide includes the Panorama high-availability deployment option and Logging Service setup.


Introduction
Organizations are adopting AWS to deploy applications and services on a public cloud infrastructure for a variety of
reasons, including:

• Business agility—Infrastructure resources are available when and where they are needed, minimizing IT staffing
requirements and providing faster, predictable time-to-market. Virtualization in both public and private cloud
infrastructure has permitted IT to respond to business requirements within minutes instead of days or weeks.

• Better use of resources—Projects are more efficient and there are fewer operational issues, permitting employees to spend more time adding business value. Employees have the resources they need, when they need them, to bring value to the organization.

• Operational vs capital expenditure—Costs are aligned directly with usage, providing a utility model for IT
infrastructure requiring little-to-no capital expense. Gone are the large capital expenditures and time delays
associated with building private data center infrastructure.

Although Infrastructure as a Service (IaaS) providers are responsible for ensuring the security and availability of their
infrastructure, ultimately, organizations are still responsible for the security of their applications and data. This require-
ment does not differ from on-premises deployments. What does differ are the specific implementation details of how
to properly architect security technology in a public cloud environment such as AWS.

Figure 1 Security responsibility in the IaaS environment

Public cloud environments use a decentralized administration framework that often suffers from a corresponding
lack of any centralized visibility. Additionally, compliance within these environments is complex to manage. Incident
response requires the ability to rapidly detect and respond to threats; however, public cloud capabilities are limited in
these areas.

The Palo Alto Networks VM-Series firewall deployed on AWS has the same features, benefits, and management as the
physical next-generation firewalls you have deployed elsewhere in your organization. The VM-Series can be integrated
with the Palo Alto Networks Security Operating Platform, which prevents successful cyberattacks by harnessing
analytics to automate routine tasks and enforcement.


Prisma Public Cloud (formerly RedLock) offers comprehensive and consistent cloud infrastructure protection that en-
ables organizations to effectively transition to the public cloud by managing security and compliance risks within their
public cloud infrastructure. Prisma Public Cloud makes cloud-computing assets harder to exploit through proactive
security assessment and configuration management by utilizing industry best practices. Prisma Public Cloud enables
organizations to implement continuous monitoring of the AWS infrastructure and provides an essential, automated,
up-to-date status of security posture that they can use to make cost effective, risk-based decisions about service
configuration and vulnerabilities inherent in cloud deployments.

Organizations can also use Prisma Public Cloud to prevent the AWS infrastructure from falling out of compliance and
to provide visibility into the actual security posture of the cloud to avoid failed audits and subsequent fines associated
with data breaches and non-compliance.


Amazon Web Services Concepts and Services


To understand how to construct secure applications and services using Amazon Web Services, it is necessary to under-
stand the components of AWS Infrastructure Services. This section describes the components used by this reference
architecture.

AWS TOP LEVEL CONCEPTS


A cloud compute architecture requires consideration of where computing resources are located, how they can com-
municate effectively, and how to provide effective isolation such that an outage in one part of the cloud does not
impact the entire cloud compute environment. The AWS global platform offers the ability to architect a cloud compute
environment with scale, resilience, and flexibility.

Regions
Regions enable you to place services and applications in proximity to your customers, as well as to meet governmental
regulatory requirements for customer data residency. Regions represent AWS physical data center locations distributed
around the globe. A region consists of several physically separate data center buildings located near one another, which provides fault tolerance and stability. Communications between regions can be facilitated over the AWS backbone, which provides redundant encrypted transport, or over the public internet. When using the public internet between regions, encrypted transport, such as IPsec tunnels, is required to ensure confidentiality of communications.

Regions are specified beginning with a two-letter geographic code (us—United States, eu—European Union, ap—Asia Pacific, sa—South America, ca—Canada, cn—China), followed by the regional direction (east, west, central, south, northeast, southeast), followed by a number to distinguish multiple regions in a similar geography. Examples of region designations: us-west-2 (Oregon), ap-northeast-1 (Tokyo), and eu-central-1 (Frankfurt).

Virtual Private Cloud


A VPC is a virtual network associated with your Amazon account and isolated from other AWS users. You create a VPC
within a region of your choice, and it spans multiple, isolated locations for that region, called Availability Zones. The VPC
closely resembles a traditional data center in that you can provision and launch virtual machines and services into the
VPC.

When creating a new VPC, you specify a classless inter-domain routing (CIDR) IPv4 address block and, optionally, an Amazon-provided IPv6 CIDR address block. Your VPC can operate in dual-stack mode: your resources can communicate over IPv4, IPv6, or both. IPv4 and IPv6 addresses are independent of each other; you must configure routing and security in your VPC separately for IPv4 and IPv6. All resources provisioned within your VPC use addresses from this CIDR block. This guide uses IPv4 addressing. The IPv4 address block can be any valid IPv4 address range with a network prefix in the range of /16 (65,536 addresses) to /28 (16 addresses). The chosen IPv4 address block can then be broken into subnets for the VPC. The actual number of host addresses available to you on any subnet is less than the maximum because some addresses are reserved for AWS services. After the VPC is created, the IPv4 CIDR block cannot be changed; however, secondary address ranges can be added. It's recommended that you choose a CIDR prefix that exceeds your anticipated address space requirements for the lifetime of the VPC. There are no costs associated with a VPC CIDR address block, and your VPC is only visible to you. Many other customer VPCs can use the same CIDR block without issue.
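For illustration only, the following sketch shows how a VPC like the one described above might be created with the AWS SDK for Python (boto3); the region, name tag, and CIDR blocks are example values, not recommendations from this guide.

import boto3

# Create a VPC with a /16 primary IPv4 CIDR block in us-west-2 (example values).
ec2 = boto3.client("ec2", region_name="us-west-2")

vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Tag the VPC so it is easy to identify in the console.
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "example-vpc"}])

# The primary CIDR block cannot be changed later, but a secondary range can be added.
ec2.associate_vpc_cidr_block(VpcId=vpc_id, CidrBlock="10.2.0.0/16")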


The primary considerations when choosing VPC CIDR block are the same as with any enterprise network:

• Anticipated number of IP addresses needed within a VPC

• IPv4 connectivity requirements across all VPCs

• IP address overlap in your entire organization—that is, between your AWS environment and your organization's on-premises IP addressing or other IaaS clouds that you may use

Unlike enterprise networks that are mostly static and where network addressing changes can be difficult, cloud infra-
structure tends to be highly dynamic, which minimizes the need to anticipate growth requirements very far into the
future. Instead of upgrading the resources in a VPC, many cloud deployments build new resources for an upgrade and
then delete the old ones. Regardless of network address size, the general requirement for communications across the
enterprise network is for all network addresses to be unique; the same requirement applies across your VPCs. When
you create new VPCs, consider using unique network address space for each, to ensure maximum communications
compatibility between VPCs and back to your organization.

Most VPC IP address ranges fall within the private (non-publicly routable) IP address ranges specified in RFC 1918; however, you can use publicly routable CIDR blocks for your VPC. Regardless of the IP address range of your VPC, AWS does not support direct access to the internet from your VPC's CIDR block, including a publicly routable CIDR block. You must set up internet access through a gateway service from AWS or a VPN connection to your organization's network.

Availability Zones
Availability Zones provide a logical data center within a region. They consist of one or more physical data centers, are interconnected with low-latency network links, and have separate cooling and power. No two Availability Zones share a common facility. A further abstraction is that the mapping of Availability Zones to physical data centers can differ according to AWS account. Availability Zones provide inherent fault tolerance, and well-architected applications are distributed across multiple Availability Zones within a region. Availability Zones are designated by a letter (a, b, c) after the region.

Figure 2 Example of Availability Zones within a region

[Figure: region us-west-2 (Oregon) with Availability Zones us-west-2a, us-west-2b, and us-west-2c; each Availability Zone comprises one or more physical data centers (Data Centers 1 through 6).]


VIRTUAL NETWORK
When you create a VPC, you must assign an address block for all of the resources located in the VPC. You can then customize the IP address space in your VPC to meet your needs on a per-subnet basis. All networking under your control within the VPC is at Layer 3 and above. Any Layer 1 or Layer 2 services are for AWS network operation and cannot be customized by the customer. This means that a machine cluster whose members need to communicate with services like multicast must use IP-based multicast and not a Layer 2-based operation.

Subnets
A subnet identifies a portion of its parent VPC CIDR block as belonging to an Availability Zone. A subnet is unique to
an Availability Zone and cannot span Availability Zones, and an Availability Zone can have many subnets. At least one
subnet is required for each Availability Zone desired within a VPC. The Availability Zone of newly created instances and
network interfaces is designated by the subnet with which they are associated at the time of creation. A subnet prefix length can be as large as the VPC CIDR block (a VPC with one subnet and one Availability Zone) and as small as a /28 prefix length. Subnets within a single VPC cannot overlap.
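As an illustration, the following boto3 sketch creates one subnet per Availability Zone from the parent VPC CIDR block; the VPC ID, Availability Zones, and subnet ranges are example values consistent with Figure 3 later in this section.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"  # example VPC ID

# One subnet per Availability Zone; subnets cannot span AZs or overlap (example values).
subnets = {
    "us-west-2a": "10.1.1.0/24",
    "us-west-2b": "10.1.3.0/24",
    "us-west-2c": "10.1.4.0/24",
}
for az, cidr in subnets.items():
    resp = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    print(resp["Subnet"]["SubnetId"], az, cidr)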

Subnets are either public subnets, which means they are associated with a route table that has internet connectivity via
an internet gateway (IGW), or they are private subnets that have no route to the internet. Newly created subnets are
associated with the main route table of your VPC. In Figure 3, subnets 1 and 2 are public subnets, and subnets 3 and 4
are private subnets.

Route Tables
Route tables provide source-based control of Layer 3 forwarding within a VPC environment, which differs from traditional networking, where routing information is bidirectional and might lead to asymmetric routing paths. Subnets
are associated with route tables, and subnets receive their Layer 3 forwarding policy from their associated route table.
A route table can have many subnets attached, but a subnet can only be attached to a single route table. All route
tables contain an entry for the entire VPC CIDR in which they reside; the implication is that any host (instance within your
VPC) has direct Layer 3 reachability to any other host within the same VPC. This behavior has implications for network
segmentation as route tables cannot contain more specific routes than the VPC CIDR block. Any instance within a VPC
is able to communicate directly with any other instance, and traditional network segmentation by subnets is not an
option. An instance references the route table associated with its subnet for the default gateway but only for desti-
nations outside the VPC. Host routing changes on instances are not necessary to direct traffic to a default gateway,
because this is part of route table configuration. Routes external to your VPC can have a destination that directs traffic
to a gateway or another instance.

Route tables can contain dynamic routing information learned from Border Gateway Protocol (BGP). Routes learned
dynamically show in a route table as Propagated=YES.

A cursory review of route tables might give the impression of VRF-like functionality, but this is not the case. All route
tables contain a route to the entire VPC address space and do not permit segmentation of routing information into anything smaller than the entire VPC CIDR address space within a VPC. Traditional network devices must be configured with a default
gateway in order to provide a path outside the local network. Route tables provide a similar function without the need
to change instance configuration to redirect traffic. Route tables are used in these architectures to direct traffic to the
VM-Series firewall without modifying the instance configurations.


Note in Figure 3 that both route tables 1 and 2 contain the entire VPC CIDR block entry, route table 1 has a default route pointing to an internet gateway (IGW), and route table 2 has no default route. A route to 172.16.0.0/16 was learned via BGP and is reachable via the virtual private gateway (VGW). Subnets 1 and 2 are assigned to Availability Zone 2a, subnet 3 is assigned to Availability Zone 2b, and subnet 4 is assigned to Availability Zone 2c.

Figure 3 Subnets and route tables

[Figure: VPC 10.1.0.0/16 in us-west-2 with an internet gateway (IGW) and a VPN gateway (VGW). Route Table 1: 10.1.0.0/16 → local (not propagated), 0.0.0.0/0 → igw (not propagated). Route Table 2: 10.1.0.0/16 → local (not propagated), 172.16.0.0/16 → vgw (propagated). Subnets 1 (10.1.1.0/24) and 2 (10.1.2.0/24) are in us-west-2a, subnet 3 (10.1.3.0/24) is in us-west-2b, and subnet 4 (10.1.4.0/24) is in us-west-2c.]

There are limits to how many routes can be in a route table. The default limit of non-propagated routes in the table is
50, and this can be increased to a limit of 100; however, this might impact network performance. The limit of propagat-
ed routes (BGP advertised into the VPC) is 100, and this cannot be increased. Use IP address summarization upstream
or a default route to address scenarios where more than 100 propagated routes might occur.
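The following boto3 sketch illustrates the route table behavior described above: the local route for the VPC CIDR is created automatically, a default route pointing at an internet gateway is what makes the associated subnet public, and a subnet can be associated with only one route table. The resource IDs are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"       # example values
igw_id = "igw-0123456789abcdef0"
subnet_id = "subnet-0123456789abcdef0"

# Create a route table; the local route for the VPC CIDR is added automatically.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]

# Add a default route to the internet gateway (this is what makes a subnet "public").
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# Associate the route table with a subnet; a subnet can belong to only one route table.
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)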

Network Access Control Lists


Every route table contains a route to the entire VPC, so to restrict traffic between subnets within your VPC, you must
use Network Access Control Lists (ACLs). Network ACLs provide Layer 4 control of source/destination IP addresses and
ports, inbound and outbound from subnets. When a VPC is created, there is a default network ACL associated with
it, which permits all traffic. Network ACLs do not provide control of traffic to Amazon reserved addresses (first four
addresses of a subnet) nor of link local networks (169.254.0.0/16), which are used for VPN tunnels.


Network ACLs:

• Are applied at subnet level.

• Have separate inbound and outbound policies.

• Have allow and deny action rules.

• Are stateless—traffic must be explicitly permitted in both directions.

• Are order dependent—the first match rule (allow/deny) applies.

• Apply to all instances in the subnet.

• Do not filter traffic between instances within the same subnet.
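A brief boto3 sketch of the stateless, ordered behavior listed above: an inbound entry permitting TCP/443 and a matching outbound entry for the return traffic, evaluated by rule number. The VPC ID and CIDR are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"  # example value

acl = ec2.create_network_acl(VpcId=vpc_id)
acl_id = acl["NetworkAcl"]["NetworkAclId"]

# Inbound rule 100: allow TCP/443 from an example subnet. Rules are evaluated in
# order by rule number, and the first match (allow or deny) applies.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=100,
    Protocol="6",                 # TCP
    RuleAction="allow",
    Egress=False,                 # inbound entry
    CidrBlock="10.1.1.0/24",
    PortRange={"From": 443, "To": 443},
)

# Because network ACLs are stateless, return traffic must be allowed explicitly
# with an outbound entry covering the ephemeral port range.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,                  # outbound entry
    CidrBlock="10.1.1.0/24",
    PortRange={"From": 1024, "To": 65535},
)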

Security Groups
Security groups (SGs) provide a Layer 4 stateful firewall for control of the source/destination IP addresses and ports that
are permitted. SGs are applied to an instance’s network interface. Up to five SGs can be associated with a network
interface. Amazon Machine Images (AMIs) available in the AWS Marketplace have a default SG associated with them.

Security groups:

• Apply to the instance network interface.

• Allow action rules only—there is no explicit deny action.

• Have separate inbound and outbound policies.

• Are stateful—return traffic associated with permitted traffic is also permitted.

• Have order-independent rules.

• Enforce at Layer 3 and Layer 4 (standard protocol and port numbers).

SGs define only network traffic that should be explicitly permitted. Any traffic not explicitly permitted is implicitly
denied. You cannot configure an explicit deny action. SGs have separate rules for inbound and outbound traffic from
an instance network interface. SGs are stateful, meaning that return traffic associated with permitted inbound or
outbound traffic rules is also permitted. SGs can enforce on any protocol that has a standard protocol number; enforcing traffic at the application layer requires a next-generation firewall in the path. When you create a new SG, the default setting contains no inbound rules, and the outbound rule permits all traffic. The effect of this default is to permit
all network traffic originating from your instance outbound, along with its associated return traffic, and to permit no
external traffic inbound to an instance. Figure 4 illustrates how network ACLs are applied to traffic between subnets
and SGs are applied to instance network interfaces.


Figure 4 Security groups and network access control lists

[Figure: VPC 10.1.0.0/16 spanning us-west-2a, us-west-2b, and us-west-2c; network ACLs are applied at the subnet boundaries, and security groups (Public, DB Servers, App Servers, Web Servers) are applied to the network interfaces of the hosts within the subnets.]
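For illustration, a boto3 sketch of the default-deny behavior described above: a security group is created in the VPC, and only an inbound HTTPS rule is added; return traffic is allowed automatically because security groups are stateful. The VPC ID and rule values are examples.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"  # example value

sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Allow HTTPS to web tier",
    VpcId=vpc_id,
)
sg_id = sg["GroupId"]

# Inbound rules only list what is permitted; anything not listed is implicitly
# denied, and return traffic is allowed automatically because SGs are stateful.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        }
    ],
)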

Source and Destination Check


Source and destination checks are enabled by default on all network interfaces within a VPC. Source/Dest Check validates whether traffic is destined to, or originates from, an instance and drops any traffic that does not meet this validation. A network device, such as a firewall, that needs to forward traffic between its network interfaces within a VPC must have Source/Dest Check disabled on all interfaces that forward traffic.

Figure 5 Source and destination check

[Figure: with Source/Dest. Check enabled (the default), a network function instance in the path drops traffic between 10.0.1.10 and 10.0.2.15 because the packets are neither sourced from nor destined to the instance itself.]
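The following boto3 sketch disables the check on a forwarding instance, such as a VM-Series firewall, by iterating over all of its attached interfaces; the instance ID is an example value.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Disable the source/destination check on every interface of a forwarding instance.
instance_id = "i-0123456789abcdef0"  # example value

enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)
for eni in enis["NetworkInterfaces"]:
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni["NetworkInterfaceId"],
        SourceDestCheck={"Value": False},
    )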


VIRTUAL COMPUTE
Computing services and related resources like storage are virtual resources that can be created or destroyed on
demand. A virtual machine or virtual server is assigned to an availability zone in a VPC that is located within a region.
The assignment of where the virtual machine is located can be configured by the customer or by AWS based on default
parameters. Most organizations plan where resources will be assigned according to an overall design of an application’s
access and availability requirements.

Instance
An instance, also known as an Elastic Compute Cloud (EC2) instance, represents a virtual server that is deployed within a VPC. Much like its physical counterpart, the virtual server on which your instance resides has various performance characteristics: number of CPUs, memory, and number of interfaces. You have the option to optimize the instance type based on your application performance requirements and cost. You can change the instance type for instances that are in the stopped state. Hourly operating costs are based on instance type and region.

Amazon Machine Image


Amazon Machine Images are virtual machine images available in the AWS Marketplace for deployment as a VPC instance. AWS publishes many AMIs that contain common software configurations for public use. In addition, members of the AWS developer community have published their own custom AMIs. You can also create your own custom AMI or AMIs; doing so enables you to quickly and easily start new instances that have everything you need.

Amazon EC2 Key Pairs


AWS uses public key authentication for access to new instances. A key pair can be used to access many instances, and key pairs are only significant within a region. You can generate a key pair within AWS and download the private key, or you can import the public key in order to create a key pair. Either way, it's very important that the private key is protected, because it's the only way to access your instance. When you create an instance, you specify the key pair to be used, and you must confirm your possession of the matching key at the time of instance creation. New AMIs have no default passwords, and you use the key pair to access your instance. AMIs have a default security group associated with them and instructions for how to access the newly created instance.

Note

Ensure you save your key file in a safe place, because it is the only way to access your instances, and there is no option to download it a second time. If you lose the file, the only option is to destroy your instances and recreate them. Protect your keys; anyone with the file can also access your instances.
When creating a new instance from a Marketplace AMI, AWS offers only instance types
supported by the AMI. The instance type chosen determines the CPU, memory, and
maximum number of network interfaces available. For more information, see Amazon
EC2 Instance Types in the AWS documentation.
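For illustration, a boto3 sketch that creates a key pair, saves the private key locally (AWS does not retain a copy), and launches an instance that references it; the key name, AMI, instance type, subnet, and security group IDs are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create a key pair and save the private key locally; AWS does not keep a copy.
key = ec2.create_key_pair(KeyName="example-keypair")
with open("example-keypair.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Launch an instance referencing the key pair (all IDs are example values).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    KeyName="example-keypair",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(resp["Instances"][0]["InstanceId"])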


Elastic IP Address
Elastic IP addresses are public IP addresses that belong to AWS or to a customer-owned IP address pool. Public IP addresses are associated with a network interface of an instance in your VPC. After they are associated, a network address translation is created in the VPC IGW that provides 1:1 translation between the public IP address and the network interface's VPC private IP address. When an instance is in the stopped state, the Elastic IP address remains associated with it unless it is intentionally moved. When the instance is started again, the same Elastic IP address remains associated unless it was intentionally moved or the instance is deleted. This permits predictable IP addresses for direct internet access to your instance network interface. An Elastic IP address is assigned to a region and cannot be associated with an instance outside of that region.
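A boto3 sketch of allocating an Elastic IP address and associating it with a specific network interface; the ENI ID is an example value.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Allocate an Elastic IP from the AWS pool and associate it with a network interface.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId="eni-0123456789abcdef0",  # example value
)
print("Public IP:", eip["PublicIp"])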

Elastic Network Interface


Elastic network interfaces (ENIs) are virtual network interfaces that are attached to instances and appear as network interfaces to their instance host. When virtual compute instances are initialized, they always have a single Ethernet interface (eth0) assigned and active. In situations where an instance needs multiple interfaces, such as a firewall running on the instance, ENIs are used to add more Ethernet interfaces. There is a maximum number of network interfaces (default + ENI) that can be assigned per instance type. As an example, the c5.xlarge instance commonly used for the Palo Alto Networks VM-Series firewall supports four network interfaces.

An elastic network interface can include the following attributes:

• A primary private IPv4 address from the address range of your VPC

• One dynamic or elastic public IPv4 address per private IP address

• One or more IPv6 addresses

• One or more SGs

• Source/destination check

• A MAC address

Within the same Availability Zone, ENIs can be attached and reattached to another instance up to the maximum
number of interfaces supported by the instance type, and the ENI characteristics are then associated with its newly
attached instance.
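For illustration, a boto3 sketch that creates an additional ENI in a subnet and attaches it to an existing instance as eth1; the subnet, security group, and instance IDs are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create a second interface in a subnet and attach it to an existing instance.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",   # example values
    Groups=["sg-0123456789abcdef0"],
    Description="untrust interface",
)
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,   # eth0 is DeviceIndex 0, created with the instance
)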

AWS VIRTUAL NETWORK ACCESS METHODS


Access to and from the VPC environment in AWS is required for many use cases. Virtual application servers in the
VPC need to download their applications, patches, and data from existing customer data centers or vendor sites on
the internet. AWS-based application servers serving web content to users need inbound access from the internet or
remote private network access. There are many access methods available with AWS; the following sections cover the most commonly used.


Internet Gateway
An internet gateway (IGW) provides a NAT mapping of an internal VPC IP address to a public IP address owned by AWS.
The IP address is mapped to an instance for inbound and/or outbound network access. The public IP address may be:

• Random and dynamic, assigned to an instance at startup and returned to the pool when the instance is stopped.
Every time the instance is turned up, a new address is assigned from a pool.

• Random and assigned to an instance as part of a process. This address stays with the instance, whether the
instance is stopped or running, unless it is intentionally assigned to another instance or deleted and returned to
the pool. This is known as an Elastic IP address.

This 1:1 private/public IP address mapping is part of a network interface configuration of each instance. After creating
a network interface, you can then associate a dynamic or an Elastic IP address to create the 1:1 NAT association between the public and VPC private IP addresses. For internet connectivity to your VPC, there must be an associated IGW. The
VPC internet gateway is a horizontally scaled, redundant, and highly available VPC component. After it is associated,
the IGW resides in all Availability Zones of your VPC, available to map to any route table/subnet where direct internet
access is required.
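A boto3 sketch of enabling internet access as described above: create an IGW, attach it to the VPC, and point the default route of the public route table at it. The resource IDs are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"        # example values
public_rt_id = "rtb-0123456789abcdef0"

# Create an internet gateway, attach it to the VPC, and add a default route to it.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
ec2.create_route(RouteTableId=public_rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)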

Network Address Translation Gateway


In contrast to an IGW, which also does network address translation for connections to the internet, the network address translation (NAT) gateway enables a single public IP address to be shared by many private IP addresses. The NAT gateway is designed to allow instances to connect out to the internet but prevents any internet hosts from initiating an inbound connection to the instance.
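For illustration, a boto3 sketch of the NAT gateway pattern: the gateway is placed in a public subnet with an Elastic IP, and the private route table's default route points at it. The subnet and route table IDs are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# A NAT gateway lives in a public subnet and uses an Elastic IP; instances in
# private subnets reach the internet through a default route pointing at it.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",      # public subnet (example value)
    AllocationId=eip["AllocationId"],
)
nat_id = natgw["NatGateway"]["NatGatewayId"]

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",     # private route table (example value)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)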

Figure 6 IGW and VGW in route table

[Figure: route table with entries 10.1.0.0/16 → local (not propagated), 172.16.0.0/16 → vgw (propagated), and 0.0.0.0/0 → igw (not propagated); all entries active.]

Virtual Private Gateway


The VGW provides a VPN concentrator service attached to a VPC for creation of IPsec tunnels. The tunnels provide
confidentiality of traffic in transit and can be terminated on almost any device capable of supporting IPsec. A VGW
creates IPsec tunnels to a customer gateway; these constitute the two tunnel endpoints, each with a public IP address
assigned. Like with IGWs, the VGW resides in all Availability Zones of your VPC, available to map to any route table/
subnet where VPN network access is required. You must map to the remote-site routes in the route table statically or
they can be dynamically learned.

Customer Gateway
A customer gateway (CGW) identifies the target IP address of a peer device that terminates IPsec tunnels from the
VGW. The customer gateway is typically a firewall or a router; either must be capable of supporting an IPsec tunnel
with required cryptographic algorithms.


VPN Connections
VPN connections are the IPsec tunnels between your VGW and CGW. VPN connections represent two redundant IPsec
tunnels from a single CGW to two public IP addresses of the VGW in your subscriber VPC.

Figure 7 VPN connections

[Figure: an on-premises customer gateway (firewall or VPN router) terminating redundant IPsec tunnels to the VGW of an AWS VPC whose subnets span Availability Zones a and b.]

When creating a VPN connection, you have the option of running a dynamic routing protocol (BGP) over the tunnels or using static routes.
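A boto3 sketch of the components described above: a VGW attached to the VPC, a CGW describing the on-premises IPsec peer, and a BGP-based VPN connection between them. The VPC ID, peer address, and ASN are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"  # example value

# Virtual private gateway attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId=vpc_id)

# Customer gateway describing the on-premises IPsec peer (example public IP and ASN).
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="198.51.100.10", BgpAsn=65001)
cgw_id = cgw["CustomerGateway"]["CustomerGatewayId"]

# VPN connection: two IPsec tunnels between the CGW and the VGW, here using BGP
# (dynamic routing) rather than static routes.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw_id,
    VpnGatewayId=vgw_id,
    Options={"StaticRoutesOnly": False},
)
print(vpn["VpnConnection"]["VpnConnectionId"])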

AWS Direct Connect


AWS Direct Connect allows you to connect your network infrastructure directly to your AWS infrastructure using
private, dedicated bandwidth. You can connect from your datacenter or office via a dedicated link from a telecommuni-
cations provider, or you can connect directly in a colocation facility where AWS has a presence. This direct connection
provides some advantages:

• Hardware firewall support

• Higher-bandwidth network connections

• Lower bandwidth costs

• Consistent network transport performance

• Arbitrary inter-VPC traffic inspection/enforcement

AWS Direct Connect requires a network device, provided by you, to terminate the network connection from the AWS
backbone network. This same device also terminates your carrier connection, completing the path between your
private network and your AWS infrastructure. AWS Direct Connect provides dedicated port options for 1Gbps and
10Gbps, and AWS partners provide hosted connections with options for 50Mbps to 10Gbps. Your firewalls exchange
BGP routing information with the AWS network infrastructure. There is no static routing available. AWS charges for
data that egresses from your AWS infrastructure as data-out. Charges for data transfers are at a lower rate than using
public internet. Because carrier-provided bandwidth is dedicated to your use, this connection tends to have more
consistent performance than public internet transport.


For organizations requiring higher firewall performance and high-availability options offered by physical firewall appli-
ances, AWS Direct Connect might be the ideal solution. It provides a direct connection from your hardware firewalls
to each of your VPCs within the AWS Direct Connect region. The port is an 802.1Q VLAN trunk. Virtual interfaces are
created on the AWS Direct Connect, and these virtual interfaces are then associated with a VPC via its VGW. Layer 3
subinterfaces on the firewall are tagged with the VLAN of the virtual interface, completing the connection to your VPC.
All VPCs within the region of your AWS Direct Connect can have a connection to your firewall so that arbitrary security
policy between VPCs is an option.

Virtual interfaces are of two types: public and private. As the name implies, private virtual interfaces provide a private
communication path from the AWS Direct Connect port to a VPC. Public virtual interfaces provide access to other AWS services, such as S3 storage, via their public IP addresses. Public virtual interfaces require the use of a public IP
address. They may be public address space that you already own, or the public IP address may be provided by AWS.
Your BGP peering connection between hardware firewalls and AWS requires the use of a public virtual interface.

An AWS Direct Connect port has a limit of 50 virtual interfaces. If you require more than 50 VPCs in a region, multiple
connections might be required. As an example, you can use link aggregation to Direct Connect, and each link can carry
50 virtual interfaces, providing up to 200 VPC connections. For redundancy, you can also use multiple connections
within a region.

AWS Direct Connect Gateway


The AWS Direct Connect Gateway complements AWS Direct Connect by allowing you to connect one or more of your VPCs to your on-premises network, whether those VPCs are in the same or different AWS Regions.

The Direct Connect gateway:

• Can be created in any public Region.

• Can be accessed from most public Regions.

• Is a globally available resource.

• Uses AWS Direct Connect for connectivity to your on-premises network.

A Direct Connect Gateway cannot connect to a VPC in another account. The Direct Connect Gateway is designed to connect on-premises networks to and from the VPCs, and VPCs connected to the gateway cannot communicate directly with each other. You program your on-premises network to BGP peer with the Direct Connect Gateway and not with every VPC; this reduces configuration and monitoring versus per-VPC BGP peering over Direct Connect.

As bandwidth to on-premises grows, AWS Direct Connect provides the ability to extend your private VPC infra-
structure and publicly available AWS services directly to your data center infrastructure by using dedicated network
bandwidth. You have the option of placing your network equipment directly in an AWS regional location, with a direct
cross-connect to their backbone network, or using a network provider service, such as LAN extension or MPLS, to
extend your AWS infrastructure to your network. In some AWS campus locations, AWS Direct Connect is accessible via
a standard cross connect at a carrier facility or a Carrier-Neutral Facility. An AWS Direct Connect campus interconnect
is provided over 1-gigabit Ethernet or 10-gigabit Ethernet fiber links. If an organization does not have equipment at
an AWS Direct Connect Facility or requires less than 1-gigabit interconnect speeds, they can use an AWS partner to
access Direct Connect.


AWS Direct Connect virtual interfaces exchange BGP routing information with your firewall. Because AWS Direct Connect paths are usually preferred over VPN connections, the route selection criteria reflect this preference and prefer AWS Direct Connect routes over VPN connection routes. All available path options are considered in order until there exists a single "best" path. In the event there are multiple, equivalent paths for AWS Direct Connect routes, the traffic is load-balanced using equal-cost multipath. AWS Direct Connect dynamic routes of the same prefix length are always preferred over VPN connection routes.

Private virtual interfaces programmed on the Direct Connect port are extended over 802.1Q trunk links to the customer on-premises firewall, one virtual interface (VLAN) for each VPC to which you are mapping, as shown in Figure 8. You specify the name of the virtual interface, the AWS account to which the virtual interface should be associated, the associated VGW, the VLAN ID, your BGP peer IP address, the AWS BGP peer IP address, the BGP autonomous system, and the MD5 BGP authentication. Note that there is a maximum limit of 50 virtual interfaces per Direct Connect port.
You can increase your interconnect scale by adding multiple links to your peer point, up to 4, to form a link aggregation
group (LAG) and accommodate up to 200 virtual interfaces per LAG. There are alternative Direct Connect options if
you need to scale beyond 200.

Figure 8 Direct Connect private virtual interfaces

[Figure: an on-premises firewall connects over a Direct Connect 802.1Q trunk using three private virtual interfaces (VLAN 10, 20, and 30), each with a /30 BGP peering subnet, to VPC-10 (10.1.0.0/16), VPC-20 (10.2.0.0/16), and VPC-30 (10.3.0.0/16).]


Direct Connect provides many options for device and location redundancy. Two of these options are illustrated in
Figure 9. The first option provides redundant connections from a single firewall to your VPC. When multiple Direct
Connects are requested, they are automatically distributed across redundant AWS backbone infrastructure. The BGP
peering IP addresses must be unique across all virtual interfaces connected to a VGW. The VLAN is unique to each
AWS Direct Connect port because it is only locally significant, so virtual interfaces can share a common VLAN.

The second option illustrates redundant firewalls distributed across two separate AWS locations serving the same region. This option also provides device, geographic, and (optionally) service provider redundancy.

Figure 9 Direct Connect redundancy options

[Figure: two redundancy options: redundant Direct Connect connections from a single firewall at one AWS location, and redundant firewalls at two separate AWS locations, both reaching the same VPC-10 (10.1.0.0/16).]

AWS VPC PEERING


VPC peering is a native capability of AWS for creating direct two-way peer relationships between two VPCs within the
same region. The peer relationship permits only traffic directly between the two peers and does not provide for any
transit capabilities from one peer VPC through another to an external destination. VPC peering is a two-way agreement
between member VPCs. It’s initiated by one VPC to another target VPC, and the target VPC must accept the VPC
peering relationship. VPC peering relationships can be created between two VPCs within the same AWS account, or
they can be created between VPCs that belong to different accounts. After a VPC peering relationship is established,
there is two-way network connectivity between the entire CIDR block of both VPCs, and there is no ability to segment
using smaller subnets.
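For illustration, a boto3 sketch of establishing a peering relationship: one VPC requests the peering connection, the target VPC accepts it, and each side adds a route to the peer's CIDR. The VPC and route table IDs and the CIDR are example values (add PeerOwnerId to the request when the target VPC is in a different account).

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Request a peering connection from VPC-A to VPC-B (example IDs).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa",
    PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the target VPC must accept the request before traffic can flow.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC also needs a route to the peer's CIDR via the peering connection.
ec2.create_route(
    RouteTableId="rtb-0aaaaaaaaaaaaaaaa",
    DestinationCidrBlock="10.2.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)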


At first glance, it might appear possible to create network connectivity that exceeds the native capabilities offered by VPC peering by getting creative with route tables, even creating daisy-chained VPC connections. However, VPC peering strictly enforces source and destination IP addresses of packets traversing the peering
connection to only source and destinations within the directly peered VPCs. Any packets with a source or destination
outside of the two-peer VPC address space are dropped by the peering connection.

VPC peering architecture uses network policy to permit traffic only between two directly adjacent VPCs. Two example
scenarios:

• Hub and spoke

• Multi-tier applications

In a hub-and-spoke model, Subscriber VPCs (spokes) use VPC peering with the Central VPC (hub) to provide direct
communications between the individual subscriber VPC instances and central VPC instances or services. Peer sub-
scriber VPCs are unable to communicate with each other, because this would require transit connectivity through the
central VPC, which is not a capability supported by VPC peering. Additional direct VPC peering relationships can be
created to permit communication between VPCs as required.

Figure 10 VPC peering hub and spoke

[Figure: VPC-B (10.2.0.0/16) has VPC peering connections to VPC-A (10.1.0.0/16) and VPC-C (10.3.0.0/16); hosts in VPC-A and VPC-C can each reach VPC-B but cannot reach each other through it.]

For multi-tiered applications, VPC peering can use network connectivity to enforce the policy that only directly adja-
cent application tiers can communicate. A typical three tier application (web, app, database) might provide web servers
in a public-facing central VPC, with VPC peering to a subscriber VPC hosting the application tier, and the Application
VPC would have another VPC peering relationship with a third subscriber VPC hosting the database tier. Be aware that
network connectivity of VPC peering would provide no outside connectivity directly to application or database VPCs,
and that the firewalls have no ability to provide visibility or policy for traffic between the application and database
VPCs.


Figure 11 illustrates how a central VPC with multi-tier VPC connections could operate.

Figure 11 VPC peering multi-tier

[Figure: a central services VPC (10.1.0.0/16) with internet access peers with an app-tier VPC (10.2.0.0/16), which in turn peers with a DB-tier VPC (10.3.0.0/16); subscriber VPC-1 (10.11.0.0/16) and subscriber VPC-2 (10.12.0.0/16) peer only with the central VPC, and there is no direct path from the central VPC to the DB tier.]

AWS Transit Gateway


AWS Transit Gateway enables you to control communications between your VPCs and to connect to your on-premises
networks via a single gateway. In contrast to VPC peering, which interconnects two VPCs only, Transit Gateway can act
as a hub in a hub-and-spoke model for interconnecting VPCs. The spokes peer only to the gateway, which simplifies
design and management overhead. You can add new spokes to the environment incrementally as your network grows.

Transit Gateway provides the ability to centrally manage the connectivity and routing between VPCs and from the
VPCs to any on-premises connections via VPN or Direct Connect. This allows you to control spoke-to-spoke commu-
nication and routing. AWS Transit Gateways support dynamic and static Layer 3 routing between Amazon VPCs and
VPNs. Routes determine the next hop depending on the destination IP address of the packet, and they can point to an
Amazon VPC or to a VPN connection. Transit Gateway allows scaling to thousands of VPCs through attachments (either VPC or VPN attachments).

VPC attachments are attachments from a VPC to the Transit Gateway. As part of the VPC attachment, an ENI for the Transit Gateway is created in each Availability Zone in use in the attached VPC. The ENIs exist in Transit Gateway subnets that you create in the attached VPCs, one subnet per Availability Zone that you will be using.
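A boto3 sketch of creating a Transit Gateway and attaching a spoke VPC with one Transit Gateway subnet per Availability Zone; the ASN and resource IDs are example values, and in practice you wait for the gateway to become available before attaching VPCs.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create the Transit Gateway (example ASN).
tgw = ec2.create_transit_gateway(
    Description="example hub",
    Options={"AmazonSideAsn": 64512},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# In practice, wait for the Transit Gateway state to become "available" before attaching.

# Attach a spoke VPC with one Transit Gateway subnet per Availability Zone in use
# (example VPC and subnet IDs).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
)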


VPN attachments are either VPCs attached via VPN or remote on-premises sites. A VPN attachment consists of two IPsec VPN tunnels per attachment to a CGW. VPN attachments have the advantage of supporting dynamic routing and equal-cost multipath (ECMP).

Figure 12 AWS Transit Gateway

[Figure: an on-premises customer gateway (firewall or VPN router) connects through a VPN to an AWS Transit Gateway, which routes traffic to attached VPCs and local services.]

You can create connections between your AWS Transit Gateway and on-premises gateways by using a VPN or Direct
Connect. Because AWS Transit Gateway supports ECMP, you can increase bandwidth to on-premises networks by:

• Creating multiple VPN connections between the Transit Gateway and the on-premises firewall.

• Using BGP to announce the same prefixes over each path.

• Enabling ECMP on both ends of the connections to load-balance traffic over the multiple paths.

AWS Transit Gateway can interoperate with central firewall deployments to direct inbound, outbound, and east-west
flows through firewalls based on defined routing tables in the gateway.

AWS PrivateLink
Unlike VPC peering, which extends access to the entire VPC between peered VPCs, PrivateLink (an interface VPC endpoint) enables you to connect to specific services. These services include some AWS services, services hosted by other AWS accounts (referred to as endpoint services), and supported AWS Marketplace partner services.

The interface endpoints are created directly inside your VPC, using elastic network interfaces and IP addresses in your VPC's subnets. The service is effectively brought into your VPC, enabling connectivity to AWS services or AWS PrivateLink-powered services via private IP addresses. VPC security groups can be used to manage access to the endpoints, and the interface endpoint can be accessed from your premises via AWS Direct Connect.
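For illustration, a boto3 sketch of creating an interface endpoint in the consumer VPC for an endpoint service; the service name (based on the example in Figure 13) and the resource IDs are example values.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create an interface endpoint in the consumer VPC for an endpoint service (example values).
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.vpce.us-west-2.vpce-svc-1234",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])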

An organization can create services and offer them for sale to other AWS customers for access via a private connection. The provider creates a service that accepts TCP traffic, hosts it behind a network load balancer (NLB), and then makes the service available, either directly or in AWS Marketplace. The organization is notified of new subscription requests and can choose to accept or reject each one.


In the following diagram, the owner of VPC B is the service provider and has configured a network load balancer with
targets in two different Availability Zones. The service consumer (VPC A) has created interface endpoints in the same
two Availability Zones in their VPC. Requests to the service from instances in VPC A can use either interface endpoint.

Figure 13 AWS PrivateLink

[Figure: service consumer VPC-A (10.1.0.0/16) has interface endpoints in two Availability Zones that connect to endpoint service vpce-svc-1234 in service provider VPC-B (10.8.0.0/16), where a network load balancer fronts targets in both Availability Zones.]

The service provider and the service consumer run in separate VPCs and AWS accounts and communicate solely
through the endpoint, with all traffic flowing across Amazon’s private network. Service consumers don’t have to worry
about overlapping IP addresses, because the addresses are served out of their own VPC and then translated to the
service provider’s IP address ranges by the NLB. The service provided must run behind an NLB and must be accessible
over TCP. The service can be hosted on EC2 instances, EC2 Container Service containers, or on-premises (configured
as an IP target).

AWS RESILIENCY
Traditional data center network resilience was provided by alternate transport paths and high-availability platforms such as switches, routers, and firewalls. The high-availability platforms either had redundancy mechanisms within the chassis or between chassis, which often introduced cost and complexity in the design. As networks scaled to meet the higher throughput demands of modern data centers, more chassis had to be included in the high-availability group to provide more throughput, further complicating designs and increasing costs. The move to web-based, multi-tier applications allows network designs to scale more gracefully and provide resiliency to the overall application through a stateless scale-out of resources.

Availability Zones
As discussed earlier in this guide, AWS provides resilience within regions by grouping data center facilities into Availability Zones with independent power, cooling, and related infrastructure. When designing applications, the different tiers of the application should be designed such that each layer of the application is in at least two Availability Zones for resilience. AWS-provided network resources such as the IGW and VGW are designed to be resilient across multiple Availability Zones as well, providing consistency in overall services. Customer-provided platforms such as firewalls should also be placed in each Availability Zone for a resilient design.


Load Balancers
Load balancers distribute traffic inbound to an application across a set of resources based on criteria such as DNS,
Layer 4, or Layer 7 information. A common load balancer capability is checking the health of the targets and removing
unhealthy resources, thereby enhancing the resiliency of the application.

AWS offers three types of load balancers, and like other AWS resources mentioned above, they can exist in multiple
Availability Zones for resiliency, and they scale horizontally for growth.

Classic Load Balancer


The Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both
the request level and connection level. The Classic Load Balancer is intended for applications that were built within
the EC2-Classic network, not the VPC. AWS recommends using the Application Load Balancer (ALB) for Layer 7 and
Network Load Balancer for Layer 4 when using Virtual Private Cloud (VPC).

Application Load Balancer


The Application Load Balancer (ALB), as the name implies, operates at the application- or request-level (Layer 7), routing
traffic to targets—EC2 instances, containers and IP addresses—based on the content of the request. ALB supports both
HTTP and HTTPS traffic, providing advanced request routing targeted at delivery of modern application architectures.

Traffic destined for the ALB uses DNS names rather than a discrete IP address. As the load balancer is created, a fully
qualified domain name (FQDN) is assigned and tied to IP addresses, one for each Availability Zone. The DNS name
is constant and survives a reload. The IP addresses that the FQDN resolves to can change; therefore, the FQDN is
the best way to resolve the IP address of the front end of the load balancer. If the ALB in one Availability Zone fails or
has no healthy targets, and the ALB is tied to external DNS such as the Amazon Route 53 cloud DNS, then traffic is
directed to an alternate ALB in the other Availability Zone.

The ALB can be provisioned with the front end facing the internet, load balancing with public IP addressing on the
outside, or it can be internal-only, using IP addresses from the VPC. You can load balance to any HTTP/HTTPS
application that is:

• Inside your AWS region, or

• On-premises, using the IP addresses of the application resources as targets

This allows the ALB to load balance to any IP address, across Availability Zones, and to any interface on an instance.
ALB targets can also be in legacy data centers connected via VPN or Direct Connect, which helps migrate applications
to the cloud, and burst or failover to cloud.

The ALB offers content-based routing of connection requests based on either the host field or the URL in the HTTP
header of the client request. ALB uses a round-robin algorithm to balance the load and supports a slow start when
adding targets to the pool, to avoid overwhelming the application target. ALB directs traffic to healthy targets based
on IP probes or detailed error codes from 200-499. Target health metrics and access logs can be stored in AWS S3
storage for analysis.

The ALB supports terminating HTTPS between the client and the load balancer and can manage SSL certificates.


Network Load Balancer


Network Load Balancer operates at the connection level (Layer 4), routing connections to targets—Amazon EC2 in-
stances, containers, and IP addresses—based on IP protocol data. Ideal for load balancing of TCP traffic, Network Load
Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load
Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability
Zone.

The NLB accepts incoming traffic from clients and distributes the traffic to targets within the same Availability Zone.
Monitoring the health of the targets, the NLB ensures that only healthy targets receive traffic. If all targets in an
Availability Zone are unhealthy, and you have set up targets in another Availability Zone, then the NLB automatically
fails over to the healthy targets in the alternate Availability Zones.

The NLB supports static IP address assignment, including an Elastic IP address, for the front end of the load balancer,
making it ideal for services that do not use FQDNs and rely on the IP address to route connections. If the NLB has
no healthy targets, and it is tied to Amazon Route 53 cloud DNS, then traffic is directed to an alternate NLB in another
region. You can load balance to any IP address target inside your AWS region or on-premises. This allows the
NLB to load balance to any IP address and any interface on an instance. NLB targets can also be in legacy data centers
connected via VPN or Direct Connect, which helps migrate applications to the cloud and burst or fail over to the cloud.
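
As a sketch of these concepts, the following boto3 example creates an internet-facing NLB with a static Elastic IP address per Availability Zone, a TCP target group with IP targets, and a listener. All IDs and addresses are hypothetical placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Internet-facing NLB with one static Elastic IP address per Availability Zone.
nlb = elbv2.create_load_balancer(
    Name="example-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-bbbb2222"},
    ],
)["LoadBalancers"][0]

# TCP target group using IP targets (instances, containers, or on-premises addresses).
tg = elbv2.create_target_group(
    Name="example-tcp-443",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0a1b2c3d4e5f6a7b8",
    TargetType="ip",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "10.1.1.100"}, {"Id": "10.1.101.100"}],
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)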

The Flow Logs feature can record all requests sent to your load balancer, capturing information about the IP traffic
going to and from network interfaces in your VPC. CloudWatch provides metrics such as active flow count, healthy
host count, new flow count, and processed bytes. The NLB is integrated with other AWS services such as Auto Scaling,
Amazon EC2 Container Service, and Amazon CloudFormation.


Design Details: VM-Series Firewall on AWS


The Palo Alto Networks VM-Series firewall is the virtualized form factor of our next-generation firewall that can be de-
ployed in a range of private and public cloud computing environments. The VM-Series on AWS enables you to securely
implement a cloud-first methodology while transforming your data center into a hybrid architecture that combines the
scalability and agility of AWS with your on-premises resources. This allows you to move your applications and data to
AWS while maintaining a security posture that is consistent with the one you may have established on your physical
network. The VM-Series on AWS natively analyzes all traffic in a single pass to determine application, content and user
identity. The application, content and user are used as core elements of your security policy and for visibility, reporting
and incident investigation.

VM-SERIES ON AWS MODELS


VM-Series firewalls on AWS are available in six primary models: VM-100, VM-200, VM-300, VM-500, VM-700, and
VM-1000-HV. Varying only by capacity, all of the firewall models use the same image. A capacity license configures the
firewall with a model number and associated capacity limits.

Table 1 VM-Series firewall capacities and requirements

                                          VM-100/VM-200   VM-300/VM-1000-HV   VM-500          VM-700
Capacities
AWS instance size tested (recommended)    c5.xlarge       m5.xlarge           m5.2xlarge      m5.4xlarge
Firewall throughput (App-ID™ enabled)     800Mbps         1Gbps               2.5Gbps         5Gbps
Threat Prevention throughput              500Mbps         1Gbps               2Gbps           4Gbps
IPsec VPN throughput                      500Mbps         750Mbps             1.25Gbps        1.75Gbps
Firewall throughput (App-ID enabled)      1.25Gbps        2.25Gbps            2.25Gbps        6Gbps
Threat Prevention throughput              1Gbps           1.75Gbps            2Gbps           4.5Gbps
IPsec VPN throughput                      1Gbps           1.25Gbps            1.75Gbps        2Gbps
New sessions per second                   9K              9K                  20K             40K
Max sessions                              250K            800K                2M              10M

System Requirements
Cores supported (min/max)                 0.4/2           2/4                 2/8             2/16
Memory (min)                              6.5GB           9GB                 16GB            56GB
Disk drive capacity (min)                 32GB            60GB                60GB            60GB
Minimum AWS instance sizes supported      c5.18xlarge     m5.xlarge,          m5.2xlarge,     m5.4xlarge,
                                                          c5.18xlarge         c5.18xlarge     c5.18xlarge
Licensing options                         BYOL or         BYOL, VM-ELA,       BYOL or         BYOL or
                                          VM-Series ELA   or PAYG             VM-Series ELA   VM-Series ELA


Although the capacity license sets the VM-Series firewall’s limits, the size of the virtual machine on which you deploy
the firewall determines the firewall’s performance and functional capacity. In Table 1, the mapping of VM-Series firewall
to AWS virtual machine size is based on VM-Series model requirements for CPU, memory, disk capacity, and network
interfaces. When deployed on a virtual machine that provides more CPU than the model supports, VM-Series firewalls
do not use the additional CPU cores. Conversely, when you deploy a large VM-Series model on a virtual machine that
only meets the minimum CPU requirements, it effectively performs the same as a lower VM-Series model.

In smaller VM-Series firewall models, it may seem that a virtual machine size smaller than those listed in Table 1 would
be appropriate; however, smaller virtual machine sizes do not have enough network interfaces. AWS provides virtual
machines with two, three, four, eight, or fifteen network interfaces. Because VM-Series firewalls reserve an interface
for management functionality, two-interface virtual machines are not a viable option. Four-interface virtual machines
meet the minimum requirement of a management, public, and private interface. You can configure the fourth interface
as a security interface for optional services such as VPN or a DMZ.

Although larger models of VM-Series firewalls offer increased capacities (as listed in Table 1), on AWS some through-
put is limited, and a larger number of cores helps more with scale than with throughput. For the latest detailed
information, see the VM-Series on AWS document. Many factors affect performance, and Palo Alto Networks
recommends that you do additional testing in your environment to ensure the deployment meets your performance
and capacity requirements. In general, public cloud environments are more efficient when scaling out the number of
resources versus scaling up to a larger virtual machine size.

VM-SERIES LICENSING
You purchase licenses for VM-Series firewalls on AWS through the AWS Marketplace or through traditional Palo Alto
Networks channels.

Pay As You Go
A pay-as-you-go (PAYG) license is also called a usage-based or pay-per-use license. You purchase this type of license
from the public AWS Marketplace, and it is billed annually or hourly. With the PAYG license, the VM-Series firewall is
licensed and ready for use as soon as you deploy it; you do not receive a license authorization code. When the firewall
is stopped or terminated on AWS, the usage-based licenses are suspended or terminated. PAYG licenses are available
in the following bundles:

• Bundle 1—Includes a VM-300 capacity license, Threat Prevention license (IPS, AV, and Anti-Spyware), and a
premium support entitlement

• Bundle 2—Includes a VM-300 capacity license, Threat Prevention license (IPS, AV, malware prevention),
GlobalProtect™, WildFire®, PAN-DB URL Filtering licenses, and a premium support entitlement

An advantage of the Marketplace licensing option is the ability to use only what you need, deploying new VM-Series
instances as needed, and then removing them when they are not in use.


BYOL and VM-Series ELA


Bring your own license (BYOL) and the VM-Series Enterprise License Agreement (ELA) are licenses that you purchase
from a channel partner. VM-Series firewalls support all capacity, support, and subscription licenses in BYOL. When
using BYOL, you license VM-Series firewalls like a traditionally deployed appliance, and you must apply a license
authorization code. After you apply the code to the device, the device registers with the Palo Alto Networks support
portal and obtains information about its capacity and subscriptions. Subscription licenses include Threat Prevention,
PAN-DB URL Filtering, AutoFocus™, GlobalProtect, and WildFire.

The VM-Series ELA provides a fixed-price licensing option allowing unlimited deployment of VM-Series firewalls with
BYOL. Palo Alto Networks offers VM-Series ELAs in one- and three-year term agreements. An advantage of BYOL is
the ability to choose different capacity licenses for the VM-Series compared to the two VM-300 bundle options. As of
PAN-OS version 8.0, Palo Alto Networks supports use of VM-100/VM-200, VM-300/VM-1000-HV, VM-500, and VM-700
on AWS, using instance types that support the requirements for each VM firewall. BYOL provides additional flexibility
in that the license can be used on any virtualization platform (VMware, AWS, Azure) and is transferable. For more
information about instance sizing and the capacities of Palo Alto Networks VM firewalls, see VM-Series for Amazon Web
Services.

Note

When creating a new instance from a Marketplace AMI, AWS offers only instance types
supported by the AMI. The instance type chosen determines the CPU, memory, and
maximum number of network interfaces available. For more information, see
Amazon EC2 Instance Types in the AWS documentation.

The VM-Series ELA includes four components:

• A license token pool that allows you to deploy any model of the VM-Series firewall. Depending on the firewall
model and the number of firewalls that you deploy, a specified number of tokens are deducted from your
available license token pool. All of your VM-Series ELA deployments use a single license authorization code,
which allows for easier automation and simplifies the deployment of VM-Series firewalls.

• Threat Prevention, WildFire, GlobalProtect, DNS Security and PAN-DB Subscriptions for every VM-Series
firewall deployed as part of the VM-Series ELA.

• Unlimited deployments of Panorama as a virtual appliance.

• Support that covers all the components deployed as part of the VM-Series ELA.

VM-SERIES ON AWS INTEGRATION


Launching a VM-Series on AWS
The AWS Marketplace provides a wide variety of Linux, Windows, and specialized machine images, such as the Palo
Alto Networks VM-Series firewall. There, you can find AMIs for the Palo Alto Networks VM-Series firewall with various
licensing options. After you select one, the AMI launch instance workflow provides step-by-step guidance for
all IP addressing, network settings, and storage requirements. Customers can also provide their own custom AMIs to suit
design needs. Automation scripts for building out large-scale environments usually include AMI programming.


Bootstrapping
One of the fundamental design differences between traditional and public cloud deployments is the lifetime of resources.
One method of achieving resiliency in public cloud deployments is through the quick deployment of new resources
and quick destruction of failed resources. One of the requirements for achieving quick resource build-out and tear-
down is current and readily available configuration information for the resource to use during initial deployment. Using
a simple configuration file, for example, you need to provide only enough information to connect to Panorama. The
VM-Series firewall can load a license, connect to Panorama for full configuration and policy settings, and be operational
in the network in minutes. You can manually upgrade the software after deploying the virtual machine, or you can use
bootstrapping to update the firewall software as part of the bootstrap process.

Bootstrapping allows you to create a repeatable process for deploying VM-Series firewalls through a bootstrap package.
The package can contain everything required to make the firewall ready for production or just enough information to
get the firewall operational and connected to Panorama. In AWS, you deliver the bootstrap package through an
AWS S3 bucket that contains directories for configuration, content, license, and software. On its first boot, the VM-
Series firewall mounts the bucket and uses the information in the directories to configure and upgrade itself.
Bootstrapping happens only on the first boot; after that, the firewall stops looking for a bootstrap package.
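
A minimal sketch of building such a bootstrap package with boto3 follows. The directory names (config, content, license, software) match the bootstrap package format; the bucket name, Panorama address, VM auth key, device group, and template stack shown are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "example-vmseries-bootstrap"   # hypothetical bucket name

# Minimal init-cfg.txt: just enough to register with Panorama (placeholder values).
init_cfg = "\n".join([
    "type=dhcp-client",
    "panorama-server=10.1.9.10",
    "vm-auth-key=0123456789012345678",
    "dgname=aws-device-group",
    "tplname=aws-template-stack",
])

# The bootstrap package expects these four top-level directories.
for folder in ("config/", "content/", "license/", "software/"):
    s3.put_object(Bucket=bucket, Key=folder)

s3.put_object(Bucket=bucket, Key="config/init-cfg.txt", Body=init_cfg.encode("utf-8"))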

Management
At initial boot, the first interface attached to the virtual machine (eth0) is the firewall’s management interface. When
building the instance via the template, the firewall must be assigned a private IP address by placing it in a subnet in the
VPC. You can choose the address assigned to the firewall or allow AWS to assign it from the pool of available
addresses in the subnet. The firewall’s management interface can then obtain its internal IP address through DHCP. In
the template, you can also assign a public IP address to the eth0 interface that is dynamically assigned from the AWS
pool of addresses for that region. The public IP address assigned in the template is dynamic and assigned at boot; if
you shut the instance down for maintenance and later restart it, the private IP address remains the same if you
assigned it, but the dynamically assigned public IP address is likely different. If you want a statically assigned public IP
address for the management interface, assign it an Elastic IP address. An Elastic IP address is a public IP address that
remains associated with an instance and interface until it is manually disassociated or the instance is deleted.

Note

If you assign a dynamic public IP address on the instance eth0 interface and later assign
an Elastic IP address to any interface on the same instance, upon reboot, the dynamic
IP address on eth0 is replaced with the Elastic IP address, even if that Elastic IP address
is associated to eth1 or eth2. The best way to predictably assign a public IP address
to your eth0 management interface is to assign an Elastic IP address associated to the
eth0 interface.
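
As an illustration, a hedged boto3 sketch of allocating an Elastic IP address and associating it with the firewall’s eth0 management ENI might look like the following; the network interface ID is a hypothetical placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Allocate an Elastic IP address and bind it to the management ENI (eth0).
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId="eni-0aaaabbbbccccdddd",   # the firewall's eth0 ENI
)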

If you are managing the firewalls with Panorama, you can place Elastic IP addresses on Panorama and then only
dynamic IP addresses on the firewall management interfaces, as long as Panorama is in the AWS environment and has
reachability to all private IP management subnets. If Panorama is on-premises, it must have VPN access to the VPC
environment to manage firewalls that only use private IP addresses from the AWS virtual network.


MANAGING AWS DEPLOYMENTS WITH PANORAMA


The best method for ensuring up-to-date firewall configuration is to use Panorama for central management of firewall
policies. Panorama simplifies consistent policy configuration across multiple independent firewalls through its device
group and template stack capabilities. When multiple firewalls are part of the same device group, they receive a
common ruleset. Because Panorama enables you to control all of your firewalls—whether they are on-premises or in
the public cloud, a physical appliance or virtual—device groups also provide configuration hierarchy. With device group
hierarchy, lower-level groups include the policies of the higher-level groups. This allows you to configure consistent
rulesets that apply to all firewalls, as well as consistent rulesets that apply to specific firewall deployment locations such
as the public cloud.

As bootstrapped firewalls deploy, they can also automatically pull configuration information from Panorama. VM-Series
firewalls use a VM authorization key and Panorama IP address in the bootstrap package to authenticate and register to
Panorama on its initial boot. You must generate the VM authorization key in Panorama before creating the bootstrap
package. If you provide a device group and template in the bootstrap package’s basic configuration file, Panorama
assigns the firewall to the appropriate device group and template so that the relevant rulesets are applied, and you can
manage the device in Panorama going forward.

You can deploy Panorama in your on-site data center or a public cloud provider such as AWS. When deployed in your
on-site data center, Panorama can manage all the physical appliances and VM-Series firewalls in your organization.
If you want a dedicated instance of Panorama for the VM-Series firewalls inside of AWS, deploy Panorama on AWS.
Panorama can be deployed as management only, Log Collector only, or full management and log collection mode.

When you have an existing Panorama deployment on-site for firewalls in your data center and internet edge, you can
use it to manage the VM-Series firewalls in AWS. Beyond management, your firewall log collection and retention need
to be considered. Log collection, storage, and analysis is an important cybersecurity best practice that organizations
perform to correlate potential threats and prevent successful cyber breaches.

On-Premises Panorama with Dedicated Log Collectors in the Cloud


Sending logging data back to the on-premises Panorama can be inefficient and costly and can pose data privacy and
residency issues in some regions. An alternative to sending the logging data back to your on-premises Panorama is to
deploy Panorama dedicated Log Collectors on AWS and use the on-premises Panorama for management. Deploying a
dedicated Log Collector on AWS reduces the amount of logging data that leaves the cloud but still allows your on-site
Panorama to manage the VM-Series firewalls in AWS and have full visibility into the logs as needed.

Figure 14 Panorama Log Collector mode in AWS

[Diagram: Panorama in Log Collector mode deployed in the management VPC, connected through a VPN gateway to the on-premises Panorama.]


Panorama Management in AWS with Cortex Data Lake


There are two design options when deploying Panorama management on AWS. First, you can use Panorama for
management only and use the Palo Alto Networks Cortex Data Lake to store the logs generated by the VM-Series
firewalls. Cortex Data Lake is a cloud-based log collector service that provides resilient storage and fast search capa-
bilities for large amounts of logging data. Cortex Data Lake emulates a traditional log collector. Logs are encrypted and
then sent by the VM-Series firewalls to the Cortex Data Lake over TLS/SSL connections. Cortex Data Lake allows you
to scale your logging storage as your AWS deployment scales because licensing is based on storage capacity and not
the number of devices sending log data.

The benefit of using Cortex Data Lake goes well beyond scale and convenience when tied into the Palo Alto Networks
Cortex AI based continuous security platform. Cortex is a scalable ecosystem of security applications that can apply
advanced analytics in concert with Palo Alto Networks enforcement points to prevent the most advanced attacks. Palo
Alto Networks analytics applications such as Cortex XDR and AutoFocus, as well as third-party analytics applications
that you choose, use Cortex Data Lake as the primary data repository for all of Palo Alto Networks offerings.

Figure 15 Panorama management and Cortex Data Lake

[Diagram: Panorama in Management Only mode in the management virtual network; firewall logs are sent to Cortex Data Lake, which feeds the Cortex hub, Cortex apps, and third-party apps.]


Panorama Management and Log Collection in AWS


Second, you can use Panorama for both management and log collection. Panorama on AWS supports high-availability
deployment if both virtual appliances are in the same VPC and region. You can deploy the management and log col-
lection functionality as a shared virtual appliance or on dedicated virtual appliances. For smaller deployments, you can
deploy Panorama and the Log Collector as a single virtual appliance. For larger deployments, a dedicated Log Collector
per region allows traffic to stay within the region and reduces outbound data transfers.

Figure 16 Panorama management and Log Collection in AWS

[Diagram: Panorama deployed in the management VPC, providing both management and log collection.]

Panorama is available as a virtual appliance for deployment on AWS and supports Management Only mode, Panorama
mode, and Log Collector mode with the system requirements defined in Table 2. Panorama on AWS is only available
with a BYOL licensing model.

Table 2 Panorama Virtual Appliance on AWS

                              Management only          Panorama                           Log Collector
Minimum system requirements   4 CPUs                   8 CPUs                             16 CPUs
                              8GB memory               32GB memory                        32GB memory
                              81GB system disk         2TB to 24TB log storage capacity   2TB to 24TB log storage capacity
AWS instance sizing           t2.xlarge, m4.2xlarge    m4.2xlarge, m4.4xlarge             m4.4xlarge, c4.8xlarge

SECURITY POLICY AUTOMATION WITH VM MONITORING


Public cloud application environments are typically built around an agile application development process. In an
agile environment, applications are deployed quickly, and new environments are often built out to accommodate a
revised application rather than upgrading the existing operational environment. When the new, updated application
environment goes online, the older, now-unused application environment is deleted. This amount of change presents
a challenge to enforcing security policy, unless your security environment is compatible with an agile development
environment.


Palo Alto Networks firewalls, including the VM-Series, support dynamic address groups. A dynamic address group can
track IP addresses as they are assigned to or removed from a group that is then referenced in a security policy. You can
combine the flexibility of dynamic address groups with the VM Monitoring capabilities on the firewall and Panorama to
dynamically apply security policy as VM workloads and their associated IP addresses spin up or down.

The VM Monitoring capability allows the firewall to be automatically updated with IP addresses of a source or desti-
nation address object referenced in a security policy rule. Dynamic address groups use the metadata about the cloud
infrastructure, mainly IP address-to-tag mappings for your VMs. Using this metadata (that is, tags) instead of static IP
addresses in the security policy makes them dynamic. So, as you add, change, or delete tags on your VMs, security
policy is always enforced consistently.
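
To illustrate the tag-to-policy linkage, the following hedged boto3 sketch applies a tag to an instance; VM Monitoring (or the Panorama plugin for AWS) can then map the tag into a dynamic address group match criterion. The instance ID, tag key, and value are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Tag a workload; the firewall or Panorama learns the IP-to-tag mapping through
# VM Monitoring and places the instance's address into the matching dynamic address group.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],        # hypothetical instance ID
    Tags=[{"Key": "server-role", "Value": "public-web"}],
)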

Figure 17 VM Monitoring AWS tag to dynamic address group mapping


[Diagram: VM Monitoring uses the XML API to poll AWS compute resources (instances with addresses such as 10.1.1.2, 10.1.6.4, 10.3.7.6, 10.4.2.2, 10.4.3.8, 10.4.9.1) and learns group membership, mapping tagged instances into dynamic address group definitions such as Sharepoint Servers, DB Servers, Admin Servers, and Public Web Servers.]

Depending on your scale and the cloud environments you want to monitor, you can choose to enable VM Monitoring
for either the firewall or Panorama to poll the evolving VM landscape. If you enable VM Monitoring on the firewall, you
can poll up to 10 sources (VPCs); however, each firewall will act and poll independently, limiting the flexibility and scale.
The Panorama plugin for AWS allows you to monitor up to 100 VM sources (VPCs) on AWS and over 6000 hosts.
You then program Panorama to automatically relay the dynamic address group mapping to an entire group of firewalls,
providing scale and flexibility.


PRISMA PUBLIC CLOUD INFRASTRUCTURE PROTECTION FOR AWS


Prisma Public Cloud provides cloud infrastructure protection across the following areas:

• Multi-cloud security—Provides a consistent implementation of security best practices across your AWS and
other public cloud environments. Prisma Public Cloud requires no agents, proxies, software, or hardware for
deployment and integrates with a variety of threat intelligence feeds. Prisma Public Cloud includes prepackaged
policies to secure multiple public cloud environments.

• Continuous compliance—Maintain continuous compliance with CIS, NIST, PCI, FedRAMP, GDPR, ISO, and
SOC 2 by monitoring API-connected cloud resources across multiple cloud environments in real time.
Compliance documentation is generated with one click as exportable, fully prepared reports.

• Cloud forensics—Go back in time to the moment a resource was first created and see every change,
chronologically, and who made it. Prisma Public Cloud provides forensic investigation and auditing capabilities
for potentially compromised resources across your AWS environment, as well as other public cloud environments.
Historical information extends back to the initial creation of each resource, and the detailed change records
include who made each change.

• DevOps and automation—Enable secure DevOps without adding friction by setting architecture standards that
provide prescribed policy guardrails. This methodology permits agile development teams to maintain their focus
on developing and deploying apps to support business requirements.

Prisma Public Cloud connects to your cloud via APIs and aggregates raw configuration data, user activities, and network
traffic to analyze and produce concise actionable insights.

AWS integration requires that you create an External ID in AWS to establish trust for the Prisma Public Cloud account
to pull data. An associated account role sets the permissions of what the Prisma Public Cloud account can see with
read-only access plus a few write permissions. AWS VPCs must be configured to send flow logs to CloudWatch logs,
not S3 buckets. A Prisma Public Cloud onboarding script automates account setup, and CloudFormation templates are
available to assist with permissions setup.

Prisma Public Cloud performs a five-stage assessment of your cloud workloads. Contributions from each stage progres-
sively improve the overall security posture for your organization:

• Discovery—Prisma Public Cloud continuously aggregates configuration, user activity, and network traffic data
from disparate cloud APIs. It automatically discovers new workloads as soon as they are created.

• Contextualization— Prisma Public Cloud correlates the data and applies machine learning to understand the
role and behavior of each cloud workload.

• Enrichment—The correlated data is further enriched with external data sources—such as vulnerability scanners,
threat intelligence tools, and SIEMs—to deliver critical insights.

• Risk assessment—Prisma Public Cloud scores each cloud workload for risk based on the severity of business
risks, policy violations, and anomalous behavior. Risk scores are then aggregated, enabling you to benchmark
and compare risk postures across different departments and across the entire environment.

• Visualization—The entire cloud infrastructure environment is visualized with an interactive dependency map that
moves beyond raw data to provide context.


Prisma Public Cloud Threat Defense


Prisma Public Cloud enables you to visualize your entire AWS environment, down to every component within the
environment. The platform dynamically discovers cloud resources and applications by continuously correlating con-
figuration, user activity, and network traffic data. Combining this deep understanding of the AWS environment with
data from external sources, such as threat intelligence feeds and vulnerability scanners, enables Prisma Public Cloud to
produce context around risks.

The Prisma Public Cloud platform is prepackaged with policies that adhere to industry-standard best practices. You
can also create custom policies based on your organization’s specific needs. The platform continuously monitors for
violations of these policies by existing resources as well any new resources that are dynamically created. You can easily
report on the compliance posture of your AWS environment to auditors.

Prisma Public Cloud automatically detects user and entity behavior within the AWS infrastructure and management
plane. The platform establishes behavior baselines, and it flags any deviations. The platform computes risk scores—sim-
ilar to credit scores—for every resource, based on the severity of business risks, violations, and anomalies. This quickly
identifies the riskiest resources and enables you to quantify your overall security posture.

Prisma Public Cloud reduces investigation-time from weeks or months to seconds. You can use the platform’s graph
analytics to quickly pinpoint issues and perform upstream and downstream impact analysis. The platform provides you
with a DVR-like capability to view time-serialized activity for any given resource. You can review the history of changes
for a resource and better understand the root cause of an incident, past or present.

Prisma Public Cloud enables you to quickly respond to an issue based on contextual alerts. Alerts are triggered based
on risk-scoring methodology and provide context on all risk factors associated with a resource. You can send alerts, or-
chestrate policy, or perform auto-remediation. The alerts can also be sent to third-party tools such as Slack, Demisto®,
and Splunk to remediate the issue.

Prisma Public Cloud provides the following visibility, detection, and response capabilities:

• Host and container security—Configuration monitoring and vulnerable image detection.

• Network security—Real-time network visibility and incident investigations. Suspicious/malicious traffic
detection.

• User and credential protection—Account and access key compromise detection. Anomalous insider activity
detection. Privileged activity monitoring.

• Configurations and control plane security—Compliance scanning. Storage, snapshots and image configura-
tion monitoring. Security group and firewall configuration monitoring. IP address management configuration
monitoring.


NETWORKING IN THE VPC

IP Addressing
When a VPC is created, it is assigned a network range, as described earlier. It is common (but not required) to use a
private IP address range with an appropriate subnet scope for your organization’s needs.

A VPC CIDR block can be no larger than a /16 (the minimum prefix length). RFC 1918 defines much larger spaces,
10.0.0.0/8 and 172.16.0.0/12, which must be further segmented into smaller address blocks for VPC use. Try to use an
address range sufficient for anticipated growth in this VPC. You can assign up to four secondary CIDR blocks after VPC
creation. Avoid IP address overlap within the AWS VPC environment and with the rest of your organization; otherwise,
you will require address translation at the borders.

Consider breaking your subnet ranges into functional blocks that can be summarized for ACLs and similar purposes.
In this reference architecture, the addressing is broken up into subnets for management, public, and compute, and all
addressing for the first VPC is done inside the 10.1.0.0/16 range. By default, all endpoints within the VPC private
address range have direct connectivity via the AWS VPC infrastructure. Multiple Availability Zones are used for
resiliency: Zone-a and Zone-b. Each zone is assigned unique IP addresses from within the VPC CIDR block. The ranges
for your network should be chosen based on the ability to summarize them for ACLs or routing advertisements outside
of the VPC.
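
A minimal boto3 sketch of this addressing plan, assuming the CIDR blocks shown in Figure 18 and a hypothetical Availability Zone name, might look like the following (AZ-b would repeat the loop with the 10.1.1xx.0/24 blocks).

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/16")["Vpc"]["VpcId"]

# AZ-a subnets from the addressing plan.
subnets_az_a = {
    "management": "10.1.9.0/24",
    "public": "10.1.10.0/24",
    "web": "10.1.1.0/24",
    "business": "10.1.2.0/24",
    "db": "10.1.3.0/24",
}
for name, cidr in subnets_az_a.items():
    subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone="us-west-2a"
    )["Subnet"]
    ec2.create_tags(
        Resources=[subnet["SubnetId"]],
        Tags=[{"Key": "Name", "Value": f"az-a-{name}"}],
    )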

Figure 18 VPC IP addressing

[Diagram: Single VPC 10.1.0.0/16. AZ-a subnets: Management 10.1.9.0/24, Public 10.1.10.0/24, Web 10.1.1.0/24, Business 10.1.2.0/24, DB 10.1.3.0/24. AZ-b subnets: Management 10.1.109.0/24, Public 10.1.110.0/24, Web 10.1.101.0/24, Business 10.1.102.0/24, DB 10.1.103.0/24.]

Firewall Interfaces
All networking within the VPC is Layer 3; therefore, all firewall interfaces are configured for Layer 3 operation. As
discussed earlier, even if you use DHCP on the firewall for interface IP address assignment, it is important to
statically assign the addresses for routing next-hop assignments and other firewall functions. You can do this when
creating the instance for the VM-Series firewall. The VM-Series firewall requires three Ethernet interfaces: management
(eth0), a public-facing interface (eth1), and a compute or private-facing interface (eth2).


When the VM-Series instance is created, it has only a single interface (eth0) by default. You therefore have to create
Elastic Network Interfaces for eth1 and eth2 and associate them with the VM-Series instance. You also need to create
two Elastic IP addresses: associate one with the management interface and one with the firewall’s public-facing
interface (eth1) for internet traffic.
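
The following hedged boto3 sketch creates the two additional dataplane ENIs and attaches them to the firewall instance; the subnet, security group, and instance IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
instance_id = "i-0123456789abcdef0"            # the VM-Series instance

# eth1 (public) and eth2 (private/compute) dataplane interfaces.
dataplane = [
    ("subnet-aaaa1111", 1),   # (public subnet ID, device index for eth1)
    ("subnet-bbbb2222", 2),   # (private subnet ID, device index for eth2)
]
for subnet_id, device_index in dataplane:
    eni = ec2.create_network_interface(
        SubnetId=subnet_id,
        Groups=["sg-0123456789abcdef0"],
    )["NetworkInterface"]
    ec2.attach_network_interface(
        NetworkInterfaceId=eni["NetworkInterfaceId"],
        InstanceId=instance_id,
        DeviceIndex=device_index,
    )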

Figure 19 Firewall interfaces


[Diagram: Instance eth0 maps to the firewall management interface; instance eth1 (public) maps to firewall ethernet1/1; instance eth2 (compute) maps to firewall ethernet1/2.]

Caution

As mentioned earlier, if you want a public IP address for management access on eth0
and you assigned eth0 a dynamic public IP address when creating the instance, any
Elastic IP assigned to the instance will result in the dynamic address being returned
to the pool. eth0 will not have a public IP address assigned unless you create another
Elastic IP address and assign it to eth0.

Source and Destination Check


Source and destination checks are enabled by default on all network interfaces within your VPC. Source/Dest Check
validates whether traffic is destined to, or originates from, an instance and prevents any traffic that does not meet this
validation. A network device that wishes to forward traffic between its network interfaces within a VPC must have
the Source/Dest Check disabled on all interfaces that are capable of forwarding traffic. To pass traffic through the
VM-Series firewall, you must disable Source/Dest Check on all dataplane interfaces (ethernet1/1, ethernet1/2, etc.).
Figure 20 illustrates how the default setting (enabled) of Source/Dest Check prevents traffic from transiting between
interfaces of an instance.

Note

SGs and Source/Dest Check can be changed at the instance level; however, these changes apply only to the first
interface (eth0) of the instance. For a VM-Series firewall with multiple Ethernet interfaces, the first interface represents
the management interface. To avoid ambiguity, SGs and Source/Dest Check should be applied to the individual
network interfaces (eth0, eth1, and eth2).
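
A hedged boto3 sketch of disabling the check on the dataplane ENIs (hypothetical IDs) follows.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Disable Source/Dest Check on each dataplane ENI so the firewall can forward traffic.
for eni_id in ("eni-0aaa1111bbbb2222c", "eni-0ddd3333eeee4444f"):   # eth1, eth2
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni_id,
        SourceDestCheck={"Value": False},
    )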


Figure 20 Source and destination check

[Diagram: With Source/Dest. Check enabled (default), traffic between hosts 10.0.1.10 and 10.0.2.15 that transits the instance’s interfaces is dropped; with Source/Dest. Check disabled, the instance forwards the traffic between its interfaces.]

Internet Gateway
Create an IGW function in your VPC in order to provide inbound and outbound internet access. The internet gateway
performs network address translation of the private IP address space to the assigned public dynamic or Elastic IP
addresses. The public IP addresses are from AWS address pools for the region that contains your VPC, or you may use
your organization’s public IP addresses assigned to operate in the AWS environment.

Route Tables
Although all devices in the VPC can reach each other’s private IP addresses across the native AWS fabric for the VPC,
route tables allow you to determine what external resources an endpoint can reach, based on what routes or services
are advertised within that route table. After a route table is created, you choose which subnets are assigned to it. You
can create separate route tables for Availability Zone-a (AZ-a) versus Availability Zone-b (AZ-b), and you will want to do
so for the compute subnets. Consider creating route tables for management, public, and compute (a scripted sketch
follows the list below):

• The management route table has the management subnets assigned to it and an IGW for the default route for
Internet access.

• The public route table has the public subnets assigned to it and the IGW for the Internet access.

• The compute route table for AZ-a has the compute private subnets for AZ-a assigned to it and no IGW. After
the firewall is configured and operational, you assign a default route in the table pointed to the ENI of the VM
firewall for AZ-a.

• The compute route table for AZ-b has the compute private subnets for AZ-b assigned to it and no IGW. After
the firewall is configured and operational, you assign a default route in the table pointed to the ENI of the VM
firewall for AZ-b.
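
Assuming the VPC, subnet, and firewall ENI IDs shown are placeholders, a minimal boto3 sketch of the compute route table for AZ-a might look like this.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0a1b2c3d4e5f6a7b8"

# Compute route table for AZ-a: local VPC routing is implicit; the default route
# points at the firewall's private-side ENI once the firewall is operational.
rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.associate_route_table(
    RouteTableId=rt["RouteTableId"],
    SubnetId="subnet-aaaa1111",            # compute subnet in AZ-a (placeholder)
)
ec2.create_route(
    RouteTableId=rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId="eni-1182230012a",  # firewall eth2 ENI in AZ-a (from Figure 21)
)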


Figure 21 Example route tables

Management route table (subnets assigned: 10.1.9.0, 10.1.109.0)
  Destination 10.1.0.0/16, target Local
  Destination 0.0.0.0/0, target igw-991233991ab

Public route table (subnets assigned: 10.1.10.0, 10.1.110.0)
  Destination 10.1.0.0/16, target Local
  Destination 0.0.0.0/0, target igw-991233991ab

Compute AZ-a route table (subnet assigned: 10.1.1.0)
  Destination 10.1.0.0/16, target Local
  Destination 0.0.0.0/0, target eni-1182230012a

Compute AZ-b route table (subnet assigned: 10.1.11.0)
  Destination 10.1.0.0/16, target Local
  Destination 0.0.0.0/0, target eni-1182230012b

AWS TRAFFIC FLOWS


There are three traffic-flow types that you might wish to inspect and secure:

• Inbound—Traffic originating outside and destined to your VPC hosts

• Outbound—Traffic originating from your VPC hosts and destined to flow outside

• East/west—Traffic between hosts within your VPC infrastructure

Inbound Traffic from the Internet


Inbound traffic originates outside and is destined to services hosted within your VPC, such as web servers. Enforcing
firewall inspection of inbound traffic requires Destination NAT on the VPC firewall. AWS provides a 1:1 NAT relationship
between public Elastic IP addresses and VPC private IP addresses and does not modify the source or destination ports.
You can assign public IP addresses to internal servers; however, this would bypass the firewall and expose the host.
There are two options for providing inbound inspection for multiple applications through the firewall:

• Destination IP address and port-mapping by using a single Elastic IP address

• Network interface secondary IP addresses and multiple Elastic IP addresses

Elastic IP addresses have a cost associated with their use. The first option above minimizes Elastic IP cost at the
expense of increased Destination NAT complexity on the firewall, as well as potential end-user confusion when users
access multiple inbound services by using the same DNS name or IP address with a different port representing each
server.

A single service port (for example, SSL) can have the external IP address mapped to the internal IP address of a single
server providing the service. Additional servers that provide the same service are represented externally as offering
their service on the same external IP address, and the service port is used to differentiate between servers sharing that
external IP address.
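
For the second option, a hedged boto3 sketch of adding a secondary private IP address to the firewall’s public-side ENI and associating an additional Elastic IP address with it could look like this; all IDs and addresses are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
public_eni = "eni-0aaa1111bbbb2222c"     # firewall eth1 (public) ENI

# Secondary private IP on eth1; the firewall NATs this address to a second web server.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=public_eni,
    PrivateIpAddresses=["10.1.10.20"],
)

# Dedicated Elastic IP for the second application, bound to the secondary address.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=public_eni,
    PrivateIpAddress="10.1.10.20",
)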


Figure 22 illustrates the inbound and return address translation:

• Packet address view 1 to 2—The IGW translates the packet’s destination address.

• Packet address view 3—The firewall translates the packet’s destination IP address (and optionally port).

• Packet address view 4 to 5—The firewall translates the source IP address to the firewall internet interface to
match the IGW NAT.

• Packet address view 6—The IGW translates source IP address to the public EIP address.

Figure 22 Inbound traffic inspection using Destination NAT address and port

[Diagram: Client 15.0.201.45 connects to Elastic IP 52.1.7.85 on port 80. IGW NAT table: public 52.1.7.85 maps to the firewall’s eth1 address 10.1.10.10 (public subnet 10.1.10.0/24). Packet address views: (1) src 15.0.201.45, dst 52.1.7.85:80; (2) src 15.0.201.45, dst 10.1.10.10:80; (3) src 15.0.201.45, dst 10.1.1.100:80 (web server in 10.1.1.0/24); return (4) src 10.1.1.100, dst 15.0.201.45; (5) src 10.1.10.10, dst 15.0.201.45; (6) src 52.1.7.85, dst 15.0.201.45. Compute subnet route table: 10.1.0.0/16 to Local; 0.0.0.0/0 to fw-e2.]

Outbound Traffic Inspection


Outbound traffic originates from your VPC instances and is destined to external destinations, typically the internet.
Outbound inspection is useful for ensuring that instances are connecting to permitted services (such as Windows
Update) and permitted URL categories, as well as for preventing data exfiltration of sensitive information. Enterprise
networks often make use of Source NAT at their internet perimeter for outbound traffic, in order to conserve limited
public IPv4 address space and provide a secure one-way valve for traffic. Similarly, outbound traffic from your VPCs
that requires firewall inspection also makes use of Source NAT on the firewall, ensuring symmetric traffic and firewall
inspection of all traffic.

The application server learns the default gateway from DHCP and forwards traffic to the default gateway address, and
the route table controls the default gateway destination. Directing outbound traffic to the firewall therefore does not
require any changes on the application server. Instead, you program the route table with a default route that points to
the firewall’s eth2 Elastic Network Interface, using the ENI ID rather than the IP address of the firewall’s interface.


Figure 23 illustrates outbound traffic flow. Packet address view 1 has its source IP address modified to that of the
firewall interface in packet address view 2. Note that the Compute subnet route table has a default route pointing to
the firewall.

Figure 23 Outbound traffic inspection using Source NAT

[Diagram: Web server 10.1.1.100 connects outbound to winupdate.ms. Packet address views: (1) src 10.1.1.100; (2) src 10.1.10.10 after the firewall Source NAT; (3) src 52.1.7.85 after the IGW NAT; return traffic (4, 5, 6) reverses the translations. IGW NAT table: 52.1.7.85 maps to 10.1.10.10. Compute subnet route table: 10.1.0.0/16 to Local; 0.0.0.0/0 to fw-e2. Public subnet route table: 10.1.0.0/16 to Local; 0.0.0.0/0 to igw.]

East/West Traffic Inspection


East/west traffic is a traffic flow between instances within the same VPC and can be a flow within the same subnet.
Network segmentation using route policy between subnets is a well-understood principle in network design. You group
services with a similar security policy within a subnet and use the default gateway as a policy-enforcement point for
traffic entering and exiting the subnet. AWS VPC networking provides direct reachability between all instances within
the VPC, regardless of subnet association; there are no native AWS capabilities for using route-policy segmentation
between subnets within your VPC. All instances within a VPC can reach any other instance within the same VPC,
regardless of route tables. You can use network ACLs to permit or deny traffic based on IP address and port between
subnets, but this does not provide the visibility and policy enforcement for which many organizations prefer the
VM-Series. For those wishing to use the rich visibility and control of firewalls for east/west traffic within a single VPC,
there are two options:

• For every instance within a subnet, configure routing information on the instance to forward traffic to the
firewall for inspection.

• Use a floating NAT configuration on the firewall to direct intra-VPC subnets through the firewall, and use
network ACLs to prevent direct connectivity.

You can use instance routing policy to forward traffic for firewall inspection by configuring the instance default gateway
to point to the firewall. An instance default gateway configuration can be automated at time of deployment via the
User Data field—or manually after deployment. Although this approach accomplishes the desired east/west traffic
inspection, the ability for the instance administrator to change the default gateway at any time to bypass firewall
inspection might be considered a weakness of this approach. For inspection of traffic between two instances within
the same VPC, both instances must be using the firewall as the default gateway in order to ensure complete visibility;
otherwise, there is asymmetric traffic through the firewall.
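
As an illustration of automating the instance default gateway at deployment time, the following hedged boto3 sketch launches a Linux instance with a user data script that replaces the default route with the firewall’s private-side interface address; the AMI, subnet, and gateway address are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Cloud-init user data: point the instance default route at the firewall's eth2 address.
user_data = """#!/bin/bash
ip route replace default via 10.1.1.10
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-aaaa1111",
    UserData=user_data,
)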


VPC route tables always provide direct network reachability to all subnets within the VPC; it is therefore not possible
to use VPC routing to direct traffic to the firewall for intra-VPC inspection. You can only modify route table behavior for
networks outside the local VPC CIDR block.

Using intra-VPC routing policy to forward traffic for firewall inspection requires the use of a floating NAT configuration.
For both source and destination servers, floating NAT creates a virtualized IP address subnet that resides outside
the local CIDR block, thus permitting route table policy to forward all floating NAT IP addresses to the firewall for
inspection. The design pattern for floating NAT is identical to the scenario where you want to interconnect two
networks that have overlapping IP address space; for overlapping subnets, you must use Source and Destination NAT.
Network ACLs are used between the two local subnets in order to prevent direct instance connectivity. For each
desired destination virtual server, floating NAT requires a NAT rule for both source and destination on the firewall, so it
is a design pattern suited to smaller deployments.

Note

To provide a less complex way to inspect east/west traffic, consider using multiple
VPCs, grouping instances with similar security policy in a VPC, and inspecting inter-VPC
traffic with Transit Gateway or Transit VPC. The Transit Gateway design model present-
ed later in this guide provides a scalable design for east/west, inbound, and outbound
traffic inspection and control.

Connections to On-Premises Networks


When migrating workloads to your cloud resources, it is convenient to have direct connectivity between the privately
addressed servers in your data center and the private IP addresses of your VPC-based servers. Direct connectivity also
helps the network and system programmers reach resources that do not have public IP access. You should still take
care to control access between the data center and on-premises connections and the servers and databases in the
cloud. At least one end of the connection should be protected with firewalls.

To provide VPN access to your VPC, you can terminate the VPN on the firewall in the VPC or on an AWS VGW. When
creating a VPN connection, you have the option of running a dynamic routing protocol (BGP) over the tunnels or using
static routes.


On-Premises Firewall to VGW


When using an on-premises firewall (customer gateway) to connect to the VGW, after you create the VPN connections
in the AWS guided workflow, you can download a template configuration in CLI format from the same AWS VPN
configuration page. The template contains the IP addresses, IPsec tunnel security settings, and BGP configuration for
the CGW device, which you use to configure the firewall.

Figure 24 On-premises VPN customer gateway

[Diagram: An on-premises customer gateway firewall and router connect over an IPsec VPN to the AWS VGW; the VPC spans AZ-a and AZ-b, with hosts in Subnet 1 and Subnet 2 behind routers.]

The baseline AWS VPN connection configuration template for a Palo Alto Networks VM-Series firewall provides
moderate security for the IPsec tunnel and IKE gateway crypto profiles. The template makes use of the SHA-1 hash for
authentication of both the IKE gateway and IPsec tunnels; this practice is now considered insecure and has been depre-
cated for use in SSL certificates. Table 3 shows the template configuration provided by AWS for both IKE and IPsec
crypto on the first line, along with compatible options as a top-down, ordered list from higher to lower security for
IKE and IPsec crypto profiles on a VM-Series running PAN-OS 9.0. For the IPsec Diffie-Hellman protocol, only a single
compatible option can be selected within the firewall configuration. The VGW accepts any of the options in Table 3 for
IKE crypto settings and any of the options in Table 4 for IPsec crypto settings. Negotiation of crypto profile settings is
done at the time of tunnel establishment, and there is no explicit configuration of VGW crypto settings. You can change
your firewall crypto settings at any time, and the new settings reestablish the IPsec tunnels with the compatible new
security settings. Firewalls prefer options in top-down order. Your firewall uses more secure options first (as shown in
Table 3 and Table 4), or you can choose any single option you prefer.

Caution

Use of Galois/Counter Mode options for IPsec crypto profile (aes-128-gcm or aes-256-
gcm) in your firewall configuration prevents VPN tunnels from establishing with your
VPN gateway.


Table 3 Firewall and VGW compatible IKE crypto profile

IKE crypto profile Diffie-Hellman Hash Encryption Protocol


AWS template Group2 SHA-1 AES-128-CBC IKE-V1
Firewall compatible Group14 SHA-256 AES-256-CBC IKE-V2
Firewall compatible Group2 SHA-1 AES-128-CBC —

Table 4 Firewall and VGW compatible IPsec crypto profile

IPsec crypto profile Diffie-Hellman Hash Encryption


AWS template Group2 SHA-1 AES-128-CBC
Firewall compatible Group14 SHA-256 AES-256-CBC
Firewall compatible Group5 — AES-128-CBC
Firewall compatible Group2 — —

AWS allows you to manually assign IP subnets to VPN connection tunnels, or you can have AWS automatically
assign them. It is recommended that you use manual assignment to prevent subnet collisions on the firewall inter-
face, as described below. When using manual assignment, you must use addressing in the link local reserved range
169.254.0.0/16 (RFC 3927) and typically use a /30 mask. When using manual assignment, you should keep track of
allocated addresses and avoid duplicate address ranges on the same firewall.
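
A hedged boto3 sketch of creating a VPN connection with manually assigned tunnel inside subnets follows; the gateway IDs and /30 CIDRs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# VPN connection from the VGW to the on-premises customer gateway, with manually
# chosen /30 tunnel inside subnets from the 169.254.0.0/16 link-local range.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",
    VpnGatewayId="vgw-0123456789abcdef0",
    Options={
        "StaticRoutesOnly": False,   # run BGP over the tunnels
        "TunnelOptions": [
            {"TunnelInsideCidr": "169.254.10.0/30"},
            {"TunnelInsideCidr": "169.254.10.4/30"},
        ],
    },
)
print(vpn["VpnConnection"]["VpnConnectionId"])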

You should take care if you choose automatic assignment, because AWS randomly assigns IP subnets to VPN
connection tunnels from 256 available /30 subnets of the link-local reserved range 169.254.0.0/16 (RFC 3927). For each
tunnel subnet, the first available address is assigned to the VGW side and the second available address is assigned to
the CGW. Because the subnet assignments are random, an increasing number of VPN connections from subscriber
VGWs results in a greater likelihood of a tunnel subnet collision on the terminating firewall (CGW). AWS guidance
indicates that at 15 VPN connections (two tunnels for each), the probability of any two tunnel subnets colliding is 50%,
and at 25 tunnels the probability of subnet collision increases to 85%. The VM-Series does not support the assignment
of the same IP address on multiple interfaces, so overlapping tunnel subnets must be terminated on different firewalls.
Because tunnel subnet assignment is random, if you experience a tunnel subnet collision during creation of a new VPN
connection, you can delete the VPN connection and create a new one. The likelihood of subnet collisions continues to
increase as the number of VPN connections increases.

VPN to VGW Traffic Flow


When you use a single VPC, the challenge of using the VGW is that there is no easy way to force traffic entering the
VPC environment through the firewall in the VPC for protection and visibility of traffic destined for the servers.
This is because inbound packets to the VPC that enter through the VGW, destined to 10.1.0.0/16 addresses in the VPC,
have an entry in the route table that sends them directly to the end system, thus bypassing the firewall.


If the VPN gateway at the on-premises location is not a firewall, then you have uncontrolled access from the on-prem-
ises network to the hosts in the VPC, as shown in Figure 25.

Figure 25 VPN connections with VGW in VPC bypass firewall

[Diagram: On-premises host 172.16.20.101 reaches host 10.1.1.100 through the VGW, bypassing the firewall (eth1 10.1.10.10, eth2 10.1.1.10). Compute subnet route table: 10.1.0.0/16 to Local; 172.16.0.0/16 to vgw (propagated); 0.0.0.0/0 to fw-e2.]

On-Premises VPN to VPC-Based Firewall Traffic Flow


The preferred design uses VPN connections between your VPC-based firewall(s) and on-premises networks. With this
design, all inbound VPN traffic can be inspected and controlled for correct behavior, as shown in Figure 26. This design
removes the risk that the on-premises VPN peer is not a firewall-protected connection. You can use static or dynamic
routing to populate the route tables on the firewalls. The application and other endpoint servers in the VPC can
remain pointed to the firewalls for their default gateway.

Figure 26 VPN connections to the VM-Series firewalls in the VPC

(The figure shows the on-premises peer at 172.16.20.101 terminating its VPN directly on the firewall's eth1 interface at 10.1.10.10. The compute subnet route table contains only the local 10.1.0.0/16 route and a default route 0.0.0.0/0 pointing to the firewall eth2 interface, so all traffic leaving the subnet, including traffic to and from on-premises networks, passes through the firewall.)

For single VPC and proof of concept designs, this peering method suffices. As environments scale to multiple VPCs,
you should consider a central pair of firewalls to provide access in and out of the VPCs to reduce complexity. This is
discussed further in the “Design Models” section of this document.


Resiliency
Traditional resilient firewall design uses two firewalls in a high availability configuration. In a high availability configuration, a pair of firewalls shares configuration and state information, which allows the second firewall to take over when the first fails. Although you can configure high availability so that both firewalls pass traffic, in the majority of deployments the firewalls operate as an active/passive pair in which only one firewall passes traffic at a time. This keeps traffic through the firewalls symmetric, enabling the firewalls to analyze both directions of a connection. The VM-Series on AWS does support stateful high availability in active/passive mode for traditional data center-style deployments in the cloud; however, both VM-Series firewalls must exist in the same Availability Zone, and failover can take 60 seconds or longer due to infrastructure interactions beyond the control of the firewall.

Unlike traditional data center implementations, VM-Series resiliency for cloud-based applications is primarily achieved
through the use of native cloud services. The benefits of configuring resiliency through native public cloud services in-
stead of firewall high availability are faster failover and the ability to scale out the firewalls as needed. In a public cloud
resiliency model, configuration and state information is not shared between firewalls. Applications typically deployed in
public cloud infrastructure, such as web- and service-oriented architectures, do not rely on the network infrastructure
to track session state. Instead, they track session data within the application infrastructure, which allows the application
to scale out and be resilient independent of the network infrastructure.

The AWS resources and services used to achieve resiliency for the application and firewall include:

• Availability Zones—Ensure that a failure or maintenance event in an AWS VPC does not affect all VM-Series
firewalls at the same time.

• ECMP—Available with Transit Gateway with VPN attachments and dynamic routing with BGP.

• Load balancers—Distribute traffic across two or more independent firewalls registered in a common target group. Every firewall in the load balancer's pool of resources actively passes traffic, allowing firewall capacity to scale out as required. The load balancer monitors the availability of the firewalls through TCP or HTTP probes and updates the pool of resources as necessary.

AWS load balancers use Availability Zones for resilient operation as follows:

• The load-balancer function front end connection can live in multiple zones. This prevents an outage of one zone
from taking complete web server access down.

• The load-balancer function can address targets in multiple Availability Zones. This allows upgrades and migra-
tions to happen without shutting down complete sections.

Resiliency for Inbound Application Traffic


You can implement resiliency in AWS for firewall-protected inbound web server applications from the internet through
the use of an AWS load-balancer sandwich design. This design uses a resilient, public-facing load balancer and a second
resilient load balancer on the private side of the firewalls.

Traffic routing functions such as domain name servers and firewall next-hop configurations use FQDN to resolve
load-balancer IP addresses versus hard-coded IP addresses. Using FQDN allows the load balancers to be dynamic
and scale up or down in size, as well as remain resilient; one load balancer may go down, and the other can continue
feeding sessions to the web server farm.


To analyze the pieces of the load balancer design, you can walk through the steps in Figure 27. This scenario illustrates
a user accessing an application www.refarch.lb.aws.com located on the web servers:

• #1 in the figure shows the URL request from the end user directed toward www.refarch.lb.aws.com. The request is sent to DNS; in this case, the AWS Route 53 cloud-based DNS resolves an A record for the Public load balancers. The DNS response returns one of two public IP addresses for the Public load balancers, #2 in the figure. There is one IP address for the load balancer in each Availability Zone, which provides resilience for the Public load balancer.

• The Public load balancer is programmed with two targets for the next hop. The two targets, #3 in the figure, are the private IP addresses of eth1 (ethernet1/1), the public-facing interface on each of the VM-Series firewalls. IP addresses 10.1.10.10 and 10.1.110.10 provide two paths for the incoming connection. The Public load balancer translates the packet's destination address to one of the two VM-Series target addresses and translates the source IP address to the private IP address of the Public load balancer so that traffic returns to it on the return flow.

• Each firewall is programmed to translate incoming HTTP requests to its eth1 IP address with a new source IP address and a new destination IP address. The source IP address is changed to that firewall's eth2 (ethernet1/2) IP address, #4 in the figure, so that return traffic from the web server travels back through the same firewall to maintain state and translation tables. The destination IP address is changed to the next hop, which is the IP address of the Internal load balancer, #5 in the figure. The firewall learns the IP addresses of the redundant Internal load balancers by sending a DNS request for the FQDN assigned to the load balancers. The response returns one of two IP addresses, one for the Internal load balancer in each Availability Zone.

• The Internal load balancers are programmed with web servers in both Availability Zones and do a round-robin
load balance to the active servers in the target list, #6 in the figure.

Figure 27 Load-balancer sandwich of firewall

(The figure shows the load-balancer sandwich: AWS Route 53 resolves refarch.lb.aws.com to the Public load balancer, which targets the firewalls' eth1 interfaces (10.1.10.10 in AZ-a and 10.1.110.10 in AZ-b) on tcp/80 and tcp/443. The firewalls forward from their eth2 interfaces (10.1.1.10 and 10.1.101.10) to the Internal load balancer, which distributes the sessions to the web servers 10.1.1.100 and 10.1.101.100 in both Availability Zones.)


The redundant load-balancer design uses probes to make sure that each path and each end system is operational. This liveness check also guarantees that the return path is operational, because the Public load balancer probes through the firewall to the Internal load balancer. The Internal load balancers probe the end-point web servers in their target group to make sure they are operational.
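
As an illustration of the probe configuration, the following minimal boto3 sketch creates a target group that registers the firewalls' eth1 private IP addresses as IP targets with TCP health checks; it assumes a Network Load Balancer-style target group, and the name, VPC ID, and port are placeholders:

# Minimal sketch: target group whose targets are the firewalls' eth1
# private IPs, monitored with TCP health probes (placeholder IDs/names).
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

tg = elbv2.create_target_group(
    Name="fw-untrust-tg",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",              # register IP addresses rather than instance IDs
    HealthCheckProtocol="TCP",    # probe traverses the firewall path
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# One firewall per Availability Zone, addressed by its eth1 private IP.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.1.10.10"}, {"Id": "10.1.110.10"}],
)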

Figure 28 shows the network address translations as the packet moves from the outside to the inside and then returns.

Figure 28 Load balancer design with address translation

(The figure shows the address translations along the path. Forward direction: the client (52.1.7.85) sends to the Public load balancer's public address (34.209.84.167); the Public load balancer translates the flow to its private address (10.1.10.169) toward the firewall's eth1 address (10.1.10.10); the firewall translates to its eth2 address (10.1.1.10) toward the Internal load balancer (10.1.1.117); and the Internal load balancer forwards to the web server (10.1.1.101). Return traffic reverses each translation so it retraces the same path.)

Resiliency for Outbound and On-premises Bound Traffic


The web server subnet route table is programmed with a default route that points to the firewall eth2 (ethernet1/2) interface; thus, traffic following the instance's default route transits the firewall. The firewall can be programmed to protect the outbound traffic flows and the threats in the associated return traffic. The web servers in AZ-a are programmed to exit via the firewall in AZ-a, and the web servers in AZ-b exit via the firewall in AZ-b. This effectively maintains resilience on an Availability Zone basis.
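
A minimal boto3 sketch of this per-AZ default route, with the route table and firewall eth2 ENI IDs as placeholders:

# Minimal sketch: point a web-server subnet's default route at the
# VM-Series eth2 (ethernet1/2) network interface in the same AZ.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",                # web-server subnet route table (AZ-a)
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId="eni-0123456789abcdef0",          # firewall eth2 ENI in the same AZ
)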


To get on-premises traffic to and from the web servers, the VPN links terminate on the firewalls in each Availability Zone. This design has the same resilience as outbound traffic, because the web servers use their default route to reach anything outside the VPC. The Transit Gateway design, covered later in this guide, offers a fully resilient outbound, inbound, and east-west design.

Figure 29 Outbound and on-premises flow

(The figure shows the outbound and on-premises flows. Web servers in AZ-a (10.1.1.0/24) and AZ-b (10.1.101.0/24) each use their own subnet route table, Web-server-aza and Web-server-azb, containing the local 10.1.0.0/16 route and a default route, so traffic leaving each subnet flows through the firewall interfaces in the same Availability Zone (eth2 at 10.1.1.10 and 10.1.101.10) toward internet destinations such as Microsoft.com or the on-premises network at 172.16.20.101.)


Design Models
Now that you understand the basic AWS components and how you can use your VM-Series firewalls, you can use them to build your secure infrastructure. The design models in this section offer example architectures that you can use to secure your applications in AWS. The design models build upon the initial Single VPC design, as you would likely do in any organization: build the first VPC as a proof of concept and, as your environment grows, move to a more modular design where VPCs are purpose-built for the application tiers they house.

The design models presented here differ in how they provide resiliency, scale, and services for the design. The designs
can be combined, as well, offering services like load balancing for the web front-end VPCs and common outbound
services in a separate module. The design models in this reference design are:

• Single VPC—Proof of Concept or small-scale multipurpose design

• Transit Gateway—High-performance solution for connecting large quantities of VPCs together, with a scalable
solution to support inbound, outbound, and east/west traffic flows through separate dedicated security VPCs

• Transit VPC—General purpose scalability with resilience and natural support for inbound, outbound, and east/
west traffic

CHOOSING A DESIGN MODEL


When choosing a design model, consider the following factors:

• Scale—Is this deployment an initial move into the cloud and thus will be a smaller scale with many services
in a Proof of Concept? Will the application load need to grow quickly and in a modular fashion? Are there
requirements for inbound, outbound, and east/west flows? The Single VPC provides inbound traffic control and
scale, outbound control and scale on a per availability zone basis; however, east/west control is more limited.
Transit Gateway offers the benefits of a highly scalable design for multiple spokes connecting to a central hub
for inbound, outbound, and VPC-to-VPC traffic control and visibility. The Transit VPC offers the benefits of a
modular and scalable design where inbound loads can scale in the spokes, and east/west and outbound can be
controlled and scaled in a common transit block.

• Resilience and availability—What are the application requirements for availability? The Single VPC provides a
robust design with load balancers to spread the load, detect outages, and route traffic to operational firewalls
and hosts. The Transit Gateway and Transit VPC models complement the Single VPC design by providing a highly
resilient and available architecture for inbound, outbound, and east/west traffic flows. Transit Gateway adds
more resilience capabilities with ECMP support.

• Complexity—Understanding application flows and how to scale and troubleshoot is important to the design.
Complex designs that require developers to use non-standard designs can result in errors. Placing all services in
a single compute domain (VPC) might seem efficient but could be costly in design complexity. Beyond the initial
implementation, consider the Transit Gateway for a more intuitive and scalable design.


SINGLE VPC MODEL


A single standalone VPC might be appropriate for small AWS deployments that:

• Provide the initial move to the cloud for an organization.

• Require a starting deployment that they can build on for a multi-VPC design.

• Do not require geographic diversity provided by multiple regions.

For application high availability, the architecture consists of a pair of VM-Series firewalls, one in each of two Availability Zones within your VPC's region. The firewalls are sandwiched between AWS load balancers for resilient inbound web application traffic and the return traffic. The firewalls are capable of inbound and outbound traffic inspection that is easy to support and transparent to DevOps teams.

You can use security groups and network access control lists to further restrict traffic to individual instances and between subnets. This design pattern provides the foundation for other architectures in this guide.

Figure 30 Single VPC design model

(The figure shows the Single VPC across two Availability Zones. AZ-a contains Management 10.1.9.0/24 hosting the primary Panorama (10.1.9.20, 10.1.9.21), Public 10.1.10.0/24, Web 10.1.1.0/24, Business 10.1.2.0/24, and DB 10.1.3.0/24 subnets; AZ-b mirrors this with Management 10.1.109.0/24 hosting the standby Panorama (10.1.109.20, 10.1.109.21), Public 10.1.110.0/24, Web 10.1.101.0/24, Business 10.1.102.0/24, and DB 10.1.103.0/24. The firewalls' eth1 interfaces (10.1.10.10 and 10.1.110.10) sit behind the Public load balancer (lb.aws.com, tcp/80 and tcp/443), and their eth2 interfaces (10.1.1.10 and 10.1.101.10) point to the Internal load balancer in front of the web servers.)

Inbound Traffic
For inbound web application traffic, a public-facing load balancer is programmed in the path ahead of the two firewalls. The load balancer lives in both Availability Zones and has reachability to the internet gateway. The cloud or on-premises DNS responsible for the applications housed inside the VPC uses FQDNs to resolve the load balancer's public IP addresses instead of hard-coded IP addresses, which provides failover capability: if one public load balancer fails, it can be taken out of service, and if a load balancer changes IP address, the change is discovered.
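
One hedged way to publish the application name against the load balancer's DNS name rather than an IP address is a simple CNAME record; the following boto3 sketch assumes a Route 53 hosted zone, and the zone ID, record name, and load-balancer DNS name are placeholders:

# Minimal sketch: publish the application FQDN as a CNAME to the public
# load balancer's DNS name, so clients always resolve its current addresses.
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Comment": "Point the application at the public load balancer by name",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "refarch.lb.aws.com.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "public-lb-0123456789.us-west-2.elb.amazonaws.com"}
                ],
            },
        }],
    },
)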


An internal load balancer is programmed in the path between the firewall’s inside interface and the pool of front-end
web servers in the application server farm. The internal load balancers are also programmed in both availability zones,
use the IP addresses in the web server address space, and can have targets (application web servers) in multiple
Availability Zones.

The public load balancers are programmed with HTTP and/or HTTPS targets, which are the next-hop private IP addresses of each VM-Series firewall's public interface, eth1 (ethernet1/1). The firewalls are in turn programmed to perform destination NAT for the incoming HTTP/HTTPS sessions to an IP address on the internal load balancers. When Application load balancers are used as the inside load balancers, the firewalls can use FQDN to dynamically learn the next hop for the address translation. The firewall is also programmed to perform source NAT to its eth2 (ethernet1/2) IP address so that return traffic is destined to the same firewall.

Health probes from the public load balancer pass through the firewall to the internal load balancer, ensuring that the path through the firewall is operational. The internal load balancer runs health probes against the servers in the web server farm and, if it has no operational servers, does not respond to the probes from the public load balancer. If the internal load balancer does not respond, that path is taken out of service.

The firewall security policy for inbound traffic should allow only those applications required (whitelisting). Firewall
security profiles should be programmed to inspect for malware and protect from vulnerabilities for traffic entering the
network allowed in the security policies.

Outbound Traffic
Program the web server route table so that the default route points to the firewall eth2 (ethernet1/2) interface. Program the firewall to protect the outbound traffic flows and the threats in the associated return traffic. The web servers in Availability Zone-a (AZ-a) are programmed to exit via the firewall in AZ-a, and the web servers in AZ-b exit via the firewall in AZ-b. This effectively maintains resilience for outbound and return traffic on an Availability Zone basis.

The firewalls apply source NAT to the outbound traffic, replacing the web server’s source IP address with the private
IP address of the firewall’s public interface eth1(ethernet1/1). As the traffic crosses the IGW, the private source IP ad-
dress is translated to the assigned AWS-owned public IP address. This ensures that the return traffic for the outbound
session returns to the correct firewall.
The firewall outbound-traffic policy typically includes UDP/123 (NTP), TCP/80 (HTTP), and TCP/443 (HTTPS). Set the
policy up for application-based rules using default ports for increased protection. DNS is not needed, because virtual
machines communicate to AWS name services directly through the AWS network fabric. You should use positive
security policies (whitelisting) to enable only the applications required. Security profiles prevent known malware and
vulnerabilities from entering the network in return traffic allowed in the security policy. URL filtering, file blocking, and
data filtering protect against data exfiltration.

East/West Traffic
The firewalls can be set up to inspect east/west traffic by using double NAT and programming applications to use NAT addresses inside the VPC. However, complexity increases as subnets are added, and this can cause friction with DevOps teams who must customize IP next-hop addresses for applications instead of using the assigned VPC addressing. Network ACLs can restrict traffic between subnets, but they are limited to standard Layer 4 ports for controlling the applications flowing between tiers. The transit design models, covered later in this guide, offer a more elegant and scalable way to direct east/west traffic through the firewall for visibility and control.


Backhaul or Management Traffic


To get traffic from on-premises connections to the internal servers, the VPN connections from on-premises gateways
connect to the firewalls in AZ-a and AZ-b. Depending on the number of on-premises gateways, a single gateway could
connect to both VPC-based firewalls, an HA pair of firewalls could connect to both VPC-based firewalls, or you could
program a-to-a and b-to-b VPN connections. The Single VPC deployment guide uses a single HA pair for the on-prem-
ises gateway terminating on each VM-Series in the VPC. The VPN peers are programmed to the eth1 public IP address-
es, but the VPN connection tunnel interfaces are configured to terminate on a VPN security zone so that policy for
VPN connectivity can be configured differently than outbound public network traffic. Security policies on the firewalls
only allow required applications through the dedicated connection to on-premises resources to the VPN security zone.

Figure 31 VPN connection to Single VPC

(The figure shows the on-premises HA firewall pair at 172.16.20.101 (HA-Active and HA-Standby) building VPN connections to the eth1 interfaces of the VM-Series firewalls in AZ-a (10.1.10.10) and AZ-b (10.1.110.10). The primary Panorama resides in the AZ-a management subnet 10.1.9.0/24 and the standby Panorama in the AZ-b management subnet 10.1.109.0/24; the web subnets 10.1.1.0/24 and 10.1.101.0/24 reach on-premises destinations through the firewalls' eth2 interfaces.)

Traffic from the private-IP-addressed web servers to the firewalls has the same resilience characteristics as the out-
bound traffic flows; servers in AZ-a use the firewall in AZ-a via the default route on the server pointing to the firewall,
and servers in AZ-b use the firewall in AZ-b as their default gateway. Static or dynamic route tables on the firewalls
point to the on-premises traffic destinations.

TRANSIT GATEWAY MODEL


The AWS Transit Gateway (TGW) service enables an organization to easily scale connectivity across thousands of VPCs,
AWS accounts, and on-premises networks. With the ability to connect to a single gateway rather than peering between
each of the VPCs, TGW simplifies connectivity between VPCs. The TGW acts as a hub controlling traffic routed
between spoke VPCs.


TGW supports two types of attachments: VPC and VPN. VPN attachments support dynamic routing and ECMP. VPC attachments support only static routing, without ECMP. You can add multiple VPN attachments to increase the number of VPN tunnels between the TGW and a VPC, and you can attach a VPC concurrently with both VPN and VPC attachments. With the VPC attachment type, TGW deploys a network interface per Availability Zone within the VPC. VPN attachments are IPsec VPN tunnels to CGWs in the VPC.

This guide describes two designs for providing a scalable, secure architecture for TGW:

• Multiple security VPCs with VPC-only attachments

• Multiple security VPCs with VPC and VPN attachments

This guide briefly covers the first design and then provides more detail about the second, which is the recommended
approach because it routes around failures faster with the use of dynamic routing and ECMP.

In both designs, multiple spoke VPCs are connected via a VPC attachment method. What differs is the attachments of
your security VPCs in the second design. The spoke VPCs can scale up to thousands of VPCs, and in both designs, you
deploy three dedicated security VPCs. Each security VPC has two VM-Series firewalls deployed, one per availability
zone. You can deploy additional VM-Series for scaling, which is covered in a later section. The VM-series firewalls in
this architecture segment traffic while preventing known and unknown threats. The security VPCs contain VM-Series
firewalls that control a specific set of traffic flows, such as inbound and return web traffic, as well as outbound and
east/west traffic flow.

You can connect your on-premises site or sites via AWS Direct Connect, VPNs, or both. A VPN connection has a limit
of 1.25Gbps. To overcome the VPN bandwidth limitation, you can use ECMP routing to aggregate multiple VPN
connections. A subnet per availability zone is allocated for the network interfaces of the VPC attachment.
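
A minimal boto3 sketch of a TGW VPC attachment that uses one dedicated attachment subnet per Availability Zone; all IDs are placeholders:

# Minimal sketch: attach a spoke VPC to the Transit Gateway, supplying one
# dedicated TGW-attachment subnet per Availability Zone.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=[
        "subnet-0123456789abcdef0",   # TGW attachment subnet, AZ-a
        "subnet-0123456789abcdef1",   # TGW attachment subnet, AZ-b
    ],
)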

Multiple Security VPCs with VPC-Only Attachments


In this design, three security VPCs (Security-In, Security-Out, and Security-East-West) are connected to the TGW with VPC attachments. With the VPC-only attachment method for the outbound and east-west security VPCs, you are limited to static routing with no ECMP support, and during an outage you must reprogram the static routes to point to an alternative firewall's network interface. You can do this manually or automate it by using AWS CloudWatch, AWS Lambda, and an AWS CloudFormation template script for detection and failover of the firewalls. This automation can take minutes, which is not ideal for many customers.


The advantage of the VPC attachment is a simple, high-bandwidth design with no VPN tunnels. The disadvantage is the current lack of ECMP support and of automatic failover for the firewalls securing outbound and east-west traffic flows. Failover, as mentioned above, requires reprogramming the static routes pointing to the firewall ENI, either manually or with automation.

Figure 32 Multiple security VPCs with VPC-only attachments


(The figure shows spoke VPCs 10.1.0.0/16 and 10.2.0.0/16, a Services VPC 10.3.0.0/16, and three dedicated security VPCs attached to the Transit Gateway with VPC attachments: Security-In 10.255.0.0/16 for inbound web and return traffic, Security-Out 10.254.0.0/16 for outbound-initiated traffic, and Security-East-West 10.253.0.0/16 for east-west traffic. The on-premises or colocation network 172.16.0.0/16 connects through VPN or Direct Connect. In this design, all of the security VPCs use VPC attachments, and the Security-In VPC uses an application load balancer.)

Multiple Security VPCs with VPC and VPN Attachments


This design is recommended over the VPC-only design because ECMP and dynamic routing let you quickly detect path failures and fail over in the security VPCs for outbound and east-west traffic. In this design, the inbound security VPC uses a VPC attachment because the load balancers in that VPC ensure path connectivity and rapid failover. The outbound and east-west security VPCs are connected via VPN attachments for data traffic and via VPC attachments for firewall management traffic, which ensures a much faster recovery time during an outage of an Availability Zone or VM-Series firewall.

Note

For information about the individual AWS charges per attachment type, see AWS Transit
Gateway pricing.


Even though you could deploy one security VPC with a pair of firewalls for all inbound, outbound, and east-west traffic,
this design is for scaling a large number of spoke VPCs, potentially thousands. The separation of traffic flows allows the
scaling up of that security function when needed. For example, you may need more firewalls for inbound and return
traffic than for outbound and east-west.

Figure 33 Multiple security VPCs with both VPC and VPN attachments
(The figure shows the same spoke VPCs (10.1.0.0/16, 10.2.0.0/16), Services VPC (10.3.0.0/16), and security VPCs as Figure 32, but the Security-Out (10.254.0.0/16) and Security-East-West (10.253.0.0/16) VPCs connect their data traffic to the Transit Gateway through VPN attachments, two eBGP tunnels per firewall, and keep VPC attachments for firewall management, while the Security-In VPC (10.255.0.0/16) remains on a VPC attachment. The on-premises or colocation network 172.16.0.0/16 connects through VPN or Direct Connect.)

TGW Design
The TGW design consists of a single-region deployment with one TGW deployed. At the time of this writing, you can deploy up to five TGWs per region. The TGW design uses two route tables (Spoke and Security) with all the necessary propagated routes. The TGW in this scenario has a VPC attachment for each spoke VPC, as well as:

• One VPC attachment for the inbound security VPC.

• Two VPN attachments for the outbound security VPC (two IPSec VPNs from each firewall to TGW).

• Two VPN attachments for the east-west security VPC (two IPSec VPNs from each firewall to TGW).

In addition to the mentioned attachments, there are also VPC attachments from both the outbound and east-west
security VPCs for the firewall management. This ensures connectivity to the firewalls if the VPN tunnels are down.

Routing
TGW route tables behave like route domains. You achieve segmentation of the network by creating multiple route
tables and associating VPCs and VPNs to them. You have the ability to create isolated networks, allowing you to steer
and control traffic flow between VPCs and on-premises connections. This design uses two TGW route tables: Security
and Spokes. The Security TGW route table has all of the routes propagated to it, so that all of the VPCs can be reached.
The TGW spoke routing table has routes to all the security VPCs but does not have direct routes to other spokes.
Spoke-to-spoke communication is routed to the VM-Series firewalls in the east-west security VPC.


Within a VPC, local route tables are used to route to the TGW. After they are in the TGW, TGW route tables are used
to route to a VPC through the associated attachment. TGW attachments are associated with a single TGW route table.
Each table can have multiple attachments.

You can configure static routes within the TGW route table, or you can use the VPC or VPN connections to propagate routes into the TGW route table. Routes that are propagated across VPN connections with BGP support ECMP.

VPC attachments don't currently support ECMP. Static routes within a VPC or a TGW route table allow only a single route of the same value pointing to a single next hop. This means you can't configure two default routes (for example, 0.0.0.0/0) in the same route table pointing to separate next hops.
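
The following minimal boto3 sketch shows the three operations this routing scheme relies on: associating an attachment with a TGW route table, propagating routes from an attachment, and adding a static route. All IDs and the example prefix are placeholders:

# Minimal sketch of TGW route table plumbing (placeholder IDs throughout).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Associate a spoke VPC attachment with the Spoke TGW route table.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",
)

# Propagate that spoke's routes into the Security TGW route table.
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef1",
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",
)

# Example static route (for instance, return traffic to the inbound security VPC).
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.255.0.0/16",
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef2",
)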

Note

Even though the TGW route tables can support up to 10,000 routes, there is still a BGP
prefix limitation of 100 prefixes per virtual gateway.

The following tables list the routes in the two TGW route tables.

Table 5 Spoke route table

Route Description
0.0.0.0/0 Each firewall advertises a default route over two equal cost paths to the outbound security VPC. This
is propagated via BGP as part of the VPN attachments. As you add more VPN attachments from the
security outbound VPC, this number increases.
10.0.0.0/12 Two 10.0.0.0/12 routes (covers the spoke VPCs 10.1.0.1-10.15.255.255) to the east-west security
VPC. This is propagated from the active firewall via BGP as part of the VPN attachments. There are
two and not four routes due to BGP AS path prepending on the firewall in AZ-b (covered in the east/
west section).
10.255.0.0/16 Route to the inbound security VPC, for return inbound traffic.
172.16.0.0/16 Route to the on-premises network. Number of routes is determined by the number of tunnels.

Table 6 Security route table

Route Description
0.0.0.0/0 Four default routes to the outbound security VPC. This is propagated via BGP as part of the VPN at-
tachments. As you add more VPN attachments from the security outbound VPC, this number increas-
es.
10.1.0.0/16 Route to the spoke VPC named VPC1.
10.2.0.0/16 Route to the spoke VPC named VPC2.
10.3.0.0/16 Route to the Services VPC where Panorama is deployed.
10.253.0.0/16 Route to the east-west security VPC.
10.254.0.0/16 Route to the outbound security VPC.
10.255.0.0/16 Route to the inbound security VPC, for return inbound traffic.
172.16.0.0/16 Route to the on-premises network. Number of routes is determined by number of tunnels.


Spoke VPCs
For this reference architecture, web and database servers are distributed across the spoke VPCs. Use an AWS internal load balancer to balance web traffic between the web servers in VPC1. The VM-Series firewalls in the east-west security VPC inspect traffic between VPC1 and VPC2. In the TGW, propagate the spoke VPC routes only into the Security route table, not into the Spoke route table. This ensures that traffic matching the 10.0.0.0/12 routes (spoke VPCs) flows to the VM-Series firewalls in the east-west security VPC.

In this design, each spoke VPC has a default route in the VPC routing table pointing to the TGW as the next hop. The
TGW route table for the Spoke VPCs has the routes mentioned previously in the Routing section, to allow the spoke
VPCs to reach the security VPCs and the on-premises network. For routing between the spoke VPCs, they have to
route via the firewalls in the east-west security VPC.
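
A minimal boto3 sketch of that spoke default route, with placeholder IDs:

# Minimal sketch: give a spoke VPC route table a default route toward the
# Transit Gateway so all non-local traffic is handed to the TGW.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",      # spoke VPC subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayId="tgw-0123456789abcdef0",
)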

Inbound Traffic
This design uses a dedicated security VPC with VM-Series firewalls for inbound traffic. The VPC for inbound traffic is connected to the TGW through a VPC attachment. In this VPC, deploy two VM-Series firewalls and an AWS application load balancer to distribute incoming traffic to the two firewalls, and use an IGW for internet access. Deploy the firewalls in separate AZs and configure them to use source NAT and forward inbound traffic to the internal load balancer in the Spoke1 VPC, where the web servers reside.

The VPC for inbound traffic uses eight subnets (four per AZ), consisting of Public, Private, Management, and TGW attachment subnets. For TGW, AWS recommends dedicated subnets for the TGW attachments per AZ. For connectivity within the VPC, deploy four route tables, each having a local route for the VPC (10.255.0.0/16). The following table lists the associated routes.

Table 7 Inbound security VPC routes

Route table name     Routes          Route description
Security-In-to-IGW   0.0.0.0/0       Route table for the public subnets, with a default route pointing to
                     10.255.0.0/16   the IGW and a local VPC route
Security-In-to-TGW   10.255.0.0/16   Local VPC route only; needed for the subnets of the TGW ENIs across
                                     all AZs in use with the VPC attachment
Security-In-Priv     10.0.0.0/8      Route table for the private subnet, with a 10.0.0.0/8 route to reach
                     10.255.0.0/16   any spoke VPC, as well as the local VPC route
Security-In-Mgmt     0.0.0.0/0       Route table for the management subnets, for the firewalls to reach
                     10.255.0.0/16   Panorama as well as the outbound VPC for software and content updates


In Figure 34, web traffic is directed to the DNS name of the application load balancer, which is configured with the two firewalls (firewall-a and firewall-b) as targets. The load balancer provides a resilient architecture for inbound traffic with the advantage of spreading traffic sessions across the firewalls. The firewalls translate the source IP address as traffic egresses the firewall to ensure return traffic comes back to the same firewall and to avoid asymmetric routing. The firewalls are configured with App-ID based security policies to determine which applications and traffic are permitted. In this design, the firewalls send web traffic to the internal load balancer in the Spoke1 VPC and are configured with a 10.0.0.0/8 route via the gateway of the private subnet.

Figure 34 Security-In VPC


(The figure shows the Security-In VPC, 10.255.0.0/16. Inbound web traffic arrives at the application load balancer in the public subnets (10.255.100.0/24 in AZ-a and 10.255.200.0/24 in AZ-b), is sent to the firewall in each AZ, and is forwarded from the private subnets (10.255.11.0/24, 10.255.12.0/24) across the TGW attachment subnets (10.255.1.0/24, 10.255.2.0/24) to the load balancers in the spoke VPCs (Spoke 1 at 10.1.0.0/16 and Spoke 2 at 10.2.0.0/16). Return traffic comes back to the same firewall because of SNAT. The management subnets are 10.255.110.0/24 and 10.255.120.0/24.)

After traffic leaves the inbound security VPC, the TGW has a route in the Security TGW route table for 10.1.0.0/16 pointing to VPC1. After traffic enters VPC1, it is directed to the internal load balancer, which uses a round-robin method to balance the traffic across operational web servers. Return traffic uses a default route pointing to the TGW. Once in the TGW, the Spoke route table has a route to 10.255.0.0/16, which is the source network address translation (SNAT) IP address range for the inbound security VPC. Once in the inbound security VPC, traffic flows back to the firewall that applied SNAT to the flow.

Outbound Traffic
This design uses a dedicated security VPC with two VM-Series firewalls for outbound traffic, which is traffic that originates within the AWS VPCs and is destined for the internet. You connect the VPC to the TGW through two VPN attachments that map to the two firewalls deployed in the outbound security VPC. Each firewall is a CGW and has two IPsec tunnels as part of its VPN attachment. Each firewall is in a separate AZ and is configured with SNAT to guarantee that return traffic passes through the same firewall. The VPN tunnels advertise default routes into the two TGW route tables. You use an IGW in the VPC for outbound internet traffic.

The VPC uses six subnets (three per AZ), consisting of Public, Management, and TGW attachment subnets. The Management and TGW subnets are for management of the firewalls through VPC attachments, discussed later in the management section. For connectivity within the VPC, deploy three route tables, one for the VPN attachments and two for management. The VPC has a default route 0.0.0.0/0 that points to the IGW for the VPN tunnels. The default route and the local route 10.254.0.0/16 exist in a VPC route table called Security-Out-IGW.
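
A minimal boto3 sketch of one firewall's side of this connectivity: register the firewall as a customer gateway and create a VPN attachment from the TGW to it. The public IP, ASN, and TGW ID are placeholders, and the call shape assumes the TransitGatewayId option of CreateVpnConnection:

# Minimal sketch: one VM-Series firewall as a CGW with a two-tunnel VPN
# attachment to the Transit Gateway, using BGP for dynamic routing.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",     # EIP of the firewall's public (eth1) interface
    BgpAsn=65254,                # the firewall's BGP AS
)

ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    TransitGatewayId="tgw-0123456789abcdef0",
    Options={"StaticRoutesOnly": False},   # BGP runs over both tunnels
)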


In Figure 35, internet-bound traffic originating in any of the AWS VPCs is routed to the outbound VPC. This VPC is attached via two VPN attachments, one to each firewall. The firewalls are configured with SNAT so that return traffic comes back through the same firewall, and with App-ID based security policies to determine which applications and traffic can ingress or egress the firewalls.

Figure 35 Security-Out VPC

(The figure shows the Security-Out VPC, 10.254.0.0/16. Each firewall, in the public subnets 10.254.100.0/24 (AZ-a) and 10.254.200.0/24 (AZ-b), terminates a VPN attachment of two tunnels with eBGP between the firewall (AS 65254) and the Transit Gateway (AS 64512). The management subnets 10.254.110.0/24 and 10.254.120.0/24 reach the TGW over a VPC attachment through the TGW subnets 10.254.1.0/24 and 10.254.2.0/24.)

Each VPN attachment consists of two tunnels, and eBGP is used for dynamic routing. The TGW route tables are propagated with a default route 0.0.0.0/0 from four VPN endpoints, two per CGW. ECMP and dynamic routing using BGP are supported on the VPN attachment type, providing BGP peer detection and failover and making this a better choice than a VPC-only attachment for outbound traffic.

The VPC attachment shown in the figure is only for management traffic from and to the firewalls and is covered in the
management section.

East/West Traffic
This design uses a dedicated security VPC with VM-Series firewalls for east-west traffic, which is the traffic flowing between different spoke VPCs. This is not mandatory for all spoke VPCs, only for the VPCs whose VPC-to-VPC traffic you want to inspect.

You connect the VPC to the TGW through two VPN attachments which map to the two firewalls deployed in the east-
west security VPC. Each firewall is a CGW and has two IPSec tunnels as part of each VPN attachment. Each firewall
is deployed in a separate AZ, and they are deployed as active/standby, configured this way through BGP AS path
pre-pending. The VPN tunnels advertise 10.0.0.0/12 routes per VPN tunnel into the two TGW routing tables.

In this VPC, deploy six subnets (three per AZ), consisting of Public, Management, and TGW attachment subnets. The Management and TGW subnets are for management of the firewalls through VPC attachments, discussed later in the management section. For connectivity within the VPC, deploy three route tables, one for the VPN attachments and two for management. The VPC has a default route 0.0.0.0/0 that points to the IGW for the VPN tunnels. The default route and the local route 10.253.0.0/16 exist in a route table called Security-East-West-to-IGW.

In Figure 36, traffic between spoke VPCs is routed to the east-west security VPC for inspection and policy enforcement by the active VM-Series firewall. This VPC is attached via two VPN attachments, one to each firewall. The design is architected to work without NAT, because running NAT internally can be undesirable to application and database teams. To ensure traffic uses the same firewall and avoid asymmetric routing, the east-west firewalls use an active/standby architecture. This is achieved with BGP AS path prepending, which forces the firewall in AZ-b to advertise a longer AS path than the firewall in AZ-a, so the firewall in AZ-a is preferred. The firewalls are configured with App-ID based security policies to determine which applications and traffic can ingress or egress the firewalls.

Figure 36 Security-east-west VPC


(The figure shows the Security-East-West VPC, 10.253.0.0/16. The firewall in AZ-a (public subnet 10.253.100.0/24) is active and the firewall in AZ-b (public subnet 10.253.200.0/24) is standby; each terminates a VPN attachment of two tunnels with eBGP between the firewall (AS 65253) and the Transit Gateway (AS 64512). Spoke-to-spoke traffic between Spoke 1 (10.1.0.0/16) and Spoke 2 (10.2.0.0/16) is routed to this VPC. The management subnets 10.253.110.0/24 and 10.253.120.0/24 use a VPC attachment through the TGW subnets 10.253.1.0/24 and 10.253.2.0/24.)

Each VPN attachment consists of two tunnels, and eBGP is used for dynamic routing. The TGW route tables are propagated with a 10.0.0.0/12 route from four VPN endpoints, but only the two routes from the active firewall are installed in the route table while the firewall in AZ-a is active. ECMP and dynamic routing using BGP are supported on the VPN attachment type, providing BGP peer detection and failover and making this a better choice than a VPC attachment with static routing.

The VPC attachment shown in the figure is only for management traffic from and to the firewalls and is covered in the
management section.

Backhaul to On-Premises Traffic


You achieve backhaul to on-premises with either Direct Connect Gateway or VPN connections. For on-premises
locations, you should deploy firewalls to inspect ingress and egress traffic to AWS. You configure Direct Connect
connections to TGW in the AWS Direct Connect console. VPN connections are connected to the TGW as VPN attach-
ments, just like the outbound and east-west security VPCs. An AWS Site-to-Site VPN connection per CGW provides
two VPN endpoints (tunnels) for automatic failover. You can configure additional tunnels to increase the bandwidth as
TGW supports ECMP.


As shown in Figure 37, you can backhaul to TGW with Direct Connect either directly in a colocation facility or from
on-premises as a service through a WAN provider. Dual connectivity is recommended for resiliency.

Figure 37 Backhaul with Direct Connect Gateway

(The figure shows the on-premises or colocation firewalls (HA-Active and HA-Standby, BGP AS 65001) peering with the Transit Gateway (BGP AS 64512) over Direct Connect, using either direct fiber in a colocation facility or circuits over a WAN provider from the premises. The on-premises network is 172.16.0.0/16.)

Figure 38 shows VPN connectivity from on-premises to TGW via VPN. This consists of a VPN attachment per CGW
made of two tunnels per attachment.

Figure 38 Backhaul with VPN

(The figure shows the on-premises or colocation firewalls (HA-Active and HA-Standby, BGP AS 65001) connecting to the Transit Gateway (BGP AS 64512) with IPsec VPNs and BGP peering, one VPN attachment per CGW and two tunnels per attachment. The on-premises network is 172.16.0.0/16.)

Management Traffic
In this scenario, you have deployed Panorama in an active/standby configuration in a separate VPC called Services. You can deploy Panorama in any VPC, on-premises, or in both locations. In this deployment, Panorama is used for management of the firewalls, and Cortex Data Lake is used for logging.

The firewalls are all deployed with a management interface in a management subnet that can route to Panorama as well as to Palo Alto Networks for software and content updates. The firewalls also need connectivity to subscription services and to Cortex Data Lake for logging. If access is required from internet-accessible IP addresses, you can use AWS EIPs on Panorama and the firewall management ports.

You use the VPC attachment method for management traffic so that you can still reach the firewalls if the VPN tunnels are down. You could manage the firewalls over the VPN tunnels, but management of the firewalls would be lost if the VPN tunnels went down for any reason.


All three security VPCs have a Management subnet per AZ as well as the TGW subnet for the VPC attachments. As
mentioned previously, for TGW it is recommended to have a separate subnet for the TGW attachments per AZ. For
management connectivity for the VPC, the management route tables all have a local route for the VPC. The following
table lists the associated routes.

Table 8 Management routes

VPC                  Route table name          Routes          Route description
Security-In          Security-In-Mgmt          0.0.0.0/0       Management route table with a default route
                                               10.255.0.0/16   pointing to the TGW and a local VPC route
Security-In          Security-In-TGW           10.255.0.0/16   Local VPC route; subnet for the TGW attachments
Security-Out         Security-Out-Mgmt         0.0.0.0/0       Management route table with a default route
                                               10.254.0.0/16   pointing to the TGW and a local VPC route
Security-Out         Security-Out-TGW          10.254.0.0/16   Local VPC route; subnet for the TGW attachments
Security-East-West   Security-East-West-Mgmt   0.0.0.0/0       Management route table with a default route
                                               10.253.0.0/16   pointing to the TGW and a local VPC route
Security-East-West   Security-East-West-TGW    10.253.0.0/16   Local VPC route; subnet for the TGW attachments

Scaling
TGW is very scalable: you can have up to 5,000 VPC attachments today, allowing thousands of VPCs to be connected to the TGW. You can also deploy up to five TGWs per region.

To scale the security solution further the following can be achieved:

• Inbound security VPC—You can easily add additional firewalls because a load balancer is deployed to steer
traffic to the firewalls and SNAT is used for return traffic. You could do so by deploying firewalls in additional
AZs or within the two existing AZs.

• Outbound security VPC—You can easily add additional firewalls because you have deployed SNAT and VPN
attachments that support ECMP and dynamic routing. You could do so by deploying firewalls in additional AZs
or within the two existing AZs.

• East-west security VPC—Because NAT is not in use on east-west traffic flows, careful planning is needed for IP subnet allocation and for understanding which IP prefixes can be grouped together to steer certain routes to separate pairs of firewalls. You can add firewalls by advertising defined prefixes for subsets of VPCs that need east-west inspection before connecting to each other, for example 10.0.0.0/14 for VPC1-VPC3 and 10.4.0.0/14 for VPC4-VPC7. If SNAT is an option, scaling is simplified because you can use both firewalls.


TRANSIT VPC MODEL


When your AWS infrastructure grows in number of VPCs due to scalability or the desire to segment east/west traffic
between VPCs, the question of how to secure your infrastructure arises. One option is to continue using the Single
VPC design with a VM-Series in each VPC. The cost and management complexity of firewalls required grows linearly
with the number of VPCs. A popular option is the Transit VPC architecture. This design uses a hub and spoke architec-
ture consisting of a central Transit VPC hub (which contains firewalls) and Subscriber VPCs as spokes. The Transit VPC
provides:

• Common security policy for outbound traffic of Subscriber VPCs.

• An easy way to insert east/west segmentation between Subscriber VPCs.

• A fully resilient traffic path for outbound and east-west traffic with dynamic routing.

The architecture provides VM-Series security capabilities on AWS while minimizing the cost and complexity of deploying firewalls in every VPC. You can place common infrastructure services such as DNS, identity, and authentication in the Transit VPC; however, most choose to keep the Transit VPC less complicated and to locate these services in a Services VPC, which is connected to the Transit VPC for ubiquitous access.

A Transit VPC easily accommodates organizations that want to distribute VPC ownership to different business functions (engineering, sales, marketing, and so on) while maintaining common infrastructure and security policies, as well as organizations that wish to consolidate ad hoc VPC infrastructure within a common infrastructure. The Transit VPC architecture provides a scalable infrastructure for east/west inspection of traffic between VPCs that minimizes friction with DevOps. Within their VPC, the business or DevOps teams can move rapidly with changes, and when transactions need to go between VPCs or to the internet, common or per-VPC policies can be applied in the Transit VPC. You can automate Transit VPC security policy assignment to endpoint IP addresses by assigning tags to AWS application instances (VMs) and using Panorama to monitor the VMs in a VPC. Panorama maps the assigned tags to dynamic address groups that are tied to the security policy, providing a secure, scalable, and automated architecture.
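
Tagging is the only AWS-side step; the following minimal boto3 sketch applies placeholder tags that a Panorama dynamic address group could then match (the tag scheme shown is an assumption, not a prescribed naming standard):

# Minimal sketch: tag application instances so Panorama VM monitoring can
# map them into dynamic address groups referenced by security policy.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0", "i-0123456789abcdef1"],
    Tags=[
        {"Key": "app-tier", "Value": "public-web"},      # matched by a dynamic address group
        {"Key": "environment", "Value": "production"},
    ],
)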

Figure 39 Panorama VM Monitoring for dynamic address group mapping

(The figure shows Panorama monitoring AWS compute resources over the XML API, learning group membership from instance tags (for example, 10.1.1.2, 10.4.2.2, 10.8.7.3) and mapping it to dynamic address group definitions used in security policy, such as: Public Web Servers to DB Servers over MSSQL-DB, Admin Servers to Sharepoint Servers over MS-RDP, and Sharepoint Servers to DB Servers over SSH.)


Transit VPC builds upon the Single VPC, using the same design of two VM-Series firewalls, one in each of two Availability Zones. Subscriber VPCs are connected via an IPsec overlay network consisting of a redundant pair of VGW connections (in each VPC) that connect to each VM-Series firewall within the Transit VPC. Subscriber VPN tunnels terminate in the Subscr security zone. Dynamic routing using BGP provides fault-tolerant connectivity between the Transit VPC and the Subscriber VPCs.

Outbound traffic uses source NAT as in the Single VPC design, and east/west traffic between VPCs passes through the Transit VPC firewalls for policy enforcement.

Figure 40 Transit VPC design model

(The figure shows the Transit VPC with firewall fw1-2a in AZ-a (BGP AS 64827, public subnet 10.101.0.0/24, management subnet 10.101.1.0/24) and firewall fw1-2b in AZ-b (BGP AS 64828, public subnet 10.101.2.0/24, management subnet 10.101.3.0/24). Subscriber VPC#1, VPC#2, and VPC#3 terminate their VGWs (BGP AS 64512, 64513, and 64514) on both firewalls over IPsec, and the subscriber tunnels land in the Subscr security zone.)


Differences from the Single VPC design model:


• VGW and BGP routing—The Subscriber VPCs communicate over IPsec VPN tunnels to the Transit VPC firewalls. The VGWs in the Subscriber VPCs originate the VPN tunnels and connect to each Transit firewall. BGP is used to announce VPC routes to the Transit VPC and to learn the default route from the Transit VPC firewalls. Each VGW has a BGP-over-IPsec peering session to one firewall in each Availability Zone.
• Inbound—Transit VPC does not use load balancers for inbound traffic in this design. Inbound traffic on the Transit VPC can be designed to reach any VPC behind it; however, it is preferred that large-scale, public-facing inbound web application firewalls and front ends be deployed in the spoke VPC with the application front end, using the load-balancer sandwich design. In this way, the Single VPC can remain the template design for VPCs hosting web front ends, and the Transit VPC is used for outbound and east/west traffic for all VPCs. With the firewall load distributed this way, the Transit firewalls can scale to handle more east/west and outbound flows.
• Outbound—Outbound traffic from all VPCs uses VPN links to the Transit VPC for common and resilient outbound access.
• East/west—This traffic follows routing to the Transit VPC, where security policy controls VPC-to-VPC traffic.
• Backhaul—Backhaul to on-premises connections is delivered to the Transit VPC for shared-service access to associated VPCs.

VGW and BGP Routing


The Transit VPC design relies on BGP routing to provide a resilient and predictable traffic flow to/from Subscriber
VPCs and the Transit VPC firewalls. Subscriber VGWs’ implementation of BGP evaluates several criteria for best path
selection. After a path is chosen, all traffic for a given network prefix uses this single path; VGW’s BGP does not
support equal cost multipath (ECMP) routing, so only one destination route is installed; the other link is in standby. This
behavior is most evident for outbound internet traffic from a Subscriber VPC, because all internet traffic takes a single
path regardless of connectivity. There is the ability to influence the path selected to better distribute Subscriber VPC
traffic across each VM-Series firewall within the Firewall Services VPC; however, most designs favor one route for all
traffic over one Transit firewall and have the ability to failover to the other path and firewall in the event of an outage
or scheduled maintenance.

Note

The BGP timers that the AWS VGW uses for keepalive interval and hold-time are set
to 10 and 30 respectively. This provides an approximately 30 second failover for the
transit network versus the default BGP 180 seconds. The BGP timers on the firewalls
are set to match the AWS VGW settings.

Intentionally biasing all traffic to follow the same route to the Transit VPC provides a measurable way to predict when the overall load of the existing Transit firewall will reach its limit and require you to build another for scale; scaling is discussed later in this section. Another benefit of deterministic traffic flow through the Transit VPC firewalls is that it eliminates the need to use NAT on east/west traffic to ensure that return traffic comes through the same firewall(s). The Transit VPC design follows an active/standby design in which all firewalls in Availability Zone-a are the preferred path and all firewalls in Availability Zone-b are the backup path, by using BGP path selection.


BGP Path Selection is a deterministic process based on an ordered list of criteria. The list is evaluated top-down until
only one “best” path is chosen and installed into the VGW routing table. The VGW uses the following list for path
selection criteria:

• Longest prefix length

• Shortest AS Path length

• Path origin

• Lowest multi-exit discriminator (MED)

• Lowest peer ID (IP address)

The VPN connection templates provided by AWS leave the first four criteria the same for every configuration, so the default behavior is for path selection to choose the BGP peer (firewall) with the lowest IP address. The peer IP addresses are those assigned to the firewall tunnel interfaces and specified by AWS. BGP peer IP address is the least significant decision criterion and one you cannot change to influence path selection. If the default peer IP address determines path selection, you will have a random distribution of traffic paths, which is undesirable. There are ways to influence the path selection and achieve a predictable topology.
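
To make the ordering concrete, the following illustrative Python sketch ranks hypothetical candidate paths using the criteria above; it is a simplification of BGP best-path selection, not AWS's implementation, and the route data is invented:

# Minimal sketch: rank candidate paths with the ordered criteria listed
# above, breaking final ties on the lowest peer ID (illustrative data only).
import ipaddress

def best_path(paths):
    """Return the preferred path from a list of candidate dicts."""
    origin_rank = {"igp": 0, "egp": 1, "incomplete": 2}
    return min(
        paths,
        key=lambda p: (
            -int(p["prefix"].split("/")[1]),           # longest prefix length wins
            len(p["as_path"]),                         # shortest AS path wins
            origin_rank[p["origin"]],                  # igp preferred over egp over incomplete
            p["med"],                                  # lowest MED wins
            int(ipaddress.ip_address(p["peer_id"])),   # lowest peer ID wins
        ),
    )

candidates = [
    {"prefix": "0.0.0.0/0", "as_path": [65000],
     "origin": "incomplete", "med": 0, "peer_id": "169.254.49.1"},
    {"prefix": "0.0.0.0/0", "as_path": [65001, 65001, 65001],   # prepended backup path
     "origin": "incomplete", "med": 0, "peer_id": "169.254.48.1"},
]
print(best_path(candidates))   # the un-prepended path wins on shorter AS path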

Prefix Length
Network efficiency and loop avoidance require that hosts and routers always choose the longest network prefix for
destination path selection. Prefix length can provide information regarding desired routing policy but is most often
related to actual network topology. This architecture uses prefix length and BGP AS Path to ensure symmetric path
selection in the absence of multi-path support by the VGW for connectivity between Firewall Services VPC and
Subscriber VPCs.

The easiest way to consider this is to think of the two Availability Zones in your Firewall Services VPC as separate BGP routing domains within a single autonomous system. VM-Series firewalls within the Transit VPC should announce only directly connected routes and a default route. Avoid BGP peering between firewalls within your Transit VPC, because peering creates an asymmetric path for traffic arriving on the tunnel that the VGW is not using for outbound traffic.

AS Path Length
Autonomous system (AS) path length is the number of autonomous systems that a route traverses from its origin. Because your BGP AS is directly adjacent to that of the VGW, the AS path length is always 1 unless the firewall modifies it when advertising routes. If you examine the AS path, you see your private AS associated with all routes you announce to the VGW: {65000}. Prepending adds the local AS to the path as many times as the configuration indicates; a prepend of 3 results in {65000, 65000, 65000, 65000}. You modify AS path length as part of an export rule under BGP router configuration in the firewall. An export rule has three configuration tabs:

• General tab
Used by—Select BGP peer(s) for which you want to modify exported routes

• Match tab
AS Path Regular Expression—Type the private AS used in your BGP configuration (65000)

• Action tab
AS Path—Type=Prepend, type the number of AS entries you wish to prepend (1,2,3, etc.)


The Transit VPC reference design uses a unique BGP autonomous system for each firewall in the Transit VPC, applies an AS prepend of 2 on the firewalls in the backup path (Availability Zone-b), and applies no prepend on the firewalls in the active path (Availability Zone-a).
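As a simple illustration of the effect, the following sketch builds the AS path that each Availability Zone would advertise under these settings; the AS numbers are hypothetical examples, not values mandated by the design.

def advertised_as_path(local_as, prepend_count):
    # The local AS appears once by default; each prepend adds it again.
    return [local_as] * (1 + prepend_count)

print(advertised_as_path(64827, 0))  # AZ-a (active path):  [64827]
print(advertised_as_path(64828, 2))  # AZ-b (backup path):  [64828, 64828, 64828]

Because the backup advertisement appears two AS hops longer, the VGW's shortest-AS-path criterion prefers the Availability Zone-a firewall whenever its routes are present.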

Path Origin
BGP path origin refers to how a route was originally learned and can take one of three values: incomplete, egp, or igp. A route learned from an interior gateway protocol (igp) is always preferred over other origins, followed by exterior gateway protocol (egp), and lastly incomplete. By default, AWS VPN connection templates announce routes with incomplete origin. Under BGP router configuration in the firewall, you can modify path origin with either an export rule or a redistribution rule. The default route (0.0.0.0/0) is advertised to the VGW through a redistribution rule in the firewall, which is the easiest place to modify path selection for the default route. Changing the origin to either igp or egp causes this path to be preferred over the path through the other firewall. AWS does not document the use of path origin as part of route selection, so this design leaves origin at its default.

Multi-Exit Discriminator
A multi-exit discriminator (MED) is a way for an autonomous system to influence external entities' path selection into a multi-homed autonomous system. MED is a 32-bit number, and a lower value is preferred. MED is an optional configuration parameter, and a MED of <none> and a MED of 0 are equivalent. Under BGP router configuration in the firewall, you can modify MED with export rules and redistribution rules. AWS does not document the use of MED as part of route selection; however, it responds to MED values set by the firewalls when announcing routes. Because this design uses a unique BGP AS per firewall in the Transit VPC and a unique BGP AS per VGW, MED is not used for path selection.


Lowest Peer-ID (IP address)


If multiple candidate paths remain after the selection criteria discussed above, the AWS BGP route-selection process chooses the path from the BGP peer with the lowest IP address. For IPsec tunnels, these are the IP addresses assigned by AWS. The default behavior for the AWS-provided VPN connection configuration templates uses the lowest peer-ID for route selection.

Figure 41 BGP path selection process

Candidate routes:
  Prefix         AS Path            Origin       MED   Peer-ID
  10.8.0.0/24    65000,7224         incomplete   0     169.254.49.1
  10.8.0.0/24    65000,65000,7224   incomplete   0     169.254.49.145
  10.8.0.0/24    65000,7224         incomplete   0     169.254.58.30
  10.8.0.0/16    65000,7224         incomplete   0     169.254.59.10

After longest prefix length:
  10.8.0.0/24    65000,7224         incomplete   0     169.254.49.1
  10.8.0.0/24    65000,65000,7224   incomplete   0     169.254.49.145
  10.8.0.0/24    65000,7224         incomplete   0     169.254.58.30

After shortest AS path length:
  10.8.0.0/24    65000,7224         incomplete   0     169.254.49.1
  10.8.0.0/24    65000,7224         incomplete   0     169.254.58.30

After path origin (no change):
  10.8.0.0/24    65000,7224         incomplete   0     169.254.49.1
  10.8.0.0/24    65000,7224         incomplete   0     169.254.58.30

After lowest MED (no change):
  10.8.0.0/24    65000,7224         incomplete   0     169.254.49.1
  10.8.0.0/24    65000,7224         incomplete   0     169.254.58.30

After lowest peer-ID (best path):
  10.8.0.0/24    65000,7224         incomplete   0     169.254.49.1


Inbound Traffic
The Transit VPC design model prefers to host the firewalls and load balancers for inbound web applications in the Subscriber VPC with the application front end, using the load-balancer sandwich design. In this way, the Single VPC design can remain the template for VPCs hosting web front ends, with the associated firewall pair in that VPC providing inbound traffic protection, while the Transit VPC is used for outbound and east/west traffic for those VPCs and all others.

Figure 42 Transit VPC with inbound in Subscriber VPC

(Diagram: the inbound web server VPC hosts the load balancer, firewalls, and web servers for inbound traffic in AZ-a and AZ-b, while the Transit VPC firewalls carry outbound and east/west traffic for that VPC and for Subscriber VPC#1.)

If required, you can design inbound traffic on the Transit VPC to reach any VPC behind the Transit VPC. For inbound traffic via the Transit VPC, the front-end private IP address on the Transit firewall is mapped to a public IP address via the IGW. The IGW provides destination NAT, as usual, to a firewall secondary IP address. When the firewall receives the traffic, it applies both source and destination NAT. Source NAT is required in this scenario in order to ensure that the return traffic remains symmetric through the receiving firewall. If you did not apply source NAT, the VGW routing table in the VPC might choose the other firewall for its default route back to the internet, and the session would fail due to asymmetry. The return traffic from the Subscriber instance is sent back to the firewall's translated source address to match the existing session, and the reciprocal source and destination NAT translations are applied. Lastly, the IGW modifies the source address to the public Elastic IP address. Note that for inbound traffic, the originating host IP address is opaque to the destination Subscriber VPC instance, because the firewall translated the source; all inbound traffic appears to originate from the same source host. Security policies on the firewalls control which applications can flow inbound through the Transit firewalls.
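The following sketch, using hypothetical addresses, traces the sequence of translations described above for a single inbound packet; it illustrates the flow only and is not firewall NAT configuration.

# All addresses are hypothetical examples.
packet = {"src": "198.51.100.20", "dst": "203.0.113.10"}   # client -> Elastic IP

# 1. IGW: destination NAT from the Elastic IP to the firewall's secondary private IP.
packet["dst"] = "10.101.0.11"

# 2. Firewall: destination NAT to the server in the Subscriber VPC, plus source NAT
#    to the firewall itself so return traffic comes back through the same firewall.
packet["dst"] = "10.2.1.50"
packet["src"] = "10.101.0.10"

print(packet)  # the server sees the firewall, not the original client, as the source

On the return path, the translations are reversed in the opposite order, ending with the IGW rewriting the source to the public Elastic IP address.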


Outbound Traffic
Outbound traffic requires only source NAT on the firewall in order to ensure traffic symmetry. The VGW BGP default route determines the outbound direction of all internet traffic and, in doing so, the Transit VPC firewall and the associated source NAT IP address used for outbound traffic. An endpoint initiates outbound traffic to the desired internet IP address, and then:

• The VGW forwards it to the firewall.

• At the firewall, the outbound packet’s source IP address is translated to a private IP address on the egress
interface.

• At the AWS IGW, the outbound packet’s source IP address is translated to a public IP address, and the packet is
forwarded to the internet.

The traffic policy required for outbound traffic includes UDP/123 (NTP), TCP/80 (HTTP), and TCP/443 (HTTPS). Set up the policy with application-based rules that use default ports for increased protection. A DNS rule is not needed, because virtual machines communicate with AWS name services directly through the AWS network fabric. You should use positive security policies (whitelisting) to enable only the applications required. Security profiles prevent known malware and vulnerabilities from entering the network in return traffic allowed by the security policy. URL filtering, file blocking, and data filtering protect against data exfiltration.
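A minimal sketch of such a positive outbound rulebase, expressed here as data purely for illustration (the actual rules are configured on the firewall or pushed from Panorama), could look like the following; the rule names are hypothetical.

outbound_rules = [
    {"name": "allow-ntp", "application": "ntp",          "service": "application-default", "action": "allow"},
    {"name": "allow-web", "application": "web-browsing", "service": "application-default", "action": "allow"},
    {"name": "allow-ssl", "application": "ssl",          "service": "application-default", "action": "allow"},
    # Anything not explicitly allowed above is denied by the positive security model.
]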

You can statically map endpoint IP addresses in a security policy to a VPC subnet or to the entire VPC IP address range. To reduce administration and adapt to a more dynamic continuous-development environment, you can automate endpoint IP address assignment in security policy by assigning tags to AWS application instances (VMs) and using Panorama to monitor the VMs in a VPC. When an instance is initialized, Panorama maps the assigned tags to dynamic address groups that are tied to the security policy, and then Panorama pushes the mappings to the firewalls in the Transit VPC. In this way, the members of a security policy can exist in any number of VPCs or IP address ranges.
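As an illustration of the tag-to-address mapping that Panorama's VM Monitoring automates, the following sketch uses boto3 directly to gather the private IP addresses of running instances that carry a hypothetical tier=web tag; it is not the Panorama mechanism itself, only an approximation of the lookup it performs.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an example

def addresses_for_tag(key, value):
    """Return the private IPs of running instances tagged key=value."""
    ips = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(Filters=[
        {"Name": f"tag:{key}", "Values": [value]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                ips.append(instance.get("PrivateIpAddress"))
    return ips

print(addresses_for_tag("tier", "web"))  # members of a hypothetical "web" address group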

East/West Traffic
Functional segmentation by application tier (such as web, business, or database) or by business unit is done on a per-VPC basis. In this way, east/west traffic, or traffic between VPCs, flows through the Transit VPC firewalls. The east/west traffic follows the default route, learned by the VGW in the VPC, to the Transit firewalls, where all internal routes, as well as the path to the internet, are known. It is possible to advertise each VPC's routes to all other VPCs; however, in large environments this can overload the route tables in the VPCs. A VPC route table is limited to 100 BGP-advertised routes, so aggregate routes or the default route may be required in larger environments.
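As an example of keeping the advertisement count down, the following sketch, using hypothetical prefixes, collapses per-subnet routes into aggregates with Python's ipaddress module.

import ipaddress

subnet_routes = [
    "10.2.0.0/24", "10.2.1.0/24", "10.2.2.0/24", "10.2.3.0/24",
    "10.3.0.0/24", "10.3.1.0/24", "10.3.2.0/24", "10.3.3.0/24",
]

aggregates = ipaddress.collapse_addresses(
    ipaddress.ip_network(prefix) for prefix in subnet_routes
)
# Eight /24 routes collapse into two /22 aggregates, leaving plenty of headroom
# under the 100-route limit.
print(list(aggregates))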

As with outbound traffic, all east/west traffic flows over the same links and across the same Transit VPC firewalls for
predictable load measurement and troubleshooting. For security policy grouping, each VPC link to the Transit firewall
is placed into a common Subscr security zone on the firewall. This design uses a single security zone with multiple
dynamic address groups and VM Monitoring to automate address group assignment. You can base security policy on a
combination of security zone and dynamic address group.

A positive-control security policy should allow only appropriate application traffic between private resources. You should place resources with different security policies in separate VPCs, thereby forcing traffic between security groups through the Transit VPC. You should also enable security profiles in order to prevent known malware and vulnerabilities from moving laterally in the private network through traffic allowed by the security policy.


Backhaul to On-Premises Traffic


To carry traffic from on-premises connections to the internal servers, the VPN connections from on-premises gateways connect to the firewalls in AZ-a and AZ-b. Depending on the number of on-premises gateways, a single gateway could connect to both VPC firewalls, an on-premises HA pair could connect to both VPC-based firewalls, or you could program a-to-a and b-to-b VPN connections. This design uses an on-premises HA pair with VPN connections to both VPC firewalls, and the VPN connection tunnel interfaces are configured to terminate on the Public interface.

Connections from the Subscriber VPCs' privately addressed servers now have the same improved resilience characteristics as the outbound traffic flows. The default route in the route table for the servers points to a resilient VGW in that VPC. The VGW peers with both firewalls in the Transit VPC, and dynamic routes on the Transit firewalls point to the remote on-premises traffic destinations.

Figure 43 VPN to Transit VPC

(Diagram: an on-premises HA firewall pair in BGP AS 65001, serving the 10.10.0.0/16 network, builds VPN connections to the Transit VPC firewalls in AZ-a and AZ-b, which in turn provide connectivity to Subscriber VPC#1.)

Security policies on the firewalls restrict the traffic allowed through the dedicated connection to on-premises resources to only the required applications. In this way, Subscriber-to-public outbound traffic can be assigned different rules than Subscriber-to-on-premises traffic.


Management Traffic
Management traffic from on-premises networks to the firewalls can pass through the VPN connection; however, this requires one of the firewalls to be up and passing VPN traffic. This design uses Panorama, located in AWS, to manage the firewalls. A Panorama HA pair is housed in a separate VPC to simplify the Transit VPC and uses VPC peering to the Transit VPC for connectivity. The use of VPC peering for the connection to the Transit VPC means that the Transit firewall does not have to be fully operational with VPN connectivity for Panorama to communicate with it; this is important when you are initially setting up the firewalls or troubleshooting VPN connectivity.

Figure 44 Management connectivity

(Diagram: a Panorama HA pair, primary and standby, in a central management VPC connects through VPC peering to the Transit VPC firewall management subnets 10.101.1.0/24 and 10.101.3.0/24; the Transit VPC firewalls in turn serve Subscriber VPC#1.)

You should use security groups and network ACLs to restrict Panorama management communications to the needed ports and IP address ranges. Panorama should have dedicated public IP access so that you can always reach the firewalls in the Transit VPC by using private IP addressing within the VPC, and security groups should restrict outside access to Panorama to only the address ranges your organization uses to manage the firewalls. If you are not using Panorama, you need public IP addresses on the Transit firewall management interfaces or a jumphost in the Transit VPC.
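A minimal sketch, assuming a hypothetical security group ID and a hypothetical corporate management range, of restricting inbound HTTPS access to Panorama with boto3:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an example

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical Panorama security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "198.51.100.0/24",  # example corporate management range
            "Description": "Panorama web UI access from corporate ranges only",
        }],
    }],
)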

Scaling the Transit VPC


As organizations grow the number of Subscriber VPCs connected to their Transit VPC, they are likely to run into scaling challenges; firewall performance and IPsec tunnel performance are the most common. The Large Transit VPC architecture provides two variations on the Transit VPC design to address these challenges.

To address performance scale, the Large Transit VPC architecture augments the Transit VPC architecture by placing additional firewalls in the Transit VPC to distribute Subscriber VPC tunnels, eliminating the performance issues that arise when all active traffic flows through a single firewall pair. Overall outbound traffic performance from a VPC is constrained by VGW IPsec tunnel performance, for which AWS indicates that multi-Gbps throughput can be expected.


Recall from the Transit VPC design that inbound web applications should have their firewalls in the same VPC as the application front-end web servers and use the load-balancer sandwich design from the Single VPC design. In this way, the Single VPC design remains the template for VPCs hosting web front ends, and the Transit VPC is used for outbound and east/west traffic for those VPCs and all others. Distributing the firewall load in this manner allows the Transit firewalls to scale to handle more east/west and outbound flows.

Figure 45 illustrates how the Large Transit VPC design mitigates firewall performance issues by distributing Subscriber VPC connections across multiple VM-Series firewalls within the Transit VPC. The design pattern is identical to the Transit VPC design for each pair of firewalls across your Transit VPC Availability Zones. In the example diagram, the Subscriber VPC-1 and Subscriber VPC-2 VPN tunnels are terminated on fw1-2a and fw1-2b, and the Subscriber VPC-3 VPN tunnels are terminated on a new pair of firewalls (fw2-2a and fw2-2b) in the Transit VPC. In this scenario, the Transit firewall pair for AZ-a (fw1-2a and fw2-2a) must peer with each other, and the Transit firewall pair for AZ-b (fw1-2b and fw2-2b) must peer with each other, exchanging only Subscriber VPC routes so that all VPCs have east/west connectivity. The Transit firewalls in AZ-a do not have eBGP peering sessions with the firewalls in AZ-b.

Figure 45 Large Transit VPC—firewall scale

(Diagram: the Transit VPC contains two firewall pairs; fw1-2a and fw1-2b (BGP AS 64827 and 64828) terminate the tunnels from Subscriber VPC#1 and Subscriber VPC#2, fw2-2a and fw2-2b (BGP AS 64829 and 64830) terminate the tunnels from Subscriber VPC#3, and the firewalls within each Availability Zone are eBGP peers with each other.)


Figure 46 illustrates using the Transit VPC for outbound and east/west traffic loads while keeping inbound traffic in the individual VPCs where larger-scale inbound web server front-end applications exist. In this way, the Transit VPC remains less complicated because it does not require load balancers, though this approach is not mandatory and inbound traffic via the Transit VPC is supported. In the VPC where the web server front-end application lives, the load-balancer sandwich design with firewalls in the VPC provides resilient operation for inbound and return traffic, and the VGW provides resilient transport to the Transit VPC for any outbound or east/west traffic.

Figure 46 Large Transit VPC with inbound web server in VPC

(Diagram: the inbound web server VPC hosts its own load balancer and firewall pair for inbound traffic, while the Transit VPC firewall pairs in AZ-a and AZ-b (fw1-2a/fw2-2a and fw1-2b/fw2-2b) carry outbound and east/west traffic for that VPC and for Subscriber-1.)


Summary
Moving applications to the cloud requires the same enterprise-class security as your private network. The shared security model in cloud deployments places the burden of protecting your applications and data on you, the customer. Deploying Palo Alto Networks VM-Series firewalls in your AWS infrastructure provides scalable infrastructure with the same protections from known and unknown threats, complete application visibility, common security policy, and native cloud automation support. Your ability to move applications to the cloud securely helps you to meet challenging business requirements.

The design models presented in this guide build upon the initial Single VPC design, as you would likely do in any organization: build the first VPC as a proof of concept, and as your environment grows, move to a more modular design in which VPCs may be purpose-built for the application tiers they house. Reuse the Single VPC design for a resilient inbound design, and use the Transit Gateway design to scale your environment with more visibility and less complexity.



You can use the feedback form to send comments
about this guide.

HEADQUARTERS
Palo Alto Networks Phone: +1 (408) 753-4000
3000 Tannery Way Sales: +1 (866) 320-4788
Santa Clara, CA 95054, USA Fax: +1 (408) 753-4001
http://www.paloaltonetworks.com info@paloaltonetworks.com

© 2019 Palo Alto Networks, Inc. Palo Alto Networks is a registered trademark of Palo Alto Networks. A list of our trade-
marks can be found at http://www.paloaltonetworks.com/company/trademarks.html. All other marks mentioned herein may be
trademarks of their respective companies. Palo Alto Networks reserves the right to change, modify, transfer, or otherwise
revise this publication without notice.

B-000110P-2 08/19
