Transcript
This transcript was autogenerated. To make changes, submit a PR.
There was a time, not that long ago, in fact, as recently as six years ago,
when hybrid DNS required setting up a DNS solution in the cloud.
This was a quintessential example of an AWS customer performing
undifferentiated heavy lifting until the Route 53 Resolver came around.
Welcome to CONF22 Platform Engineering Conference and thank
you for attending the session.
I'm Arthur Siddiqui, Senior Principal Software Engineer at Silicon Valley
Bank, Division of First Citizens Bank.
The agenda will cover the original problem statement when there
wasn't an adequate offering from AWS.
I will introduce the Route 53 Resolver service and explain the key concepts.
It will be followed by a solution for hybrid DNS for a multi-account setup.
In addition, I'll elaborate on the key design considerations as well as native
security tools that are available to us.
I'll wrap it up by talking about a new capability released this
year called Route 53 Profiles.
Until the time Route 53 Resolver was announced, customers had two paths, choose
a marketplace product or come up with a custom solution to set up hybrid DNS.
The latter, while likely to be more economical, relied on
setting up DNS servers on EC2s.
The responsibility of setting up resiliency also fell on the customer.
All these challenges went away when Route 53 Resolver
was announced six years ago.
Before I delve into this topic, let me do a quick sidebar.
When a VPC is provisioned, the .2 IP is reserved to resolve DNS
queries originating from the VPC.
For example, if a VPC has a CIDR of 10.0.0.0/23, the IP 10.0.0.2 is what provides the DNS resolution.
This is why, historically, it used to be called the dot-two or plus-two resolver.
This has all along been a default feature of VPC.
It should be noted that if a VPC has multiple CIDRs, it is the
primary CIDR whose plus-two IP is used for DNS resolution.
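As a quick illustration of the plus-two convention, here is a minimal sketch using Python's standard ipaddress module to derive the reserved resolver IP from a VPC CIDR; the CIDR is just the example value from above.

```python
import ipaddress

# Example VPC CIDR from the talk; the Amazon-provided DNS resolver
# always lives at the network address plus two.
vpc_cidr = ipaddress.ip_network("10.0.0.0/23")
resolver_ip = vpc_cidr.network_address + 2

print(resolver_ip)  # 10.0.0.2
```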
Circling back to Route 53 Resolver, it was announced in November 2018.
This offering was specifically targeted to simplify DNS for
the hybrid cloud use case.
Any enterprise that ventures into the cloud will almost always have
a hybrid network, that is, having both on-prem and cloud presence.
This means there is a requirement for the on-prem network to resolve workloads in AWS.
Similarly, resources in AWS may have a requirement to resolve
private DNS records on prem.
This requirement is solved by the introduction of the Resolver Endpoints.
There are two types of endpoints, inbound and outbound, and this
terminology is from the AWS standpoint.
The inbound resolver endpoint is for queries coming to AWS from the on-prem network.
It is used to resolve VPC resources for on-prem.
It is important to note that on-prem DNS must issue recursive instead
of iterative queries to the Route 53 Resolver inbound endpoints.
On the other hand, the outbound resolver endpoint is for queries going
from AWS to the on-prem network.
It is used to resolve private DNS records on prem.
These resolver endpoints manifest as an Elastic Network Interface, or ENI.
Therefore, the best practice is to provision these endpoints
in every AZ that the VPC spans.
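To make the endpoint concept concrete, here is a minimal boto3 sketch that provisions an inbound resolver endpoint with one IP per Availability Zone; the subnet and security group IDs are placeholders, not values from the talk.

```python
import boto3

r53resolver = boto3.client("route53resolver", region_name="us-east-1")

# Hypothetical subnets, one per AZ the VPC spans, plus a placeholder security group.
response = r53resolver.create_resolver_endpoint(
    CreatorRequestId="inbound-endpoint-demo-001",   # idempotency token
    Name="hybrid-dns-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111bbbb2222c"},   # AZ a: AWS picks a free IP
        {"SubnetId": "subnet-0ddd3333eeee4444f"},   # AZ b
    ],
)
print(response["ResolverEndpoint"]["Id"])
```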
While I expect the audience will use infrastructure as code,
the console screenshot is to illustrate some of the key points.
As Route 53 Resolver is a regional service and a VPC resource, it
should land in a shared service account such as Core Network.
As this is an ENI, it requires a security group.
For the inbound resolver endpoint, security group inbound rules should be open
for port 53 for both TCP and UDP protocols.
On the flip side, for the outbound resolver endpoint, security group
outbound rules should be open for port 53 for both TCP and UDP protocols.
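For the security group piece, a minimal sketch of the inbound rules might look like the following; the security group ID is a placeholder, and the 10.0.0.0/8 source simply anticipates the Class A on-prem example mentioned later.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow DNS (port 53) over both TCP and UDP from the on-prem network
# to the inbound resolver endpoint's ENIs.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",               # placeholder security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 53,
            "ToPort": 53,
            "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "on-prem DNS over TCP"}],
        },
        {
            "IpProtocol": "udp",
            "FromPort": 53,
            "ToPort": 53,
            "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "on-prem DNS over UDP"}],
        },
    ],
)
```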
This example shows a VPC spanning two Availability Zones.
The default behavior when provisioning an endpoint is for AWS to grab an IP from
the subnet of the VPC.
Alternatively, one can specify an IP from the subnet.
Needless to say, these ENIs get allocated private IPs.
Outbound endpoints are complemented with a forwarding rule.
This forwarding rule applies to DNS traffic destined for on-prem.
This is why on-prem DNS servers need to be configured under this rule.
In this case, the on-prem DNS server is listed as 10.50.50.50.
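A forwarding rule of this kind could be sketched with boto3 as below; the outbound endpoint ID is a placeholder, the target IP is the 10.50.50.50 example from the talk, and the domain is the corporate domain example that comes up later.

```python
import boto3

r53resolver = boto3.client("route53resolver", region_name="us-east-1")

# Forward queries for the corporate domain to the on-prem DNS server
# through the outbound resolver endpoint.
rule = r53resolver.create_resolver_rule(
    CreatorRequestId="corp-forwarding-rule-001",
    Name="forward-com42-io-to-onprem",
    RuleType="FORWARD",
    DomainName="com42.io",                             # covers *.com42.io as well
    ResolverEndpointId="rslvr-out-0123456789abcdef",   # placeholder outbound endpoint
    TargetIps=[{"Ip": "10.50.50.50", "Port": 53}],
)
print(rule["ResolverRule"]["Id"])
```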
Circling back to the comment about how ENIs are created when resolver
endpoints are provisioned, here the diagram shows four ENIs.
This is a VPC spanning two Availability Zones, and hence we have a pair of inbound and
a pair of outbound ENIs, respectively.
This also shows how the security group inbound rule should look.
For the inbound endpoint, only traffic on port 53 for both TCP and UDP protocols is allowed.
This rule assumes that the on-prem network is using a Class A CIDR of 10.0.0.0/8.
There are two other key points to talk about before we can look at the design.
The first one is, once the resolver endpoints have been provisioned
in a shared service account, the question arises, how would it be
shared across the spoke accounts?
Enter Resource Access Manager, or RAM.
This is an AWS service that has a goal of sharing resources across
accounts within the AWS organization.
Route 53 Resolver rules are one of many resource types supported by RAM.
Some of the other examples are Systems Manager parameters,
RDS clusters, backup vaults, etc.
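As a rough sketch of how a resolver rule could be shared from the shared service account, the boto3 call below uses RAM; the rule ARN and organization ARN are placeholders.

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share the forwarding rule with the whole AWS Organization so spoke
# accounts can associate it with their own VPCs.
share = ram.create_resource_share(
    name="hybrid-dns-forwarding-rules",
    resourceArns=[
        "arn:aws:route53resolver:us-east-1:111122223333:resolver-rule/rslvr-rr-0123456789abcdef"
    ],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
    allowExternalPrincipals=False,   # keep the share inside the organization
)
print(share["resourceShare"]["resourceShareArn"])
```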
The second key point is to understand the concept of a VPC
association to private hosted zones.
In this case, private hosted zones across all spoke accounts will
need to be associated with a VPC in the shared service account where
the endpoints were provisioned.
It will allow the shared service account to resolve private DNS
records across all spoke accounts.
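Cross-account association of a private hosted zone to the shared service VPC is a two-step handshake; the sketch below assumes placeholder hosted zone, VPC, and account details.

```python
import boto3

REGION = "us-east-1"
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"      # private hosted zone in a spoke account
NETWORK_VPC_ID = "vpc-0abc1234def56789a"   # VPC in the shared service account

# Step 1 (run with spoke-account credentials): authorize the association.
route53_spoke = boto3.client("route53")    # assumes spoke-account credentials
route53_spoke.create_vpc_association_authorization(
    HostedZoneId=HOSTED_ZONE_ID,
    VPC={"VPCRegion": REGION, "VPCId": NETWORK_VPC_ID},
)

# Step 2 (run with shared-service-account credentials): perform the association.
route53_network = boto3.client("route53")  # assumes network-account credentials
route53_network.associate_vpc_with_hosted_zone(
    HostedZoneId=HOSTED_ZONE_ID,
    VPC={"VPCRegion": REGION, "VPCId": NETWORK_VPC_ID},
)
```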
This design encapsulates the talk up to this point.
At the top, it depicts a shared service account where resolver endpoints have been
provisioned and a VPC spanning two AZs.
In the first use case indicated by blue arrows, the request originates on prem.
An on prem user is looking up a resource in one of the two spoke accounts.
The on-prem DNS server will forward the request to the resolver's inbound endpoint.
Since the private hosted zones of the spoke accounts are associated with the VPC
in the network account, resolution of private records such as EC2 is possible.
In the second use case, indicated by orange arrows, the request
originates in a spoke account.
The EC2 resource is looking up an on prem system.
There is a forwarding rule for the corporate domain, let's say *.com42.io,
associated with the resolver's outbound endpoint.
In addition, this forwarding rule is shared with the spoke VPCs
via Resource Access Manager.
The outbound endpoints are also configured with the IP of the on-prem DNS server.
Earlier in the deck, the IP of the on-prem DNS server was set up as 10.50.50.50.
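In a spoke account, accepting the shared rule and attaching it to the local VPC could look like this minimal sketch; the rule and VPC IDs are placeholders.

```python
import boto3

# Run with spoke-account credentials after the rule has been shared via RAM.
r53resolver = boto3.client("route53resolver", region_name="us-east-1")

r53resolver.associate_resolver_rule(
    ResolverRuleId="rslvr-rr-0123456789abcdef",   # placeholder shared rule ID
    VPCId="vpc-0spoke1234example",                # placeholder spoke VPC
    Name="corp-domain-forwarding",
)
```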
It is important to call out that while what we have talked about so far achieves
DNS resolution in the hybrid setup, isolation of VPCs still applies.
How packets flow between VPCs still needs a networking construct.
That construct is Transit Gateway.
Similarly, the networking path between on-prem and the edge VPC of the network
account will be either via VPN or Direct Connect.
When it comes to DNS setup, there are two security tools that should
be a part of your ecosystem.
GuardDuty is the first one, a service I am a big fan of.
It, to me, represents the pinnacle of how security products should be delivered.
A single click of a button to enable it, and it parses through three sets of logs,
including DNS, to identify threats such as data exfiltration through DNS queries.
Over the years, this service has had enhancements and extended
its reach into specific services such as S3, EKS, and RDS.
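For completeness, enabling GuardDuty programmatically is roughly a one-call affair, mirroring the single-click experience described above; this is only a sketch.

```python
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Turn on GuardDuty for this account and region; it then starts analyzing
# its log sources (including DNS query logs) for threats.
detector = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print(detector["DetectorId"])
```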
The second security tool is Route 53 Resolver DNS Firewall.
This is a relatively new service and was announced in early 2021.
It provides the capability to regulate outbound DNS traffic
originating in the VPCs.
It supports both AWS Managed Domain Lists as well as custom ones.
Managed Domain Lists are a good option when looking to regulate a VPC's outbound DNS traffic.
As with any typical firewall, there are three action types that can be
configured, allow, block, and alert.
This action can be applicable to a specific DNS record type or a
broader firewall rule that will apply to all DNS record types.
The last subtopic I would like to talk about is the announcement from last April.
AWS announced a new capability called Route 53 Profiles.
The value proposition is that a profile allows bundling up private hosted
zone associations, forwarding rules, and even DNS Firewall rule groups.
This profile can then be shared via Resource Access Manager across
accounts in AWS organization.
While I haven't had a chance to play with this capability, it
will certainly make the hybrid design both simpler and cleaner.
There is no escaping DNS and everyone just expects it to work.
Therefore, to implement a solution in a resilient manner using
AWS managed services is an excellent position to take as a platform engineer.