Transcript
This transcript was autogenerated. To make changes, submit a PR.
Okay, so this session is all about reimagining application networking and security, and I'm going to introduce some new solutions in this space, specifically oriented around cloud native application networking as well as cloud native application security. Let's get right into it. The overall agenda has three main things I want to talk about. First of all, how did we get here? Why is there even a need to reimagine the way we've approached application networking and security from what we've been doing for the past 10, 20, 30 years? Then I'm going to dive a little deeper, double-click on what we're doing in the application networking space: what are the challenges unique to that environment, and what are the solutions we're bringing forward? Then I'll do the same exercise, but in the security space. We at Cisco have been playing in both spaces for over 30 years, so we have a lot of expertise and thought leadership. But how do we apply it to the problems presented in this new cloud native environment? That's the running theme you'll see. And finally, I'll summarize everything and call out some key takeaways. Let's get right into it then.
So how did we get here? We've seen that traditionally, applications have been very tightly coupled to hardware. If you wanted to spin up a new application, you'd have to rack and stack a new server, install an operating system, and then that would run your app. Well, a big leap forward was taken when we introduced virtualization technologies, which gave us the ability to share hardware: let's virtualize the hardware so that you could run multiple applications on the same physical hardware. That way you didn't have to have a second or third or more instances of actual physical hardware for each application, so as you scaled up and scaled down, you maintained a lot of efficiency. However, this approach, while very flexible and extensible, did require a lot of extra overhead. As you see here, we have multiple layers of abstraction, in the form of an operating system for the hardware itself as well as the virtual machines' operating systems. All of these would have to be licensed, all of these would have to be patched, et cetera. There was a lot of extra management overhead, as well as performance overhead, because you're going through all these layers of abstraction. So an improvement was made here by virtualizing not the hardware but the operating system itself. This was the approach used by containers, and it's far more efficient, far more effective, and it really led to a change in how applications themselves were developed. For instance, instead of having what are now known as monolithic application architectures, where every component resides within the application itself (you have these huge libraries and files, et cetera), all of these could be split apart into atomic parts, self-contained and containerized individually, but interconnecting either with application programming interfaces or direct protocol connections like TCP or gRPC or whatever the case may be; it really doesn't matter.
But the point is that this approach brings a lot of benefits. For instance, you don't have to upgrade an entire application all at once. You'd have benefits such as portability: you could have some of these services running on premises and some running in the cloud, or you could have some running in one cloud provider and others in another cloud provider. You have maximum flexibility that way. You could also ensure continuous delivery. Say, okay, I want to upgrade one service. Well, traditionally, when we upgraded an application, you'd have to take it down, install the update, boot it up, and you'd have this planned outage. You can do this continuously now by saying, you know what, when the upgrade of a specific service is ready, I spin it up in a container and I just change my pointers. Rollbacks are just as easy. Let's say there's an issue with version 2.0 of the app; I just roll it back to 1.0, and again, it's instant. And then when my fixed upgrade is ready, I just go ahead. So a tremendous amount of availability is realized through this continuous delivery architecture.
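To make that "change my pointers" idea concrete, here is a minimal sketch of a blue-green cutover using a plain Kubernetes Service selector; the names and ports are illustrative assumptions, not something from the session:

```yaml
# Minimal blue-green sketch (hypothetical names): two Deployments run
# side by side, labeled version: v1 and version: v2. The Service's
# selector is the "pointer"; flipping it cuts traffic over instantly.
apiVersion: v1
kind: Service
metadata:
  name: movies            # hypothetical service name
spec:
  selector:
    app: movies
    version: v2           # was v1; edit back to v1 to roll back
  ports:
    - port: 80
      targetPort: 8080
```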
Not only that, but scalability itself. When we containerize these services, we can describe the entire application environment in a YAML file and feed that into Kubernetes, which in turn wraps all these containers in pods and then manages the orchestration via a control plane that says: you know what, I'm going to assign work to a series of worker nodes, and I will spin the services up and down as I need them, dynamically and continuously, per my application requirements or even per their health and availability, maintaining the environment to whatever was declared in that Kubernetes manifest.
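As a minimal sketch of what such a declaration might look like (the names, image, and sizes here are illustrative assumptions), a Deployment asking Kubernetes to keep three replicas of one containerized service running would be:

```yaml
# Hypothetical desired state: Kubernetes continuously reconciles the
# cluster toward this declaration (three healthy replicas).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: booking                            # illustrative service name
spec:
  replicas: 3                              # scale by editing this line
  selector:
    matchLabels:
      app: booking
  template:
    metadata:
      labels:
        app: booking
    spec:
      containers:
        - name: booking
          image: example.com/booking:1.0   # placeholder image
          ports:
            - containerPort: 8080
```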
level overview of how
the application architectures have evolved. It's very simplified,
but it's just for the point of laying some context
for our discussion now to follow. So as we go and look deeper
into what are the challenges that are presented with these new cloud
native architectures specific to networking,
and then we'll repeat for security, at least there's some context
laid there. So as we brought, but we
said, okay, now we have all these microservices within an
application all needing to be interconnecting. And these
may be on Prem, they may be in the cloud, they may be in different
cloud providers, there's all sorts of variations.
Now, all of these interconnections have to be managed, and these are net-new interconnections, because previously all of these services would be on a single server, virtual or physical, it doesn't matter, and you wouldn't have to interconnect them. So now, in addition to providing and managing all your external connections, there's a whole new series of internal connections that similarly need to be managed, that need to be authenticated, that need to be encrypted, that need to be observed, and the traffic and the loads on them need to be managed, et cetera, et cetera. So all the application networking challenges you had are now significantly increased, doubled, maybe even more; it all depends on the complexity of the application architecture itself. Okay, so this is time consuming, obviously this is error prone, and if it's done individually, it can lead to a lot of inconsistencies, et cetera. Enter, then, the service mesh. Many customers have found that this is a valuable way of maintaining consistent security. You have one set of encryption policies for your entire cluster. You have the ability to observe the traffic pre- and post-encryption so that you can see what's actually going on. You can manage the traffic according to your loads or your policies, et cetera.
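As one concrete illustration of that "one set of encryption policies for your entire cluster" idea, Istio lets a single mesh-wide policy require mutual TLS for every workload. A minimal sketch using the standard Istio API, applied to the Istio root namespace:

```yaml
# Mesh-wide mutual TLS: a single PeerAuthentication in the Istio root
# namespace makes every workload require encrypted, authenticated peers.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```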
And so a lot of benefits are presented by a service mesh architecture, Istio being one of the most popular, but there are others, like Linkerd and so on. But service meshes, like many technologies, present some great advantages while at the same time, as we continue to push the envelope, presenting challenges of their own. Some of the challenges I'm going to talk about are the lifecycle management of the mesh itself, and how that introduces a lot more work than was previously needed for managing network connections; observability challenges; as well as multi-cluster challenges and advanced use case challenges. I'm going to deal with these one at a time. Let's start with lifecycle management. In a network, if you wanted more functionality from that network, you'd have to upgrade the devices that constitute it, the routers, the switches, et cetera, get the latest software versions on them, and then you'd have the new capabilities. You might have to do this manually or, more recently, using controllers that would manage the software so that you have everything kept to a single standard, golden images, et cetera, so you can automate that. However, it might be tempting to think that, okay then, if I want to upgrade a service mesh from one version to another, it's the same exercise: I just upgrade my mesh and I'm done. Well, not quite,
because there's a cloud native principle of immutable infrastructure, which basically says: once you establish the infrastructure, which includes the mesh and all its interconnections, it cannot be changed. And as such, we have to use a new approach. The way I like to liken it is to changing a tablecloth at a restaurant. Say you're at a fancy restaurant and you've been eating; you have all your plates and your wine and so on on the table, and maybe some wine spills on the tablecloth. Well, how do you replace that tablecloth? You don't just rip out the old one and try to slide in a new one fastened to the old one as you rip it out. No, what you do is set an entirely new table, with new dishes, new wine glasses; everything is moved over, and once it's in place, you get moved over too. It's the same way with a service mesh. You lay out a new mesh, you lay out new instances of the services on that mesh, and then, when they're ready, you gradually redirect the workloads to the new mesh. When the confidence is there that everything's working as it should and everything is looking good, you can decommission the old mesh.
It's a lot more complicated. And then think about this: service meshes typically have only three-month release lifecycles, with only two supported versions, the current version and the previous one. So if you have a production environment and you want to be on a supported version, this forces you to upgrade your mesh every three months, and it's done on a cluster-by-cluster basis. We deal with customers that are managing dozens of clusters, some over 100. And as such, that presents a lot of toil. So being able to manage that in an efficient and automated manner is very valuable and can remove all that toil. A second area of challenge that we want to
touch on is observability. There are a lot of great tools out in the open source community to give us observability: you have Prometheus and Grafana for metrics, you have Kiali for topology, you have Jaeger for traces. But if you're troubleshooting as an operator, having to constantly go from one pane of glass to another and stitch all that information together can really slow down your troubleshooting and make it much more difficult. And all of these tools, incidentally, are cluster-by-cluster tools; if you have multiple clusters, then correlating, aggregating, and integrating that information presents some real challenges to troubleshooting as efficiently as possible. Also,
let's talk about advanced use cases. For instance, what I displayed earlier, when upgrading from one version of a service to another, illustrated what's called a blue-green deployment. It's just a simple, all-at-once cutover: it was pointing here, now it's pointing there. Now, there's a way this type of upgrade can be de-risked, and that's termed a canary deployment with traffic management. Rather than just sending all of the traffic to the new version of the service, we manage it. We say: hey, why don't we start out sending just 20% of the traffic to the new service, see how it performs, see if there are any unexpected issues. And then, as our confidence level increases, we increase the amount of traffic that's directed there.
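In Istio terms, this kind of traffic split is typically expressed as route weights on a VirtualService; a minimal sketch, with hypothetical host and subset names, might look like this:

```yaml
# Canary traffic management: 80% of requests stay on v1, 20% go to
# the new v2 subset. Shift the weights as confidence grows.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: movies             # illustrative name
spec:
  hosts:
    - movies               # in-mesh service host (assumption)
  http:
    - route:
        - destination:
            host: movies
            subset: v1     # subsets are defined in a DestinationRule
          weight: 80
        - destination:
            host: movies
            subset: v2
          weight: 20
```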
Another advanced use case is the circuit breaker, and here we're not managing versions of an app, but monitoring the health of the service itself. We can set thresholds and say: if the health of that service degrades or deteriorates below a given threshold, we're just going to cut that service off, so that no more workloads are directed to it and receive a poor application experience.
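Istio implements this pattern with outlier detection on a DestinationRule; a minimal sketch, with thresholds that are purely illustrative, could look like this:

```yaml
# Circuit breaking: after 5 consecutive 5xx errors, eject the unhealthy
# endpoint from the load-balancing pool for five minutes.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: movies                 # illustrative name
spec:
  host: movies
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s            # how often endpoints are evaluated
      baseEjectionTime: 5m
      maxEjectionPercent: 100  # allow ejecting every unhealthy endpoint
```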
As such, we see that when managing a service mesh, your management solution has a number of requirements: basic lifecycle management; managing the security and the encryption of the mesh; providing observability, optimally an integrated observability like we're talking about; learning about the environment, so that you know what is normal and what is abnormal and can set service level objectives; and then enabling advanced use cases like circuit breaking or canary deployments, et cetera.
And so, to this end, we have a solution, Service Mesh Manager, that not only meets all these requirements, like a few other offerings on the market, but also has a number of unique differentiators. And again, this is where we're bringing, for instance, our 30-plus years of networking experience to the cloud native domain. One of the things we do that's completely unique is support active-active control planes, even across service meshes that span clusters. Instead of just having a single primary, like a hot versus standby control plane, we can support a multi-primary control plane, which means both control planes are active. That maximizes the redundancy of the service mesh, even across clusters, and takes care of all the service discovery, et cetera. We're the only ones that can do this. Or, for instance, taking advantage of our years of expertise in providing multitenancy, segmentation, and segregation solutions: recognizing that, okay, there could be multiple customers leveraging the same resources, or even departments or any other logical group, how do we maximize the flexibility of how the traffic is separated across shared resources, even in these types of environments? And so we're the only ones that support multiple gateways per service mesh per cluster, as well as direct connections, so that you could have an external client connecting directly to a workload: maximum flexibility when it comes to these types of interconnections. And we also have support for asynchronous microservices. We see that service meshes are typically optimized for request-reply communications, which are synchronous, whereas some communications and some applications, notably approaches like Kafka, are event driven: when something happens, communication is triggered, and it's not synchronous. And as such, some others have used an event mesh approach, saying you have your service mesh for your synchronous apps and now a completely different mesh for asynchronous apps. But that's very redundant; that's a lot of extra architecture, infrastructure, and delay. What we've done instead is optimize the Istio service mesh specifically to support both synchronous and asynchronous communications. And again, we're the only ones that have done this. So there are a lot of specific key differentiators in this space.
Let me now show a demonstration of Cisco Service Mesh Manager. So let's take a look. Service Mesh Manager is an Istio distribution, an enterprise-grade Istio distribution, that enables a lot of additional features you wouldn't get with just basic, out-of-the-box Istio. We get advantages in terms of mesh management, integrated observability, and advanced use cases like traffic management, circuit breakers, and canary deployments. We're going to look at all of these in our quick overview. Here's our main dashboard. We see we have two clusters, 18 services, eleven workloads; most of them are healthy, but we have some issues. We don't want everything green for the sake of this demo; it's more interesting. It gives us the high-level stats of our overall environment, but it gets particularly more interesting once we look at things in, say, a topology view. With open source tools, you would have to use, say, Kiali for your topology view, and then Grafana for metrics and Jaeger for traces, and it would be cluster by cluster. But here everything is integrated, not just for the sake of observability but also configuration. So I see my overall topology. I have a multi-cluster topology, with a master cluster and a follower. They both happen to be in the same cloud provider, in this case AWS, but they could be on premises, they could be in different providers, et cetera; it doesn't matter. I get the bird's-eye view of everything.
I can zoom in on any given item, whether it's a service or a workload, et cetera, and I have a tremendous amount of information: the overall health (this one is a very healthy service), any service level objectives and targets, and the overall key metrics, all from here. I could launch, for instance, a Grafana dashboard for specifics about the given service or object I'm looking at, or I can even launch Jaeger if I like. But having everything integrated gives me much more comprehensive information around that object without having to jump between tools. And I can not only get a view of how things are doing, I can also change things. For example, if we're talking about mesh management, I can do things like inject faults. For instance, I saw that this service was very healthy, but what if I start injecting a fault into it on purpose, to get a sense of, okay, what is that going to do to my overall environment? I'm going to say, okay, 100% of transactions will now receive an additional 2,000 milliseconds of latency, 2 seconds of latency. What effect is that going to have on the service?
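Under the hood, Istio expresses this kind of fault injection on a VirtualService; a minimal sketch of the delay being configured here, with a hypothetical service name, would be:

```yaml
# Fault injection: delay 100% of requests to this service by 2 seconds
# to observe the effect on downstream health and latency metrics.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: analytics        # illustrative name
spec:
  hosts:
    - analytics
  http:
    - fault:
        delay:
          percentage:
            value: 100   # percent of requests to delay
          fixedDelay: 2s # the injected latency
      route:
        - destination:
            host: analytics
```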
Well, I can now come over to the health view, and within a few moments I'll start seeing the effect of that. In particular, when I look at latencies between clusters, I'm going to see a spike here that will continue rolling out in real time and then overall degrade the performance, and thus the health metric, of that particular service. And I'll see the effects of that in my overall reporting and on my dashboard; you can see here the spike in latency that I've injected. Now I
can see things and configure things from this dashboard, but I can also have a timeline view. I can see things not just as they are now; I can go back in time, whether in the topology view or zeroing in on specific services or workloads. I don't just see the state of affairs right now: if I look at my timeline graph, I see that back here I did have some issues. So I can go back in time and say, okay, what were the issues I was experiencing there, and delve further into them. For instance, I can see that the booking service here had some health issues. Overall, the error rates were fine, but the latencies were in the medium range, and some of the error rates became quite high. So I can see what happened even in the past. It gives me a tremendous amount of visibility into the overall health of my environment.
Not only that, but if I return to, say, my topology and come back to my live view, I'm going to see that the service into which I injected the fault is slowly turning a lighter shade of green; it will eventually turn yellow and then orange. But some of the other things I can do with Service Mesh Manager include directing traffic to align with, say, my overall use case objectives.
For example, I have here three versions of the movie service: version one, version two, version three. Right now, by default, there's about a 33% mixture of traffic for each of these. But I might say, you know what, I want the majority of my traffic still going to version one, let's say 60%, with 30% of my traffic to version two, and only 10% of my traffic to version three, to have a canary-style deployment, so that I'm testing the new versions incrementally without sacrificing the user experience. And then, as I gain confidence, I can adjust and move the entire load over. This is certainly easier to do than editing the YAML file, as I'd have to if I were working just with Kubernetes objects like this, which requires a lot more expertise; I'd be changing weights such as those highlighted here in the appropriate fields. It's simply far more user friendly and allows this type of policy to be set even by non-experts. Okay, let's shift
gears now and talk about application security challenges and solutions. It's the same exercise, but now, instead of focusing on networking, let's look at general approaches and challenges for security. In a traditional approach, again, when our applications and their data all resided on a single server, coupled very tightly to either physical or virtual hardware, it was very easy to protect: we'd just throw some firewalls in front of it. Even if the application itself was lacking in security functionality, we could compensate via the network. One approach would be, like I say, to throw some firewalls in front of it. Another approach, if there wasn't any native encryption: we could provide encryption via VPN (virtual private network) head ends on the network, and thereby take care of that and compensate. But again, this is our new environment, these cloud native applications. Where do they even exist, given they're so dynamic and ephemeral? Where do we put the firewalls? Where do we put the VPN head ends and terminations? And how do we then manage all of these flows and ensure security?
Not only that, but I presented a very simplified view of microservices and their interconnections. The reality is they're far, far more complex than this. For example, here's the microservice dependency graph of just one application, a banking application by Monzo. Or what about some bigger apps, like Netflix? This is the microservices dependency graph for Netflix. And, even more scary, there's the microservices dependency graph of Amazon. These just become mind-boggling in complexity. Where is the perimeter of this application, around which you'd stick firewalls? How do you manage all these dependencies in such an environment? These are the new challenges presented
in cloud native. First of all, we have to recognize that we have new security challenges; that's part one. Then the biggest thing, or the thing most often lacking, that we hear about from customers, is visibility: we just don't even know what's going on, and we don't have that insight. Also, we have to recognize that there are multiple layers of security needed, from the containers, the libraries, the dependencies, the nodes, the orchestration, et cetera, all of these layers, even the APIs; there are so many elements of security that need to be examined, inspected, and provisioned for. And finally, we have to recognize that the earlier in the development process, specifically the continuous integration/continuous delivery process, we can identify security risks or threats and actively address them, the more efficient and cost effective it is, and the better for everyone. There's so much time that could be saved, and so much frustration, if you spot these earlier in that cycle. So to
that end, we have an offering, Cisco Secure Application Cloud, that provides the needed capabilities: visibility, policy enforcement, shifting security left (and I'm going to talk about that on the very next slide), as well as offering continuous security in a cloud native environment. So what are we talking about when we say shift left? Well, some analysts coined this term and it's become popular, but maybe not everyone is familiar with it. The idea is that here we have the CI/CD lifecycle. Many security tools are oriented toward runtime: once the application is up and running, then we think about security, then we look at tools that address security. But what if we could move that left in the cycle? To say, okay, don't just give SecOps tools; let's also give the DevOps team some security tools, so that they can make good security decisions and apply good security hygiene. And you know what, not only them, but even the developer. We're going to talk about, for instance, how we can enable the developer to take security into account in the decisions they're making, so that right at the start, when they're coding the app, they've got security built into it, rather than an after-the-fact investigation that has to result in application patching and recoding, et cetera. So we want to shift left and make it continuous. That's the goal of Cisco Secure Application.
And what I really like about the architecture used here is this. A lot of competitors use an agent-based architecture. What does that mean? Remember, we talked about the Kubernetes environment, where we have a control plane and worker nodes; they would then deploy security agents on all of those worker nodes. Now, this approach is not very efficient, because you've got a lot of extra software, kind of like one of our earlier slides that showed all those layers of abstraction in a virtualized environment: you've got more software that has to run in order for the applications to function, and that bogs things down. It puts additional load and expense onto the overall architecture. In contrast, we leverage native capabilities that already exist within Kubernetes, specifically the admission controller capabilities of the Kubernetes API server. As a result, the only dedicated resource we need in an entire cluster to run this security solution is a single pod. So it's very lightweight and very high performance. Like I say, to enforce policy we use these native mechanisms, so it's fully secure, with a lot of fantastic capabilities, and it scales and is very inexpensive to the environment.
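Assuming the native mechanism being described is the standard Kubernetes admission webhook machinery (my reading of the "admission controller capabilities" mentioned above), a minimal sketch of how the API server gets pointed at a single in-cluster pod, with entirely hypothetical names, looks like this:

```yaml
# Sketch: register a validating admission webhook so the API server
# consults one in-cluster pod (behind this Service) before admitting
# new workloads. All names are illustrative, not the product's own.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: workload-policy          # hypothetical name
webhooks:
  - name: policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: security      # where the single policy pod runs
        name: policy-controller  # hypothetical Service name
        path: /validate
```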
Also, we can optionally integrate with the Istio service mesh. We talked about the benefits of a service mesh: by applying a sidecar proxy like Envoy, we get additional capabilities for each application, such as observability, firewalling, encryption, et cetera. These services are typically managed by the control plane of the service mesh, so there are central policies applied to all, which makes it scalable: it manages the traffic, manages the security, manages the observability, et cetera. Now, we patch into this by adding an additional module within that Envoy proxy, as well as some additional code to provide DNS detection, so that we can even set policies limiting which domains a workload can connect to, and report that up to the controller to enforce policies of that nature as well.
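The Cisco module itself is proprietary, but as a rough analogue of domain-level egress control in stock Istio: with the mesh's outbound traffic policy set to REGISTRY_ONLY, workloads can only reach external hosts that have been explicitly declared, for example:

```yaml
# Rough analogue (not the Cisco module): under a REGISTRY_ONLY outbound
# traffic policy, a ServiceEntry like this is what allow-lists one
# external domain; anything undeclared is blocked.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: allow-example-api        # hypothetical name
spec:
  hosts:
    - api.example.com            # the only external domain permitted
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
    - number: 443
      name: https
      protocol: TLS
```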
Finally, remember, I said we also want to arm the developer, to help them make security-aware decisions early in the development phase of the CI/CD lifecycle. We really want to shift left. How do we do that?
Well, one way is that we collect information about various APIs and their respective security vulnerabilities, threats, posture, et cetera, from many sources: from our own Cisco Talos, which is one of the most comprehensive security resources in the world (we do about five times the amount of security analysis events per day as Google does searches), and not only that, we also gather information from Cisco Umbrella as well as BitSight. All this information is then fed into our system, which says: okay, these are the APIs we know to be secure, or know to be not secure. And therefore we can present the application developer with a curated list. They say, hey, I need an API that does this. And rather than just picking, first come first served, an API that meets their needs, which is typically the approach, I present that security-aware curated list to them. They can choose not only an API that meets their needs, but the most secure one. And you can even set specific compliance rules, like: the API must meet this, that, and the other thing. And these can be set globally. So we can ensure security even in the development process. But not just there: we can also observe the traffic traversing these APIs, monitor that traffic, and if any of the policies or compliance rules are violated, we can immediately take action, up to and including termination of that traffic. So there are many different options available to us via this technology: not only container security, but also API security. And we at Cisco are unique in this overall offering.
Let me now share a demonstration of Cisco Secure Application Cloud. Before I get into the demo, I want to call out this website, this URL: eti.cisco.com/appsec, so "emerging technologies and incubation", cisco.com, "application security" abbreviated. This is where this very software is available for free, for anyone who wants to run this demo or, even better, run this solution in their own environment. There's no feature limitation, there's no time limitation; the only limitation is scale. We support up to five nodes for free. We want everyone to take it for a test drive, to see container, serverless, API, and service mesh security in action in their own environment. So let's get into the demo. When we log in, we're going to see a dashboard like this
that identifies the top security risks, whether the top risks are from pods or APIs or vulnerabilities or permissions, whatever the case may be. Or, for those who prefer, we also lay out this security information in the MITRE framework, so you can see all the different attack vectors as well as all the security best practices that are recommended to prevent those specific types of attacks. But even more than just informing us of a particular vulnerability, such as, in this case, the ability for attackers to hide their tracks and cover over their activities, we see all the elements within our environment that would be affected by this specific vulnerability. And what I really like is how easy it is to repair: to say, okay, I get it, there's a vulnerability, there are some best practices that haven't been implemented. Is there anywhere I don't want this implemented? Probably not. And then I just hit apply now, and I've gone and created the rule to prevent defense evasion, and now I can see that even that specific vulnerability is plugged.
Now, not only can I see my threats outlined, I can see my overall environment from more than one perspective. For instance, if I'm a DevOps person, I'd likely be interested in seeing my clusters and pods and interconnections, et cetera. But if I'm SecOps, I can just quickly change that view over here, and now I have a view of the same environment but from a security perspective: I see which pods are at risk, which connections are regular versus encrypted, and whether there's any blocked traffic, and I can zoom in on anything according to my interest. Not only this, but I can also do runtime security and see, okay, of my workloads that are running, which ones are at risk. For instance, I can see that this NGINX workload is quite risky. And why is that? Well, because it's privileged, it can run as root, and it's public, making for a lot of errors and vulnerabilities associated with this workload. And therefore I could set policies to restrict these types of risky workloads from running, or have other actions taken as a result, too.
The same goes for APIs. For instance, when I come back to my security risks, I can identify my top security risks from an API perspective. If I take a look and drill down, I can ask: okay, what is risky about this particular API? I focus in on its security posture from a network, application, or DNS perspective, and I see that from a network perspective there's a vulnerability. The specific vulnerability is that it's using a deprecated version of TLS. This leaves it susceptible to man-in-the-middle attacks such as POODLE and BEAST. I can even identify the specific endpoints where this vulnerability is present. Now, I might set compliance rules on my connections, my clusters, my pods, et cetera, and even API policies. For instance, I might have a policy that says: okay, I'm only going to allow APIs that are rated low risk according to all the sources I have, Talos, Umbrella, BitSight, et cetera. Or I can get even more granular and flexible.
For instance, I might have a new policy that says: no Russia-based APIs for the time being. To implement this, I'll say, okay, I'm going to look at all the API endpoints, I'm going to select an attribute such as location, I'll choose "is not equal to", and then I'll punch in Russia. And now I've got a tag that will geolocate those APIs and can enforce the rule, so that I'm no longer leveraging or utilizing APIs based on a specific location, or any other criteria. So basically, I have very powerful tools for applying security in my cloud native environment, whether it's container security, API security, serverless security, and so on and so forth. It's a very comprehensive, powerful tool, and it's free to use. So by all means, take it for a spin. Okay, let me wrap
things up and summarize the key points we've covered. First: cloud native architectures bring many business benefits. We talked about portability, flexibility, scalability, continuous delivery, all of these benefits being very valuable to modern application development and the application experience. At the same time, these architectures do present some new challenges, which is almost inevitable with technology: you solve some problems, but sometimes you create new ones. And so we're applying, like I said, our 30-plus years of experience in networking and security to this new domain and this new set of challenges, both in cloud native application networking and cloud native application security. The two specific solutions I introduced today were Cisco Service Mesh Manager and Cisco Secure Application Cloud. And what I really like about the approach we've taken here is that we really want to drive adoption. We want people to take these for test drives, so we're offering these software suites completely free of charge for people to try. They have full functionality, no time limits whatsoever; the only limit is scale: five nodes for Secure Application Cloud, ten nodes for Service Mesh Manager. And the steps are simple: you download, you log in, you sign up for an account, like I say, completely free, and you're off and running within a few minutes. Same with Service Mesh Manager. So we really want people to try and adopt these, and then see the value we're bringing, based on our expertise and thought leadership, into these new spaces. We really want you to take advantage of that. We're part of Cisco's research and development team, which is called Emerging Technologies and Incubation. If you're interested in following some of the other tools and technologies and solutions we're actively working on and developing, please feel free to follow us here. Thanks so much for taking the time for this session. I hope you found it useful.