Transcript
This transcript was autogenerated. To make changes, submit a PR.
Welcome to Evolute Migrate and AI-based containerization, here at Conf42 Cloud Native 2023. We're going to talk about how to automate moving any software from its source environment over to its destination in a matter of minutes.
This is so very important when your CTO comes to you and asks you to move across cloud environments, or maybe you're a developer responsible for a complex application, or hundreds of applications, and you need to ensure that they reach their destination. With AI-based containerization, we're able to ensure that we get the most native artifact at the destination, and we're able to do that with razor-sharp precision. So we'll talk about how AI is able to achieve that capability, and how we ensure that whether you're trying to modernize, migrate, or transform that software, the move is successful.
So we'll go through an overview of Evolute Migrate. We'll also talk about the BCD framework and our software capabilities. We'll delve into a demo, and we'll finish off with a couple of scalability patterns, to ensure that we really understand the rate at which we can take these artifacts and move them to their most production-oriented scalability pattern, as we conclude
for today. So in order to know where we're going,
we must know where we've come from. It's really interesting to see the developments in this space; it really helps to shape context for us. We're really excited that we were able to introduce the first containerization capability in 2016. Since the beginning, we created the capability to move software into this artifact in a very native way, and we're super excited about the patents that show that. What that really boils down to is that we were not only the first, but also have one of the most comprehensive capabilities today for achieving this. We saw in about 2017 that Google introduced the capability to also do containerization, and we've seen a couple of developments since then. We began developing our AI-based containerization around 2018. We also saw the acquisition of companies like Velostrata in this space in 2018 as well. We've seen in about 2019 that companies like Docker created the MTA program, which was really an amalgamation of a number of different consulting firms coming together to ensure that enterprises were reaching their destinations. In 2020, we saw App2Container being released, which was really focused on greenfield software, and Amazon
has continued to launch these things and mature
them. We've also seen, around this time, that we were able to launch our AI-based containerization capabilities, and we have really been working with a number of leading companies to get that
to its highest peak. We've also seen Azure Migrate, around 2021, introduce its containerization capabilities, and we've also seen open source projects, such as the Konveyor project, introduce the ability to evaluate core software and create images. There have been so many developments in this space, and we're super excited.
Again, we do believe that we've created one of the most comprehensive capabilities: we've been able to handle graphical as well as, obviously, non-graphical software. And really what this means for the ecosystem is that we're able to accelerate the rate at which software can reach these cloud-native formats. So let's
get a little bit more into BCD. BCD stands for Binary, Configuration, Data, and what this framework really says is that if we can look at a software artifact in its source environment and put every component into one of those three buckets, binary, configuration, or data, then we can make that software artifact more portable across clouds.
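As a minimal illustration of that bucketing idea (the extension heuristics below are my own assumptions for the sketch, not the actual classifier), a first pass over a software footprint might look like this:

```python
# Toy sketch of the BCD (Binary, Configuration, Data) idea: sort each
# file in a software footprint into one of three buckets.
# The extension heuristics are illustrative assumptions only.

BINARY_EXT = {".so", ".jar", ".war", ".bin", ".exe"}
CONFIG_EXT = {".conf", ".xml", ".properties", ".yaml", ".ini"}
DATA_EXT = {".db", ".csv", ".log", ".dat"}

def classify(path: str) -> str:
    """Return 'binary', 'configuration', or 'data' for one file path."""
    ext = path[path.rfind("."):].lower() if "." in path else ""
    if ext in BINARY_EXT:
        return "binary"
    if ext in CONFIG_EXT:
        return "configuration"
    if ext in DATA_EXT:
        return "data"
    return "data"  # default: treat unknowns as data so nothing is lost

def bucket(paths):
    """Group a list of file paths into the three BCD buckets."""
    out = {"binary": [], "configuration": [], "data": []}
    for p in paths:
        out[classify(p)].append(p)
    return out
```

A real classifier would of course look at file contents and usage, not just names, but the output shape is the point: once every component lands in a bucket, the binary bucket can be baked into the image, configuration can be templated per environment, and data can be relocated independently.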
In order to understand how this data-based, or AI-based, approach works, we really have to understand its premise. We'll reserve a bit of the deeper dive for later on, but let's talk about it at a high level.
For our input coming into the software, we need to understand a breadcrumb. If you're familiar with the I, Robot analogy: if we ask the right question, then we will get to the right solution. And in this case we just need a breadcrumb, an input that gives us a reference to the software. So in this case, that input is either the application binary, a package, a file, really anything that references that unique piece of software. Now, obviously we can provide you a list of the software that's running in that environment, but this allows us to ensure that we understand the input.
Now again, we're going to introduce how you can use BCD, not only through the Evolute Migrate software, but in your own modernization or containerization capabilities. We believe that this removes a lot of the overhead
when you're thinking about how to take this software from point A to point B. While most will go into that software and try to understand all its intricacies, if you have to do that across over 100 applications created over many years, it becomes very complex. And even if you're a new software developer or a new startup, moving your software into a cloud environment, across cloud environments, or maybe into an on-premises environment, you still need to ensure that software gets there successfully.
You also need to understand how the enterprise's cloud environment works. And so really ensuring that you understand how to mobilize your software via BCD is also something that's very important as a part of this talk. So again, we'll talk through how you can better containerize your software just by understanding this model, going back to input, execution, output.
So our input is, again, an application binary or file. The execution, in this case, will be Evolute Migrate as we move into our demo; but again, with BCD, Binary Configuration Data, really looking at how you can analyze your software, put it into these buckets, and then allow it to move will really help you automate your software capabilities. And the output is very clear: we can move into really any type of form factor. Obviously we could do VM to VM, or we can go from a core code base into a container. In this case we'll talk about going from VM to container, and that really is the precedent for being able to go into serverless and Lambda. All of these new, cutting-edge technologies are based on container technology, and so being able to do that in a Kubernetes environment, whether you're on a public cloud such as Google Cloud, AWS, or Azure, is all based on your ability to leverage Evolute Migrate and/or the BCD framework.
So it really makes for an exciting execution once you get the hang
of it. We can talk about this all day, but it's always exciting to really
get into a demo just to see the software at work. Now, what you'll see is that we're actually going to be running Evolute Migrate. Our command line for this is Chrysalis; the chrysalis is the stage where you move from a pupa to a butterfly in that evolution process, so that's our software's name. We're going to execute this, and we'll see that it will connect to the source environment, pull that application, transform it, and create the container artifact. So again, we'll connect to our guest OS. We'll tell it to only give us the base artifact, since we're going to run through this interactively, and we'll ensure that we're able to see the execution of it in real time. So let's dive in. We'll start off with our target applications, and we'll just see our ability to execute with this. We can see that the input here is GlassFish.
And here we're able to see that we are going to take a middleware application. We'll start to run our Chrysalis software and point to our destination guest OS; excuse me, source guest OS. We'll specify our privileged user, and we'll go ahead and specify how to access that environment. In my environment, that's the local environment, so we'll go ahead and just use the root user and specific keys. We'll also specify our staging, and we'll specify our target applications. And we'll go from here, so we
can quickly see that we're getting connectivity. What's happening right now is that it's connecting to the actual guest OS, and it's comparing the source and destination to determine the deterministic path that must take place. We like to talk about OSI; we'll cover how we create those translation maps soon. But you can see that it begins to evaluate everything about the software. It determines where the software begins and ends, and it also starts to parse and determine which pieces of this software are binary, configuration, or data.
The reason that's important is that it allows us to create unique container artifacts and make that software custom as well as razor-sharp. So there you have it: we have our container image, and now we're actually able to copy this. Let's go ahead and do a docker run, and we'll get into this environment. So here we can see that.
We'll go over to the guest OS and see the actual GlassFish source; this is how it existed in the virtual machine, so we can just copy that. We know that we have a database, so we'll just go ahead and start our database. We also see that we have a domain, so this is the application server, and we'll kick that off. And that's successful. This application hosts PetClinic, a demonstration app that's running in the environment, so we can go ahead and query it in the container environment that we just created. And we can see that the software is running and functioning as it should. Pretty cool, right?
So this is exciting, just this ability to quickly transform. We'll actually go a little bit deeper and look at the actual Dockerfile. Again, this foundation allows us to easily bring it into a Kubernetes environment, but we just wanted the actual base here. And we can see that the software was able to determine that the destination was an Amazon Linux environment, as well as determine the packages, dependencies, and everything needed to run this software. So, very exciting: the ability to have a very lightweight container image capable of achieving this type of containerization and distribution.
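To give a rough idea of the shape of such a manifest (this is a hand-written approximation for illustration, not Chrysalis's actual output; the base image, package names, and paths are all assumptions), a GlassFish migration onto an Amazon Linux base might produce something like:

```dockerfile
# Hypothetical approximation of a generated container manifest.
FROM amazonlinux:2

# Runtime dependencies detected on the source VM (illustrative).
RUN yum install -y java-11-amazon-corretto unzip && yum clean all

# Binary: the GlassFish install lifted from the source environment.
COPY glassfish/ /opt/glassfish/

# Configuration: instance-specific settings kept separate from binaries.
COPY domain.xml /opt/glassfish/domains/domain1/config/domain.xml

EXPOSE 8080
CMD ["/opt/glassfish/bin/asadmin", "start-domain", "--verbose"]
```

Note how the binary and configuration pieces land in separate COPY steps: that separation is exactly what the BCD classification buys you when you later need per-environment configuration.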
So in order to really understand this, we need to delve deeper into how we were able to achieve it and how AI was used in a way that achieves it. What we're going to do is delve deeper into BCD, Binary Configuration Data, to understand how the algorithm worked to componentize, categorize, classify, and ensure that the containerization would be successful. So again, the first step is that we need to understand where our software begins and ends. What that amounts to is that we needed to create a cluster for that software and understand its inter-dependencies and intra-dependencies, if you will: what are the components running within the software, what are the components running outside of the software, and what are the relationships between them?
We were able to do this classification, and once we had this particular understanding of the software, it allowed us to create the appropriate container image. In a three-tier or multi-tier type of application, which we'll talk about, the BCD framework and Evolute Migrate are able to understand those relationships. And so understanding the classification of the software is very important. We were able to do this via the effective technique of k-means clustering: creating a cluster of the application, of what belonged inside of it and what belonged outside of it, and really using this type of machine learning to achieve that capability. Now that we understand the application and its inter- and intra-dependencies, we are able to create the container artifact. And so we leverage decision trees.
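Before moving on to the decision trees, the clustering step just described can be sketched in miniature. Here each component gets a scalar coupling score (how strongly it is tied to the application's own processes; the feature and the deterministic initialization are my assumptions, not the production algorithm), and a tiny two-cluster k-means splits "inside the software" from "outside":

```python
# Toy k-means (k=2) over scalar coupling scores: components whose score
# clusters high are treated as part of the application; low scores are
# external dependencies. The feature itself is an illustrative assumption.

def kmeans_split(scores, iters=20):
    """Return cluster labels (0 = low/outside, 1 = high/inside)."""
    centroids = [min(scores), max(scores)]  # deterministic init for k=2
    labels = [0] * len(scores)
    for _ in range(iters):
        # Assignment step: nearest centroid wins.
        labels = [0 if abs(s - centroids[0]) <= abs(s - centroids[1]) else 1
                  for s in scores]
        # Update step: move each centroid to the mean of its members.
        for c in (0, 1):
            members = [s for s, l in zip(scores, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels

# Hypothetical scores for [libc, cron, glassfish, domain.xml, app.war]:
labels = kmeans_split([0.05, 0.1, 0.9, 0.85, 0.95])
```

The real system works over richer feature vectors, but the outcome is the same in spirit: a boundary around the application that tells the containerizer what to carry and what to leave behind.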
We really start off at about seven or eight base paths, but through the number of different options across operating system capabilities, frameworks, dependencies, and the destinations and sources we're coming from, there are typically about 3,100 paths that the software can take deterministically. We can't run this software in a way that's non-deterministic; we have to ensure that the software will reach its destination on each execution. And so again, we were able to leverage the decision trees to ensure that it was able to reach that path. Now, there does come a point where something new comes up, and we get it into the destination as optimistically as possible. But we really haven't come across a scenario, based on our understanding of the world as it exists, where we weren't able to create a deterministic output. So we're super excited that the 99th percentile of software has worked. We save the 1% for what we may not have experienced yet, although we've implemented this in many different enterprises and many different capabilities, so we believe this will work in your environment as well. So once again, getting that ability to see how BCD works, and really starting to think, when you look at a software: are you looking at it in terms of all the intricacies and components, or are you looking at it in terms of what are the pieces of application, what are the pieces of configuration, what are the pieces of unique data, and where do I need to bring those in order to be successful? That will help you to achieve this at scale. As we
move into our runtime, it becomes very important that we separate our build from our run, because we need to be able to move to any specific destination, and we were able to do this. In doing so, again, we can run in any cloud environment, and we can come from any source environment. We've seen some exciting source environments and some very legacy ones as well; we'll talk about that. But the point is that we can really handle anything new or anything old, and so it gets exciting to see this work across a very large majority of software.
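Circling back to the decision trees mentioned a moment ago, a minimal way to picture deterministic path selection is a lookup that either resolves fully or refuses outright, and never guesses. Every entry in this toy tree is invented for illustration; the real tree fans out from a handful of base paths to roughly 3,100 concrete ones:

```python
# Illustrative decision tree: source OS -> framework -> destination
# resolves to a base-image choice, or fails loudly so the run stays
# deterministic. All entries are made up for the sketch.

DECISION_TREE = {
    "linux": {
        "java":   {"aws": "amazonlinux-corretto", "azure": "mariner-openjdk"},
        "python": {"any": "python-slim"},
    },
    "windows": {
        "dotnet": {"any": "servercore-dotnet"},
    },
}

def resolve_path(source_os, framework, destination):
    """Walk the tree; raise instead of guessing when no path exists."""
    try:
        options = DECISION_TREE[source_os][framework]
    except KeyError:
        raise ValueError(f"no deterministic path for {source_os}/{framework}")
    base = options.get(destination, options.get("any"))
    if base is None:
        raise ValueError(f"no deterministic path to {destination}")
    return base
```

The refusal branch is the important design choice: an unknown combination surfaces as an explicit error to be handled, rather than a silently different output from one run to the next.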
This also allows us to understand those dependent relationships: certain services need to start before others, and certain components need to be separate. We're able to achieve that in the destination runtime. To go a little bit further into that: classification, configuration, and containerization, those steps in the software, are very unique. As we gather our input and really understand the breadcrumb artifact that we have, we are able to classify that software. And again, our ability to provide a list of software as it exists is very straightforward because of this capability, so you can really just go and select it as you leverage our more polished CLIs and UIs. Now that we have an understanding of the application, we're able to evaluate: what software language is it using, what are the libraries and dependencies, what are the static and dynamic variables, where is the storage, is it local, is it remote, and where does it belong at the destination? And so, you've seen a very simple use of the CLI, but there are a lot more configuration options that can be used to ensure that you reach your destination in a single execution.
Sometimes we're using this iteratively so that we can understand the software ourselves; this is great for dev and test, and for really understanding those intricacies very quickly in your environment. Other times we really just want a very fast modernization. And whether you're just migrating, moving over from one cloud to another; modernizing, where you're actually using that container artifact to achieve new technical capabilities; or transforming, where you're using the software to reach new business capabilities: all of that is possible within this migration, modernization, or rather transformation. And the reason is that once we transform, we can imagine it much like ChatGPT, whose input is typically text and whose output is typically text, though it could also be code, right? Our input is software, but our output is software as well. So the ability to put a serverless API, or other types of constructs, around this software and achieve new experiences for customers is also possible. So again, this is why we need to understand the software at this level of intricacy, because performance also matters. Our ability to scale to zero also matters. And so understanding at this level of granularity gets the software functioning well.
For your personal use of BCD, just understanding that framework: you may not be able to achieve everything the software achieves, but your ability to understand the software in a way that ensures that you know how to deterministically get it over to the destination is a feat in itself. And so once again, when we containerize, we're able to ensure that there's proper instantiation, proper ordering, and clustering if need be. That could be the clustering of a three-tier web app, or the pattern of a scale-out database, and so on and so forth; we'll talk more about that. But the point is that we need to understand those relationships so that we can achieve the scalability. So that you can continue to see how this framework is working, we want to delve deeper into the container manifest,
and we're going to do that with in-house developed software. With this software, again, we try to bring up some edge cases. There are plenty of new use cases where we've taken something that's only been out there for a couple of years, or maybe the software is greenfield; again, code-to-container is also a very exciting capability that we've put out there. But here we want the ability to really take software as it exists in its current environment. In this particular case, this software was created 20 years ago, and every legacy system vendor, and everyone else, said it wasn't possible to be done. And we actually had done it a few weeks prior to those statements being made. So it's exciting to see what you can achieve when you truly understand this, plus some of the computational truths that we'll go through as well. So with this example application, let's look at how, when we take software through this approach, we're able to get it to its destination. In this case we'll look at the destination container artifact to see how it was able to be organized by leveraging BCD. So here we're leveraging the broker app, and we can see that binary and configuration allow us to evaluate the application. In this case we're moving the data over to Azure SQL; we recognized that ahead of time and are able to ensure that the software executes across that requirement.
Here we can see that the container manifest is able to account for the application-specific components as well as those that are purely configuration. This configuration and these pieces are unique to the instantiation of the software; for multiple environments we may need to create separate configuration, and this capability is made possible through the BCD framework and Evolute Migrate. So again, here we're able to see that the configuration can be tailored accordingly, and that might become variables passed through queries, and so on and so forth, as we mature that software artifact. As we look at the data components of this, we can see that the customer-specific data will reside in Azure SQL. We can also see that in our configuration we need to point to that data source, and that we're able to do that on the fly as part of the BCD framework and the configuration that's occurring there. And so when we really understand how BCD works, we can start to see how it not only creates a very organized way to ensure that the software reaches the destination, but also creates very organized manifests and makes it easy to scale in a cloud or
Kubernetes environment. Now, let's look at a couple of deterministic architectures that we can achieve for our scalability patterns. Now that we have our software effectively migrated, the ability to take it into its destination architecture is important, and we really want that to be quite deterministic. There are many ways to scale applications, and this doesn't limit that ability, but we'll talk about the use cases that really serve about the 80th to 90th percentile of software. When you really look at the separation of software (we'll talk about the computational truths that make that possible), what's also important is to understand that the delineation typically occurs via a number of patterns: by the application function, by resource demand, or by a component such as a package.
What we really see here is that ability to separate the application. For a shopping cart application, we may have to separate out the component that is the actual product library. Those may be separate components, whether the application was made in a macro way or was separated into its own native services from the beginning; understanding that relationship will ensure that when we scale specific instances or components, we're scaling for that capability. When we look at actual resource demand, we look at scalability based on hardware demand: that can be CPU, memory, disk, or network, really ensuring that we're able to leverage the right architecture pattern based on that. And last but not least, take a package: something as simple as NGINX may need to be separated or scaled independently to achieve the level of demand coming to the actual end component. In the
case of the application function, we can see that there might be a multiplicative relationship. Say it's an app/web/DB stack: we might scale the application and web services more than we scale the database, and from those relationships we understand our scalability pattern. We can use Deployments and Services to really ensure that those reach the correct destination architecture. When looking at actual resource demand, we recognize that we may need to create another instance, or another 100 instances, based on resource demand; but really, we can depend on our infrastructure components to do that, so we might leverage a horizontal pod autoscaler to ensure that we're continuously meeting resource demand. When we look at the actual separation by component, again, this is just an n-plus-one pattern, where the deployment count may be what's being changed to ensure that we reach our destination.
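To make those two patterns concrete (all names, images, and numbers below are placeholders I chose, not output of the tool), a Kubernetes Deployment for the app tier plus a HorizontalPodAutoscaler for resource-driven scaling might look like:

```yaml
# Illustrative only: scale the app tier by function (replica count)
# and by resource demand (autoscaler). The database tier would get
# its own Deployment or StatefulSet with a smaller, quorum-friendly
# count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic-app
spec:
  replicas: 3            # scaled more aggressively than the DB tier
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
      - name: petclinic
        image: registry.example.com/petclinic:migrated   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic-app
  minReplicas: 3
  maxReplicas: 100       # matches the "another 100 instances" scenario
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The Deployment captures the function-driven pattern (you pick the ratio between tiers), while the autoscaler captures the resource-driven one (the infrastructure picks the count).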
And really, with this simplicity of component, this may also be the case in a scale-out database, where you're trying to achieve quorum and you need to ensure that there is a minimum amount, or an odd number, of replicas so that you reach that.
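As a tiny illustration of that quorum constraint (the numbers are generic, not tied to any particular database):

```python
# Quorum sizing for a scale-out datastore: a majority must survive,
# so replica counts are kept odd and at least 3.

def quorum_size(replicas: int) -> int:
    """Smallest majority for a given replica count."""
    return replicas // 2 + 1

def rounded_replica_count(desired: int) -> int:
    """Round up to an odd count, minimum 3, so quorum stays clean."""
    n = max(3, desired)
    return n if n % 2 == 1 else n + 1
```

So a request for 4 database replicas would be bumped to 5, because 4 nodes tolerate no more failures than 3 while costing an extra machine.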
So again, no matter what the requirements of a topology are, we can achieve this by understanding the scalability pattern and leveraging the framework to ensure that the resulting manifest will achieve this capability.
Now for a few computational truths. When we're really trying to do these modernizations, we've come across so many scenarios where someone has said, hey, this is impossible, it can't be done; or the vendor who created the software says it can't be done; or companies have spent over twelve months trying to do it and say, look, this is just not possible. In these cases, we kind of love these challenges, obviously. But what's most important when you come across these roadblocks is how you approach the problem. What we see is that typically networking and/or security will be the biggest blocker, so you do have to understand how to overcome those gaps. Typically the easiest way to do that is by understanding how it works in the destination environment. Our Kubernetes architectures and our cloud native architectures are quite mature, so they are able to achieve every bit of the security and communications scalability of any previous environment. You can also look at it through OSI. Now, OSI basically
states that any two disparate systems can communicate.
When we look at OSI, some may know the layers as physical, data link, network, transport, session, presentation, application, or "Please Do Not Throw Sausage Pizza Away"; however you remember it, just remember that whenever you're looking at software going from source to destination, understand how each layer is achieved in the source and how it is achieved in the destination. Now, this is actually how Evolute Migrate implements BCD: it creates native translation maps across those layers. So we know how something is supposed to behave in its source, we know how it's supposed to behave in the destination, and that's how we can achieve that scale. And so this again might be valuable for you as you reach that capability.
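As a sketch of what such a translation map might record (every entry here is a hypothetical example of my own, not the tool's real schema), the idea is simply to pair how a behavior is realized at the source with how it will be realized at the destination, layer by layer:

```python
# Hypothetical layer-by-layer translation map: how a behavior is
# realized at the source VM versus the container destination.

TRANSLATION_MAP = {
    "network":     ("static VM IP, e.g. 10.0.0.12", "ClusterIP Service name"),
    "transport":   ("TCP 7001 bound on the host",   "containerPort 7001 behind the Service"),
    "session":     ("OS-managed keepalives",        "same semantics on the pod network"),
    "application": ("JDBC to a local database",     "JDBC URL injected via configuration"),
}

def translate(layer: str) -> str:
    """Describe the source-to-destination mapping for one OSI layer."""
    source, dest = TRANSLATION_MAP[layer]
    return f"{source} -> {dest}"
```

Once every layer has a source entry and a destination entry, a "blocker" like networking or security becomes a row to fill in rather than a reason to stop.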
The other big thing that we find informs how we leverage this framework to modernize is really the relationships between components. Whether these are inter-relationships between networked components or networked applications, or intra-relationships, such as IPC calls happening inside the software, we can understand those things to ensure that when components reach their destination, they're not only in a container artifact but also able to meet the correct scalability pattern. So when you're parsing your own configuration, it might be helpful to understand what relationships are occurring within this software and between software, and to really focus on that to understand the scalability patterns. And last but not least: understand that operating systems are abstractions.
When we start to look at an operating system, we typically say, oh, it's impossible because of what's been supported on this operating system. And it's very interesting to see that operating systems, and their developers, have worked for many years to ensure that there is compatibility across kernels, and that the relationships between those packages and interfaces are there. So understanding a little bit about how those work is very valuable, and it really is why, whenever you look at actual attempts to modernize, you're able to achieve a lot across these capabilities. So OS and software modernization are key parts of digital transformation, and being able to achieve that capability at scale is very important and very valuable. Again, the key piece here is that when you're looking at the legacy system, look at it less as a black box which you can't change, and look at it more like an interface, a set of packages and capabilities that must be complied with. Typically, in our regression testing, we're able to show that everything, end to end, will be successful. Last but
not least, let's talk about some of the scalability patterns that exist when we move our software into the destination artifact. There are many different outcomes that we can achieve now that we understand the software at such intricacy, as we mobilize it to its next-generation platform or across cloud environments. We typically see this in the area of enterprise digital transformation, where we might want to achieve software or OS modernization and really ensure that we're on supported, latest versions where possible. As far as edge computing goes, when we look at things such as multi-access edge compute and its latency requirements, we start to understand how we can create distributed workloads. We start to understand how we can actually partition the software: for the data component or database component, we may put a read-only replica at the edge and allow write pass-through to occur. So really, meeting the latency-sensitive nature of the application is possible. Now, sometimes this means that the software may not be able to behave as a cloud native application, but it can still participate in a cloud native environment. And so again, these types of capabilities are very important to accelerating the rate at which we can leverage software in distributed environments such as the edge.