Transcript
Hello everybody, thanks for joining my talk at Conf42. Today we'll discuss instance provisioning
and hot migration in multicloud environments. I'm Gilbert Cabillic. I'm the CEO and founder of ScaleDynamics,
a startup where we provide a managed container platform. So the first question is: why are multicloud and hybrid important in
the cloud native space? This chart shows you companies' interest: as you can see, interest in multicloud strategies for cloud usage keeps growing, and 97% of companies require multicloud and hybrid management. And why multicloud? Because each application has different requirements, so with multicloud you can select the best resources at any given time. And of course, we will also discuss how you can move to the best resources each time.
When you want to select a resource, you have multiple criteria. Of course you have the performance criterion, where you need to select the best hardware machine for the job in terms of CPU, RAM size, GPU, disk type, disk speed, network, et cetera: purely performance considerations. You can also have a cost consideration. Another consideration is managed services, additional managed services like databases or APIs that the provider offers, which means you must be on a specific cloud to access them. You may also need to select according to the certification requirements of the workload you are going to execute; for example, you need to be certified to be able to store medical data. And finally, you can also consider where to execute your app according to its carbon footprint, if the carbon footprint is a requirement for you. So performance, cost, compliance, and carbon footprint: this is why multicloud is important to companies in the cloud native space.
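To make the selection idea concrete, here is a purely illustrative Node.js sketch of ranking candidate resources by these criteria; the field names, weights, and numbers are invented for the example and are not part of our platform:

    // Illustrative only: rank candidate resources by the criteria above.
    // All fields, weights, and figures here are invented for the example.
    const candidates = [
      { name: 'aws-london-small', cpus: 2, pricePerHour: 0.05, gCO2PerHour: 12, medicalCertified: false },
      { name: 'ovh-paris-medium', cpus: 4, pricePerHour: 0.07, gCO2PerHour: 6, medicalCertified: true },
      { name: 'azure-berlin-small', cpus: 2, pricePerHour: 0.06, gCO2PerHour: 9, medicalCertified: true },
    ];

    // Higher is better: reward performance, penalize cost and carbon.
    const score = (r) => r.cpus - 10 * r.pricePerHour - 0.1 * r.gCO2PerHour;

    // Hard requirements first (e.g., certification to store medical data),
    // then pick the best-scoring remaining resource.
    const best = candidates
      .filter((r) => r.medicalCertified)
      .sort((a, b) => score(b) - score(a))[0];

    console.log('best resource:', best.name);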
The second point, in addition to this multicloud approach: today developers want to use containers, and companies are on the road to doing more and more containers. Traditionally, developers built virtual machines, which are, let's say, monolithic applications. Containers provide full agility for developers: you can define specific components, you can reuse these components across multiple applications, and you can patch each component individually. If one component fails, your whole application runs in a degraded mode instead of going down entirely. So containers are the path companies are taking, because they provide a lot of advantages in terms of agility and productivity.
And this is why containers are the road forward. So when you consider these two points, that multicloud is important and containers are important, what comes to mind is to have a way to run containers in a multicloud approach.
And this is what we do at ScaleDynamics. We have built a new managed container platform where you can provision cloud resources, both public cloud resources and on-premise resources for hybrid setups. We provide a way to deploy your containers on top of them, metrics to analyze the behavior and state of each container running on those resources, and a way to move your containers across the different resources, so you can select at any time which one is best for your application. So now
let me show you how it runs in action. Our platform is composed of two pieces: a console where you can manage your members, create your projects, and create the cloud environments where you want to execute your containers; and an SDK, available on Linux, Windows, and Mac, which gives developers a way to deploy and manage their containers day to day.
So instead of describing each piece, let me show it to you in action. Let's start with the console.
The console is where you will find your projects, your members, your API keys for your CI/CD pipelines, and also the cloud resource management. Let me show you: for example, in the member space you can invite members and set the rights of each member. So you can really model an organization, whether a small team or a whole company, whatever matches your size. You decide, and you manage your members' rights. Then you create projects. What is a project? A project can be a backend, a website, or a microservice just for payment, for example. You decide what a project is; it's your way to encapsulate your work as a set of containers.
So let me, for example, create a new project, Conf42, that I will use for the demo. Once you have a project, and of course I will deploy some containers in it, you need to define where you want your containers to be executed. This is what we call the environment space. An environment is where you want to deploy: for one environment, you define which resources you want to use and run your deployed containers on.
So let me create an environment. While creating it, I have to select what I want to deploy. On our platform we can deploy static hosting, a Node.js server, an HTTP Docker container built from a Dockerfile, or a Node.js module, which is a standard Node.js module, a service you call not over HTTP but by making direct JavaScript calls.
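To make that last option concrete, here is a minimal sketch of the idea; the helper name is hypothetical and the real module interface may differ. You deploy a plain Node.js module, and a caller invokes its exported functions as ordinary JavaScript calls instead of hand-writing HTTP requests:

    // payment/index.js -- an ordinary Node.js module with an async API.
    module.exports = {
      async pay(orderId, amount) {
        // ...charge the customer, record the transaction...
        return { orderId, amount, status: 'paid' };
      },
    };

    // Conceptually, another app then calls the deployed module directly:
    //   const payment = await loadDeployedModule('payment'); // hypothetical helper
    //   const receipt = await payment.pay('order-42', 19.99);
    // rather than building an HTTP request and parsing the response itself.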
So let me show you an example: I will deploy a managed Node.js server.
Looking at the second step: this is where I want to run it. I have access here to multiple providers. I have Azure, AWS, GCP, and OVH, which is a French provider, and we can add some of your custom providers if required. So let's say I want to be on AWS. I can select the region where I want to run the resource, let's say London. Then I can select the type of resource. These are predefined resources in our catalog; by default everybody uses these, but we can extend the catalog for each customer if they need, say, more CPUs, more GPUs, more RAM, or another specification: we can provide those resources. So once you have selected the provider and the region, you select the right configuration and click order. When you click order, the resource is automatically provisioned and configured to be able to run deployed containers on top of it. It's as simple as that.
You don't have to do anything else: you pick a provider, a region, and a configuration, you order, and the platform automatically provisions and sets up the resource for you. Instead of doing this, we also provide shared resources, which are free just for testing and evaluation purposes. So let me use a shared resource here, where I can deploy a Node.js server. It is as simple as that. So, as I showed you, I created an environment where I can deploy a container on top of a shared resource. To be able to deploy, I need to install the SDK, which is available on Linux, Mac, and Windows, and is named Warp. With the warp commands, developers can manage their containers: deploy, build in their CI/CD, access the containers' logs, and get deployment information, everything required to manage their containers on top of the resources.
So let me show you, for example, all the commands. You can deploy, you can manage your deployments, and of course you authenticate before deploying, because depending on the rights you have, you cannot do everything. You can also control the build configuration of your project, and more. Instead of walking through every command, I'm going to use a getting-started guide we provide in our documentation.
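As an aside, because the CLI is scriptable, a CI/CD step can drive it too. Here is a sketch of such a step in Node.js; apart from deploy, the exact command names and flags are my assumptions, so check the warp help and our documentation for the real ones:

    // ci-deploy.js -- a sketch of a CI step driving the warp CLI.
    // Command names and flags below are assumptions, not the documented CLI.
    const { execSync } = require('child_process');

    const run = (cmd) => {
      console.log('> ' + cmd);
      execSync(cmd, { stdio: 'inherit' }); // throws, failing the CI job, on error
    };

    // Authenticate with an API key created in the console, then deploy.
    run('warp login --api-key ' + process.env.WARP_API_KEY); // hypothetical flag
    run('warp deploy');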
With this getting-started guide, in less than three minutes you can deploy your Node.js server and make it live on the shared resource. So instead of doing all that by hand, let me just clone it, go into the server directory, and install. The install also pulls in all the required packages, like Express.js, to create the Node.js server, as well as the warp CLI, which is already included here. When it's done, we are ready to deploy. But first, let me have a look at the source code of the Node.js server. Let me just change the hello message: I'm going to say hello Conf42 world, which means that this is the code that will be deployed.
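For reference, the server in the getting-started template is essentially the canonical minimal Express app; this sketch approximates it, and the port handling is my assumption:

    // server.js -- a minimal Express server, roughly what the getting-started
    // template contains (the exact template code may differ).
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => {
      res.send('Hello Conf42 world');
    });

    // Assumption: the platform injects the port to listen on; 8080 is a fallback.
    const port = process.env.PORT || 8080;
    app.listen(port, () => console.log(`listening on ${port}`));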
So once done, we can deploy very easily. The server is running fine, it's set up. To deploy, I just run the deploy command for the talk. I'm in my organization, and I use the Conf42 project for the demo; it is just a project I have configured in the console. What the platform is doing now is taking the Node.js server source code, building the Docker image, and then deploying that Docker image on top of the shared resource I selected in the console. And that's it.
At some point, as you can see, I had to enter a hostname. In this specific case I didn't enter one. This is where, for example, you would put toto.com, your own company domain, or whatever, and you can manage your custom domains by setting the right domain name. If you leave it blank, we use a testing domain that we create just for testing purposes. We are now at the end of the deployment: the Docker image has been built and installed, and the container is going to be started, with the DNS and the certificate set up. Then we will be able to access our Node.js server live in the cloud, on the specific resource we just set up in the console.
So now the server is deployed. If you want to have a look at the server, I just have to open it, and you will see: Hello Conf42 world from Node.js, and it's live. You have a URL and you can access it as a Node.js server. So as you can see, it's very simple to deploy things. You can deploy from a Dockerfile, from a Node.js server, or as static assets. And what does the platform do when you deploy? It builds the final Docker image, pushes it to the resource you have selected, and makes it live.
That's what the platform does. So let me get back to the console, which, as you remember, is where I created the demo environment in the Conf42 project and deployed my Node.js server. Suppose that, now that it's running on a shared resource, I want to move it elsewhere, let's say to an AWS resource. So let me select where I want to go; as you can see, you have AWS, GCP, Azure, or OVH. Let me select AWS, let me select the region I want, let me select the configuration, and click order. When I click order, what happens is that we move the container from the shared resource to the resource we've just selected.
Regarding the use case: suppose you are on a small configuration with minimal CPU, and huge traffic is coming, so the load of the machine is getting near 80%. What you should do is move to a bigger resource. So you can use the move button to upgrade to the right resource, or, if the capacity is no longer needed, to downscale to a smaller resource. You can also use the move button to go from one cloud to another cloud's resource, say from an AWS small to an Azure small. Why? Perhaps due to the cost, perhaps due to the performance of the resource, or any other criterion. So the move button can move containers across clouds and across cloud regions, and it is very flexible: internally, the platform manages all the traffic redirection for you, you don't have to take care of that, and the next deployment will go to the last resource you selected. And that's it. It's super simple, and it opens a way to go to any cloud, or move to any cloud, according to your criteria. This is really the objective of the platform and what we provide.
But the question now is how you decide to move. To decide, the platform lets you look at the metrics of your container running on top of the resource, and according to those metrics you can decide what to do. For example, let me show the metrics of this application, looking at the CPU load. If you look at the CPU load, the average is low, so I am not required to upscale to a bigger resource: the CPU load is good, the memory usage is good. When I look at the data, it's very good too; there are not too many requests on that resource. So according to these metrics, I don't have to change anything to get a better latency or to support increasing traffic. Thanks to these metrics, you can easily tell whether you need to move up or whether you can move down. For example, if your CPU is at 80%, you should of course upgrade; but if your CPU gets back to 10%, you can downscale to a smaller resource. Indirectly, that means you control the cost: you don't have to pay for the maximum, you can pay for the right thing at the right time.
And the metrics are there to give you the information to decide. Among the other metrics we provide: the number of requests, the downtime, the number of restarts of your Docker container, the request execution time, and the request data-in and data-out volumes. So, everything you may want to consider when optimizing your container.
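As a sketch, the rule of thumb above, upscale around 80% CPU and downscale around 10%, could look like this; the metric names and thresholds are illustrative, not a platform API:

    // Illustrative move decision based on the metrics discussed above.
    function decideMove(metrics) {
      const { avgCpuLoad, avgMemoryUsage } = metrics; // percentages
      if (avgCpuLoad >= 80 || avgMemoryUsage >= 80) {
        return 'move up'; // upgrade to a bigger resource before saturating
      }
      if (avgCpuLoad <= 10 && avgMemoryUsage <= 10) {
        return 'move down'; // stop paying for unused capacity
      }
      return 'stay';
    }

    console.log(decideMove({ avgCpuLoad: 85, avgMemoryUsage: 40 })); // move up
    console.log(decideMove({ avgCpuLoad: 5, avgMemoryUsage: 8 }));   // move down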
The other metric we provide is a carbon footprint estimation. It is in real time, and we have designed a new way to compute the carbon footprint based on the results of an American consortium. When we compute that estimation, we take into account the type of the resource, meaning the architecture of the machine, the amount of RAM, the type of CPU, and the type and amount of disk; we have such a model for every resource we manage on the platform. Then, to compute the estimation, we also take into account the real CPU load of the machine and the real data that has been received and emitted by the machine, which also impacts the carbon footprint. By using these two inputs, we get a fairly precise estimation over time. It's pretty similar to what Google's carbon footprint tooling provides, but you can look precisely at your carbon footprint over time: across the day, the week, and the months. So for companies that are interested in optimizing their carbon footprint, this is another criterion you can use to move from one resource to another, because perhaps moving to another region, for example, has a smaller carbon footprint. And we give you access to all this data so you can decide.
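In outline, an estimation of the shape described above combines a static per-resource model with live usage. Every coefficient in this sketch is a placeholder, not our actual model:

    // Shape of the estimation: a static model of the resource (machine
    // architecture, RAM, CPU, disk) plus the live CPU load and data traffic.
    // All numbers below are placeholders, not the real model.
    function estimateCarbon(resourceModel, usage) {
      const base = resourceModel.idleGramsPerHour; // static part, gCO2e/h
      const cpu = resourceModel.fullLoadGramsPerHour * (usage.cpuLoad / 100);
      const data = resourceModel.gramsPerGB * (usage.gbIn + usage.gbOut);
      return base + cpu + data; // gCO2e for one hour
    }

    const grams = estimateCarbon(
      { idleGramsPerHour: 5, fullLoadGramsPerHour: 20, gramsPerGB: 0.3 },
      { cpuLoad: 35, gbIn: 1.2, gbOut: 4.5 },
    );
    console.log(`~${grams.toFixed(1)} gCO2e per hour`);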
So thank you for following this talk; I'm finished. Thanks a lot. As you have seen, I showed you how you can really do multicloud, and container management on top of multicloud and hybrid, using our platform, ScaleDynamics. I hope you enjoyed the show. Feel free to visit our website, scaledynamics.com, or send me an email if you want more information. Very pleased, and bye bye. Have a nice day at Conf42.