Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, my name is Paweł, and together with my friend Ala Reniewicz we want to show you the most innovative multicloud deployment platform. Yes, I know I'm an author of that platform, or one of the co-authors, but I really do think that it is a very innovative thing. Let's start.
What is the Melodic platform? It is a single, universal platform for automatic multicloud deployment. You can simply and automatically deploy to Azure, AWS, Google Cloud, to any OpenStack-based cloud, and to other cloud providers. Everything is done automatically, and even more, the usage of resources is optimized: our platform selects the most optimal set of resources and the most optimal cloud providers to deploy your application.
How does it work? Melodic is probably the simplest and easiest way to use the multicloud. You don't need to learn Azure, AWS, Google Cloud Platform or OpenStack; instead you have one cloud-agnostic, provider-independent model of the application, and it can be deployed automatically to any cloud provider. Melodic supports virtual machines, containers, serverless functions, which I will especially cover in today's presentation, and also big data frameworks, and we can deploy them to different cloud providers. As I already said, the deployment is fully automatic. It is probably the only multicloud platform which supports fully automatic deployment, and for sure the only platform which does advanced optimization of cloud resources using machine-learning-based algorithms.

The first step is to model the application, and for that we have developed CAMEL, the Cloud Application Modelling and Execution Language, which allows you to model both the application and the infrastructure. Even more, it allows you to model the requirements, constraints and optimization goals for the application. So everything is modeled in one model, and this model is used to deploy the application to various cloud providers and for optimization purposes. CAMEL is quite similar to TOSCA, but it extends TOSCA a lot: it allows modeling components, connections between the components, security rules and so on, and of course the infrastructure requirements. So we can model the requirements, the constraints and the utility function whose value serves as the goal for the optimization.
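As a rough illustration of what such a model captures (components, resource requirements, cardinality bounds, an optimization goal), here is a hypothetical sketch in Python. Real CAMEL is an Eclipse/EMF-based modeling language with its own textual and XMI syntax, so every name and value below is purely illustrative:

```python
# Hypothetical, simplified illustration of what a CAMEL model captures.
# This is NOT CAMEL syntax -- just a data-structure sketch of its concepts.
model = {
    "components": {
        "spark-worker": {
            # Infrastructure requirements for this component.
            "requirements": {"cores": {"min": 2, "max": 8}, "ram_gb": {"min": 4}},
            # How many instances of the component may exist.
            "cardinality": {"min": 1, "max": 10},
        }
    },
    "providers": ["aws", "azure", "gcp", "openstack"],
    # The optimization goal is a user-defined utility expression.
    "optimisation_goal": "maximise utility",
}

assert model["components"]["spark-worker"]["cardinality"]["max"] == 10
```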
The utility function says which deployment is the best. How do we know that a given deployment of the application, a given set of resources, is the best one? One criterion is of course the cost. But if we have only the cost as the goal, as the utility function, then the best solution is simply not to deploy anything, because in that case the cost will be zero. In reality, of course, we want to deploy our application, so it is usually a trade-off.
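The degenerate cost-only case and the trade-off can be made concrete with a toy utility function. The formula, weights and metric names below are invented for illustration; Melodic lets users define their own utility expression in the CAMEL model, and this is just one possible shape:

```python
def utility(cost_per_hour, avg_response_ms, w_cost=0.5, w_perf=0.5,
            max_cost=10.0, target_response_ms=200.0):
    """Toy utility: a normalised trade-off between cost and performance."""
    cost_score = max(0.0, 1.0 - cost_per_hour / max_cost)        # cheaper is better
    perf_score = min(1.0, target_response_ms / avg_response_ms)  # faster is better
    return w_cost * cost_score + w_perf * perf_score

# A cost-only utility (w_perf=0) is maximised by deploying nothing at all:
assert utility(0.0, float("inf"), w_cost=1.0, w_perf=0.0) == 1.0
# With performance in the mix, zero resources no longer win:
assert utility(2.0, 150.0) > utility(0.0, float("inf"))
```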
But before this optimization we need to know the context of the application: how many resources are used, what the performance is, what the cost is and so on. We do that through metric collection. Then we need to have the utility function, so we know what the best deployment is. In Melodic it is possible to create the utility function in a very flexible way, and the utility function is based on business value: not on CPU usage or memory usage, but on the average response time to the user, the average workload and so on. It is possible to add any custom metric for the application, and this custom metric, measured by a custom sensor, can provide a business-related metric to the platform, which is then used for the optimization. As I said, it is usually a trade-off: between cost and performance, cost and availability, cost and other requirements. Everything is modeled in the CAMEL model and automatically optimized by Melodic, so you can consider Melodic your smart and autonomic DevOps operator. How does it work?
The first step is manual: you need to model the application in a CAMEL model. The good news is that this needs to be done only once. Once you have modeled your application and set the initial values of the parameters, everything else is done automatically. After you submit the model, Melodic calculates the initial deployment plan for the application. By deployment plan I mean which cloud providers to use, how many resources, what type of resources and so on. Everything is calculated automatically. After finding the most optimal deployment, the application and infrastructure are automatically deployed to the selected cloud providers, so virtual machines and other resources are provisioned.
Then the components are deployed, fully automatically, and metric collection and monitoring are started. The Melodic platform collects the metrics, and based on their values it decides whether a reasoning process should be started to find a new solution. Usually the decision about starting a new reasoning process is based on a threshold: if a given metric has a value above a certain threshold, then a new reasoning process is started and Melodic looks for a new solution. If a new solution is found, it is automatically deployed, but not from scratch: the current solution is reconfigured, so new resources are added, removed or replaced if necessary.
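The threshold rule described above can be sketched in a few lines. The metric names and threshold values here are made up for illustration and are not Melodic's actual API:

```python
def should_reconfigure(metrics, thresholds):
    """Trigger a new reasoning cycle when any metric crosses its threshold.

    Hypothetical sketch of the threshold rule; `metrics` maps metric
    names to current values, `thresholds` maps names to upper limits.
    """
    return any(metrics.get(name, 0.0) > limit for name, limit in thresholds.items())

thresholds = {"avg_response_ms": 500.0, "cpu_load": 0.9}
assert not should_reconfigure({"avg_response_ms": 320.0, "cpu_load": 0.4}, thresholds)
assert should_reconfigure({"avg_response_ms": 810.0, "cpu_load": 0.4}, thresholds)
```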
Everything is done on the fly; it is a fully automatic process. And how is Melodic built?
I think this is very interesting, especially for developers. As a cloud-native solution, Melodic has a pure microservice architecture, as you can see, and these microservices are orchestrated by a BPM process. We have implemented all of the logic as business processes using Camunda, and for managing and invoking the components we are using Mule ESB as the ESB, with ActiveMQ and Esper as the monitoring plane. Thanks to that we are very flexible: we can change the logic simply by drag and drop in the BPM tool.
We avoid hard-coding too much logic into the system. We evaluated different integration methods: point-to-point, so directly from the BPM engine to a given microservice; queue-based integration; and an ESB. Finally we decided on the ESB with BPM integration, as it is currently implemented in Melodic. We have an extended comparison of the different methods and how they fulfill the requirements, including a points-based scoring and ranking.
As I said, in the end we evaluated three ESB solutions and four BPM engines, and the final solution is built on Mule ESB and the Camunda BPM workflow engine. We have documented why we selected those tools, and it could be a very interesting read if someone wants to build this type of solution. We compared both ESB and BPM solutions, and based on that we built the whole platform, which is then used to deploy the applications. Today I want to show two example use-case applications, and then Ala will go through the Melodic platform and show a real deployment. One application is image recognition on the fly: the images are provided by special augmented reality glasses connected directly to the cloud, and we use a serverless function which classifies the images in near real time. Melodic decides how many resources, or how many instances, should be running at a given moment. The second part, the planning part, is deployed using the Spark big data framework, which is also managed by Melodic.
Melodic decides how many workers should be used. The second use case is a big data application, a Spark application, used by one of the Polish universities to analyze genome data. It compares the genome of a given person to reference genome data to find mutations. The genome data are collected from patients, and after sequencing into the digital form of the genome, the operator starts the processing based on the number of patients and the amount of data to process. Once processing starts, Melodic looks at its parameters: it collects the metrics and checks how long it takes to process the data, and if it takes too long, new resources are added and the process repeats. So Melodic continuously collects the metrics and checks the expected, estimated time to finish the job. If that time is too long, then again new resources are added; if the time is below the expected time, so the business goal, the configuration is kept stable, the data are processed, and at the end the data are removed. As a summary, this is how the workflow in Melodic looks: you model your application in CAMEL, then deploy the Melodic platform, then submit the model to the platform and press the big green button in Melodic; Ala will show you how to do that. Melodic then automatically finds the best solution and deploys your application. After that, just connect to your application and enjoy.
And now Ala will present the demo of Melodic: the demo of the platform and also the deployment of an application, how it is done and how to handle it. So thank you, and Ala, please go on.

Now I would like to present how to automatically deploy your own application with the Melodic platform. I will perform a deployment of a Spark-based application; we will monitor the application metrics and observe the reconfiguration process, which Melodic performs for optimization reasons. My Melodic platform is installed on a virtual machine on AWS, and it is up and running. I'm logged in.
Melodic users are managed by LDAP. We have three possible user roles: a common user, who can perform application deployments; an admin user, who manages user accounts and also has all the privileges of a common user; and a technical user, who is used only internally by Melodic components and is not important from the client's point of view.
The first step in Melodic usage is defining the cloud settings. In the provider settings menu we can check and update provider credentials and options, as we can see in the cloud definition view for a provider. Filling in these values is required in order to perform a successful deployment, because they are the basis for contacting the providers, for example when creating virtual instances. In my environment I have already defined these values for the Amazon Web Services and OpenStack providers. In these definitions we provide cloud credentials and properties, for example settings for the Amazon security group, or the set of private images which we would like to use in our deployments. When our platform is prepared and configured, we can go to the deployment bookmark.
Today I would like to deploy the genome application which was described by Paweł a moment ago. Before deployment we need to model our application with its requirements in a CAMEL model, which is a human-understandable and editable form. After that, the model is transformed to the XMI format, the form understandable for Melodic, and we upload this file here by drag and drop.
Now our model is being validated, and after that it will be saved in the database. In a minute I will be asked to fill in the values of the AWS developer credentials. Providing these credentials is required in order to save the results of our genome application in an AWS S3 bucket, but for security reasons we shouldn't put them directly in the CAMEL model file. So we use placeholders in the CAMEL file, and after that we need to provide the values here. In this case it is not the first upload of such a model on this virtual machine, so these variables already exist in a dedicated secure store. I can verify them, update them if they have changed, and after that choose the save button.
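The placeholder idea can be sketched in a few lines: the model file keeps tokens instead of secrets, and the real values live in a secure store. The `${...}` token syntax, the store layout and the dummy values below are assumptions for illustration, not Melodic's actual mechanism:

```python
import re

# Hypothetical secure store; values are dummies, not real credentials.
secure_store = {"AWS_ACCESS_KEY": "AKIA-DUMMY", "AWS_SECRET_KEY": "dummy-secret"}

def resolve(model_text, store):
    """Replace ${NAME} placeholders in the model text with stored values."""
    return re.sub(r"\$\{(\w+)\}", lambda m: store[m.group(1)], model_text)

assert resolve("key=${AWS_ACCESS_KEY}", secure_store) == "key=AKIA-DUMMY"
```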
In the last step I need to choose which application I want to deploy and which cloud providers I want to use. Here it is also possible to run the application in simulation mode. Simulation mode is for the case when we don't want to deploy real virtual machines on the provider platform, but only want to check which solution would be chosen by Melodic: we manually set the values of the metrics in the simulation part and observe the result. But today our aim is to perform a real deployment of the genome application, so I leave this option turned off. We would like to deploy genome only on AWS, so we choose this cloud definition; thanks to that, Melodic has credentials for this provider. After that we can go to the last step, where starting the deployment is available.
After starting the process, in a minute we are moved to the deployment process view, where we can observe its progress. In the meantime, I would like to briefly describe the application which is being deployed by Melodic. Genome is a big data application which performs some calculations and saves the results in an AWS S3 bucket, so we need to provide developer credentials for AWS. Genome's performance is managed by Spark: in the genome application we use Spark as the platform for big data operations, which are performed in parallel on many machines and managed by one machine named the Spark master. The Spark master is available by default on the Melodic platform. Melodic creates the proper number of Spark workers as virtual machines, considering our requirements from the CAMEL model. Thanks to measurements of the application metrics, Melodic makes decisions about creating additional worker instances or about deleting unnecessary ones. Spark divides all the calculations, named tasks, between the available workers in order to optimize application performance and cost. Please let me come back to
our process. Fetching offers is the first step of the deployment process. We get information about the current total number of offers from the previously selected providers, so in this case from AWS. From these offers Melodic will choose the best solution for the worker component. After choosing this box, or the offers option from the menu available here, we are directed to a view of all currently available offers. These are clouds with my credentials and also with my properties for the security group and the filters for our private images. We also have hardware here, with information about cores, RAM and disk; the available locations where our virtual machines could be placed; and the last element here, images. Only private images are visible here, but of course all public images are available to us as well. Now I come back to our process
view, and we can see that the next step of the process is generating the constraint problem. The constraint problem is generated based on our requirements defined in the CAMEL model. In the simple process view, all the variables from the constraint problem are visualized with their domain values: for genome, the worker cardinality, the worker cores and the provider for the Spark worker. Detailed data are shown after clicking this box. Here we have: the list of variables, with additional information about the component type, the domain and the type of this domain; the utility formula, which is used to measure the utility of each possible solution and choose the best one; and the list of constants with types and values, which are created from the user requirements and used in Melodic's calculations. Here we can see, for example, the minimum and maximum values for the cardinality of Spark worker instances, or the same type of restriction for the number of Spark worker cores. So we can see that in our deployment we would like to have from one to at most ten workers. The last element here is the list of metrics with data types and initial values. They describe the current performance of the application; thanks to them, Melodic can decide about triggering the reconfiguration process, which means creating additional instances or deleting ones that are not fully used. Thanks to the metrics, Melodic can do its most important task, which is cost optimization.
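To make the constraint problem concrete: with a worker cardinality domain of 1 to 10, a small set of core counts and two providers, choosing the best deployment amounts to maximizing utility over the variable domains. The brute-force enumeration and the toy utility below are illustrative only; Melodic's real reasoner uses constraint solving and ML-based techniques rather than enumeration, and all the prices here are made up:

```python
from itertools import product

def toy_utility(workers, cores, provider):
    # Made-up numbers: hourly price per core for each provider,
    # and throughput assumed proportional to total cores.
    price_per_core = {0: 0.05, 1: 0.06}[provider]  # provider 0 is cheaper here
    cost = workers * cores * price_per_core
    throughput = workers * cores
    return throughput / (1.0 + cost)               # reward throughput, penalize cost

# Variable domains: cardinality 1..10, cores per worker, provider index.
candidates = product(range(1, 11), [2, 4, 8], [0, 1])
best = max(candidates, key=lambda c: toy_utility(*c))
assert best == (10, 8, 0)  # ten 8-core workers on the cheaper provider win this toy
```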
We go back to the process view. When the constraint problem is generated, it is time for reasoning: here Melodic finds the best, most profitable solution to the problem we defined. When the reasoning is completed, we can observe information about the calculated solution: the utility value and the values of each variable. In this case: one as the worker cardinality, the number of worker cores, and the provider for the Spark worker from index zero, so it is AWS. The next step in the deployment process is the deployment itself: here Melodic performs operations based on the calculated solution, which is applied to each application component; Melodic creates the proper instances, keeps them or deletes them.
If you want a more detailed view, it is possible to see the process view using Camunda by choosing the advanced view button in the upper left corner. Camunda is a tool for monitoring and modeling processes in the BPMN standard, and for managing them. I log in with the same credentials as for my Melodic platform, and in order to see the detailed view in Camunda I need to choose the running process instances, and after that the process to monitor from the list. Now we can see the view of the chosen process with all the variables, and also a detailed view of the whole process with each of its steps. This view is for more technical users; it could be useful, for example, during diagnosis of some problems. We can see that we are now here, so it is the end of our process. In order to verify this fact I go to the UI again, and yes, we can see that our application has successfully started. So the deployment process is finished, and I can check the result in the Your Application bookmark.
In this view there is the list of created virtual machines and functions. The genome application requires only virtual machines, and we can see that Melodic has created one virtual machine so far. This machine is created in the AWS EC2 provider, in Dublin. What is more, we have here a button for a web SSH connection, which is really useful during testing. Having successfully deployed the Spark application with Melodic, I can go to Grafana.
Grafana is a tool for monitoring and displaying statistics and metrics. We can use it for monitoring the performance of applications deployed by Melodic. Each application has its own metrics and its own parameters to control, so we need to create a dedicated Grafana dashboard for each of them. The genome application also has its own Grafana settings, and we can see them here. For now, metrics from our application are not available yet; we can only see that we now have one instance, so one worker.
In the meantime we can control our application in the Spark master UI. The Spark master is built into the Melodic platform, so we go to the same IP address, on port 8181, in order to check the Spark master UI, and here we can observe the list of available workers. After refreshing this view, we can see that we now have one worker, one running application and also one driver. So now all tasks are sent to this one worker by our Spark master. This is the situation after the initial deployment. Decisions about creating new workers or deleting some of them are made by Melodic based on the measured metrics; in such a situation a new process is triggered, named the reconfiguration process.
I think that now we can go again to our Grafana dashboard, and we can see that the metrics are correctly calculated and passed to Melodic, because they are also visible in our Grafana view. The color of the traffic-light indicator informs us whether the application will finish on time. Now we can see the first estimation, so it may not be correct, I think, because we don't yet have enough data for a good estimation; we need to wait a minute. Even now we can see that our light is red. Also, we can see that the time left is not enough to finish our calculation on time. The initial time is indicated in the CAMEL model; in this case it is equal to 60 minutes. We can observe how many minutes are left from this time period, and under this time-left value, based on the current performance, the estimated time left is calculated.
On the first chart, on the left, we can monitor the number of instances: now we have one node, so one worker. On the bottom chart, the number of remaining simulations is presented; this value decreases as Spark performs the next tasks. On the right, on the chart named number of cores, we can see the minimum number of cores needed to finish the calculations on time, and the current number of cores, under the total cores value: the green one is the required number of cores and the yellow one is the current number. Now Melodic claims that we need at least four cores, then even six and seven cores, and we have only one. Also, the estimated time is higher than the time left, and we can see the red light. These are signals that our application needs more resources.
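The red/green logic just described (estimated time versus time left, required versus current cores) reduces to simple arithmetic if we assume processing speed scales linearly with cores. The function names and all the numbers below are hypothetical, chosen only to mirror the dashboard's behavior:

```python
import math

def estimated_minutes(remaining_tasks, tasks_per_min_per_core, cores):
    # Linear scaling assumption: more cores -> proportionally faster.
    return remaining_tasks / (tasks_per_min_per_core * cores)

def required_cores(remaining_tasks, tasks_per_min_per_core, minutes_left):
    # Minimum cores needed to meet the deadline under the same linear model.
    return math.ceil(remaining_tasks / (tasks_per_min_per_core * minutes_left))

# 1200 tasks left, one core doing 5 tasks/min, 40 of the 60 minutes remaining:
assert estimated_minutes(1200, 5.0, 1) == 240.0  # far beyond 40 min -> red light
assert required_cores(1200, 5.0, 40) == 6        # the dashboard would demand more cores
```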
In such a situation, Melodic makes the decision about triggering the reconfiguration process. So we can suppose that a reconfiguration process should be running in the background. In order to verify it, I go back to our Melodic UI and to the process view, and here we can see the current process: it is our reconfiguration process. In the reconfiguration process Melodic doesn't fetch new offers, and it uses the same constraint problem as for the initial deployment; for these reasons the first step is reasoning. As a result, we can see the new calculated solution which will be deployed. Now Melodic claims that two workers will be needed, and this solution is now being deployed, so in a minute we will see our new worker. Oh yes, even now our new worker should be visible, because the reconfiguration process is finished. I can verify this fact also in the Your Application part, and yes, now we have two virtual machines, two workers, and this is the new one from our reconfiguration process. I can also check this fact in our Spark master UI; I need to refresh this view. And now two workers are available.
We have two live workers, so now we can see that the Spark master divides tasks between these two workers. In the case of genome, in the first part of performing the calculations, additional workers are created as long as Melodic measures that the effectiveness of the application is too low. In the final part of the Spark jobs, Melodic makes decisions about deleting unnecessary instances when it is feasible that the application will finish on time. Right now we are in the initial part of the whole process, because, as I mentioned, we have 60 minutes to perform the whole process, so we are at the beginning of it, and now additional workers are being created because of the effectiveness of our application.
Now I go to the Grafana view. We can see that we now have two workers, two nodes. The next tasks are done, and we can see that our estimated time is now close to the time left; even now Melodic claims that it will be possible to finish the whole process on time. But now our estimated time is bigger again, so we can suppose that in a minute our light will be red again and we will probably see a new reconfiguration process. The whole process keeps being performed until our estimated time is good enough for us, enough for our requirements; thanks to that, finishing the whole process in our expected time will be possible. So we have successfully observed the reconfiguration process of a Spark application. This is the end of the demonstration of a Spark application deployment done by Melodic, and we can see that the whole optimization process is done fully automatically. Okay, so thank you very much for your attention, and this is all from my side.

Okay, thank you,
Ala, for the presentation. Just a few words from my side about Melodic. Melodic is fully open source, so you can download it; the source code is hosted on the OW2 GitHub, so you can download the code. It is released under the Mozilla Public License 2.0, so it can be used and changed; anyhow, you are welcome. We are also looking for volunteer open source developers, so if you want to work on an interesting project, please join us. We are currently developing Melodic further in the scope of the Morphemic platform, with new concepts like polymorphic adaptation and proactive adaptation, but I will tell you more about that in a next session, probably sometime in the future. One more thing: please take a look at our website, and also visit our Twitter, LinkedIn and Facebook and follow us on social media. Thank you very much. Thank you for the invitation to have this session on cloud native; I hope you will find it interesting. Welcome to Melodic, and I really invite you to join our community. Thank you very much.