Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, I am Chiara, and welcome to this session at Conf42 IoT.
Together with me is my co-speaker, Peter.
Hello.
Today we'll be talking about creating advanced robotic applications that
leverage edge computing and cloud computing to create next-level IoT.
So before we start, I know I cannot see you, but I'm going to ask you all that
are attending this talk a question.
Can you imagine a robot?
What do you think is the most common robot right now?
Is it a humanoid, so with like human like features?
Is it a droid, so perhaps it has weird shapes and functionalities?
Or is it more like an exoskeleton, thinking like Iron Man, for example?
And now from here I'm probably going to disappoint all of you because the
most common type of robot currently is just a robotic arm or a delta arm.
So these have different axes that can move, they don't look like a human,
they don't look like a droid, and they're definitely not exoskeletons.
But they are the most common, and just to give you some facts and figures,
just to give you some stats: in 2023, there were over 541,000 robots
installed in factories worldwide,
with a robot density of 162 units per 10,000 employees.
Now, the robot density is just a sort of barometer to track the
degree of automation adoption in the manufacturing industry around the world.
So, where are these robots?
Most of these 500,000 units are in two sectors, automotive and
electronics, which together make up almost 50 percent of industrial robot installations.
Now, why? The automotive industry mostly deals with very heavy pieces of equipment
that need to be assembled at high speed,
and of course, robots are perfect for that.
Whereas in the electronics industry, the components are much, much
smaller, but they are also highly sensitive to contamination,
and robots are perfect for clean room environments.
With all these robots, I'm going to share another very sad fact:
they rarely see a cloud.
All these 541,000 robots rarely see a cloud.
And here, of course, I'm not talking about real clouds, but the digital cloud.
Now, why should robots see the cloud?
Why should they send data to the cloud?
To gain additional insights, pretty much, and to improve operations.
By having robots send some of their information to the cloud,
technicians can access it at any time to check their performance,
identify issues, and take preventative steps to avoid downtime,
or to resolve downtime as quickly as possible.
Basically, you can monitor and analyze, in near real time, what the robots
are doing, remotely from anywhere in the world, even if you're not in the plant.
So what happens is that technicians can detect potential issues before
they escalate into major problems.
This supports, of course, proactive maintenance.
By identifying early signs of failure, technicians can schedule maintenance
activities during planned downtime, and this reduces unscheduled downtime and
improves overall machine reliability.
But also, what you can then do is support more advanced predictive maintenance
strategies, because you can analyze data from multiple robots across your factory
floor, for example, or across multiple manufacturing facilities, and you can
compare performance across different time periods to gain key insights, for
example into potential improvements,
and make more informed, data-driven decisions.
And also, because you can then leverage AI or machine learning algorithms, you
can process the data, for example on maintenance or failures, to predict
machine failures before they actually occur, based on the historical data available.
And of course, this means that you don't need fixed, almost arbitrarily
scheduled maintenance activities, but you can take preemptive measures, like
replacing worn-out components only when needed, again minimizing downtime
and improving cost effectiveness.
Now, why aren't robots sending data to the cloud?
The first point is that there is way too much data generated on the shop floor
by robots, their controllers, etc.,
and way too quickly.
Most of the data produced on the factory floor for these
automated operations is very fast.
It's a lot of data, and so of course we need to consider that we still
want the robots and the industrial automation processes to operate.
We don't want to impact these operations just because we want to do additional analytics.
Of course, yes, the cloud can process this data, but it probably cannot
compete with the current capabilities of, for example, industrial Ethernet
for industrial automation processes.
For example, for industrial automation networks, we are generally talking
about a bandwidth of 100 megabits per second or one gigabit per second.
And if this time-sensitive data is not transmitted, what happens
is that the entire operation stops.
So cloud computing here could be a potential issue, because it
doesn't process data that quickly and probably cannot handle that volume of data.
Also, the cost can be extremely high.
So of course, using cloud computing could potentially overwhelm
your IT and shop floor systems,
and you don't want that.
In addition, there are cybersecurity and visibility issues
that need to be considered.
Typically, shop floors have been isolated with their industrial Ethernet,
because access to the internet and the cloud could potentially open up ways for
cybercriminals and hackers to access the equipment and potentially stop all
operations in the manufacturing plant.
In addition, related to the previous issues, we don't want any latency.
And finally, because industrial automation operations tend to have
extremely long service lives for their equipment, of about 25 years, they need
to rely on standardized technologies, in particular to ensure backward compatibility.
So how can we connect robots to the cloud?
The simple answer is through edge computing.
Edge computing means the processing power is at the periphery of your
industrial automation network,
so close to the source:
for example, in this case, close to the robots.
Instead of sending all the data to the cloud, what you can do is filter:
what data do you actually want to send to the cloud for processing?
You probably don't want all the communication between the
robot and its controller, but just, for example, sensor data
to identify wear and tear.
So basically, what you have is a separation: cloud computing would be the
place where you develop the knowledge, whereas edge computing is where you put
the knowledge into action. That way, cloud computing is not just a massive
storage space, which would not be very cost effective and would also limit
the insights you can generate, because there is just way too much data.
Instead, you use edge computing as a way to filter data and to specify
which of it needs to be processed.
So this can then support a timely reaction, so you're not overwhelming
the system, but also system resilience.
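To make that filtering idea a bit more concrete, here is a minimal, purely illustrative sketch in Java of what an edge-side filter could look like. The class names, the temperature threshold, and the CloudForwarder interface are all made-up placeholders for this talk, not part of any specific product.

```java
// Minimal sketch of the edge-side filtering idea: keep the raw stream local,
// forward only an aggregate plus anomalous readings to the cloud.
// SensorReading, CloudForwarder, and the threshold are hypothetical.
import java.util.List;

public class EdgeFilter {

    // Hypothetical reading with the kinds of fields a robot controller might expose.
    public record SensorReading(long timestampMillis, double temperatureC, double voltage) {}

    private static final double TEMPERATURE_ALERT_C = 80.0; // made-up alert threshold

    private final CloudForwarder forwarder; // hypothetical client that pushes data to the cloud

    public EdgeFilter(CloudForwarder forwarder) {
        this.forwarder = forwarder;
    }

    public void process(List<SensorReading> cycle) {
        // Aggregate locally instead of shipping every sample.
        double avgTemperature = cycle.stream()
                .mapToDouble(SensorReading::temperatureC)
                .average()
                .orElse(Double.NaN);

        // Forward only the readings that look anomalous.
        List<SensorReading> anomalies = cycle.stream()
                .filter(r -> r.temperatureC() > TEMPERATURE_ALERT_C)
                .toList();

        forwarder.sendSummary(avgTemperature, anomalies);
    }

    public interface CloudForwarder {
        void sendSummary(double avgTemperatureC, List<SensorReading> anomalies);
    }
}
```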
Now, how can we move to the edge and beyond to the cloud?
Basically, if we implement edge computing, or if we expand the industrial
automation network to include edge computing and then the cloud, you need
a number of systems and applications.
And all of this needs to run somewhere.
You need middleware to support the data requests, data exchange, etc.
So application servers are needed.
Now when it comes to application servers, you can either create
your own solution or rely on something that is already available.
And just like I doubt that any manufacturing facility would develop
its own robot, also in this case it might be best just to rely on
existing application server technology,
for a number of reasons. You just don't want to
put additional workload on your existing team, which might not have the expertise
and might not be able to rely on support; but also, the cost, time,
and resources required would be extraordinary.
So here, I just wanted to show you what a potential diagram of a
network where robots communicate with the edge and the cloud looks like.
This is based on a recent study. And now, my colleague Peter will tell
you a bit more about application servers and application platforms.
Thank you, Chiara.
So now we are going to talk about the application platform: what it
is and why we should care about it.
When we develop an application, you realize that you need to access
lots of various resources, organize them, introduce some configuration,
and later you get into issues like horizontal scaling and load balancing;
you will need to ensure some level of security and, of course, compliance.
And all this functionality is something that can just be provided by the
application platform, and does not necessarily have to be developed by every single application.
So basically, the role of the application platform is to simplify
the development of applications.
It provides lots of services to the developers and takes over
these functionalities.
It provides management of resources, whether it is thread pools or database
connection pools, and the whole setup around security is
done by the resource management.
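As a small illustration of what server-managed resources look like from the application's point of view, here is a hedged sketch using standard Jakarta EE injection. The JNDI name "jdbc/robotData" and the class are made-up examples; the actual pool sizes and security settings would live in the server configuration, not in code.

```java
// Sketch: the server owns the connection pool and the thread pool,
// the application only asks for them by injection.
import jakarta.annotation.Resource;
import jakarta.ejb.Stateless;
import jakarta.enterprise.concurrent.ManagedExecutorService;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

@Stateless
public class RobotDataService {

    // Pooled database connections, configured and sized on the server (JNDI name is hypothetical).
    @Resource(lookup = "jdbc/robotData")
    private DataSource dataSource;

    // Server-managed thread pool for asynchronous work.
    @Resource
    private ManagedExecutorService executor;

    public void storeAsync(String payload) {
        executor.submit(() -> {
            try (Connection connection = dataSource.getConnection()) {
                // persist the payload using the pooled connection
            } catch (SQLException e) {
                throw new IllegalStateException("Could not store robot data", e);
            }
        });
    }
}
```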
If it is necessary to connect to some other middleware, the
server usually provides plugins to connect to these services.
So if you want to be informed when something happens, for example in
some Teams channel or some Slack channel, that's possible.
If you need to integrate with messaging systems, you can send messages
just using the existing plugins of the platform.
Platforms very typically take care of failover and provide support for load balancing.
So usually it's pretty easy to scale to some level just
automatically on the platform;
not to the level which the cloud provides, but to some higher level of
several machines, or at least to the level of several tens of machines.
And of course, security is a natural part of the platform, and the
servers are certified and verified, so it is much easier to be
compliant with company policies.
If the application is developed with a platform, there is no need
to self-assemble all the necessary libraries and parts;
it's provided by the platform,
and they all work together.
They provide a versioned runtime,
so it's easy to upgrade: every time you upgrade to a new version, you are
sure that all the parts work together,
and the upgrade of one part doesn't break the whole setup.
Security is built in, so it's easy to implement.
The platforms typically provide monitoring, health checking,
high availability, and scalability,
so the developers can just focus on the application and not have to take
care of all the pieces around it.
It works as a managed software supply chain; the libraries are just a part of it.
That's it.
How to choose the right framework? It's important to have
open and backward compatible technologies, because they allow
you to be flexible: if you don't like
one platform, you can very easily start using another. Especially in the case
of Jakarta EE, the servers are compatible;
they are interchangeable to a large degree, and they
provide long-term compatibility.
So programs which were developed years ago, or even 20 years ago,
are still able to run today.
It's easy to use systems which support IoT, because plugins to the platform
provide, for example, MQTT support.
So it's something that's already done by the vendors,
and a broader range of devices is supported by the platform.
And also, because usage of a standardized platform means
using standards like JMS for messaging systems, it's possible to cooperate with
other industry players at this standard level. And why did we choose Jakarta EE?
This specific platform provides huge vendor neutrality.
The standard is agreed among big players in the industry, and
all the servers must pass tests,
the same tests, so the same application will run against each of the
servers from various vendors.
So the customer only chooses some additional values of the
server, or takes into account who is providing it, what the reliability
of the server is, and other features, but it's
possible to change vendors.
There is no vendor lock-in.
The specifications are open, so it's very easy to check what the
functionality should do, what the expectations are, and what
the level of service and the details are.
The parts of Jakarta EE cooperate together, so the specifications
work together without any issue, and the servers are able to
cooperate between themselves,
for example over JMS,
in a simple way.
Okay.
Jakarta EE development is driven by the community.
There is a board across the big players in the industry, and they agree on the
new features, changes in functionality, and the global
direction of where the development goes.
The huge advantage is stability and backward compatibility.
So applications written 15 years ago are still working.
There is only one problem, with the package renaming in the Java world,
from javax to jakarta, and for this there are tools which
help to upgrade the sources
in a very fast way.
So the same program, a program written in 2008, is able to run
on today's servers very quickly.
There are very few backward incompatible changes.
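For illustration, the package renaming Peter mentions is essentially a package-prefix change. The tiny example below is a hypothetical resource class; the tool names are examples of migration helpers that exist for this rename, not necessarily the tools used in any particular project.

```java
// Before (Java EE 8 / older releases) the imports used the javax namespace:
//   import javax.ws.rs.GET;
//   import javax.ws.rs.Path;
//
// After (Jakarta EE 9 and newer) only the package prefix changes; the API itself
// is the same. Tools such as the Eclipse Transformer or OpenRewrite's Jakarta
// migration recipes can perform this rename over a whole codebase automatically.
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/status")
public class StatusResource {

    @GET
    public String status() {
        return "OK";
    }
}
```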
Also, the servers support multiple platforms in terms of Java versions:
there are always several LTS versions of Java which are supported,
and also in terms of operating systems.
So it's possible to run the server on the operating system you want, in
various environments if necessary.
And simply, Jakarta EE servers run everywhere, from huge servers
to the smallest ARM machines.
We deliver an ARM image,
so if there is an application for that, we can run on a plain
ARM machine. And it is modular:
plenty of components, and if you want, you can choose the needed ones.
And there are simpler servers, simpler versions of the servers,
which start faster and are less demanding.
The API is robust, as it has been developed for a long time.
It provides the full functionality required for specific domains.
And if there is a requirement to run in the cloud, there is no problem with it:
the platforms based on Jakarta EE usually provide auto scaling and
they easily run in the cloud.
The model of Jakarta EE, as you see: Jakarta EE is based
on a set of specifications which provide various features,
such as connection pools, database connection pools, monitoring,
logging, messaging, and so on.
And this is all provided by the Jakarta EE server.
We also support MicroProfile, which is an additional set of specifications,
especially for cloud usages, but they are also very
handy in a single server.
You will see an example of it in the demo.
And the only missing part in this model is actually your application,
and so it doesn't need to take care of any services.
It's all provided.
The set of specifications is pretty big.
They are upgraded roughly every two years,
when there is a big new version.
So this is the description of the upcoming Jakarta EE 11.
The blue specifications are the ones which have a new version.
There is also one completely new specification, Jakarta Data, which
provides easier access to SQL databases.
And in preparation, it will also provide access to NoSQL databases.
So the question is, what's behind every single specification?
It's composed of three, actually four parts.
The first one is the specification itself.
It is the documentation of all the features which are
provided by one specification:
all the behavior of each class in the specification and what the
expected outputs are.
Next, there is the Java API, with mostly interfaces,
providing that API for the specification, and a set of tests:
the Technology Compatibility Kit is a huge set of tests which verify that every
implementation of this specification behaves exactly the same way.
And this is the fourth part, the implementation.
There are several implementations of each of the specifications, and the servers pick
the implementations, put them together, and make the platform work as a whole.
So the developers don't need to care about which implementation
is used or what the version is.
It all works together.
There is no need to take care of this.
It just works.
There are a few things which go beyond the list of specifications,
like performance and monitoring.
Every server provides some control over performance
and provides some monitoring.
For example, in concurrency, it's possible to configure
the size of the thread pool.
There are also many other features which control its behavior.
And it's also possible to choose which types of threads are used there.
Typically, there are the platform threads, which we know from Java very well.
But it's possible to use fork-join threads, which is specific to Payara,
or it's possible to use virtual threads, which is the new feature in Java 21,
released last year, and is part of the upcoming Jakarta EE level. And everything
is just configurable on the server.
So the person who configures the server can choose
which thread setup is the best for the particular use case.
Of course, it can also be specified by the application, or it can be
overridden based on the sizing, for example the amount of memory, the power of
the CPU, or the number of CPUs which are available on the particular server.
So the behavior can change without any code change, just based on configuration.
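As a rough sketch of how this can look from the application side, assuming the Jakarta Concurrency 3.x ManagedExecutorDefinition annotation: the executor name and maxAsync value are arbitrary examples, the commented-out virtual-thread switch only exists from Concurrency 3.1 (Jakarta EE 11) onward, and the same settings can usually also be made purely in the server's administration console without touching code.

```java
import jakarta.annotation.Resource;
import jakarta.enterprise.concurrent.ManagedExecutorDefinition;
import jakarta.enterprise.concurrent.ManagedExecutorService;
import jakarta.enterprise.context.ApplicationScoped;
import java.util.concurrent.CompletableFuture;

@ManagedExecutorDefinition(
        name = "java:app/concurrent/robotExecutor", // application-scoped JNDI name (made up)
        maxAsync = 16                               // cap on concurrently running async tasks
        // virtual = true                           // Concurrency 3.1+ only: run tasks on Java 21 virtual threads
)
@ApplicationScoped
public class AnalysisService {

    @Resource(lookup = "java:app/concurrent/robotExecutor")
    private ManagedExecutorService executor;

    public CompletableFuture<String> analyzeAsync() {
        // The task runs on threads owned, sized, and typed by the server configuration.
        return CompletableFuture.supplyAsync(() -> "analysis started", executor);
    }
}
```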
MicroProfile provides two nice specifications for the edge use case.
These are Metrics and Health, which are just part of the platform;
there is nothing extra you need to do, and they provide
endpoints for Prometheus. We will see later how this can be displayed
in Grafana in a nice graphical way.
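As a hedged example of how little code the Health side needs, the sketch below registers one liveness check. The check logic and the memory threshold are invented for illustration; once such a class is deployed, the server exposes it under the standard /health endpoints, next to the Prometheus-formatted /metrics endpoint that MicroProfile Metrics provides.

```java
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class EdgeBufferHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Hypothetical check: report "up" while at least 16 MB of heap remains free.
        long freeMemory = Runtime.getRuntime().freeMemory();
        return HealthCheckResponse.named("edge-buffer")
                .status(freeMemory > 16 * 1024 * 1024)
                .build();
    }
}
```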
Payara also provides a tool called the Monitoring Console, which also displays
simple graphs directly in the server.
So there is not even a need for external tools; it's just built in.
Oh, good.
JMX.
There are some other connectors, plugins, which are able to
send messages and measurements.
to the, to other, third party tools.
And of course, there is also a specific REST interface, which allows displaying
some additional internal information if it is needed for monitoring.
That's all from me.
We are now returning to Chiara.
Thank you, Peter.
So now we have looked at why the Jakarta EE platform could be the ideal
underlying technology to create edge computing, and in particular edge
applications that support additional insights into industrial robots.
Now among all the Jakarta EE compatible solutions, here we
are using the Payara platform.
So why is that?
First, because it's highly committed to Jakarta EE,
and it's a main solution for Jakarta EE developers, because after all the
platform is a Jakarta EE first solution.
In particular, we, Payara Services, the company behind the Payara Platform,
are a contributor to the Eclipse Foundation, which is behind Jakarta EE.
We are a strategic member of the Jakarta EE Working Group and
other associated working groups.
In addition, when it comes to MicroProfile, we are members of the MicroProfile Working
Group, and we are part of the project management committee for Jakarta EE.
So we are really committed to this platform and we believe this is
the right solution for industrial automation applications of the future.
Now, let's have a closer look at what the Payara Platform can offer
for such a specific application.
We said that stability, resilience, and security were extremely important,
because of course we don't want to expose our robots to additional risks and
vulnerabilities just to get some additional insights.
So the Payara Platform is stable and fully supported; it's designed for
mission-critical production environments,
with a long software life cycle of a minimum of 10 years, so that, for
example, industrial automation plants with robots at different stages of
their life cycle can be supported.
There are security alerts and patches with a monthly release, so it complies
with regulatory bodies, and users can benefit from migration project support
as well as full engineering support that is available either 24/7 or
10/5, depending on what the team needs. Within the Payara
Platform, users can benefit from Payara Server, which is the, let's say,
traditional application server, as well as Payara Micro, which is a bit more
tailored for cloud applications, and then we've got Payara Cloud, which is a
bit beyond the scope of today's talk:
it's a fully managed application runtime for cloud deployments.
Now, the Payara Platform is really designed for deployments
that leverage IoT, edge computing, and cloud computing,
so exactly what we are talking about today.
And just to have a closer look: basically, the Payara Platform can
help you build intelligent servers.
This is because, for example, the IoT devices can send data over the MQTT cloud
connector, with MQTT being pretty much the de facto standard in the industry.
Basically, they can send this data to the edge, and then
business logic, aggregation, and data analysis can be performed.
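For the device side, a minimal sketch of publishing one reading over MQTT could look like the following, here using the Eclipse Paho Java client. The broker URL, client id, topic, and payload format are placeholders, and the Payara MQTT cloud connector that would consume such messages on the edge is not shown.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import java.nio.charset.StandardCharsets;

public class RobotTelemetryPublisher {

    public static void main(String[] args) throws MqttException {
        // Placeholder broker address and client id for a single robot arm.
        MqttClient client = new MqttClient("tcp://edge-gateway:1883", "robot-arm-01");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        // One simulated measurement: timestamp, arm position, temperature, voltage.
        String payload = "{\"t\":1718000000,\"pos\":42.5,\"tempC\":61.2,\"volt\":23.9}";
        MqttMessage message = new MqttMessage(payload.getBytes(StandardCharsets.UTF_8));
        message.setQos(1); // at-least-once delivery

        client.publish("factory/line1/robot-arm-01/telemetry", message);
        client.disconnect();
        client.close();
    }
}
```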
Then, if you look at what happens in the cloud, for example, you may consider
Payara Micro as your go-to solution, because it is optimized to work in
containerized environments while being lightweight and compact, so
that it doesn't use a lot of resources, and it's ideal for cloud deployments.
So of course, if we are really tackling an industrial
automation application, it is recommended for any potential user
to rely on the Enterprise Edition, which offers exactly the full support
and is suitable for mission-critical applications in production environments.
Now, Peter prepared a very nice demo showcasing how we can use
the platform, and Jakarta EE of course, to send robotic data
to the cloud.
Thank you, Chiara.
So now we are going to see a pretty simple demo of how such edge computing can
look and how to do it in an easy way.
What is on the screen is the monitoring of the edge computer;
it shows how the data is processed.
We will go through everything during the demo, and on the right side
you see the simulation of robots.
This is running on a separate computer,
so it properly simulates sending remote requests.
The data is sent from files with recorded real data.
It is stored and eventually can be filtered
and pre-processed on the edge computer.
And then it is sent further for the next processing, as Chiara
mentioned, for some analysis, some data mining on top of the data.
And we don't care about this part in this presentation; we assume that
there is some cluster of computers, which can be a more powerful computer
or just a cloud with a load balancer.
So we will just send the data somewhere else.
So let's start with the whole procedure.
Now, the data which we are using:
there are various sets of data, various samples
from one source, and this is the format of the data: time, position
of the arm, temperature, and voltage.
And there is a lot of data, like three and a half thousand
measurements in one set.
And once such a set is sent to the edge computer, it's considered one cycle.
So the client, the machine, says: now I'm finished.
And the edge computer sends the whole set for further analysis
somewhere, for the processing.
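Just to visualize the described format, a hypothetical Java record for one measurement might look like this; the field names and units are assumptions, not the exact schema used in the demo.

```java
// One sample in a recorded data set; roughly 3,500 of these form one cycle,
// e.g. as a List<Measurement>. Field names and units are illustrative only.
public record Measurement(
        long timestampMillis,   // time of the sample
        double armPosition,     // position of the arm, e.g. joint angle in degrees
        double temperatureC,    // temperature in degrees Celsius
        double voltage) {       // supply voltage in volts
}
```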
What do we need to start?
On the remote machine, which is this one,
we need to start the edge computer.
So this is this machine.
Usually the application starts with the start of the server.
Okay.
And we also start the monitoring.
We use Prometheus and Grafana, configured
for the sources we have.
Also, we need to run the simulator of the data mining and processing.
So we run this on a different computer,
and we are ready.
We are ready to start the data generator.
We will just verify that everything works.
So here is the edge computer.
It tells us that nothing happened so far.
It's fine.
And we should look at the graphical representation.
So the server is up.
No data is processed.
Uptime is a minute.
Classes were loaded.
There is almost no memory used so far.
And everything's fine.
Okay.
For the servers, we also provide a graphical interface,
which can control the whole configuration of the server.
So here we see this is the machine which contains the data miner,
and the data miner has a very simple UI.
Okay, so now we can start with the processing of data.
So here we start with the processors.
As it is a simulation, we will start with just 10 sources of
data to send to the server.
A short compilation of the sources to verify that we are on the latest version.
And we start.
So you see there are 10 senders sending data, depending on the size of
the dataset they are sending.
And also, the cycles vary depending on the speed of the processing.
So we can quickly review the data in our measurements to see
how far it is: currently 150,000 data points, which are 78 data sets
requested for data mining, and 53 of them are already done.
We can look at the metrics.
So right now we are at about 300,000 records.
The memory is warming up,
the system is warming up.
We see some timing data.
In this diagram, we see the number of requests and the number of
processed data minings.
So everything is working nicely,
and this is how we see how fast the data is processed and the amount of the data.
So, to get some slightly more interesting graphs, we can now stop.
We should be able to see that it stopped sending data and that the graphs
flatten. It takes several seconds for both collecting the data
and the refresh of Grafana, but you'll see that the data
stopped being sent. To make everything clear, we can clear the data,
so we start fresh.
second, that'll be.
Zero in the data received and we can, yeah, we can start with the,
let's say a little bit more, a little bit more, sources.
So imagine a really big factory with 100 different machines sending
its data as quickly as possible.
So let's start with 100.
So now we can.
We can run 100, subsystem.
Let's see what happens on the server.
Now there is 100, servers, trying to fill the, fill the server.
Can display the, internal.
they have 65, 000 sets.
And that it's increasing.
So the server is able to handle just without any specific configuration,
able to handle pretty big, loads.
This can also be configured
to provide better performance.
Of course, the example is pretty trivial, too; there is no clever
caching or anything, just a stock version of the server.
So this is how it works.
I would like to just show what happens if the server stops working,
and how it is visible in the monitoring.
So here we will see,
in the upper left corner, that the server is down, and this graph is down.
So there can be some alert to the monitoring of the whole setup
that something is going wrong and that the server is down, which
is of course not the usual case.
Yep, so with that we can end the demo here, but I would like
to share how the demo was done.
So this is pretty trivial edge computing.
It's just storing data in the cache and providing some counters, so it's
a typical REST endpoint with processing of data, which only stores the data.
This is just logging and some functionality.
And all the measurement is done by just annotating the method with the
MicroProfile Metrics annotations.
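A condensed, hypothetical version of such an endpoint could look like the sketch below: a plain JAX-RS resource that stores incoming data sets in an in-memory cache and gets its counters and timers from MicroProfile Metrics annotations. The names are illustrative and it reuses the hypothetical Measurement record sketched earlier, so this is not the actual demo source.

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

@ApplicationScoped
@Path("/measurements")
public class MeasurementResource {

    // Simple in-memory cache of received data sets, as described in the demo.
    private final Queue<List<Measurement>> cache = new ConcurrentLinkedQueue<>();

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Counted(name = "dataSetsReceived", description = "Number of data sets received from robots")
    @Timed(name = "dataSetProcessing", description = "Time spent storing one data set")
    public void receive(List<Measurement> dataSet) { // Measurement: hypothetical record from earlier
        cache.add(dataSet);
        // Filtering, pre-processing, and forwarding to the data miner would go here.
    }
}
```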
And I also try to provide some more information, again pretty trivial
things, and they are then all visible here.
These are the values which are provided by the server.
This is the end of the demo, and I will just finish on a
slightly more optimistic note:
in a second, we will get it up again. And this is the end of the demo.
So thanks, and I'm sending the words back to Chiara for the final words.
Thank you, Peter. So, just summarizing what we've discussed today: robot-based industrial
automation operations can greatly benefit from the cloud for additional insights.
However, you cannot just blindly adopt the cloud; you need an edge, and,
of course, a suitable technology is needed to create an effective edge.
The technology should rely on openness and a certain standardization, as well
as offer flexibility and scalability.
This is why we've chosen Jakarta EE and the Payara Platform for the demo.
And we've shown how a Jakarta EE specific Payara Platform can offer the ideal
application server for creating a cutting-edge industrial internet of
things that leverages edge computing and cloud computing to advance
robot applications.
Now, thank you so much for attending this talk.
I hope you had a good time.
If you've got any questions, feel free to get in touch with myself or
Peter and enjoy the rest of the talks.
So thank you.
And goodbye.