Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello friends, welcome to the conference. I'm Alex
from Barbara, the edge computing company. And today I'm
going to be speaking about MLOps at the edge. I would like
to start with a very direct and very blunt statement from
us at Barbara: the cloud is broken. Now, you are probably
using the cloud a lot. You might be loving it, especially
if you are building these great models online, and you might
be deploying them online. So you might be asking, hey, how
come an industry growing at double digits might be broken?
What's the problem with it? Why do we say that the cloud is broken?
Well, today I bring eight different reasons
why we at Barbara think the cloud is broken.
The first one is that it's too dependent on
the Internet. If you're using a cloud service,
you're obviously running all the workload somewhere else:
not in your own infrastructure, not in your PC,
not in your servers. So you are relying on a
good connection to perform anything that you want to do, right?
You depend too much on the availability of that connection,
and that might be a problem for some applications.
Secondly, there's a latency problem. It doesn't matter how
good your connection is: there is a round-trip time for your
requests to go to the cloud and come back. It takes a little
bit of time, and that might not be good enough for certain
applications, especially real-time applications. The round trip
that any data, any request, anything takes to go to the cloud
and come back might simply not be fast enough. You might
have latency problems.
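To make that round trip concrete, here is a minimal sketch in Python that times small requests against a cloud endpoint versus a machine on the local network. The URLs are placeholders, and the numbers in the comments are just typical orders of magnitude, not measurements.

```python
import time
import requests

def round_trip_ms(url: str, samples: int = 10) -> float:
    """Average round-trip time of a small GET request, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=5)          # the request goes out and comes back
        total += time.perf_counter() - start
    return 1000 * total / samples

# Placeholder endpoints: a remote cloud service vs. a box on the local network.
# print(round_trip_ms("https://cloud.example.com/health"))  # often 50-200 ms
# print(round_trip_ms("http://192.168.1.50/health"))        # typically 1-5 ms
```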
Then there is also a problem of service availability,
and this is related to the first one: because you over-rely
on the Internet, if there's downtime or an interruption on
the cloud service, you're going to suffer. Your own service
is going to get interrupted as well.
Then, number four, there is a loss of control, because
you don't own the infrastructure. That's great for some
applications, since maintaining infrastructure obviously takes
a lot of money and effort. But sometimes you need to
troubleshoot or debug, and there are situations where you're
going to want control of the infrastructure, of the underlying
technology. It might be difficult for you to optimize or
customize some services because you're not the owner of the
infrastructure. So there is a loss of control that you need
to take care of.
Number five, there is a problem
with cost. The cloud is some sort of luxury
many times, and the cost is not easy to manage either,
because providers usually bill you based on usage. And usage
changes a lot depending on the needs of your application,
which can be a problem. But also, if you deal with an
application that handles a lot of data, such as computer
vision, for instance, which is a very data-hungry sort of
service, then you're going to run into trouble if you want
to upload all your video feeds to the cloud in order to
process them there. So here, especially in data-intensive
tasks, you have to take care of cloud costs, because they can become
a liability. Then, number six, we have vendor lock-in.
Many of these cloud tools usually force you to do things
their way, right? Because they are different across vendors;
there's no standard way of doing things. So chances are
that you will have to adapt to the way of this or that
cloud. And then if you want to change vendors, say you want
to leave AWS and move to Google or to Azure, chances are
that you will have to reprogram and change many things in
order to use the new platform. So there's this sense of
vendor lock-in, which makes it a little bit difficult to
change providers.
Then, number seven, there's always the problem of security
and privacy. It doesn't matter how good your encryption is,
whether you use VPNs or whatever means of hiding and
controlling what happens with your data: in the end you are
sending it through the public Internet. So security and
privacy are something you should take care of, right? And
breaches happen from time to time. So yeah, if you want your
data completely protected and secure, probably the cloud is
not the best way to go. And then number eight, which is
also related to number seven: compliance. Many times there
are customers or applications whose data we need to keep in
a specific geographical area, for example, or to manage in
a very specific way. Whenever we're dealing with sensitive
data, there are basically laws that we have to follow to
control where our data is. So we might not be able to use
the cloud for that application, right? Here the cloud also
gives a little bit of trouble when it comes to managing
where your data is stored.
So these are the problems that we think you are going to
encounter with the cloud, or probably are already
encountering. But what is the solution? As you can imagine,
for us here at Barbara the solution is the edge. Edge is
the new cloud.
If we quote what the folks at Gartner say, the ability to
create results through AI and edge computing is the most
significant value refactoring since cloud computing. And
that's a bold statement.
But what is edge computing anyway? What is all this edge
computing stuff about? How is it different from the cloud?
Well, think of a typical application: we have a device that
we're doing things with, connected to a router or an
antenna, which takes the connection to an Internet provider,
through the Internet, and then to some cloud services. If we
are operating in the cloud, chances are we're going to run
into one or many of the problems we just listed, right? The
idea with edge computing is that rather than doing things on
the Internet, we try to bring the computing power closer to
the place where the data is being generated. We try to bring
it forward, ideally as close as we can to the device,
ideally onto a local network. By putting all our workloads,
all our machine learning algorithms, all our AI, in a place
that's close to the data, we overcome all the problems we
spoke about a while ago. This is what we call edge
computing, and we call it "edge" because we are computing at
the edge of the cloud. If we think of the cloud like a stain
that covers a huge area, then we are computing at the edge
of that stain, at the very border of the cloud.
Okay, so that's edge computing. But what does it give us?
What's so good about it? Here we have to speak about the
edge computing drivers, and we have four main ones. The
first is low latency. Remember we said latency is a problem
of the cloud? Edge computing doesn't have that problem,
because we're computing very close to where the data is
generated and usually very close to where actions can be
taken. Say we're capturing data from a water management
system: if we need to open or close a valve as a consequence
of a piece of data that says there's a problem somewhere, we
can do that very quickly, because we are computing very
close to where the data originated. This is key for
real-time needs; real-time applications can benefit a lot
from edge computing. The second driver of
edge computing is lower bandwidth consumption. Here we can
go back to the earlier example, where we have a computer
vision system that we need to process in real time. Bringing
all the camera feeds onto the cloud is going to be very
costly, because it takes a lot of bandwidth, and you know
those cloud services are going to invoice us for all that
usage. If we do this at the edge, we dramatically reduce the
amount of data we need to send to the cloud, which
dramatically lowers our cloud costs as well. So: lower
bandwidth consumption with edge computing.
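As a back-of-the-envelope illustration of that saving, here is the arithmetic for the computer vision example. All the figures are made-up assumptions, not measurements from any real deployment.

```python
cameras = 10
stream_mbps = 4.0                        # assumed compressed 1080p feed, per camera
seconds_per_month = 30 * 24 * 3600

# Cloud approach: every raw video feed is uploaded continuously.
raw_gb = cameras * stream_mbps / 8 / 1000 * seconds_per_month   # ~12,960 GB/month

# Edge approach: video is processed locally, only small detection events leave.
events_per_sec, event_kb = 1, 1.0
edge_gb = cameras * events_per_sec * event_kb / 1e6 * seconds_per_month  # ~26 GB/month

print(f"cloud: {raw_gb:,.0f} GB/month vs edge: {edge_gb:,.0f} GB/month")
```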
The third driver is privacy and security.
Our data doesn't leave our premises; it stays on devices
very close to where it was generated, ideally in our own
locations, our offices, our manufacturing sites, whatever we
call them depending on the industry, but local to us. That
makes it very easy for us to protect that data: if it
doesn't leave the factory, it's very difficult for it to get
hacked. And then
we have a boost in autonomy and a boost in availability,
because any edge computing system is going to continue
operating no matter whether there's a problem with the
Internet connection, no matter whether the service in the
cloud is broken or unavailable. They have this autonomous
mode of operation, which is very nice for critical
infrastructure, for critical industries that can't stop
operating no matter what. If we think of smart grids or
water management, it's very nice that if they lose
connection to the cloud, they continue operating, because
these are high-reliability sites. All right?
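A minimal store-and-forward sketch of that autonomy, with placeholder functions for the uplink and the local action: the node keeps deciding and acting locally, and syncs its buffered events whenever the cloud becomes reachable again.

```python
import queue

events: queue.Queue = queue.Queue()

def send_to_cloud(event: dict) -> bool:
    """Placeholder uplink; returns False whenever the connection is down."""
    return False

def actuate(event: dict) -> None:
    """Placeholder for the local action, e.g. closing a valve."""
    print("acting locally:", event["action"])

def handle_reading(reading: dict) -> None:
    action = "close_valve" if reading["level"] > 0.9 else "keep_open"
    event = {"action": action, **reading}
    actuate(event)      # the decision needs no cloud round trip
    events.put(event)   # buffer the event for later reporting

def sync() -> None:
    # Drain the buffer only while the cloud acknowledges receipt.
    while not events.empty() and send_to_cloud(events.queue[0]):
        events.get()
```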
But there are different types of edge. If we look at the
diagram by Gartner, they usually depict this in five layers,
so we have five different types of edge. At the top of the
pyramid we have the cloud services, and at the bottom we
have the people and the things, the site. Then, depending on
where we place our workloads, we'll be speaking about
different layers. The cloud is those huge data centers that
are usually in the big cities; here in Europe, you might
find them in London, Paris, Frankfurt, or maybe Madrid. The
idea is that we move from those huge data centers to smaller
ones that we call regional data centers, and we could move a
little bit further down to local data centers. There you're
going to be computing your workloads, your ML models or
whatever, very close to your infrastructure, but you're
still doing it in someone else's rented infrastructure in a
DC. If you go further down, you start to bring the computing
power in-house, and here we arrive at the compute edge: an
edge server, usually on-site rather than in a data center,
where you compute things. So we have compute edge, usually
big servers; gateway edge, which is usually small, more
portable machines; and device edge, which is workloads
executed in embedded systems, very small systems such as
PLCs, wearables, small devices. As you go down, you get
lower latency, which is good if you're looking for a
real-time system, and you become more scalable and more
secure. This is in line with the four big drivers of edge
computing that we just spoke about.
Now, which of these edges is going to see the most business?
According to Gartner, the lower you go, the better it's
going to be in terms of business, which makes a lot of
sense, because a regional DC is something very close to the
cloud, and a local DC is still very similar to the cloud. If
you really want to enjoy all the good things about edge
computing, then the further you are from the cloud, the more
edge computing you're doing, and for those applications that
really make sense at the edge, it's going to be better. So
gateway edge and device edge are probably the two that are
going to make it in terms of being the most used by
companies.
If we look at this impact radar by Gartner, again we can see
that edge computing is usually at the center. We can visit
many of these diagrams, and we'll see edge in one fashion or
another, always appearing very central. Here you have edge
computer vision right at the center of the radar, but you
also have edge AI very close to it, and edge AI is also an
up-and-coming discipline.
Okay, so this is our index today. We've already gone through
the first two points: the challenges of cloud computing, and
why the edge is, for us, the solution. Now we're going to
jump to point number three, edge computing plus AI. The idea
is that we invite this new friend, AI, into the mix and see
what happens, because edge computing is a very powerful
technology, but when it's coupled with AI, the two just
become a massive team. Let's see how we could do that. For
this, I've brought a couple of examples, real examples of
projects that we've done with real companies here in Spain.
The first case is Acciona. This is
a company that does many, many things; in this case we
worked with the water management side, which manages many
water treatment plants globally. They usually spend a lot of
money on chemicals, because they have to ensure that the
quality of the water to be distributed is the best. But when
you put chemicals in the water to treat it, you usually have
to wait for some time, because there are dynamics there: it
takes some time for the water to stabilize. So it's always a
matter of trial and error. You put some chemicals in the
water, wait a while, analyze it, then put in some other
chemicals, wait, analyze again, and there's this up-and-down
approach toward the ideal level. That takes a lot of time
and also a lot of money, because you might be using more
chemicals than you should. So what they did is come up with
a predictive system where, with machine learning, they would
predict the amount of chemicals they would need from the
start, avoiding this back and forth of adding more chemicals
and retesting. For that, they created a model trained on all
the data they had from past interventions, and they wanted
to deploy it at the edge. Why does it make sense
to deploy this at the edge? Well, because they have many
water treatment plants globally, and each plant depends on
or belongs to a different company. They're very wary about
data, so they want their data kept in the plant: there's the
privacy driver. But also, these algorithms are usually
fine-tuned for each plant. We have this general model that
gets fine-tuned down to the level of the sensor, and one
plant might have several sensors, 12, 20, 30 of them. We
have to train a model for each sensor. And you know that
over time those models become somewhat decoupled from the
sensors: because we are using the model to predict things,
we are changing the environment, and then the model might
not be as coupled to the environment anymore, so we have to
retrain the algorithm. The sensors also change with time;
their capacities and accuracies change, they get out of
sync. So we have to retrain the models to make sure they
stay relevant, at that moment in time, for that sensor.
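A sketch of that per-sensor pattern: one lightweight model per sensor, updated incrementally on the plant's edge device and flagged for retraining when its error drifts past a threshold. The model choice, the threshold, and the sensor count are illustrative assumptions, not the company's actual setup.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class SensorModel:
    """One incrementally trained model per sensor, living on the edge device."""

    def __init__(self) -> None:
        self.model = SGDRegressor()
        self.fitted = False

    def update(self, X: np.ndarray, y: np.ndarray) -> None:
        self.model.partial_fit(X, y)   # incremental fit: cheap enough for an edge box
        self.fitted = True

    def drifted(self, X: np.ndarray, y: np.ndarray, tol: float = 0.15) -> bool:
        """Flag the model for retraining when its relative error grows too large."""
        if not self.fitted:
            return True
        err = np.mean(np.abs(self.model.predict(X) - y) / (np.abs(y) + 1e-9))
        return err > tol

# Fine-tune the general model down to the level of each individual sensor.
models = {sensor_id: SensorModel() for sensor_id in range(30)}
```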
Obviously you could do all this in the cloud, but it
probably wouldn't make sense, because you'd be training many
different models for specific devices up there in the cloud.
It makes more sense to have an edge device in the plant and
train all those models locally: it gives you faster
throughput, and it's also more careful in terms of data. So
here we also have some of the other drivers we spoke about
that probably make the cloud not the best choice.
So that's one of the cases I bring you, but we have another
one based on energy, especially the self-consumption markets
that are emerging everywhere, but also smart grids, which
are more of a medium-term bet, though something that is
going to arrive as well. In energy we have loads of edge
computing applications, because if you think about it, smart
grids are highly distributed infrastructure, and critical
infrastructure, so data privacy and security are very
important. But also autonomy: you need all those
transformation centers or substations to work independently.
And it would be very nice if you could get them to talk to
each other and somehow negotiate where they should route the
energy to ensure the best use of it, so we avoid throwing
away energy on one side of the network while producing more
energy than we need on the other side, just because we
haven't routed it well enough. If we could get all these
nodes to speak to each other and find the best way to
balance the network, that'd be great from a smart grid
perspective. But the case that I bring you today is
a case for EDP. EDP works in this self-consumption space,
and they wanted to give their customers at home a system
that would allow them to make intelligent use of energy.
Customers have solar panels on the roof, batteries to store
the energy from the inverters, probably a charger for their
electric car, and they're also connected to the grid. The
idea is to make intelligent use of the energy: whenever we
have green energy available, we should use that, but we
should also take care of when we charge the car or run the
dishwasher. You make intelligent use of all the energy so
you maximize the amount of renewable energy that you use at
home.
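A toy sketch of that idea: given an hourly solar forecast, place each flexible load in the hour with the most spare green power. The numbers are invented, and EDP's real algorithm is certainly more sophisticated than this greedy rule.

```python
# Assumed inputs: hourly solar forecast (kW) and flexible 1-hour loads (kW).
solar_forecast_kw = [0.1, 0.3, 2.5, 3.8, 4.0, 3.1, 1.2, 0.2]
loads = {"dishwasher": 1.2, "ev_charger": 3.5}

def schedule(loads: dict, forecast: list) -> dict:
    """Assign each load to the hour with the most spare solar power."""
    spare = list(forecast)
    plan = {}
    for name, kw in sorted(loads.items(), key=lambda kv: -kv[1]):
        hour = max(range(len(spare)), key=lambda h: spare[h])
        plan[name] = hour
        spare[hour] -= kw    # that solar capacity is now taken
    return plan

print(schedule(loads, solar_forecast_kw))  # {'ev_charger': 4, 'dishwasher': 3}
```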
This is a project, again, that makes sense at the edge,
because we're speaking of many households; you can imagine
what a mess it would be to run the computations for each
home's models in the cloud. We also have the problem of data
and privacy here, so it really makes sense to do it at the
edge. So we created a system based on Barbara where they
could do all this intelligent distribution of energy
internally in each home, using an ML algorithm to ensure the
best use of energy. The nice thing about this application is
that it
can also be taken one level up. What if all the homes could
speak to each other, get in touch, and exchange the energy
they don't need anymore? Or take turns charging their
electric cars: one charges from 10:00 p.m. to 11:00 p.m.,
the next home from 11:00 p.m. to 12:00 a.m., and so on. We
would avoid the peak in consumption and have a more
distributed, flat usage of energy in a community, what we
could call an energy community. So we can see many
applications of edge computing in this space. This is one
example, but you can see how it makes sense to run all those
algorithms at the edge. Now, the two
examples I spoke about were very industrial, and that's
because we at Barbara work in industrial edge computing. But
I want to come back to a broad example that everyone knows,
a more consumer example: Alexa, the smart speaker that many
of us have at home. With Alexa there is some edge computing
in play, but there's also some cloud computing in play, and
I think this is a good example of how we will see edge
computing complement cloud computing.
With Alexa we have this flow: we have a user, we have the
device connected to our network, usually through the router,
then it goes to the Internet provider and on to some cloud
services. Whenever the user speaks, Alexa takes the audio
and digitizes it. Here they use what they call a keyword
spotting algorithm, a small algorithm that runs locally on
the Alexa device and is always listening for the keyword, in
this case "Alexa". Whenever it spots it, it captures the
audio that's being spoken, digitizes it, and sends the
request to the cloud.
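Schematically, the on-device part looks something like this. The model, the threshold, and the helper functions are all placeholders for what Amazon actually runs; only the shape of the flow is the point.

```python
import numpy as np

def tiny_kws_model(frame: np.ndarray) -> float:
    """Placeholder for the small on-device model; returns P(keyword in frame)."""
    return float(np.random.rand())

def record_until_silence() -> np.ndarray:
    """Placeholder: capture the user's request after the wake word fires."""
    return np.zeros(16000)  # one second of 16 kHz audio

def send_to_cloud(audio: np.ndarray) -> None:
    """Placeholder for streaming the digitized request upstream."""
    print(f"streaming {audio.size} samples to the cloud")

def listen(mic_frames, threshold: float = 0.9) -> None:
    for frame in mic_frames:                    # runs continuously, on the device
        if tiny_kws_model(frame) > threshold:   # edge: cheap, always-on inference
            send_to_cloud(record_until_silence())  # cloud: the heavy models
```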
So here we have an ML algorithm running locally; we can
speak of edge computing, because we are running this
algorithm at the edge. Then the captured audio is streamed
over the Internet and arrives at Amazon's servers in the
cloud. The first thing that happens is the audio gets
converted into text through a voice-to-text algorithm, so
here we have an ML algorithm working in the cloud. Then that
request gets analyzed and a response gets prepared, and
usually other AIs might come into play here; again, those
AIs are executed in the cloud. So we started with an edge
computing algorithm, and then the work was handed to the
cloud for several cloud computing algorithms to come into
play. The next steps are preparing the response, which is
usually either text or an action. If it's text, it gets
converted into audio, streamed back through the Internet to
the device, converted into analog sound, and played back to
the user. So the user gets the answer. Summarizing what has
just
happened: we've had a little bit of edge computing with this
keyword spotting ML algorithm, and then we've had a lot of
cloud computing with the voice-to-text conversion and the
other AIs that come into play. Because most of the
computation is done in the cloud, we tend to think of Alexa
as a cloud computing solution, but the truth is that there's
a little bit of machine learning also going on locally at
the edge. So this is really the combination of edge plus
cloud computing. And here I want to speak about this
edge-to-cloud continuum, which for us is key. There are many
applications that make sense at some point between the cloud
and the edge; we don't think applications are all-cloud or
all-edge.
Many applications need to distribute their workloads across
this edge-to-cloud continuum, and we think one of the most
important things you have to do when designing a system is
figure out where you're going to put this or that
computation. If you need low latency, if you need to protect
your data, if you need to be autonomous, then you should
probably put things at the edge. But that will probably be
complemented with other things in the cloud that give you a
centralized point of view and some additional computing
power. The idea is to work with this continuum and be
intelligent about where you put your microservices.
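As a sketch of that design decision, here is how the drivers from this talk could be turned into a simple placement rule. Real systems weigh many more factors; the thresholds and fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float
    data_may_leave_site: bool    # is it OK for raw data to leave the premises?
    must_survive_offline: bool

def place(w: Workload) -> str:
    """Pick a spot on the edge-to-cloud continuum from the workload's needs."""
    if w.must_survive_offline or not w.data_may_leave_site:
        return "device/gateway edge"
    if w.max_latency_ms < 50:
        return "compute edge or local DC"
    return "cloud"

print(place(Workload("valve-control", 10, False, True)))      # device/gateway edge
print(place(Workload("fleet-dashboard", 2000, True, False)))  # cloud
```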
And here I have to take my words back just a little bit.
Maybe the cloud is not broken; maybe it just needs some
friends. Maybe it just feels a little bit lonely, and edge
computing is one of those friends that comes to the rescue.
Probably the other big friend is AI. If we combine these
three, edge plus cloud plus AI, we have a very powerful team
that's capable of almost anything. Almost anything.
All right, so we've just seen how we should be deploying
many models in different places, from the edge to the cloud.
How do we manage this complexity? If we have an application
deployed at several points, maybe we have one microservice
running at the edge, another running in the cloud, maybe one
somewhere in between in a local data center, or maybe an
embedded part plus a gateway part. How do we manage that?
Well, MLOps comes to the rescue, as you might have imagined.
It's all about MLOps. You know what MLOps means: maintaining
machine learning models in production so you can trust them
and make sure they're always there and doing their job. The
idea is that we should be able to not only put them in
production, but also monitor them, keep collecting data,
retrain them, and redeploy them.
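Schematically, that loop looks like this. Every function is a stub standing in for whatever platform or pipeline you actually use; only the cycle itself is the point.

```python
def deploy(model, devices):    print(f"deploying v{model['v']} to {len(devices)} devices")
def monitor(devices):          return {"accuracy": 0.82}            # stub metrics
def collect_samples(devices):  return ["fresh field data"]          # stub dataset
def needs_retraining(metrics): return metrics["accuracy"] < 0.85    # stub policy
def retrain(model, data):      return {"v": model["v"] + 1}         # stub training

def mlops_cycle(model, devices, rounds=3):
    """Deploy, monitor, collect, retrain, redeploy: the loop described above."""
    deploy(model, devices)
    for _ in range(rounds):
        metrics = monitor(devices)          # is the fleet still doing its job?
        data = collect_samples(devices)     # keep gathering data from the field
        if needs_retraining(metrics):
            model = retrain(model, data)
            deploy(model, devices)          # roll the new version out again

mlops_cycle({"v": 1}, devices=list(range(1000)))
```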
And obviously that becomes a little bit messy when you're
dealing with, say, a deployment of 1,000 devices in the
field. You might be used to doing this in the cloud with a
single model, or maybe with 10 or 20 models, but models that
are centralized in the cloud. You can imagine how messy it
gets when you move to the edge and you have 1,000 different
devices in different locations, some of them very remote and
very difficult to reach. That can become a real problem. And
it actually is a problem: if you look at the challenges of
MLOps according to the AI Infrastructure Alliance, one of
the biggest challenges we have when we want to deploy models
at the edge is that more companies than not take from two
months to a year to put their models in production. From the
moment they have everything validated, the model exists and
works and they just want to put it in production, it takes
between two months and a year just to deploy it, and that's
largely due to the complexity of deploying models. So among
the biggest MLOps challenges are deployment and monitoring.
If we're going to do this at the edge, we need a platform,
in the same way that when we do cloud computing we tend to
use a commercial platform: we go to AWS, or Azure, or
Google, or whichever provider we like. We need a platform to
do that with edge computing too.
So that takes us to point number four: architecture and frameworks.
Here we have different providers, starting with traditional
providers such as AWS. AWS provides you with basically
anything you would need. They are famous for not deprecating
services, so they have many services running and the stack
is just huge. If you wanted to do edge computing with AWS,
chances are you would be able to. We also have Azure, which
is probably the other big contender among traditional cloud
platforms that go down to the edge. Again, there's a huge
stack of products here, so if you wanted to use Azure, odds
are good that you would end up being able to do whatever you
want at the edge.
However, there are some traditional players, such as Google,
that are shutting down their IoT edge computing offering. So
there's a little bit of confusion in the market, where some
big players are betting on IoT and edge computing and others
aren't. It's recent news that Google will be discontinuing
theirs this year. We also have IBM, who have likewise
retired their IoT cloud services. So again, a little bit of
confusion. It's not clear why they decided to do that,
probably because it's so complex: the good thing about the
cloud is that providers manage to offer a standardized,
horizontal sort of service, whereas when we go down to the
edge it becomes a mess, because you have many different
devices from different hardware providers, with different
specs, in remote locations. That might have been a difficult
problem for these guys to solve.
My read is that these traditional cloud providers have seen
the edge as a way to go out and get data to feed their cloud
services. They're not native to the edge; they're more like
someone who finds it convenient to go to the edge to try to
recruit some workloads, or some data, that they can use to
bring customers onto their online services.
Apart from these traditional guys, there are some native
providers. These are usually new-breed companies, usually
startups, usually three to five years old, that have come to
life just to do edge computing; their only objective is edge
computing. So they're a different breed of company compared
to the big cloud providers, but they're also small. We have
ZEDEDA, which is an American company; we have Crosser,
Sunlight, Avassa, SixSq; and we also have Barbara. Today I'm
going to speak about Barbara because, obviously, that's the
company I work at and the platform I believe is going to
become the new standard, but also because speaking about
Barbara in a way means speaking about all the other guys,
because we tend to do things in a similar fashion, give or
take. If you think of the technological stack
that you can find with these providers, you're always going
to find a different flavor of this, more or less. On the
right side of this slide we have the stack divided into the
cloud part and, below it, the edge part. The idea is that
the cloud part is unique: we have only one cloud for all the
edges, while the edge part is thousands or tens of thousands
of boxes, as many devices as you have in your deployment.
Each of those boxes is one device in a remote location,
doing things autonomously but managed from the cloud. So we
usually have a centralized view of all the decentralized,
distributed devices. And the way it works is as follows.
In each device, first we have an OS. In our case it's
Barbara OS, but you could usually use any breed of Linux.
ZEDEDA, for example, have EVE-OS, their own operating
system, and many other companies just use regular Ubuntu or
Debian or some other open-source distribution. So you have
your OS at the bottom of the device, controlling the
underlying hardware. One step above that you have what we
call the Barbara Node Manager; that's the name we've given
it, but in some circles this is usually called an agent.
What it really is is a piece of software installed on the
device that manages all the connections, all the commands,
and the relationship of the device with the cloud
"motherboard", that centralized point where all the command
and communication should happen.
So this node manager is a small piece of software that runs
on the device and ensures that anything that should be
executed on the device gets executed, that anything
important produced on the device gets communicated to the
cloud, and that the system remains safe, working, and
autonomous, that everything's fine. That's the node manager.
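In essence, such an agent is a loop like the following. The endpoints and payloads here are invented for illustration and are not Barbara's actual protocol; only the poll-execute-report shape is the point.

```python
import time
import requests

API = "https://panel.example.com/api"    # placeholder for the central API
DEVICE_ID = "edge-node-001"

def run_command(cmd: dict) -> None:
    """Placeholder: start/stop a workload, update the OS, fetch logs..."""
    print("executing:", cmd.get("type"))

def agent_loop() -> None:
    while True:
        # Ask the central point whether there is anything to execute.
        cmds = requests.get(f"{API}/devices/{DEVICE_ID}/commands", timeout=10).json()
        for cmd in cmds:
            run_command(cmd)
        # Report back that the device is alive and what it is doing.
        requests.post(f"{API}/devices/{DEVICE_ID}/heartbeat",
                      json={"status": "ok", "ts": time.time()}, timeout=10)
        time.sleep(30)   # then go quiet; the device keeps working on its own
```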
Then on the device we also have the workloads, and they
usually run inside Docker containers. I'm not going to go
into what a Docker container is, but it's basically just a
very convenient way of packaging applications. On the slide
there are some purple workloads and some white workloads,
those small boxes you see there. The purple ones run off the
marketplace: we at Barbara have a marketplace where we offer
pre-built, pre-tested, validated applications that users can
deploy to their edge devices without programming a single
line of code. The white ones are the ones users themselves
upload to the system. So you can either use pre-built
applications and deploy them, or build your own applications
and deploy them. All of that is happening in each separate
device.
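With the Docker Python SDK, "deploying an app to a device" boils down to something like this once the agent has received the command; the image name and registry are illustrative.

```python
import docker

client = docker.from_env()

# Pull and start the workload the cloud asked for; keep it running across reboots.
container = client.containers.run(
    "registry.example.com/image-recognition:1.2.0",   # illustrative image name
    detach=True,
    restart_policy={"Name": "always"},
    name="image-recognition",
)
print(container.status)
```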
Then, when we come to the cloud, we usually have an API, the
central point where all the commands are issued and which
also receives all the monitoring feedback from the devices.
On top of that we have what we call the Barbara Panel, which
is just a front end, a visualization panel where you can see
all the monitoring information but also issue commands.
That's the architecture you're going to find across
providers: manage remotely located edge devices from a
centralized point while keeping them secure, low-latency,
and autonomous, fulfilling every promise the edge gives you,
but also letting you control them all from a centralized
cloud platform.
All right, so let me show you an example of how these
platforms work. In this case, we're going to log into
Barbara with my account, and you're going to see the list of
devices I have here. This is my demo account, so most of
these devices are just for demonstration purposes, but the
idea is that you could have as many devices as you wanted,
huge deployments in the thousands if you wanted to. And you
can organize them into groups. Here I have three groups; you
could have many, many more. Each group has several devices,
and you can visualize them all together or by group: all the
demo devices, or maybe all the AI models, and just browse
through them. These particular devices are in our
laboratory, in our offices in Madrid, but they could be just
anywhere in the world. They could be in remote locations
running algorithms without even a connection to the
Internet, or with a very sketchy connection that comes on
and off quite often. So the idea is that you have all those
devices and you can do things with them. You can operate
them individually.
For example, I can go to this artificial vision edge node
and see all the workloads being executed on it. I can
collapse all the cards; all these light blue cards are apps
running on the device. In this case we have an image
recognition model; we can see its logs and check how it's
doing. There are other things running there too: we have
Grafana, a graphical interface, and an InfluxDB instance
storing all the data being read from the camera as well as
all the data produced by the model. Again, you can view the
logs and see what's going on. You can easily remove things
from here: if I want to remove the model, I just click here
and remove it. So I've deleted this card; it's going to
disappear in a moment.
Yeah, it's disappearing.
Yeah, so it's gone, but now I can add it again. Sending
workloads, machine learning algorithms, or anything else to
your edge devices is very easy: you just select the app you
want to send. In this case we're going to send the image
recognition model, pick the version, and send it to the
device. It should show up in the same space it was in
before. Now we have it again, and it will be running in a
moment. So you can handle devices individually and make sure
your models, your visualization systems, your databases,
anything, is running on that particular device. We've been
working on this one device that I selected, but this can
also be done massively.
If we go back to our nodes, we can do this in batch: we
select different devices, select the batch operation we
want, launch it, and it's applied to all those devices. We
can install a model on many different devices with just a
couple of clicks. You can also update those models or
install other types of applications. It's very handy, when
you have thousands of devices, to be able to do that in
batch.
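Against a hypothetical REST API like the one described earlier, such a batch operation is just a fan-out: one call per device in the group. The endpoints and payload shapes are assumptions, not Barbara's real API.

```python
import requests

API = "https://panel.example.com/api"    # placeholder central endpoint

def batch_deploy(group: str, app: str, version: str) -> None:
    """Send one app to every device in a group, one command per device."""
    devices = requests.get(f"{API}/groups/{group}/devices", timeout=10).json()
    for device in devices:
        requests.post(f"{API}/devices/{device['id']}/apps",
                      json={"app": app, "version": version}, timeout=10)
        # Each device's agent picks the command up on its next poll.

batch_deploy("demo-devices", "image-recognition", "1.2.0")
```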
Then there's another very interesting view here, the spaces
view, which lets us see all the workloads running in our
system, application by application, model by model. Here we
can see that we have three image recognition models running
on different devices. One is running on this "VR" edge node,
another on the artificial vision edge node, which is the one
I showed. It's online, running at the moment; we could stop
it if we wanted, or open its logs in a new screen. This is a
very neat and easy way to see all the workloads you're
running at the edge. You have to remember that this is not a
cloud environment where everything runs centrally: each of
these apps is running on a different device, or maybe
grouped onto a specific device, somewhere remote. And the
nice thing is that if they lose connection for some reason,
they're going to keep going and keep doing the magic they
do. If they're routing traffic, or routing energy in a smart
grid, they're going to keep doing that. They're very
resilient to any problem with connections or with cloud
systems.
Point number five is about applications. We already spoke
about two of them, if you remember: one in the water
management space and one in the smart grid, self-consumption
energy space. There are many more. I'm not going to go into
them because I am running out of time, but you can check
them out on our website, www.barbara.com. There you have a
section devoted to many different use cases, and you're
going to see why it makes sense to use edge computing in
these spaces: smart grid examples, smart water examples,
also smart manufacturing, computer vision, and many, many
more. So if you want to know more about which applications
make sense at the edge, just go to our website. And with
that, we've reached the end of the presentation. I hope
you've liked it, and I hope it's been useful. If you need
anything else from us, please reach out, and I hope to see
you soon at the next conference.