Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and welcome to my talk today. I'm here to present a topic on exploring stateful microservices built with Open Liberty in Kubernetes. My name is Mary Grygleski and I'm a senior developer advocate at IBM. You may wonder, who is Mary Grygleski? I'm a developer advocate at IBM, and I first started out talking about reactive systems. If you have seen me before, I was doing a lot more reactive systems work for the past year and a half or so. Then recently I transferred to the WebSphere team, which covers Liberty and the whole open source Java tech stack within IBM. That is where I am now.
So my area as such also involves cloud native, cloud DevOps, and enterprise distributed systems, and my projects now include Open Liberty, MicroProfile, Jakarta EE and more. I'm essentially a developer relations software engineer, and I myself have 25 years of experience being an engineer doing development work and delivery work as well. So I understand it's not just about coming out to talk; I myself was an engineer dealing with the delivery of software in the past. Outside of my work I'm also a very active community builder. I'm currently the president of the Chicago Java Users Group, I'm also the co-organizer of several IBM-sponsored meetups in the Chicago area, which is where I'm located, and I'm also an active church volunteer.
So today's topic is about stateful microservices. But before we get into that, let me start with a bit of background to make sure we set the context correctly. Before we get into the serious stuff, let's talk about what stateful versus stateless means. Actually, as I was doing research on this topic, and by the way it's a new topic for me too, so bear with me, I realized we are all at the tip of the iceberg, because to me there are tons of information that have yet to be mined from this topic area. It's exciting, and for me, coming from a reactive background, I see a lot of overlapping concepts that cut across. It's not black and white, that's reactive and that's stateful.
The fact is that in reactive, too, we have to handle the state of an application, and as such, even though I'm perhaps not talking so much about the reactive side, as you can see, some of the concepts and goals in making your microservices stateful really touch upon reactive. But anyway, let's have some fun stuff first. As you can see, I now have a picture of Nemo, his dad Marlin, and the forgetful fish, Dory. If you know that story, then you may be able to guess why I am borrowing this idea. I have to say, too, that somewhere I read somebody talking about stateful microservices who brought up Nemo, and that kind of gave me the idea. So, just so you know, it's not my own original idea, but I certainly got into it, because I know the story. I just thought it's a great analogy to use at a conference, right? Less boring.
And let's talk about fish; let's use them to illustrate. For some of you who are not familiar with the story, let me give a two-minute run-through of it. Nemo is a small baby fish, and Nemo actually lost his mom, so he's just living with his dad, Marlin, the bigger orange clownfish. The two of them are living together happily. Then all of a sudden, one day, Nemo got into trouble and he got caught, captured by some, I wouldn't say bad people, but just captured. And obviously, as you can imagine, the dad was frantic trying to look for Nemo. During the course of it, he was swimming around, and then he bumped into this bluefish, Dory. She is actually a very forgetful fish; she doesn't remember anything. Now, as the story goes, and even if you are new to the story, you can imagine, right? Have you ever dealt with people who are forgetful? And I really don't mean to make fun of such a situation; there are people with medical conditions who do not have memory. But in a cartoon kind of manner, we just laugh about things like that in a funny story like this. You can see, too, that Dory is a lot more agile. She can do whatever she wants at any moment without concern for what has happened before, right?
Something like that. So you can kind of see the idea: if you don't need to remember things, you can just go ahead and do things. You're less concerned, less burdened, carrying less baggage. Whereas Marlin is kind of like a normal person, right? We remember things, so he naturally remembers things. And as you can see, both of them embark on their journey to look for Nemo, which was full of really hilarious moments. One I should quote is when there was a group of jellyfish, and Dory thought they were really pretty: they are pink in color, kind of round, kind of ball-like, floating. So Dory went and tried, so to speak, to jump on these jellyfish, because she had no idea that jellyfish would sting. Whereas Marlin is like, hey, wait a minute, what are you doing? Are you crazy? You're going to be stung like that. So that's an example of how hilarious it can be. But it really illustrates that with stateful and stateless, some of these concepts can actually be used as an analogy for what we are trying to talk about today on this topic: being stateless is a lot simpler, right? In a computing sense too.
Computing life used to be simpler too. In the beginning of the computer era, there was no such thing as state, right, that we needed to worry about, other than the fact that state would exist during, let's say, a computation, doing some math, one plus two, some kind of mathematical formula. It only needs to exist in memory for that period of time. Once you get the result, essentially the memory gets thrown away. You don't keep the data around. And that's the idea. But the thing is, too...
Well, okay, let's go into it a bit now. Stateless computing: moving back to the serious matter of computing, what is stateless computing? Essentially, it is a communication protocol that does not retain any session information, the state of the data. Anytime we deal with a protocol, we deal with multiple parties, or at least two. Your receiver receives the data; you don't need to remember what the state was before you received it. That's the idea. The data itself doesn't get recorded between transactions. As a result, you can see that the architecture of your system design and implementation is a lot simpler. Scaling the system: no problem, right? If you have processors handling some system, and there are more demands now, more requests coming in, then what do you do? You scale it up; you add more processors to handle all the demand. No big deal, just add them, right? So it kind of relates to what we are doing today too. Even with cloud computing, when we need more servers, there is serverless computing, right? That's the thing: the infrastructure keeps spinning things up. And that's why the idea of cloud native in the strictest sense is really serverless; it helps to scale the systems a lot more easily. And of course, if there's a system crash, then it's very recoverable, very easily, because when you restart, there's no need for you to remember what the state was before, no need to read memory back and reconstruct on restart. So everything is a lot more agile, as you can see.
Now, realistically, we actually live in a stateful world, and we really cannot survive if we are living statelessly. But that doesn't mean we have to have all applications be stateful; in some circumstances stateless is fine. A functional style of programming, serverless, that's okay. But the bulk of our applications do require state, right? When, let's say, you log on to your web application, you want to retain the credentials while you are still within the session. You don't want every request to reauthenticate; you don't need to reauthenticate yourself every time you make a request within the same session. That kind of statefulness is what we're looking at.
Stateful computing: this really is a communication protocol that retains all session information. So it gets recorded at every step along the way, right? All transactions, everything; theoretically nothing should be lost. And in some ways, you guarantee that you can look back at the system and dig up all the historical information. Now, as a result, because you do need to keep all of this historical data, the logging and everything, naturally the architecture, the implementation, and the design of the system are more complex, more convoluted, and you have to handle many more situations. And if you need to spin off more components when you get a higher number of requests coming into your system, then scaling your system will be a lot harder, because what do you do? If you spin off another node, you want to make sure that node is synced up to what the other running nodes already have. So as you can see, the overall performance, the scaling, everything is just more work, more cumbersome. Recoverability, same thing. If there's a system failure, you need to restart. Well, that's kind of a pain in the neck, because now you need to make sure, when you recover the system, that you read from the history to see how you can reconstruct the state the system was in prior to the crash, and all that. So as you can see, it's not as simple, right, doing stateful.
Now let's take a look: how was statefulness handled in the cloudless days? Here I am using a blue sky; all you can see in the blue sky is a jet plane flying by, in the cloudless days before cloud. It was all client-server systems back then. Let's not go back all the way to the beginning of the history of computing, but let's say the 90s and the 2000s, when Java came out; let's talk about that time. A lot of statefulness was handled using database-style transactions. We relied a lot on the database to essentially keep data persistent on the server side, so all of these transactions were being handled by the database; we leveraged the database itself a lot. Now, as far as Java goes, if you have worked with Enterprise JavaBeans, the earlier specification of enterprise Java, the EE edition, right, Java EE or J2EE as it used to be called, and now it's called Jakarta EE (it just changed names about a year and a half ago): Java used to have Enterprise JavaBeans, EJB, including stateful EJBs. If you worked with that, you're aware that stateless EJBs are a lot simpler to do, whereas stateful is actually a lot more work. And then there's also the enterprise-level stateful... actually, now that I think of it, I don't quite remember; I actually worked for a company that manufactured an app server, so I knew the spec quite well because I had to implement a lot of the management functions. But anyway, statefulness was being handled by stateful EJBs.
And the thing is, too, what about servlets? Servlets essentially rely on the HTTP session. The session is what keeps track of the session information and ties together all the web requests that come in. Then, as far as the client side is concerned, the client side would cache things like server responses.
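Just to make that concrete, here is a minimal, hypothetical sketch of servlet-style session state in Java. The servlet name, URL pattern, and the "visitCount" attribute are made up for illustration; on newer Jakarta EE levels the imports would be jakarta.servlet rather than javax.servlet.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

// Minimal sketch of server-side state kept in the HttpSession.
@WebServlet("/visits")
public class VisitCounterServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // getSession(true) creates a session (and sends a JSESSIONID
        // cookie) the first time this client calls the endpoint.
        HttpSession session = req.getSession(true);

        Integer visits = (Integer) session.getAttribute("visitCount");
        visits = (visits == null) ? 1 : visits + 1;
        session.setAttribute("visitCount", visits);   // state lives on the server

        resp.setContentType("text/plain");
        resp.getWriter().println("Visits in this session: " + visits);
    }
}
```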
So, for example, with all web applications, for any request you make to the server, you authenticate. The authentication process either makes use of cookie-based authentication or token-based authentication. Cookie-based, as you know, has its concerns; even though the cookie itself is a small piece of information, it is essentially your session ID, right, that you communicate back to the server. So you can see that an external mechanism is needed in order to communicate back to the server: hey, this request coming in has this cookie. The server can then look it up and know which request it is and which client it is coming from, so that it can reuse the same session information. And then of course there's also token-based authentication, and that's JWT, which is more popular now, right, the JSON Web Token. And JWT is actually a bit more efficient than a cookie. As you know, with a cookie you basically give the server your session ID, and then the server has to look up the information; it's multiple steps, you look up the information and then you process the request. However, with a token-based approach like JWT, the token is allowed to carry more information; you can actually encapsulate the session information in the token itself. So then it's more like one call to the server: the request already carries its permissions, and the server looks that up and all those things.
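As a rough illustration of that token-based style, here is a small sketch using MicroProfile JWT. The /profile path and the cart_id claim are hypothetical; injecting the parsed JsonWebToken into a request-scoped JAX-RS resource is the spec's actual mechanism.

```java
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.jwt.JsonWebToken;

// Hypothetical JAX-RS resource reading claims directly from the token.
@Path("/profile")
@RequestScoped
public class ProfileResource {

    @Inject
    private JsonWebToken jwt;   // parsed from the Authorization: Bearer header

    @GET
    public Response whoAmI() {
        // The claims travel inside the token itself, so no server-side
        // session lookup is needed to know who the caller is.
        String user = jwt.getName();
        Object cartId = jwt.getClaim("cart_id");   // custom claim, illustrative only
        return Response.ok("user=" + user + ", cart=" + cartId).build();
    }
}
```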
But anyway, without going into all the details, those are the mechanisms that have been used. Actually, it's not too different from what we have to deal with today; it's just that back then it was client-server systems.
So now let's move into stateful microservices in cloud native environments, and that's what we're interested in talking about these days. We live in a kind of cloud native age, and maybe not everybody has started doing cloud native, but I would imagine that very soon more and more applications will move into cloud native environments. So here at Conf42, first of all: microservices. They are like bite-sized chunks, right, the components in a system, which is actually a big enhancement over the old style of doing things as a monolith. So now we pull in this concern of statefulness. Let's take a look; you may know this already. This kind of sums it up, right? Essentially you're saying that cloud native is really dealing with stateless containers. Why? Because containers need to be very agile. You don't want to keep a lot of history information about them, and going back to the Nemo, Marlin and Dory scenario, if you keep too much information, it just slows things down. The goal of cloud native is to have very agile systems. So we're wondering, then, what to do, right? Let's first understand a bit about cloud native computing, for those of you who may be newer to this approach to computing. It's really an overarching approach.
It's really an extension to cloud computing. It addresses the true needs of enterprise-level distributed business application systems. So these true needs, what are they? Let me go back to history: who started this, who coined the term cloud native? It was really Netflix. Netflix has been really ahead of its time in terms of software and computing. Because of their business, they needed to serve streaming media content in high volumes and at high demand, and that's what pushed them to get to that level of computing early on in the game. They coined the term cloud native in the early 2010s, and essentially they wanted to leverage the cloud to meet their goals for the systems to be highly available, scalable and performant. So as you can see now, if you are also familiar with reactive systems, and that's where I came from, advocating for it, I was like, oh wow, cool, look at the similarities, right? As we explore deeper into the cloud native context for microservices, we want microservices to help systems be more highly available, and this pairs up really well with the reactive concepts of being very responsive, very scalable and very resilient. So as you can see, highly available, scalable and performant: performant is kind of like the responsive aspect of reactive. So they share these common goals. But anyway, I know I get a bit excited when I get back to talking about reactive. I just believe that sometimes, with all this labeling, things are not really black and white. There is so much commonality of features and characteristics between the different things that we label. But I think ultimately the goal of us doing computing is to provide solutions to solve problems, and that's what it is. So it isn't that one thing is better than the other; it really all depends, right? It depends on what you want to do for your system, what you need. Anytime you need to look for a solution to your problem, you need to understand your problem first, what you are trying to solve, and then decide on the right tool to use. But anyway, I digress.
So let me get back to it. It's kind of interesting and important that we also bring up cloud native within this context of stateful microservices. For cloud native, right, we're kind of having this debate: we're talking about stateful microservices, so how is that possible? For cloud native, you may also have heard of the twelve-factor application. It's a methodology that essentially helps guide us. It doesn't dictate what you should use; it's more of a guiding principle, another kind of guiding principle that describes what a cloud native application should do. It was drafted by developers at Heroku, again as a set of guidelines for portable and resilient applications that are well suited to cloud environments. Now, one of the factors, and that's the one I'm highlighting, is the need for self-contained services which are to be deployed as stateless processes. So, microservices architecture: so far, that's the primary thing; so far, microservices is the approach that can satisfy such a requirement. So here we are: why am I doing this talk? What is going on with stateful microservices, when microservices, as we talk about them today in a cloud native way, should be stateless? You're thinking, am I crazy here, right? But like I said, I was trying to point out that it is necessary; we need to address the stateful aspect of any application, and how we do it is what we're trying to explore, to see how this can be done in a microservices, cloud native world.
The twelve-factor application: we may as well show it a little bit, so let me move over here a little bit. As you can see, there are twelve factors. Like I said, you can read all of them, and some of them you already know. We deal with making sure there's a lot of accountability, keeping track of things, and the dependencies, configuration, all these things. But I wanted to highlight number six, processes: execute the app as one or more stateless processes. And then another one, number nine, disposability: it says any kind of cloud native app needs to be able to start up fast and shut down gracefully and fast. Everything is fast, fast. So these two cloud native factors I think we've already touched upon. But now let's ask the next question: how do we preserve state across sessions, across transactions and network boundaries? In a cloud native environment, we are really dealing with true distributed systems that can span many different types of environments and physical machines. It may not be across the ocean, but it can be, right? It basically gives you a lot more flexibility. You can have data centers located all over the world. With an app, you can scale it anywhere; the nodes can be started up anywhere and can be replicated. So how do we preserve state? That's the million-dollar question here.
Now let's talk about some of the techniques and mechanisms. It's what we already kind of talked about: we can make use of a cache, right? We can do caching; it's still very valid, and we'll have an example. The demo will show you how we do it with Open Liberty. Then there are also database-style transactions that rely on the database. As you can see, all of these things are really not intrinsic to the microservice itself, or maybe to the container itself. Right, the container: what used to be a container, even in the EJB, Enterprise JavaBeans sense, now translates to cloud native, which means containers at the infrastructure level, such as Kubernetes, very popular these days for container orchestration. So anyway, as you can see, none of these are intrinsic to the containers themselves; they are techniques and mechanisms that we can utilize. Then there are also cookies, right? We can still make use of cookies, and sessions and tokens and all these things that we talked about earlier.
Again, just now I touched upon cloud native infrastructure. So with cloud native, why don't we take advantage of the cloud native infrastructure? Infrastructure like Kubernetes has the concept of StatefulSets, right? And that's what it is: Kubernetes, at the container level, actually maintains that set of information; it keeps the statefulness there. And even with OpenShift, which comes from Red Hat, they also have a cloud native concept of stateful, cloud native data, that essentially makes use of container-level persistence for you. So again, this is more infrastructural. And there are also persistent volumes and cookie affinity. Those are examples of features within Kubernetes that can help you persist data across the different things that are replicated. So then it takes the burden off of the application itself.
Okay, so now let's get back to the programming level a bit more; we're dealing with programming design patterns. A very prominent pattern to think of these days is the saga pattern, right? A saga is like any kind of saga: in a story, when we talk about a saga, it's kind of a long-running story, right? Something that will last. You're not only processing things in the spur of the moment, kind of statelessly; you're dealing with a long-running transaction. These two I kind of group together, but I wanted to also point out that there's also this thing called long-running actions, and that is a form of saga too; it's like a saga interaction pattern. I probably should be grouping them together, but I can't at the moment find a very clear distinction about whether they are two separate things, because they share a lot of similarities, and I'm not able to find documentation that addresses both all at once. So I am being careful, at least in terms of the names of these patterns; I have split them into two, but they could easily be combined together. They share a lot of similarities in terms of their concepts.
So let's get into it. A saga really helps transactions span across multiple services; that's the goal. But we don't want to go back to the old ways of database two-phase commit. Some of you may have worked with that before, right? It used to be that with transactions we had 2PC, two-phase commit. That makes use of multiple databases, or say two different databases, with a transaction coordinator that passes information back and forth: are you done? Are you done? Good, then you commit. It's a lot of communication going on in between, and you can see it's probably a lot of messaging. So it's not really suggested; instead, using a saga pattern is what's being advocated in a microservices environment. And of course, one of the most respected microservices experts is Chris Richardson, whom I actually interviewed on one of our IBM videos; if you want to look that up, we talk a bit about microservices and all of these concepts around it. It is truly amazing.
So, back to sagas: there are two ways of doing saga coordination. One is choreography and the other is orchestration. Without going too deep, because this talk is more about scratching the surface, exploring this aspect, I wanted to point them out; if you have not gotten into it yet, you may want to start doing some research on it. One type is called choreography sagas. As you can imagine, what it does is leverage an event-driven way of notifying.
Right, if we're talking about sagas here, there are different components. Take ecommerce; I think a shopping cart is a very good example. With choreography, it's like a ballet dance, right? Somebody choreographs all the music and all that, so you make sure the flow is smooth between the different scenes. Likewise, in an ecommerce system you have the order management part, then you have the payment part, and then you have shipment, for example. These are broad categories of an ecommerce system. So you might think there is somebody there acting sort of like a coordinator; well, actually, I should say that with choreography you don't need a coordinator, but you have a built-in way of triggering things when steps in your ordering get done. Let's say somebody places an order; then, you know, it triggers something. It's event-driven. So an event will be sent out, let's say to the payment part. Now, I'm just broadly using this example; in reality it could be more complicated. But let's say a customer places an order on the web; that immediately triggers an event saying the order was placed. "Order placed" is then an event that notifies whoever is listening to it; let's say the payment service is the receiver. So: I received notice that this order entry was placed, so now I'm going to check the payment part. Of course, before you actually take the payment, you do need to check the payment method, right? What type of method is it? Let's say it's a credit card; then you need to make sure the credit card information was given correctly by the customer, things like that. So all these small steps, as you can see, get coordinated well in that sense, being choreographed, but through the use of an event-driven way of doing things.
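Here is a tiny, hypothetical sketch of that choreography idea in Java. The event names, the in-process EventBus, and the services are all made up for illustration; in a real system the events would flow through a message broker such as Kafka.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal in-process "bus" so the event flow is visible; a real saga
// would publish these events to a broker and each service would be
// its own deployable microservice.
class EventBus {
    private final Map<String, List<Consumer<Map<String, Object>>>> handlers = new HashMap<>();
    void subscribe(String type, Consumer<Map<String, Object>> handler) {
        handlers.computeIfAbsent(type, k -> new ArrayList<>()).add(handler);
    }
    void publish(String type, Map<String, Object> event) {
        handlers.getOrDefault(type, List.of()).forEach(h -> h.accept(event));
    }
}

public class ChoreographySagaSketch {
    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // Payment service reacts to "OrderPlaced"; there is no central coordinator.
        bus.subscribe("OrderPlaced", e -> {
            System.out.println("Payment: charging order " + e.get("orderId"));
            bus.publish("PaymentCompleted", e);        // next step in the saga
        });

        // Shipping service reacts to "PaymentCompleted".
        bus.subscribe("PaymentCompleted", e ->
            System.out.println("Shipping: scheduling shipment for " + e.get("orderId")));

        // Order service just emits the first event; the rest is choreographed.
        bus.publish("OrderPlaced", Map.of("orderId", "A-100", "amount", 25.00));
    }
}
```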
And then there's the other kind of saga, which is orchestration. With orchestration, you do need sort of a middleman, a coordinator, to coordinate. It's sort of like: okay, you need something done, so there is a broker that tells all these components what's going on. These are the two primary ways that sagas deal with coordinating the different pieces of your system, composed of different microservices, so that things get done at the right time, essentially.
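For contrast, here is an equally hypothetical sketch of the orchestration style: a central coordinator runs each step and, if one fails, applies the compensations for the steps already completed, which also previews the compensator idea discussed next.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical orchestration-style saga coordinator; the Step interface
// and the control flow are illustrative, not a specific library's API.
public class OrderSagaOrchestrator {

    interface Step {
        void execute() throws Exception;
        void compensate();   // undo this step's work
    }

    public void run(Iterable<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);        // remember what we may have to undo
            }
        } catch (Exception failure) {
            // No rollback in the database sense: apply compensations in reverse order.
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
        }
    }
}
```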
So now let's move on a bit to long-running actions. I'd like to point this out because now I'm working with Open Liberty, and Open Liberty has very good integration with and support for MicroProfile. It leverages a lot of MicroProfile, which is an Eclipse Foundation open source project, and as you know, it's really good for building microservices. MicroProfile also has LRA, if you've heard of that: Long Running Actions. It essentially leverages a compensator model. Now, when it gets to compensators, that's what a saga essentially does too. So you may wonder: we don't live in everyday sunny days; some days are rainy days. In fact, sometimes we get more rainy days than sunny days. Which means, if there's a failure, then what do we do, right? We don't roll back; there's no such concept of rollback as in a traditional database. What it uses instead is a compensator: it essentially looks at what didn't go right and then goes back to reconstructing the state of where your data was before the failure. So in some ways it's not really a rollback, right? You just kind of go back in time and sort of reinstate that particular state. That's the compensator model in a very high-level sense. Of course, there are a lot more details involved with LRA. An example library implementing LRA, again, is MicroProfile LRA.
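As a rough sketch of what a participant might look like with MicroProfile LRA: the /payment paths and the business logic are invented, while the @LRA and @Compensate annotations and the LRA context header are the spec's mechanism (details can vary by spec version).

```java
import java.net.URI;
import javax.ws.rs.*;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

// Hypothetical payment service participating in a long-running action.
@Path("/payment")
public class PaymentResource {

    // Joining (or starting) an LRA: end = false keeps this participant
    // enlisted so the initiating service decides when the LRA closes or
    // cancels; on cancel, the @Compensate method below is called back.
    @POST
    @Path("/charge")
    @LRA(value = LRA.Type.REQUIRED, end = false)
    public Response charge(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId,
                           @QueryParam("orderId") String orderId) {
        // ... charge the customer, keyed by lraId so we can undo it later ...
        return Response.ok("charged " + orderId).build();
    }

    // The compensation endpoint: not a database rollback, but business
    // logic that restores the previous state (e.g. refund the charge).
    @PUT
    @Path("/compensate")
    @Compensate
    public Response compensate(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // ... issue a refund / undo the work associated with lraId ...
        return Response.ok().build();
    }
}
```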
I have links towards the end, so if you want to look into that, you're welcome to. It's a relatively new feature that was just released not too long ago, so even for us, we are still trying to come up with examples to show how we can make use of it. There's also a nice blog article that I suggest you read if you're interested. Okay, now then, it is a bit of demo time. As such, these examples are not production level yet; all of them are new and running in a cloud native way. So I have to say that they may not be as detailed as something you would just take, run, and build upon, but there are certainly concepts we want to illustrate that you can take and plug into your particular application.
The first one is a stateful Open Liberty application in Kubernetes. If you want to follow along, you're welcome to go to this GitHub repository and take a look. Well, actually, you know what, I think I pressed the wrong thing, I'm sorry. I should tell you to go here first, because I wanted to start with session persistence. I wanted to show you session persistence first, using JCache and Hazelcast. This is an example taken from the Open Liberty guides, and I also have a link to it. Now, Open Liberty is all open source, and we have a very nice team; having just joined the team, I'm so impressed. We actually have a group of folks, not many of them, just a few, and some students too, helping us out building the guides. So please follow along if you want to, and let me also get to my... We probably don't have as much time for me to do a very deep dive, but like I said, the code is here, so you're welcome to take it and examine it. So this is the GitHub: just go to Mary G Lab; that's actually a repository that I use for my weekly Twitch sessions where I do a lot of demo code. Anyway, I have actually forked this over here into this guide-sessions repository, so if you want to take a look, go there.
But like I said, I just wanted to quickly step through and show it to you. This guide is actually pretty good. What I should show you is here; I also provide you with a link in my slides, which will be available to you from the conference, so you can look that up. And the Eclipse MicroProfile LRA: this is the draft, and as you can see, it's kind of newer, just about a year old. We have teams that are working on it, so you're welcome to take a look into it. But in the meantime, let me search for my...
Okay, I'm so sorry. Here we go. Okay, so sagas: I also give you a link to, as you know, microservices.io; Chris Richardson is the authority on this topic, right, and there's the saga pattern if you want to read up more on that. Okay, this is the stateful-kube demo; that's the next one we will have, and it kind of extends upon the first demo. So let me go over here. Okay, if you go to my page, I have a Java... so this is my Mary G Lab, and then you go to guide-sessions. And by the way, if you want to join my project, by all means let me know, or come and join my Twitch streams on Wednesdays; I will be talking about that.
So over here, this is the guide session, but like I said, I probably won't actually step through all of it today; you are welcome to take a look into it. But it will explain what a session is. Essentially, the state that we're working with right here makes use of JCache. As you know, JCache is a Java spec, right? It's based on the Java spec for Java caching. So essentially we're illustrating Open Liberty supporting HTTP session data using JCache, and we're also leveraging Hazelcast in this case.
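For reference, this is roughly what the JCache (JSR-107) API itself looks like in plain Java, independent of Liberty. The cache name and entries are made up; whichever provider is on the classpath (Hazelcast in this scenario) is what Caching.getCachingProvider() resolves.

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

// Tiny sketch of the JCache API; the provider on the classpath does the real work.
public class JCacheSketch {
    public static void main(String[] args) {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();

        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class);

        Cache<String, String> sessions = manager.createCache("session-data", config);

        // With a distributed provider such as Hazelcast, this entry becomes
        // visible to the other members of the cluster as well.
        sessions.put("user-42", "cart=bananas:2.00");
        System.out.println(sessions.get("user-42"));
    }
}
```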
So you're welcome to take a look, and you're welcome to... you know what I'll do: openliberty.io/guides is where you can find a list of all the guides. If you go to the persistence section here, this is where I've cloned from: caching HTTP session data using JCache and Hazelcast. This is actually very good. If you want to follow along, you can clone it into your GitHub and work with it, or you can follow along and use the guide, because I think the guide is quite nice too. What you will notice with our guides here is that when you clone one, you'll see two directories: one is a start directory and one is finish. Finish essentially contains the finished product, the solution for that exercise. But if you want to try it out yourself, please do; we encourage everybody to try it out at your own pace, on your own time, right? So it's actually quite nice, and we encourage you to use this, and it will explain things to you.
Session: what is session persistence? I just wanted to take the time to explain a bit how a distributed cache is being handled here: how do you use Open Liberty's session cache to persist HTTP sessions? It's basically leveraging JCache, the standard caching API for Java. So this would be, I think, a good way for you to start, and after you set this up, you can then deploy the application to a local Kubernetes cluster. You can set up your own Kubernetes, and if you have one on the cloud, you can do that too, but the example here makes use of a local Kubernetes cluster, and you can leverage your Docker Desktop for that. It will explain all the steps of how you actually do it. First of all, you will need Docker, and if you take a look at the whole list of steps, it helps you create the application. It's also not too complicated, because we want to illustrate the whole process from end to end.
So then, how do you run it in Open Liberty? Now, there are other exercises you can also follow along with in the guides to get used to things: how do you create a RESTful application, how do you consume it, and how do you actually invoke it? We use Maven in these examples, and you can just run mvn liberty:dev. I wanted to point out that dev mode is a very nice feature, because it essentially allows you to do hot deployment. If you have any configuration changes, you don't need to shut down your server; you can simply continue on your merry way. You make a change to your configuration, it will hot deploy it, and life will be good after a couple of seconds or so. So that's that.
You can take a look over here too, right? We have a standard cart application. Again, it's a simple case, right? And the server.xml will list out all of the specific server information, including your connection to the Hazelcast library. It will download it for you if you haven't already got it; actually, it will download it for you every time you run it with Maven. All of these things are here, and it's all set up for you to look at. It's not too much, so it's easily digestible for you, especially if you're doing it for the first time. And then you can run it, and it will show you the list of endpoints.
And since I already have it, I can quickly run it to show you. But over here, we also provide you with the Hazelcast configuration to set up the default cart cluster. And over here too, maybe we won't get into all the details, but please take your time to try this out. We give you an example of doing a docker pull; you do need to pull Open Liberty into your local Docker, right? And then after that you do a build to essentially set up your images. Then you can take a look. After you set them up, the cart app and Open Liberty, then you can run it. Now, our talk isn't about the mechanics of how you run it, so I won't step through all of it; otherwise this session would go on for way too long. All of this is pretty standard Kubernetes: you can use kubectl to apply your Kubernetes YAML. The YAML file is set up for you already; it points to the cart deployment, and also what the service is, and all that stuff. So you can follow this; it's pretty good. Now, one caveat: if you have multiple Kubernetes clusters and you're using kubectl, as you may know, you want to make sure that your kubectl configuration is pointing to the correct one. Let's say you're running Docker Desktop; then you want to make sure that it is configured for that. Otherwise, if you try to just run it, it will say it can't run. Like I said, there are some small steps like this that carry some assumptions you need to be aware of.
But let's get back a little more: we wanted to take a look at the cart application. The cart application is essentially just the typical, simplest case of a shopping cart. You have a cart application class that extends Application, and we make use of the @ApplicationPath annotation. Let me see... wait, where is it creating the application? I think... okay, here we go. And in the cart application, we also make use of a resource, right, the RESTful endpoints here. So over here, as you can see, we have the implementation of add to cart. You can take a look at what it takes as parameters: item is a string. So essentially, in the path, you can see we'll have the cart, and then the item and the price, separated by an ampersand. And that's what tells it that this is the command to add items to the cart; that's what it is. So with all of these, you add to the cart, and then you can get the cart to see what's in the cart too. So that's what it is. So okay, how about this: let's do a quick run of it, right?
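To give a rough idea of the shape being described, here is a hypothetical sketch of such a cart application and resource. The exact class names, paths, and return format in the guide may differ, but the idea is the same: the cart lives in the HTTP session, which Liberty's session cache feature can then persist through JCache/Hazelcast.

```java
import java.util.Enumeration;
import javax.enterprise.context.RequestScoped;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import javax.ws.rs.*;
import javax.ws.rs.core.Context;

// JAX-RS application root; endpoints live under the web context root.
@ApplicationPath("/")
class CartApplication extends javax.ws.rs.core.Application { }

// Sketch of the cart endpoints described above; names are illustrative.
@Path("cart")
@RequestScoped
public class CartResource {

    // POST /cart/bananas&2.00 puts the item and its price into the session.
    @POST
    @Path("{item}&{price}")
    @Produces("text/plain")
    public String addToCart(@Context HttpServletRequest request,
                            @PathParam("item") String item,
                            @PathParam("price") double price) {
        HttpSession session = request.getSession();   // the stateful part
        session.setAttribute(item, price);
        return item + " added to your cart and costs $" + price;
    }

    // GET /cart lists what this session has accumulated so far.
    @GET
    @Produces("text/plain")
    public String getCart(@Context HttpServletRequest request) {
        HttpSession session = request.getSession();
        StringBuilder items = new StringBuilder();
        double subtotal = 0;
        Enumeration<String> names = session.getAttributeNames();
        while (names.hasMoreElements()) {
            String item = names.nextElement();
            double price = (Double) session.getAttribute(item);
            items.append(item).append(" | $").append(price).append('\n');
            subtotal += price;
        }
        return items + "Subtotal: $" + subtotal;
    }
}
```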
And then we can take a look; let's go back to here. So I have imported this example into my IDE; in this case I'm using IntelliJ. You can actually do a build. Like I said, I've already set things up, so you should be able to run it from the finish directory without changing anything. Of course, if you want to set up a different JDK and runtime, you can still tweak your particular environment, but if you don't, it should run as-is. Okay, so that's that. And I think, okay, let's do this one thing: if we follow the example over there, right, and over here... okay. And, oops. Essentially, if you look at "running the application", that is where we should run it. So all we need to do is mvn liberty:dev.
Okay, so now I'm already here, right? And I want to also mention this one thing: in my case, the Java I'm using is, by the way, the OpenJ9 runtime for the JVM, with AdoptOpenJDK 15; you can of course download 16 at this point in time. And I've set up my path correctly. I'm also in the root directory of the guide-sessions repo. Then you should be able to say mvn liberty... oops, liberty:dev. So this would do the hot-deploy one. Now, I did actually run into some issues when I was testing it, so if you run into issues, you don't need to... yeah, see, that's what is happening. I think there is something interesting going on, but don't take this as a problem, because it could be my environment. So if you say liberty:run, that should also work. Now, once it runs, I just wanted to show you up to the point at which you can bring up the... oh, I'm so sorry, what is going on? Okay, well, see, there is always something, right? That's kind of surprising. Now it's scanning for... okay, you know what we'll do if you run into problems like that? What I've done is bring up my shell and run things from my shell anytime things don't work. Bring up your command line; that should always work. Okay, so here we go. As you can see, I've already done some testing; that was actually late at night when I was doing it. Okay, let's do that.
Oh, you know why? Because I wasn't in the correct directory; I should be in the finish directory. Okay, let me clear it. Now I'm in the finish directory, and that's where the project is, so I can actually do liberty:dev. Okay, let me also make sure we have the path set correctly. Yes. And it's good, because it's showing my AdoptOpenJDK with OpenJ9 at 15. Okay, so now let's do that. Like I said, sometimes if the IDE doesn't work, go to the command line; that's supposed to work. Okay, so now this is going to take a bit of time, but I want to show you up to the point at which the app starts up, and then you'll be able to actually input your cart data into it. I think it should be fun to at least show this part for today's talk, right? Okay, let's see. Now, if this is up, then you should also see a message from Open Liberty; it says something like you're ready to run a smarter planet, which is pretty cool. So let's take a look. Something else: an illegal reflective access operation warning or something. Okay, well, over here I'm actually good. The default server, as you can see, the default server is ready to run a smarter planet. You know then that my server is up.
Okay, so in order to run it, you just go to localhost:9080 and the OpenAPI UI. Okay, ta-da! So now you have the application. Now this, I think, is pretty cool; you can then play around with it. And like I said, the application isn't very complicated, so it won't take you too long to get used to it. I think it's a good way for you to get your feet wet with this, and then you can build really serious applications after that. Okay, so let's take a look, right? We can do a POST; this one essentially will add a new item to the cart, right? So let's do that. And wait, try it out. So you press try it out, and then over here you add the item. Let's say we want... what should we get? Let's get bananas. I don't know why; I think I just had a banana right before this, so I think of bananas. So let's say bananas, and the price, let's say, okay, $2; it takes a number. So after this, execute. Okay, so it says it successfully added it to the cart. Okay, that's what it says, right? And you see the response header does come back, and it also says bananas added, your cart costs $2. And now let's do a GET. GET doesn't take any parameters; we just say try it out, and it should then execute. Yay, I think we're good, right? Are we? Yes, we're good. So the response body will now show that we have our cart data: we have bananas, and then the separator that says $2, so the subtotal is two. So that is correct. So please take a look, and then let me know what you think of it, right? Okay, I think I've been talking a long time, but like I said, let me get back to my slides at this point. And it's over here.
Now, what we just played with is the Open Liberty session persistence using JCache. But let's also now look into... just to give you the... somehow I think I lost track of this. Okay, here we go: the demo, and this is the stateful-kube demo, as it's called. It's basically an Open Liberty application, and this one will actually make use of Nginx and Hazelcast and build upon that, and it will illustrate running this in a true Kubernetes environment. This one, I think, we do not have time to go over today, but I encourage all of you to visit it, and you should be able to follow the examples too; it has all the READMEs in there, so you should be able to handle that. Yeah, it is good. But feel free to contact me, follow me on my Discord, and talk to me about what you think of it too. Okay, so here we are. I think I've already talked for enough time, so let me give you a list of resources and links. And like I said, this is only the tip of the iceberg.
Please do look for me for any future talks; I do intend to do more talks in this area. Now I'm all excited, wanting to get more into, say, the saga patterns, things like that. If you're interested, I'd love to have more conversations about that. Okay, here we go: resources. These are a bunch of code examples, design patterns and open source libraries that I've touched upon. These two are the GitHub links to the two examples, one of which we worked on; the other I just pointed out to you, so please take a look. It was actually developed by our offering manager, Graham Charters, and he gladly shared it with me, so I said, okay, let me share that stateful Open Liberty application in Kubernetes with the audience. And then there is also the saga design pattern; please take a look, that's from Chris Richardson's website, microservices.io, and MicroProfile LRA, so please visit that. And also a very nice blog written by one of our engineers, Jason; he wrote that very nice blog which was published back in January. And these are links to our Open Liberty project, MicroProfile, and also Jakarta EE. Then the next thing is a bunch of links to developer.ibm.com: we have a lot of resources there, not just for microservices but for other things like cloud DevOps, cloud native, you name it; we've got everything that you need. And Programming with Java on IBM Cloud has some nice materials there too.
Okay, then, outside of us, we have the CNCF, the Cloud Native Computing Foundation, and the twelve-factor app, so please take a look. And then, if you want to get hands-on experience with a public cloud, we have our IBM Cloud sign-up, and we have about 40 free things that you can use in there. There's no time limit on the free things that you can trial, and no credit card is required. We won't go after you, and we won't terminate your account; there's actually no time limit for the free tier. But of course there are things that you do need to pay for if you want to get more serious. But we can talk about it: if you are a startup company, we actually have very nice support for startup companies if you qualify, right? So talk to us and we'll get that set up for you. And join our Expert TV and meetups, right? These are free training sessions on many topics. I also really highly recommend the IBM Developer San Francisco Bay Area or the New York City groups, or even Germany or the Middle East if you are there; all of the IBM Developer groups everywhere have really great materials. And also myself, the Chicago Java Users Group; if you want to join us, we have on average one to two meetups on many different topics too. Okay.
And then, yeah, please consider joining us there; lots of resources. And this is from IBM: we have Call for Code, a giant worldwide hackathon, with a big top prize; the winner gets 200,000 US dollars as the top prize. So visit callforcode.org. And then I mentioned my Twitch stream, the IBM Developer livestream on Twitch; if you want to visit that, it is twitch.tv, IBM Developer. I actually run that every Wednesday at about 1:00; sometimes I may skip, but I'd like to encourage you to also join my Discord server, because over there I'll let you know if I'm late, if I have to skip, or if I change the time, and all those things. And then we have other developer advocates also presenting many other topics; for example, JJ is on DevOps and Upkar is doing Call for Code. So, all these wonderful folks. And with that, it comes to a close. Thank you so much for having sat through my presentation.
I hope you're finding it useful. I may be kind of new to this area myself, so I'm learning and finding answers out. I want to find answers not just to make your job easier, but for all of us at IBM as well. Please consider joining me on Discord if you want to scan that or use this discord.gg code; if you don't get it, don't worry about it. Just follow me on Twitter, or find me on LinkedIn or GitHub or dev.to. And I promise that I will beef up all my blogs too. So everybody, thank you so much, and I hope to see you in person at a conference at some point very soon. And I also say special prayers for those who are still heavily affected by the pandemic. Thank you, thank you, and you all take care.