Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everybody, happy to virtually meet you all at this event. I am Raghulakshmi from Site24x7, and I will be sharing some tips on how you can optimize your operations and container lifecycle management. Most of us have products moving to the cloud or adopting the container model, and I've been in the industry for the past 22 years. Based on my experience, I'm here to share some tips. Let's get started.
So before I get into the actual container lifecycle adoption, the industry stats on the various technologies being used, and how you can go about operations and optimization, I would like to start off with a short history of how things have evolved, from what it was 20 years ago when I started as a product software developer to what it is today. Everything revolves around application building, so how has the application architecture itself evolved? Just a quick introduction.

It all starts with client-server architecture: you have a server component, you have a client component, and they interact. This is how a typical client-server architecture used to be, possibly two decades ago. We have a server component in the back end, and then the client; sometimes the client is also an on-premise system, running in one corner of your office premises, connecting to your server. That's how client-server architecture typically was.
Then came Web 2.0 and the cloud. Until that time, the web was only considered a medium for consuming content, but with Web 2.0 it also became a medium for contributing content. People started writing their own content and uploading it; people started taking videos and photos and uploading them. So there was a lot of content in the cloud, and we needed tools for collaborative working environments. With Web 2.0, the client-server architecture changed: the server can be anywhere in the cloud, and clients can connect from across the globe. That was the transformation that happened.

Then, as the cloud picked up, various providers were able to offer public cloud servers. But businesses did not completely move to the public cloud; some operations still remain on premise even today. So it became a hybrid model: I can have a mix of public cloud as well as private cloud, and they still communicate, collaborate, and work together as one system. That is the model predominantly followed even today. So this is how the evolution has been. And if I have to put it in another terminology, here is another way of representing it.
Consider what used to be a monolith architecture: even though you have different components, the user interface, the business logic, the data access layer, and then the database layer, all of them acted as one monolith system. If there is a failure, the entire system fails. From that monolith architecture, things have changed to a microservices architecture, where each piece, be it your business logic, your data access layer, your caching layer, your authorization and authentication layer, or your payment gateway layer, can run on its own in a small container. Each of them can be spawned, worked on, deployed, and brought down independently, but all of them still work as one system. That transformation from monolith to microservices is what the industry is moving towards.

One step ahead of this, the next thing that is happening is serverless: I don't even want to have any server layer. I will just have functions for all of these components I'm talking about, and still come up with an application. All of this is with respect to building applications, because at the end of the day that's where the business logic lives. I don't even want to own the server; I will just have functions and deploy my application completely. Function as a service is something that is picking up, and organizations and businesses are moving towards it. So monolith to microservices to serverless is the other way of putting that journey of the application evolution.
So with all this in place, what are the various layers in a typical cloud architecture? We are talking about application development in the cloud, so what are the layers? There are many, but I have categorized them into four. The topmost is the end-user layer, which comprises your web browsers, mobile applications, and tablets, through which your customers connect to the application layer underneath, which is nothing but software as a service. There are many software-as-a-service vendors available that host the business logic. If you are a business owner, this software-as-a-service layer is where you have to be most concerned, because that is where your business logic is hosted; if you are using public clouds, you can leave the rest to the vendors you deploy on. That's the application layer. Underneath is your platform layer, the platform-as-a-service model, which comprises your caching servers, SQL databases, NoSQL databases, queues, and microservices, and there are vendors that offer platform as a service. And at the bottom is your infrastructure layer, which comprises physical servers, virtual servers, cloud servers, your load balancers, firewalls, routers, and switches, the entire network. Again, infrastructure as a service is a model that is available. So why am I talking about all these four layers
and why is it important? In a cloud architecture, even though you are mostly worried about only your application, the person responsible, be it the IT administrator, the application owner, the developer, DevOps, SRE, whatever role you name, has to have a complete picture of what is happening in all these layers. End-to-end visibility across all these layers is important because we are talking about things in the cloud. If something goes wrong and your website or application is down, the problem could be anywhere in these layers. It could be an ISP problem in your end-user layer, or a router or a particular port on a particular switch not working in your infrastructure layer. Or it could be in your application, a line of code with a bug causing an indefinite loop, or in your database layer, where a connection has not been closed properly. There are n number of factors that can be the reason for something going wrong in your cloud architecture. Being in the cloud, you cannot afford any downtime. Such being the case, what are we going to do? How are we going to manage? End-to-end visibility across all these layers is what matters. Now let's look at where container architecture fits in, and what are the important things we need to know when we move to the containerized microservices world.
So, Kubernetes. Let's look at some quick stats. I've been reading some articles on Kubernetes adoption, and there was a report from VMware on the state of Kubernetes published in 2020; the survey was done again in 2021 and the stats were put up, so I'm giving a comparison of the two. Organizations that use Kubernetes in production grew from 59% in 2020 to 65% in 2021. So we are seeing more and more organizations using Kubernetes not just for their development but also for their production. Organizations also say the beauty of Kubernetes is that it can be deployed on premise or deployed across multiple clouds and still work together. So people who have a monolith architecture on premise and cannot afford to move to any cloud have also started using Kubernetes, so that it can be gradually moved to the cloud or work in sync with their public cloud deployment. Organizations deploying Kubernetes on premise stood at 64% in 2020, but with things moving to the cloud, on-premise deployment is slowly reducing. Still, there is plenty of adoption for Kubernetes on premise.
As for challenges: lack of experience and expertise has been quoted as the top challenge in moving to Kubernetes, or to any new technology for that matter. The stats show that this gap is gradually coming down, because people are appreciating the importance of gaining that expertise. But there is still a long way to go; that's why 65% of respondents still feel that lack of experience and expertise is the top challenge.

Moving along to the benefits of Kubernetes, based on the same survey: 58% of respondents felt that using Kubernetes helped them improve resource utilization. On that note, there is another study stating that when we move to the cloud there are a lot of unused resources, with almost 30% of resources being wasted. That being the case, Kubernetes helps improve your resource utilization. And 46% felt that it shortened their software development lifecycle. We are moving towards the agile model and we want to do quicker releases.
The shortening of the software development lifecycle is important for your business needs. 41% of respondents felt that containerizing their monolith application was possible using Kubernetes, because it has the advantage of also running in an on-premise setup. 48% felt that it eased their application upgrades and maintenance, making deployments easier. 28% felt that it reduced their public cloud cost, because the deployment can be a combination, and using Kubernetes helped them bring that cost down. And 39% felt that it enabled their move to the cloud: people thought they would not be able to move to the cloud otherwise, and Kubernetes helped them get there. These are some of the benefits; I've just taken one survey, but there are many articles that talk about the benefits and the challenges as well. So if
you have to put all these benefits in one diagram with respect to Kubernetes: Kubernetes is one of the container models people are adopting more and more, because it is a very portable and flexible model. I've just categorized the points here. You can have a multi-cloud deployment; you need not be stuck with only one cloud vendor, and it's easy for you to change vendors if you want to. You can also have a local, on-premise Kubernetes setup. With the cloud and with containerization, easy and faster scaling is possible. It is a reliable solution, and it's open source; the beauty of any open source technology is that you get a lot of help from other experts when you face a problem. And it is the market leader among orchestrators. These are some of the benefits people have seen, and that we are also seeing, when moving a monolith architecture to the containerized world, particularly the Kubernetes model.

Now let's look at some of the challenges. We saw lack of expertise quoted as the top challenge in the survey; let me also cite some challenges based on our interactions with customers and on what we ourselves faced when we moved our setup to the container model. These are not in any particular order, but they are the major challenges being faced. First is the lack of expertise, because we don't know what is happening underneath. You have to understand the technology and the deployment clearly; that is when you will be able to do the configuration correctly. The human mind is such that we are comfortable with what we know, and we don't want to come out of that comfort zone to learn new things and adapt to the latest technologies. That's a hindrance pretty much everyone faces, and it is the major challenge. But we can see people coming out of their comfort zones and trying to become experts, because that's how you stay current in this technical world. Getting proper expertise in your people is the major challenge. And the other important challenge is
the deployment complexities. Even though in a microservices architecture each container can be spawned and deployed on its own, and it all looks very simple, the complexity lies in the actual deployment, management, and monitoring. To the outside world it is very simple, a cool thing in an architecture diagram, but people who work on actual deployments know the deployment complexities. So that's another challenge.
And there is a lot of monitoring you need to do around this, too. Being on the cloud, you have to make sure all your key performance metrics are being monitored. All the components have to be up and running all the time, and they have to perform well, with quick turnaround and response times; that is how you keep your production setup up and running. Configuring them, automating them, applying configuration rules: you have to monitor all of this using the right tools, and that's again another challenge. People usually neglect that part when deploying to the cloud; later on, once you are in production and you face a problem, it becomes very challenging to find out where the problem is, and that is where tools really help.
The other important challenge is security. Being in the cloud, you must configure things properly and make sure one user's data is not visible to another user. You have to make sure security is taken care of at all the layers of your cluster architecture, be it your pod, your node, your cluster, or your service; all of them are important. From your database design to how you deploy to how you present it to your client, security plays a major role, and that's again another challenge. Complying with all the regulations in each geographical region is another challenge too. And this we have heard from a few people and have also felt ourselves: even though it might look like everything is taken care of, sometimes it feels like a black box, and we don't have control of the underlying framework. So loss of control has been cited as a challenge by a few people.
And the other challenge is the scaling cost. Scalability is an advantage, we saw that in the benefits as well, but you have to do it right. Not everything you deploy has to be taken to the cloud or to the containerized world. Do not do things just because somebody else is doing them: your application is different, your environment is different, your customers are different. You have to really evaluate whether you need to move to a containerized option at all. If you end up converting everything to the container architecture just because it's the latest technology, it might backfire and cost you heavily. Depending on the need, you might want to keep something on premise, or run it on a fixed server instead of an auto-scaling environment, or go for an auto-scaled environment, based on your requirement. The technology has to be chosen based on the application and the functionality it provides. A lot of the time, people adopt the latest technology just because it is the latest, and then find out it is going to cost them more. That's again another challenge.

So these are the major challenges I wanted to bring to you. Now let's look at the monitoring needs. How do we overcome the challenges?
Most of the challenges I talked about, be it your scaling cost, your monitoring needs, or your deployment configurations, can be taken care of with the help of tools that address your monitoring needs. When we talk about monitoring needs, consider a typical Kubernetes architecture: the cluster, the nodes, and then the kubelet and pods on each node. You need to make sure that all of them are up and running.
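To make "all of them are up and running" concrete, here is a toy sketch in Python. This is not the real Kubernetes API; the names and shapes are simplified for illustration, but it shows the kind of rollup a monitoring agent computes after pulling node and pod status from the API server:

```python
from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    phase: str  # e.g. "Running", "Pending", "CrashLoopBackOff"

@dataclass
class Node:
    name: str
    ready: bool
    pods: list = field(default_factory=list)

def cluster_health(nodes):
    """Roll node and pod status up into a single health summary."""
    total_pods = sum(len(n.pods) for n in nodes)
    running = sum(1 for n in nodes for p in n.pods if p.phase == "Running")
    nodes_ready = sum(1 for n in nodes if n.ready)
    return {
        "nodes_ready": nodes_ready,
        "nodes_total": len(nodes),
        "pods_running": running,
        "pods_total": total_pods,
        # Healthy only when every node is Ready and every pod is Running.
        "healthy": nodes_ready == len(nodes) and running == total_pods,
    }

nodes = [
    Node("node-1", True, [Pod("web-abc", "Running"), Pod("api-def", "Running")]),
    Node("node-2", True, [Pod("worker-ghi", "CrashLoopBackOff")]),
]
print(cluster_health(nodes))
# {'nodes_ready': 2, 'nodes_total': 2, 'pods_running': 2, 'pods_total': 3, 'healthy': False}
```

One crashing pod is enough to flag the cluster, which is exactly what you want a health dashboard to surface.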
When we talk about monitoring, I would like to organize it around what we call the three pillars of observability. The first pillar is metrics. What are the things you have to take care of in Kubernetes metrics? I will just give a quick overview, because each pillar of observability can be discussed at length. The key metric is availability: the different components we saw, be it your node, your pod, or your cluster, all of them have to be up and running. 99.99%, or even five-nines, uptime is what the industry expects. Make sure they are up and running.
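Those uptime targets translate into surprisingly small downtime budgets; a quick back-of-the-envelope sketch:

```python
def downtime_per_year(availability_pct):
    """Allowed downtime in minutes per year for a given availability target."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year
    return minutes_per_year * (1 - availability_pct / 100)

for target in (99.9, 99.99, 99.999):
    print(f"{target}% uptime allows {downtime_per_year(target):.1f} min of downtime per year")
# 99.9%             -> about 525.6 min (~8.8 hours)
# 99.99%            -> about 52.6 min
# 99.999% (5 nines) -> about 5.3 min
```

At five nines you have roughly five minutes of slack for the whole year, which is why availability monitoring and alerting cannot be an afterthought.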
That's one important metric. The second important metric is performance. There is no point in having all the components up and running if they are performing very, very slowly. Make sure all those components are doing their work at high speed, because that's the industry expectation; nobody has the time or patience to sit through pages that take forever to load. We just move on to the next page, the next service, the next application. So make sure all your Kubernetes layers are performing well, and tools are important for that. When we talk about metrics, availability and performance are the important metrics to take care of. And the second pillar is traces.
So what does a trace mean? Say the application you deployed is taking some 10 seconds to load; you need to know exactly where that time is being spent. In a distributed, containerized environment, each container can be spawned, do its function, and be destroyed on its own. In that situation, how do you know which node is taking more time, which line of code is taking more time? Tracing means following a request down to the exact line of code that is causing the issue, across all the application platforms involved, and that's what traces give you.
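As a toy illustration of the idea (a real deployment would use a distributed tracing system such as OpenTelemetry rather than this hand-rolled sketch, and the service names here are made up):

```python
import time
from contextlib import contextmanager

spans = []  # each recorded span: (name, duration in seconds)

@contextmanager
def span(name):
    """Record how long a named operation takes - a stand-in for a tracer."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

# Simulate two services handling parts of one request.
with span("auth-service"):
    time.sleep(0.01)
with span("payment-service"):
    time.sleep(0.05)

slowest = max(spans, key=lambda s: s[1])
print(f"slowest span: {slowest[0]}")  # payment-service
```

A real tracer additionally propagates a trace ID across service boundaries, so spans from different containers can be stitched into one end-to-end timeline.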
And in a distributed architecture, each of these services can even be written in its own language: your authentication service in one language, your payment service in another. Still, if there is a problem, you must be able to track and trace it down to the line of code. That is what a trace is. And the third pillar is logs.
In a distributed architecture, when anything goes wrong, we have to debug, and we have to look at the logs. It is not possible for the IT administrator to take remote control of every system the distributed architecture is deployed on and look at each log file. So you need to collect all those logs, process them, and store them in such a way that it is easy for you to query and see where the problem is. Converting your unstructured data into structured data is what log management is, and that's the third pillar of observability.
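As a minimal sketch of that unstructured-to-structured step, assuming a made-up log line format (a real pipeline would handle many formats and ship the records to a searchable backend):

```python
import re

# Hypothetical raw container log format: timestamp, level, pod, message.
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<level>[A-Z]+) pod=(?P<pod>\S+) (?P<msg>.*)"
)

def parse_line(line):
    """Turn one unstructured log line into a structured, queryable record."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

lines = [
    "2021-06-01T10:00:00Z ERROR pod=payment-7f9c db connection not closed",
    "2021-06-01T10:00:01Z INFO pod=auth-5d2a login ok",
]
records = [r for r in (parse_line(l) for l in lines) if r]

# Once structured, queries become trivial, e.g. "show me all ERRORs":
errors = [r for r in records if r["level"] == "ERROR"]
print(errors[0]["pod"])  # payment-7f9c
```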
Putting all three together, metrics, traces, and logs, make sure the tool you are trying to use covers all three completely, so that you can rely on it. Let me quickly run through some sample screenshots of what you need to look for in a monitoring solution. A health dashboard has to be there: for your nodes, pods, and services, how many are up and how many are down, the top CPU-intensive pods, the top memory-intensive pods; all of this in a health dashboard is important. Inventory: once you point the tool at a particular cluster, you must be able to get all your nodes, pods, deployments, endpoints, and replica sets, how many services there are, and their availability and performance; the inventory dashboard is important. And the business view, the infrastructure view of the cluster, nodes, and pods, to show you exactly where the problem is. That's about the metrics. Then there are the traces, to show you the line of code that is having the issue. And for log management, collect all your Kubernetes container and node logs and look at them in one place: with a query like log type = container logs, you must be able to pull up everything that has been collected, so you can find where and what the problem is.
These are some of the things you have to look for, and those are for monitoring. But not just monitoring: sometimes, when you know there is a problem, you may have to take some action on it, and your tool must be able to help. Say you have some nodes and pods, and a pod is continuously failing or unable to restart. What do you do manually? You take remote control and reboot it by hand, which is time consuming. Instead, you should be able to write a script that does the reboot automatically and associate it with a threshold profile: when you see the pod's CPU increasing, or when the pod is not responding for three or five consecutive checks, do a restart. Such actions should be possible. Another example: when CPU is high, you may want to free up some resources. Again, you can write a script and associate it with your threshold: when the node's CPU is greater than 90%, go and clean up the process that is driving it up. All the manual things you usually do should be automated with scripts associated to such actions. See if the tool you are selecting supports all this, so that it makes things easy for you in your live deployment. Identifying a problem is half the battle; once you have identified it, the corrections should happen automatically, so it is seamless for the end user and your end users are not impacted by these problems in the system. Finding the right tool and monitoring with it is what is important. Those could address
your challenges. In addition, when you are looking for such tools, and you can choose any tool for your monitoring needs, beyond the challenges we discussed and the pillars of observability we discussed, I would say you have to look for three more things. I call it looking for ICE: I stands for integration, C for customization, and E for extension. I'll quickly tell you what this is.
All of us, in our deployments, will have some in-house metrics being collected. The tool you select should have integration options, so that you can do all your imports and exports in a seamless manner. It shouldn't become a data silo; you have to be able to integrate everything and view it together. Sometimes you may want to export an alert to some third-party system you are already using; all those import and export options should be available, and API options should be available. See if the tool supports such things; that is integrability. Customization: I don't want to use only whatever is given out of the box. I may want to change the color, change the text, or do some other operations based on my need; see if the tool has such customization options. And extensibility: API support, drag and drop, build your own dashboard. We are living in the era of citizen coders, where people don't want to take whatever is given out of the box; "give me the flexibility to do things on my own" is what this generation expects. So the tool you are using, building, or planning to use should have this extensibility option, but at the same time it has to work; it's no good being extensible if the extensions don't work. So make sure it is extensible. Look for ICE: integrability, customizability, and extensibility.
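On the integration point, for example, a tool with an export API lets you push an alert into whatever third-party system you already use. A small sketch of shaping such an alert; the payload fields and the webhook idea here are hypothetical, not any particular product's schema:

```python
import json

def alert_payload(monitor, status, value, threshold):
    """Shape an alert the way a hypothetical third-party webhook might expect.

    The field names are made up for illustration; a real integration would
    follow the receiving tool's documented schema.
    """
    return json.dumps({
        "monitor": monitor,
        "status": status,
        "metric_value": value,
        "threshold": threshold,
    }, sort_keys=True)

payload = alert_payload("node-1 CPU", "critical", 93.5, 90)
print(payload)
```

A real integration would then POST this payload to the third party's documented endpoint; the point is that without such export and API hooks, your alerts stay locked inside one tool.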
There are many tools in the market, and Site24x7 is one such tool. It's an AI-powered full-stack monitoring tool that lets you take care of all your monitoring needs in one single console. I talked about the various layers; from your end-user layer to your infrastructure layer, we have monitoring for all the needs. You can also monitor your containers, be it Docker containers or Kubernetes; monitoring for all of them is available. On top of all this, we have alerting, reporting, integrations, and dashboards. Site24x7 is hosted in Zoho's data centers; we have our own data centers, ten of them across five different regions, so depending on your region you can choose the data center. We have them in the US, Europe, China, India, and Australia, and your data will reside within the geographical boundaries of the data center you select. Being a cloud provider, we take privacy, security, and compliance very seriously, and we hold all the certifications that are required. Zoho has been in the industry for the past 25 years, and Site24x7 as a product has been in the market for close to 15 years now, so we are a mature product. Feel free to try it out and see for yourself.
I would like to finish off with a small snippet from one of my favorite books, Simon Sinek's Start with Why. I'm sure many of you have read the book, or are at least aware of the Golden Circle: start with why, then how, then what. I want you to apply this in anything and everything you do in business as well. You should start with your why, the purpose; use the five whys technique. Why am I doing this? Do I really have to move to a cloud native technology? My system is already working fine; what are the pros and cons of adopting Kubernetes, or any other container orchestration for that matter? Ask your five whys, depending on your application, and if you really know the reason, you will get the purpose. That is when you go ahead with the how and the what. Once you are clear with your why, the how is just the process: how you want to monitor, what tool you want to use, whether you build your own tool or use an available third-party tool. If you are clear with your why and choose the right things in the how, then the what, the end result, follows: you will definitely be successful in whatever you are doing. Keep that in mind. Don't do things just because somebody else is doing them; everybody's requirements are different, so depending on your application and your customers, you have to choose what you want to do. So, key takeaways from this session.
Three important points: we talked about the evolution of application architecture itself, how things have moved from monolith to microservices to serverless; all about Kubernetes, some stats, trends, benefits, and challenges; and the monitoring needs, how and what you need to monitor. And Site24x7 is an AI-powered full-stack monitoring tool that can help you monitor your entire infrastructure from top to bottom, from your end-user layer to your infrastructure layer, including the application and platform layers.

I'll finish off with one quote, one of my favorites. This is the Red Queen's quote, and it comes from Through the Looking-Glass, the sequel to Alice in Wonderland. I did not read it as a child, but I remember reading it to my children. The quote goes like this: you have to run as fast as you can just to stay in the same place, and if you want to make any progress, you have to run twice as fast as that. This applies to individuals and to our businesses too. We are living in a world where technology is evolving very fast, so if we don't keep ourselves updated, we will be outdated. So at Zoho, at Site24x7, we make sure we adopt all the latest technologies and pass that benefit on to our customers. I'm sure as users you would also want to do that: keep yourself updated.
Thank you so much for your time. I hope I was able to give you a few tips on what you need to do in your container world. You can reach me on any of the social media platforms or through email, and for anything related to the product you can write to me or to the support email ID available here. I didn't get into the product details; we can arrange a one-on-one session and a demonstration if required. We do have a free version available, and a six-month free subscription for you to try it out. So feel free to reach out to us, and enjoy the rest of the sessions in the event. Have a nice day. Thank you.