Transcript
This transcript was autogenerated. To make changes, submit a PR.
Today I want to discuss with you how to work with stateless
microservices, how to scale them, and what to
do if your microservices are not stateless. In modern technology
companies, a huge number of requests are processed daily,
making the optimization of system performance and reliability
critically important. Microservice architecture has established itself
as one of the most effective approaches to developing scalable
and resilient systems. However, to achieve
the best results, it is important to properly utilize and implement
microservices. Today, we will explore what stateless microservices
are and why they play a key role in handling requests in high load
systems. We will discuss their advantages regarding scalability
and reliability, key design principles, and implementation
strategies. I will also share practical advice and examples
to help you optimize your system's performance and ensure its resilience
to higher loads. Let's begin our exploration of this
important aspect. In recent years, distributed systems
and microservice architecture have become the standards for developing
modern applications. These approaches enable companies
to create flexible, scalable and reliable solutions
capable of handling high loads and rapid changes in
requirements. Containers such as Docker and
orchestration platforms like Kubernetes have become integral
to the development and deployment of microservices.
They provide isolation and scalability and simplify
the management of application lifecycles.
DevOps and continuous integration / continuous deployment
approaches facilitate the automation of development,
testing and deployment processes. This enables
faster delivery of new features and fixes, enhancing overall
development efficiency. Cloud platforms such as Amazon
Web Services, Google Cloud, and Microsoft Azure offer tools
and services for developing, deploying and managing microservices.
This allows companies to quickly scale resources according to
current needs. Observability and monitoring:
a crucial element of managing distributed systems is ensuring
observability and monitoring. Tools like Prometheus and
Grafana help track the system state and respond
promptly to emerging issues.
I will leave security topics outside of the scope of this
presentation as they require separate coverage.
Obviously, stateless microservices do not store
any state between requests. In the context of each request,
all necessary information is transmitted and processed
without retaining any data or state within the microservice
itself. This means that each request is handled independently
of previous requests. Stateless microservices significantly
simplify horizontal scaling: since no state
is retained, microservice instances can be easily added
or removed to handle increasing or decreasing loads.
In the event of a failure of one instance of a microservice,
its requests can be redirected to other instances without
any loss of data or state.
This enhances the overall fault tolerance of the system.
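The property described above can be sketched as a minimal handler that is a pure function of its input; all names here are hypothetical.

```python
# A minimal sketch of a stateless request handler (hypothetical names):
# every piece of information needed to produce the response arrives
# with the request itself, so any instance can serve any request.

def handle_request(request: dict) -> dict:
    # No instance-level or global state is read or written here.
    amount = request["amount"]
    tax_rate = request.get("tax_rate", 0.0)
    return {
        "user_id": request["user_id"],
        "total": round(amount * (1 + tax_rate), 2),
    }

# A pure function of its input: identical requests always yield
# identical responses, regardless of which instance handles them.
r1 = handle_request({"user_id": 1, "amount": 100.0, "tax_rate": 0.2})
r2 = handle_request({"user_id": 1, "amount": 100.0, "tax_rate": 0.2})
```

Because no data survives between calls, killing this instance and routing the retry to another one produces exactly the same result.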
Stateless microservices are easy to deploy and manage because
they do not depend on stored state. This reduces complexity
and the number of errors during the deployment and updating of services.
Since each microservice instance can handle any request,
load balancing systems can effectively distribute requests among
all available instances, improving the
system's overall performance and response speed. Stateless microservices
simplify the implementation of security measures such as authentication
and authorization. Each request can be independently
verified and processed, reducing the risk of data breaches and unauthorized
access. Due to their independence
from state, such services simplify development and testing.
This allows teams to develop, test, and deploy microservices
more quickly and efficiently. I want to provide
some examples of stateless microservices. First of all,
API gateways serve as a central
entry point for all client requests and distribute them to the appropriate
microservices. They do not retain state between requests,
which allows them to be flexible and scalable. For example,
when a user requests profile data, the API gateway
directs the request to the corresponding microservice, which processes
the request and returns the data.
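As a rough illustration (the service names and routing table here are invented), a stateless gateway only needs static configuration, not per-request memory:

```python
# Toy sketch of stateless API-gateway routing (hypothetical services):
# the gateway holds a static routing table, never per-request state.
ROUTES = {
    "/profile": "user-service",
    "/orders": "order-service",
}

def route(path: str) -> str:
    # Match the request path against the routing table.
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    return "not-found"

target = route("/profile/42")  # forwarded to the profile microservice
```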
Authorization and authentication servers: these servers verify
users' credentials and create access tokens.
They also do not store state between requests. Each authentication
request is processed independently. For instance,
OAuth servers generate access tokens based on credentials
provided with each request and do not retain user state
between sessions. Load balancers distribute incoming
requests among multiple instances of microservices.
They do not store information about previous requests,
but simply direct each new request to the least loaded
or nearest server. This ensures even load distribution
and prevents individual servers from becoming overloaded.
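A round-robin balancer over stateless instances can be sketched in a few lines, precisely because no client affinity has to be remembered (the instance names are made up):

```python
# Round-robin load balancing over stateless instances: the balancer
# cycles through instances and needs no memory of past requests.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, instances):
        self._next = cycle(instances)

    def route(self, request):
        # Any instance can serve any request, so just take the next one.
        return next(self._next)

balancer = RoundRobinBalancer(["svc-a", "svc-b", "svc-c"])
targets = [balancer.route({"id": i}) for i in range(6)]
```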
Another example: notification services that send alerts to users
can be stateless. Each request to send a notification is
processed independently. For example, a service might
accept requests to send SMS or email notifications,
process them, and send the appropriate message without needing to
store information about previous notifications.
Another good example: pretrained model services.
Services that host pretrained machine learning models process
requests to make predictions or classifications without retaining state between
requests. For example, a service using a pretrained model for
image recognition accepts an image input,
processes it, and returns a classification result. Each request is
processed independently, allowing the
service to easily scale as the number of requests increases.
But what to do with stateful services?
One possible solution is hybrid services. Hybrid microservices
represent an approach where the functionality of a service is divided
into stateful and stateless components.
This approach allows for efficient management of the state and
scaling of services. Lets explore how this works.
Stateless components handle incoming requests. They perform
computations, data processing, validation, and other
operations that do not require state retention between requests.
These components are easily scalable because they do not depend on
state. They can be deployed in large numbers, allowing them to
handle a high volume of requests in parallel and evenly
distribute the load. Stateful components are responsible for
storing and managing state. They handle operations
related to state changes and ensure long term data storage.
These components can use specialized data storage systems and
ensure data consistency and integrity.
You can hold all your microservice code base inside one repository and
create different entry points or files to start stateless or stateful
components. Moreover, you can have more than one unique
instance of the stateful or stateless components depending on your
needs. Again, the key trait of the stateless
component is scalability. It should be possible
to run it in any number of instances without losing functionality
and without possible race conditions. I will discuss race conditions
later. Command Query Responsibility Segregation,
or CQRS, is a pattern that separates
read and write operations, allowing for more efficient state management
in the context of microservice architecture. To implement this pattern,
you need to separate all requests
to your microservices into two types. The first is commands:
write operations that modify the system state or data.
These operations are handled by stateful components. The second is
queries: read operations that retrieve data from the system.
These operations are handled by stateless components.
The CQRS pattern allows read and write
operations to be scaled independently. This is especially
useful in systems with high read or write
loads. Data reads can be optimized for quick access
and scaling, while write operations can focus on
ensuring data integrity and consistency. The separation
of operations allows for the use of different data models for
reading and writing, which can simplify development and improve
performance. Running a large number of stateless components
compared to stateful components is a key strategy for
ensuring high system scalability and performance.
Stateless components, which do not retain state
between requests, are easily horizontally scalable,
allowing for the handling of a large volume of incoming requests in
parallel and distributing the load evenly.
Meanwhile, stateful components, which manage state and data,
usually require more complex management and synchronization,
limiting their horizontal scalability.
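A minimal CQRS-flavoured sketch of this split, with invented class names: commands go to the stateful component, while queries are answered by a stateless component from a read-only snapshot.

```python
# CQRS-style split (hypothetical names): writes are handled by a
# stateful component, reads by a stateless one.

class OrderStore:
    """Stateful component: owns the data and handles commands (writes)."""
    def __init__(self):
        self._orders = {}

    def handle_command(self, command):
        if command["type"] == "create_order":
            self._orders[command["order_id"]] = command["payload"]

    def snapshot(self):
        # Read-only copy handed out to query components.
        return dict(self._orders)

class OrderQueryService:
    """Stateless component: answers queries (reads) from data it is given."""
    def handle_query(self, snapshot, order_id):
        return snapshot.get(order_id)

store = OrderStore()
store.handle_command({"type": "create_order", "order_id": "o-1",
                      "payload": {"item": "book", "qty": 2}})
result = OrderQueryService().handle_query(store.snapshot(), "o-1")
```

Since `OrderQueryService` keeps nothing between calls, you can run as many query instances as the read load demands, while the single `OrderStore` guards write consistency.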
Let's talk about the interaction between stateless and
stateful components. Synchronous interaction:
I will call it sync interaction instead,
it is much easier for me to handle this word.
How it works: the stateless component initiates
a request. The stateful component processes the request and returns the
result. The stateless component receives a
response and completes the request processing. Simple.
This type of interaction gives us several advantages.
Sync interaction via HTTP or RPC is
easy to implement and integrate into existing
infrastructure. Clients receive an immediate
response, which is useful for operations requiring real time
confirmations. It is easy to track data flow and interaction between
components. But everything has a price.
If the stateful component is unavailable, a request from
the stateless component cannot be processed, leading to delays and failures.
Sync calls can
become a bottleneck under high loads, as each request requires
immediate processing. Sync calls can also increase latency,
especially if the stateful component processes
complex operations. Some practical recommendations:
use load balancers to distribute requests among multiple instances
of stateful components. You can use remote procedure calls
using a message broker like RabbitMQ. Unfortunately,
it is not always possible to run multiple instances of stateful
components, and this limits your ability to scale your
application. It would be useful to
configure timeouts and retry mechanisms
to handle errors and temporary failures. Not every
request can be retried; in case of a retry, we do not
know the status of the previous attempt's processing. In such
cases, it is necessary to organize an idempotent
request system, where a repeated request will not change the
state of the system.
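One common way to get such idempotency is a client-supplied idempotency key; the sketch below uses invented names to show why a retried request cannot change the state twice.

```python
# Idempotent request handling via an idempotency key (hypothetical
# names): a retry with the same key returns the stored result
# instead of applying the change again.

class PaymentService:
    def __init__(self):
        self._processed = {}  # idempotency key -> first result
        self.balance = 0

    def charge(self, idempotency_key, amount):
        # Seen this key before? Return the original result unchanged.
        if idempotency_key in self._processed:
            return self._processed[idempotency_key]
        self.balance += amount
        result = {"status": "charged", "amount": amount}
        self._processed[idempotency_key] = result
        return result

svc = PaymentService()
first = svc.charge("req-42", 100)
retry = svc.charge("req-42", 100)  # a safe retry: no double charge
```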
Or to abandon the idea entirely.
Race conditions: it's my favorite part, the source of my daily headache.
A race condition occurs when two or more components or processes
compete for access to a shared resource, such as data or state,
at the same time, leading to unpredictable or incorrect results.
In other words, when multiple stateless components at the same time send
requests to a single stateful component, a situation
can arise where they try to update the same state or data.
If the stateful component does not properly manage access to the data,
it can lead to incorrect updates and an inconsistent state.
There are several solutions to prevent race conditions. First of all,
locks: implement locking mechanisms in stateful components
to control access to shared resources. For example, use a
database that supports transactions to ensure data
integrity. Most SQL databases do.
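In code, the simplest form of such a lock is an in-process mutex around the shared state; here is a sketch, not tied to any particular database:

```python
# A lock-protected stateful component: the mutex serializes the
# read-modify-write, so concurrent increments are never lost.
import threading

class Counter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # only one thread updates at a time
            self._value += 1

    @property
    def value(self):
        return self._value

counter = Counter()
workers = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
# With the lock, the result is exactly 8 * 1000; without it,
# interleaved updates could be lost.
```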
Combine operations into transactions to ensure atomic execution:
all or nothing. You can try to achieve
idempotency. Ensure that all operations can be
safely repeated without altering the result.
Strategies include using unique identifiers,
designing inherently idempotent operations,
employing transactional semantics, utilizing conditional
requests, communicating with appropriate response codes,
and others. Idempotency is
a big deal for fintech companies. They really like it because
it helps them avoid spending money twice. You can also
try data versioning. Use record versions,
e.g. version fields, to check the currency of data before
performing updates. Version checks happen before
attempting an update: each request verifies the data
version has not changed since it was read.
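The version check can be sketched like this (the store and field names are invented): each update carries the version the caller read, and the store accepts it only if that version is still current.

```python
# Optimistic concurrency via a version field (hypothetical names):
# an update succeeds only if the record is unchanged since it was read.

class VersionedStore:
    def __init__(self):
        self._data = {}  # key -> (value, version)

    def put(self, key, value):
        self._data[key] = (value, 1)

    def read(self, key):
        return self._data[key]  # returns (value, version)

    def update(self, key, new_value, expected_version):
        value, version = self._data[key]
        if version != expected_version:
            return False  # stale read: the caller should re-read and retry
        self._data[key] = (new_value, version + 1)
        return True

store = VersionedStore()
store.put("order", "new")
_, v = store.read("order")
first = store.update("order", "paid", v)       # succeeds, bumps version
stale = store.update("order", "cancelled", v)  # stale version, rejected
```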
If the version has changed, the request is rejected and can be
retried. Another effective solution
is to move from sync interaction to async interaction.
You don't need to wait for the stateful component: fire
and forget. Here is some advice on how to implement it.
You can use message queues. Our goal is to ensure
reliable message delivery between stateless and stateful components
and increase system resilience to failures.
Queues like RabbitMQ or Kafka can be used to transmit
commands. There is an old joke that no
one has ever been fired for choosing RabbitMQ as
a message broker. Okay, back to the process.
Messages placed in the queue are guaranteed to be
delivered and processed by stateful components even if
temporary failures occur. How it works, using an example of an
online shop: the stateless component receives a request to create
a new order and performs necessary checks.
It forms a command to create the order and places it in the
message queue. The message is stored in the queue and awaits processing
by the stateful component. The stateful component retrieves
the message from the queue, processes it, and saves the new order data in
the database. If necessary, the stateful component can send
a confirmation back to the queue for subsequent processing,
but it is better to avoid this. A huge percentage of
our requests are processed successfully, so if
the situation allows, our stateless component
can respond with a positive answer without awaiting the stateful
component. This is called an optimistic response.
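The order flow above can be sketched with an in-memory queue standing in for a broker like RabbitMQ (all names are invented); note the optimistic "accepted" answer returned before the stateful worker runs.

```python
# Fire-and-forget order flow through a queue (the in-memory queue is
# a stand-in for a real broker such as RabbitMQ).
import queue

command_queue = queue.Queue()
orders_db = {}  # stand-in for the stateful component's database

def stateless_receive_order(order_id, item):
    # Validate, enqueue the command, and answer optimistically
    # without waiting for the stateful component.
    command_queue.put({"type": "create_order",
                       "order_id": order_id, "item": item})
    return {"status": "accepted", "order_id": order_id}

def stateful_worker():
    # In production this would run as a separate consumer process.
    while not command_queue.empty():
        cmd = command_queue.get()
        if cmd["type"] == "create_order":
            orders_db[cmd["order_id"]] = cmd["item"]

response = stateless_receive_order("o-7", "laptop")
stateful_worker()
```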
Another step is event driven architecture.
Separating operations enhances system flexibility by allowing
stateful components to respond to events
generated by stateless components. How it works: stateless
components generate events in response to specific actions
or requests. Stateful components subscribe to events and process
them, updating their state accordingly. It will work similarly
to the previous example, with the added benefit
of additional handlers. Other components or
even microservices, such as notification systems or analytics,
can also subscribe to these events and perform
their actions, for example, sending an order confirmation
to the client or updating seller statistics.
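A toy publish/subscribe sketch of this flow (the handler names are invented): one "order_created" event fans out to several independent subscribers.

```python
# Event-driven fan-out: subscribers react independently to one event.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # Deliver the event to every registered subscriber in turn.
    for handler in subscribers[event_type]:
        handler(payload)

notifications = []
stats = {"orders": 0}

# Hypothetical subscribers: a notification system and an analytics counter.
subscribe("order_created",
          lambda e: notifications.append(f"order {e['order_id']} confirmed"))
subscribe("order_created",
          lambda e: stats.update(orders=stats["orders"] + 1))

publish("order_created", {"order_id": "o-7"})
```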
This allows us to use optimistic responses by design.
With async interactions, a stateless component
may not wait for an immediate response from a stateful component,
but can promptly notify the client that their request
has been accepted. The notification can include preliminary
default data or simply a message confirming the request's
successful processing. This approach ensures faster system
response to requests, improves user experience, and more efficiently
distributes the service load.
Advantages of async interaction: message queues
ensure reliable delivery and processing of commands,
enhancing system resilience. Async interaction facilitates
easy scaling of individual components without the
blockages and delays associated with sync calls.
Event driven architecture simplifies adding new functionalities
and components, as new subscribers can be added without
altering existing producing components. Some practical
recommendations: it would be nice to implement monitoring and alerting
systems to track the status of queues and event processing,
ensuring timely identification and resolution of
issues. Pay particular attention to the number of unprocessed
messages in the queue. If this number grows, something may be
wrong. Also, it is important to establish clear contracts
between components and document events and commands to
ensure consistency and simplify the integration
of new components. Okay, let's finalize.
The discussed aspects demonstrate the importance of
using stateless microservices and hybrid strategies
to create scalable and reliable distributed systems.
Stateless microservices significantly simplify horizontal
scaling, as they do not retain state between requests.
This means that instances of microservices can be easily added or
removed depending on the current load, ensuring high performance
and fault tolerance of the system, especially with orchestration
platforms like Kubernetes. Consequently, load balancing
systems can effectively
distribute requests among all available instances,
improving overall performance and response speed.
Hybrid microservices, in turn, enable efficient state
management and scaling by dividing the service's functionality
into stateful and stateless components. Stateless components
handle requests, perform computations, validate data,
and carry out other operations that do not require
state retention, making them easy to scale.
Stateful components, on the other hand, manage data
persistence and long term state storage, which requires
more complex management and synchronization, but ensures
data integrity and consistency.
Applying the CQRS pattern allows for
the separation of read and write operations, enhancing the efficiency
of state management. Data reads are performed by
stateless components while writes are handled by stateful
components, allowing these operations to be scaled independently and
optimizing system performance. Async interactions through message
queues and event driven architecture ensure reliable
command delivery and enhanced system resilience to
failures. Message queues such as RabbitMQ
or Kafka guarantee that commands are delivered
and processed by stateful components even
if temporary failures occur. Event driven architecture
allows stateful components to react to events generated
by stateless components, simplifying the addition
of new functionality and components
to the system. I greatly appreciate
your attention and the time you have taken to engage with me.
Thank you for your consideration.