Transcript
This transcript was autogenerated. To make changes, submit a PR.
Getting real-time feedback into the behavior of your distributed systems and observing changes, exceptions, and errors in real time allows you to not only experiment with confidence, but respond instantly to get things working again.
Let's build event-driven applications with NestJS, which is a modern framework for building back-end Node.js applications. Today I will talk briefly about what NestJS is and how it helps build scalable applications. I have a demo ready for you: we'll describe the overall architecture and the tools used, and then we will run and see this demo in action.
So what is NestJS? It's a framework for building Node.js applications. It was inspired by Angular and relies heavily on TypeScript, so it provides a somewhat type-safe development experience. It's still JavaScript after transpiling, so you should stay careful when dealing with common security risks.
It's a popular framework already and you've probably heard about it. Let's quickly recap what the framework offers us. One of the main advantages of using a framework is having dependency injection: it removes the overhead of creating and supporting a class dependency tree. It has abstracted integration with most databases, so you don't have to think about it. It has abstracted common use cases for web development like caching, configuration, API versioning and documentation, queues, and so on. For the HTTP server you can choose between Express or Fastify.
Yeah, it uses TypeScript and decorators. I think it simplifies
reading the code, especially in bigger projects,
and it allows the team of developers to be on the same page
when reasoning about components.
Also, of course, as with any framework,
it provides other application design elements like middleware,
exception filters, guards, pipes and so on.
And finally, we'll talk later about some other features
that are specific to scalability.
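As an illustration of the dependency injection and decorator style mentioned above, here is a minimal, hypothetical NestJS module (all names here are invented for the example, not taken from the talk):

```typescript
// Hypothetical example: a controller and a service wired together
// through NestJS decorators and constructor-based dependency injection.
import { Module, Injectable, Controller, Get } from '@nestjs/common';

@Injectable()
export class GreetingService {
  greet(): string {
    return 'hello';
  }
}

@Controller('greetings')
export class GreetingController {
  // The DI container creates and injects GreetingService for us,
  // so we never call `new GreetingService()` ourselves.
  constructor(private readonly greetingService: GreetingService) {}

  @Get()
  getGreeting(): string {
    return this.greetingService.greet();
  }
}

@Module({
  controllers: [GreetingController],
  providers: [GreetingService],
})
export class GreetingModule {}
```

The decorators make each component's role visible at a glance, which is the "same page" benefit for teams mentioned above.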
So how does NestJS help us build scalable applications? Let's first recap the main strategies for building such applications. The common approaches are building a monolith with a modular design, microservices, an event-driven application, or a mixed approach, and I think the last is the most common in long-living projects.
The first approach I want to talk about is the monolith. It's a single application with tightly coupled components: they are deployed together, they are supported together, and usually one can't live without another.
If you write your application that way, it's best to use a modular
approach, which by the way,
NestJs is very good at.
When using a modular approach, you can effectively have one code base,
but components of your system act as somewhat
independent entities and can be
worked on by different teams. This becomes harder
as your team and project grows. That's why we
have other models for development.
Microservices are when you have separate deployments for each service. Usually each service is only responsible for a small unit of work and will have its own store. It will communicate with other services via HTTP requests or messaging. Next, the
event driven approach is similar to microservices,
but usually you don't have direct communications
between them. Instead, each service will
emit an event, and then it simply doesn't
care. There can be listeners to this event, or there can be none. If the event is
consumed by someone, it can again produce another event
that can be again consumed by another service,
and so on. So every service
is independent of one another. They only listen and produce
events. Eventually someone will produce
a response for the client waiting. It could be a websocket
response, for example, or a webhook or whatever.
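To make that chain concrete, here is a minimal sketch using Node's built-in EventEmitter as a stand-in for a real event bus (the service and event names are made up for illustration):

```typescript
import { EventEmitter } from 'node:events';

// A stand-in for a shared event bus; in production this would be a
// broker such as Redis, Kafka, or RabbitMQ.
const bus = new EventEmitter();

// Trade service: emits an event and then simply doesn't care.
function submitTrade(tradeId: string): void {
  bus.emit('trade.submitted', { tradeId });
}

// Risk service: consumes one event and produces another.
bus.on('trade.submitted', (evt: { tradeId: string }) => {
  bus.emit('trade.approved', evt);
});

// Notification service: eventually produces the response the waiting
// client receives, e.g. over a websocket or a webhook.
const responses: string[] = [];
bus.on('trade.approved', (evt: { tradeId: string }) => {
  responses.push(`trade ${evt.tradeId} confirmed`);
});

submitTrade('t-1');
```

Each listener knows only about event names, not about the other services, which is exactly the independence described above.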
Usually our larger projects are a mix of all designs.
You have components that are tightly coupled and deployed together.
Some components are deployed separately and
some are communicating exclusively via event messaging.
Let's think about why NestJS simplifies event-driven development. First of all, it allows really fast and simple integration of the popular Bull package for queues. Second, for microservices development and communication, it has integrations with the most popular messaging brokers like Redis, Kafka, RabbitMQ, MQTT, NATS, and so on.
Third, it promotes modular development, so it's naturally
easy for you to extract single units of work later in
the project's lifecycle, even if you start your project as
a monolith. My next point is it has
great documentation and examples, which is always nice to
have. You can be running your first distributed app
in minutes with NestJS. Another thing I want to note is that unit and integration testing
is bootstrapped for you. It has dependency
injection for testing and all other powerful features
of the Jest testing framework. Now let's see how a simple queue can be created in NestJS.
First you install the required dependencies,
then you create a connection to redis and
finally register a queue and that's it.
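As a sketch of those three steps, assuming the @nestjs/bull integration and a local Redis instance (the connection details and queue name are illustrative):

```typescript
// Step 1 (shell): install the dependencies, e.g. npm i @nestjs/bull bull
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';

@Module({
  imports: [
    // Step 2: create a connection to Redis.
    BullModule.forRoot({
      redis: { host: 'localhost', port: 6379 },
    }),
    // Step 3: register a queue, and that's it.
    BullModule.registerQueue({ name: 'default' }),
  ],
})
export class AppModule {}
```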
Next, somewhere else in a service constructor, you type-hint your queue and it gets injected by the dependency injection container. You now have full access to the queue and can start emitting events. In another module, you decorate your processor class, and that is the minimal setup to have a queue system working. You can have both producer and consumers exist in one application or separately; it's up to you, and they will be communicating via your Redis instance.

A messaging provider connection starts with adding a ClientsModule connection. In this example we have the Redis transport and should provide Redis-specific connection options. The next step is to inject the ClientProxy interface. Our options from there are either the send method or emit. Send is usually a synchronous action, similar to an HTTP request, but is abstracted by the framework to act via the selected transport. In the given example, the accumulate method response will not be sent to the client until the message is processed by the listener application. The emit command is an asynchronous workflow start: it will act as fire-and-forget, or in some transports it will act as a durable queue event. This will depend on the transport chosen and its configuration.

Send and emit patterns have slightly different use cases. On the consumer side, the MessagePattern decorator is only for synchronous-like methods and can only be used inside a controller-decorated class, so we expect some kind of response to the request received via our messaging protocol. On the other hand, the EventPattern decorator can be used in any custom class of your application and will listen to events produced on the same queue or event bus, and it does not expect our application to return something. This setup is similar with other messaging brokers, and if it's something custom, you can still use the dependency injection container and create a custom event subsystem provider with NestJS interfaces. And this is how easy it is to integrate with the most common messaging brokers in NestJS.
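Putting those pieces together, a minimal sketch with the Redis transport could look like this (the 'MATH_SERVICE' token, the message patterns, and the accumulate example are illustrative, and the exact shape of the connection options varies between framework versions):

```typescript
import { Controller, Inject, Injectable, Module } from '@nestjs/common';
import {
  ClientProxy,
  ClientsModule,
  EventPattern,
  MessagePattern,
  Transport,
} from '@nestjs/microservices';

// Producer side: inject the ClientProxy registered in the module below.
@Injectable()
export class MathService {
  constructor(@Inject('MATH_SERVICE') private readonly client: ClientProxy) {}

  // send(): synchronous-like; the response is not returned to the
  // caller until the listener application has processed the message.
  accumulate(data: number[]) {
    return this.client.send<number>({ cmd: 'sum' }, data);
  }

  // emit(): fire-and-forget, or a durable queue event, depending on
  // the transport chosen and its configuration.
  notify(data: number[]) {
    this.client.emit('numbers_received', data);
  }
}

// Consumer side: @MessagePattern lives in a controller-decorated class.
@Controller()
export class MathController {
  @MessagePattern({ cmd: 'sum' })
  sum(data: number[]): number {
    return data.reduce((a, b) => a + b, 0);
  }

  // @EventPattern listens on the same bus but returns nothing.
  @EventPattern('numbers_received')
  handleNumbers(data: number[]): void {
    console.log('received', data);
  }
}

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'MATH_SERVICE',
        transport: Transport.REDIS,
        // Redis-specific connection options.
        options: { host: 'localhost', port: 6379 },
      },
    ]),
  ],
  controllers: [MathController],
  providers: [MathService],
})
export class AppModule {}
```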
In this section I will review a part of a real application, simplified of course. You can get the source code at my GitHub page to follow along or try it out later. I will demonstrate how a properly designed event-driven application can face challenges, and how we can quickly resolve them with the tools the framework has.
Let's first do a quick overview.
Our expected workflow is like this: we have an action that happens in our API gateway; it reaches the trade service, which emits an event. This event goes to the queue or event bus, and then we have four other services listening to it and processing it. To observe how this application performs, I use a side application, which is my queue monitor. This is a very powerful pattern to improve observability, and it can help automate scaling up and down based on queue metrics. I'll show you how it works in a bit.
I prepared a Makefile so you can follow along. First, run the make start command, and this will start Docker with all required services. Next, you run the make monitor command to peek into the application metrics. The monitor shows me the queue name, the count of jobs that are waiting, the processed jobs, and the amount of worker instances online.
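Such a monitor could read its numbers straight from Bull; a hypothetical probe service might look like this (the queue name and choice of counters are assumptions for illustration):

```typescript
import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';

@Injectable()
export class QueueMonitorService {
  constructor(@InjectQueue('default') private readonly queue: Queue) {}

  // Reads the counters the monitor displays: jobs waiting to be
  // picked up, and jobs already processed (completed).
  async snapshot() {
    const [waiting, completed] = await Promise.all([
      this.queue.getWaitingCount(),
      this.queue.getCompletedCount(),
    ]);
    return { queue: this.queue.name, waiting, completed };
  }
}
```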
As you can see, under normal conditions the job waiting count is zero,
event flow is slow and we don't have any jobs piling up.
This application works fine with a low event count,
but what happens if traffic suddenly increases?
You can start the next demo by running the make start issue one command and restarting the monitor with the make monitor command. Our event flow is increased three times. You will notice eventually that the jobs waiting count will start to increase, and while we are still processing jobs with one worker, the queue has already slowed down compared to the increased traffic.
Now we can see that our mission-critical trade service confirmation is throttled by this: the worker would process all events without any priority, so each new trade confirmation must first wait for some other events to complete. And you can imagine this creating slow response times in your front-end applications for trade processing. Let's explore the options that we have to fix this.
The first and most obvious is to scale the worker instance vertically so it will go faster. In the Node.js world, this is rarely a good solution unless you are processing CPU-intensive tasks such as video, audio, or cryptography. The second is to increase the worker instance count. This is a valid option, but sometimes not very cost-effective. Next, we can think about application optimizations, which would include profiling, investigating database queries, and similar activities. This can be very time-consuming and can yield no result or very limited improvements. And our last two options are where NestJS can help us: to separate the queues and prioritize some events.
I will start by applying the queue separation method. The trades queue will only be responsible for processing trade confirmation events. My code for this will look like this: the first step is to ask our producer to emit a trade confirmed event to a new queue. On the consumer side, I extracted a new class called TradesService and assigned it as a listener to the trades queue.
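A sketch of those two changes, reusing the @nestjs/bull setup from earlier (the queue, event, and class names other than TradesService are illustrative):

```typescript
import { Injectable, Module } from '@nestjs/common';
import { BullModule, InjectQueue, Process, Processor } from '@nestjs/bull';
import { Job, Queue } from 'bull';

// Producer: trade confirmations now go to their own queue.
@Injectable()
export class TradeProducer {
  constructor(@InjectQueue('trades') private readonly trades: Queue) {}

  confirmTrade(tradeId: string) {
    return this.trades.add('trade.confirmed', { tradeId });
  }
}

// Consumer: a dedicated listener for the trades queue. The default
// queue's listener stays unchanged.
@Processor('trades')
export class TradesService {
  @Process('trade.confirmed')
  async handleConfirmation(job: Job<{ tradeId: string }>) {
    // ... mission-critical confirmation logic ...
  }
}

@Module({
  imports: [BullModule.registerQueue({ name: 'trades' })],
  providers: [TradeProducer, TradesService],
})
export class TradesModule {}
```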
The default queue listener service stays the same; I don't have to make any changes there. Now, whatever happens, whatever spike we have, the trades will never stop processing.
You can run the next example by starting the make start step one command and restarting the monitor with the make monitor command. You will notice that the trades queue has a jobs waiting count of zero, and the default queue is still experiencing problems. So now I will apply our second step for scaling. Based on the information I have, I increase the worker instance count to three for the default queue. You can start this demo by running the make start step two command and restarting the monitor, and over time this application goes to zero jobs waiting on both queues. So, good job.
Let's recap. I applied two solutions here from my list: I increased the worker instance count for the default queue, and I created a separate trades queue. This was mostly done for me by Docker and the NestJS framework.
The next step you can implement, by just using the tools the framework has, is to prioritize some events over the others. For example, anything related to logging or internal metrics can be delayed in favor of more mission-critical events like database integrations, notifications, and so on.
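With Bull, for example, priority is a per-job option set by the producer, where a lower number means higher priority; a hypothetical producer could look like this (queue, event names, and priority values are illustrative):

```typescript
import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';

@Injectable()
export class EventsProducer {
  constructor(@InjectQueue('default') private readonly queue: Queue) {}

  // Mission-critical events jump ahead of everything else in the queue.
  emitCritical(payload: object) {
    return this.queue.add('notification', payload, { priority: 1 });
  }

  // Logging and internal metrics can afford to wait.
  emitLowPriority(payload: object) {
    return this.queue.add('metrics', payload, { priority: 10 });
  }
}
```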
You can get the demo application repository at my GitHub with the link specified here. Feel free to connect on LinkedIn. Thanks for watching, and goodbye.