Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, I'm Trishul. I'm a frontend developer and a JavaScript enthusiast.
By day I architect e-commerce solutions for Westwing, and by night I'm a Mozilla enthusiast. So today I'll be talking about destructuring the frontend monolith with micro frontends.
So what is a monolith? By definition, a monolithic application is designed to be self-contained: the components of the program are interconnected and interdependent. So monolithic applications are self-contained applications whose components are tightly coupled to each other, interdependent and interconnected; components like the UI elements, the database, everything goes together.
In modern times this monolith has already been broken into a backend and a frontend, and eventually the backend has been broken into microservices. This worked fine for a long time, but with the evolution of JavaScript we now have lots of code in the frontend. The frontend hosts major chunks of business logic. With lots of code comes lots of complexity, and what we have now is again a monolith: a frontend monolith.
And with this we have all the problems of a monolith again: all the code is in a single place, increasing the size of the repo every day. Every feature we add increases the size of the code, and eventually it increases the complexity of the code. Since lots of teams are working together on the same code, we have increased inter-team dependencies, and as a result of all of this our development-to-deploy cycle has slowed down. Moreover, the whole team is only as strong as its weakest link: if any team has broken anything on production, all the teams are blocked. No one can deploy anything else until that thing is fixed by that team. So what is our solution? Enter micro frontends.
What is a micro frontend? By definition, a micro frontend architecture is a design pattern where a monolithic frontend app is destructured into small, independent apps. These micro apps are stitched together as a single page on the fly. Consider your huge monolithic system: it has been broken into small components, let's say, and each component behaves on its own. So each micro frontend is a standalone system capable of running on its own. Each micro frontend has an independent repository, a separate piece of code, so that the complexity of its code is contained within that individual repository.
It has independent CI pipelines, so you can have your own test suites, your own deployment pipelines, your own regression pipelines, whatever you need. This does not bother other teams. And finally, the one I love the most: they have independent deployments. Each micro frontend is deployed individually. One team does not have to wait for another team and is not blocked by another team. If your code is ready and tested, you can deploy and make sure the live site has the live code as a whole. This whole micro frontend architecture enables parallel development cycles, so that every team has its own domain and can move at its own velocity.
Let's have a look at the architecture. So consider this case: there are several micro frontends, remote one, remote two, remote three, and a host. These micro frontends expose their packages to the wild, and the host consumes those packages. The host does not have to worry about how they were built, how they were compiled, or how they were made available. All it needs is a remote address from which it can get these remotes.
Now let's have a look at how we architected our e-commerce website at Westwing. At Westwing we have this very simple structure: there's a header, there's a footer, and there are several pages, let's say the homepage, the product listing page, the product detail page, the payments page and the cart page, and many more. In the monolithic world, all this code was in the same code base and all the teams were working on the same code. Obviously there was lots of friction, lots of back and forth, and most importantly there was lots of blockage because of one team: the other teams had to wait for the deploy, we had to sync our deploys. But with micro frontends we made every page into a micro frontend, independent from each other, and these micro frontends are consumed by the host. The host has the header and the footer; we generally call it the app shell. So when a request is made for a page, let's say the home page, the app shell takes the request and renders the header and the footer together with the homepage micro frontend.
So what should the app shell have apart from the header and footer? What we put in the app shell is mostly shared business logic that should stay consistent across all the micro frontends. One such piece of logic is the login logic. We put all of the login mechanism in the app shell. If a user needs to log in, then instead of the micro frontend initiating the login process, it sends a signal to the app shell, and the app shell initiates the login process. If the user is logged in, the app shell returns the data of the logged-in user; otherwise it shows a login popup, or whatever the logic is. But this is kept consistent in the app shell, and the micro frontends do not have to worry about it: they just send the signal and get the response.
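As a rough sketch of that signal pattern (the event names and helper functions here are made up for illustration, not our actual API), a micro frontend might ask the app shell for the logged-in user like this:

```js
// Micro frontend side: ask the app shell for the logged-in user.
// Event names and helpers are illustrative, not the real Westwing API.
window.addEventListener(
  'appshell:user',
  (event) => {
    const user = event.detail; // null means the app shell is showing its login popup
    if (user) renderGreeting(user); // hypothetical rendering helper
  },
  { once: true }
);
window.dispatchEvent(new CustomEvent('appshell:login-request'));

// App shell side: the login logic lives in one place.
window.addEventListener('appshell:login-request', async () => {
  const user = getLoggedInUser() ?? (await showLoginPopup()); // hypothetical helpers
  window.dispatchEvent(new CustomEvent('appshell:user', { detail: user }));
});
```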
The next thing we put there was tracking. Our e-commerce site has various third-party tracking that is common throughout the system. So instead of putting it in each micro frontend, we put it in the app shell, so that it's uniformly available throughout the micro frontend ecosystem. If any micro frontend needs to send a tracking signal, it sends the signal to the app shell, and the app shell eventually sends it to the third party. This makes sure we maintain consistency in the tracking, and we have a single place where we can monitor what tracking is being done, log it, or debug it. So it comes in really handy.
The next thing we put there was the system config. We use lots of config, like environment config, pipeline config and whatnot. So instead of putting it in every micro frontend, we just put it in the app shell. Then when any micro frontend needs some of it, it just requests it from the app shell. The app shell basically holds the single source of truth for all the configs throughout the system: whichever micro frontend wants some config just requests it from the app shell and gets it.
And the final thing we put in the app shell is the routing. The app shell is responsible for assembling all the pages, because it has the header and the footer, so we put all the routing logic in the app shell. For example, if you make a request for the listing page, the listing micro frontend is invoked by the app shell and rendered with the header and footer, and finally the complete page is served to the user. So these are some of the shared pieces of business logic we put in the app shell, or maybe you can call it the host.
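For the routing part specifically, here is a minimal sketch of how an app shell could map routes to lazily loaded micro frontends while keeping the header and footer around them (this assumes a React app shell using react-router; the remote and route names are illustrative):

```jsx
import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';
import Header from './Header';
import Footer from './Footer';

// Each page is a micro frontend exposed by a remote (names are illustrative).
const HomePage = lazy(() => import('home/Page'));
const ListingPage = lazy(() => import('listing/Page'));

export default function AppShell() {
  return (
    <BrowserRouter>
      <Header />
      <Suspense fallback={<div>Loading…</div>}>
        <Routes>
          <Route path="/" element={<HomePage />} />
          <Route path="/listing" element={<ListingPage />} />
        </Routes>
      </Suspense>
      <Footer />
    </BrowserRouter>
  );
}
```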
We have been talking about a lot of data transfer between the micro frontends and the app shell, so we need to understand how the communication between a micro frontend and the app shell is set up. There are generally two ways. In a micro frontend ecosystem, if we are using the same tech stack, let's say the micro frontends are in React and the app shell is also in React, then communication is a very easy process. We can just share props like with any React component: the component accepts the props and behaves accordingly. Props can be data or props can be functions, so if you want to change a property of the parent, you can pass a function down and eventually the child component will change that property in the parent. Simple React flow. The shared component is still a React component, so it still behaves the way React behaves; there is no extra layer we need to add.
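For example (a generic React sketch, not our actual code; the 'cart/Summary' module name is made up), the host can hand a micro frontend data plus a callback, and the child updates the parent through that callback:

```jsx
import React, { useState } from 'react';
import CartSummary from 'cart/Summary'; // a federated micro frontend component (illustrative name)

export default function Host() {
  const [itemCount, setItemCount] = useState(0);

  // Data flows down as a prop; changes flow back up through a function prop.
  return (
    <CartSummary
      itemCount={itemCount}
      onAddItem={() => setItemCount((count) => count + 1)}
    />
  );
}
```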
The other way is if we have, let's say, Vue.js or React or Angular, basically different tech stacks rather than the same tech stack throughout the ecosystem. Then we use something called custom DOM events. So what are custom DOM events? Custom DOM events are just user-defined events. They behave exactly like any other JavaScript event; it's just that we can create our own. In our Westwing system, we also use custom events for communicating from one micro frontend to another. Sometimes it's easier than passing down lots of props.
So yeah, let's have a look at some custom DOM events. It's super easy. Let's say I have defined an event called myEvent. You subscribe to this event like any other event, by adding a listener, maybe to the document or to an element, whatever you feel comfortable with. And how do you trigger this event, how do you fire it? You create a new CustomEvent with your name, and you can pass whatever data you need; for example, I pass isClicked. And when this custom event is ready, all you need to do is dispatch it. Dispatching the event makes it available to all the subscribers in the window. It has nothing to do with micro frontends, the app shell or anything; it is plain, native JavaScript. Everything that ends up in the browser is JavaScript, HTML and CSS, so it doesn't matter which micro frontend or which app shell you are in: if an event is fired and it has a subscriber, the subscriber will get the event. Based on that logic, we built this whole messaging system using custom DOM events, and it works pretty well as an out-of-the-box solution for communication.
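In code, that boils down to something like this (reconstructed from the talk; the exact slide code may differ slightly):

```js
// Subscribe to the custom event, just like any other DOM event.
document.addEventListener('myEvent', (event) => {
  console.log('myEvent received:', event.detail); // e.g. { isClicked: true }
});

// Create the event and attach whatever data you need via `detail`...
const myEvent = new CustomEvent('myEvent', {
  detail: { isClicked: true },
});

// ...and fire it, so every subscriber gets notified.
document.dispatchEvent(myEvent);
```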
Now we have seen the architecture of the micro frontends; let's see how we implemented it. There were several ways we tried to implement the micro frontend architecture. Initially, what we did was take small components, export them as node modules and import them into the projects. This worked for a while, but it still blocked our deployments, because at the end of the day you have to deploy these node modules, with an incremented version, into the main app. So we were still stuck with the same problem. But with webpack 5 we have Module Federation. Module Federation allows webpack to reference modules that are not present at compile time and only become available at runtime.
So let's have a look at this again. When we compile the host, we do not need to have remote one, remote two and remote three. We just pass references to remote one, remote two and remote three with the help of the Module Federation plugin, and assume that they will be there when the application is up and running. This allows us to compile the host without any external remotes. Remote one, remote two and remote three are compiled separately and are ready for consumption. At runtime, when the app is up, it looks for remote one, and when it gets the packages it just chunks them into the host's compiled code, where they behave like any other component.
Now let's have a look at some code.
So consider this Main.jsx. It's a simple React app; it is just a component rendering an h3 with a title as a prop.
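That component might look roughly like this (a sketch of what the slide shows, not the exact code):

```jsx
// src/Main.jsx in the remote: the component we are about to expose
import React from 'react';

const Main = ({ title }) => <h3>{title}</h3>;

export default Main;
```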
So we need to expose this component to the wild. How would we do that?
It's pretty simple. This is our remote config, which is basically a webpack config. In this webpack config, the first thing we need to do is include the Module Federation plugin. This is a native webpack plugin that comes with webpack 5. Its first entry is the name, remote1: this is the namespace by which it will be known to the host. The second one is the file name, remoteEntry. The remoteEntry file is created by Module Federation; consider it a metadata file. It contains all the addresses and the packages, and the relationships between those packages: when which package has to be loaded. Then we have exposes. This is the place where we define which component we have to expose, for us src/Main.jsx, and the namespace with which it will be exposed, so Main.
And the final thing we have is shared. What Module Federation allows is that if you are in the same ecosystem, let's say you are using React everywhere, then you should not load two copies of React into your app. The shared key lets you define which libraries you expect to be provided by the host. In our case we say: React and ReactDOM, please use them from the host and do not load a copy from the micro frontend.
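Put together, the remote config described above looks roughly like this (reconstructed from the talk, so treat the details as illustrative):

```js
// webpack.config.js of remote1
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  // ...entry, output, loaders, devServer, etc.
  plugins: [
    new ModuleFederationPlugin({
      name: 'remote1',              // namespace the host will know us by
      filename: 'remoteEntry.js',   // the "metadata" file the host loads
      exposes: {
        './Main': './src/Main.jsx', // expose src/Main.jsx under the name Main
      },
      shared: ['react', 'react-dom'], // use the host's copy instead of bundling another one
    }),
  ],
};
```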
With this, the module is already exported. Now this exposed module has to be consumed, so let's see how we do that.
Again, this is the webpack config of the host. Here too we have to include the Module Federation plugin and give it a namespace, the host namespace. Then we have to define the remotes, that is, which remotes will be available. Let's say we define remote1 and its address, from which it can pick up remoteEntry.js, the entry file that explains the rest. And finally, again, the shared key: as I explained before, we can define some libraries, basically vendor modules, which are shared from the host with all the micro frontends. With this, the connection has already been made between the micro frontend and the host: the host will consume the micro frontend remote1, which will be available at localhost:3002 with the remoteEntry at that address.
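And the host config, again reconstructed roughly:

```js
// webpack.config.js of the host
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  // ...entry, output, loaders, devServer, etc.
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        // remote1 becomes importable as "remote1/<exposedName>"
        remote1: 'remote1@http://localhost:3002/remoteEntry.js',
      },
      shared: ['react', 'react-dom'], // provide a single copy of React to the remotes
    }),
  ],
};
```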
Now the connection has been made, but still we haven't used the component.
So let's check how to use that Main component. It's as simple as this. If you look at the second line: import RemoteApp from 'remote1/Main'. remote1 is the namespace of the exposed remote, and Main is the namespace of the exposed component. Then RemoteApp can be used like any other React component throughout your app: you can pass props to it, render it, conditionally render it, whatever you want to do with it. And with these simple steps you already have a micro frontend setup running.
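In code, that usage is roughly:

```jsx
// App.jsx in the host
import React from 'react';
import RemoteApp from 'remote1/Main'; // remote namespace + exposed component namespace

export default function App() {
  // RemoteApp behaves like any local React component: props, conditional rendering, etc.
  return <RemoteApp title="Rendered from remote1" />;
}
```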
So I have prepared a small demo; let's have a look at it. Here I have two projects, host and remote. The remote is a simple React setup with a webpack config. In the webpack config we have the Module Federation plugin with the namespace remote1, a remoteEntry file name, and it exposes a shared component that lives in its source folder. In addition, it expects some shared libraries, React and ReactDOM, from the host. Let's have a look at the shared component. The shared component is just an h4 rendering the current count, which is the counter value passed in from the props.
Let's have a look at the remote app. It's a simple app: it renders the shared component, and the count value here is just a static one. Now let's have a look at the host. In the host's webpack config, again we have the Module Federation plugin. It has the host namespace, it accepts a remote, remote1, at the address localhost:3002, and it shares some libraries, React and ReactDOM.
And now let's have a look at the host app. The host app imports the remote component from the remote1 namespace, under the SharedComponent namespace that was exposed, and it simply uses it as a component and passes a counter to it. The counter is a state variable, and on a button click it is increased by one. So this is basically a functional component passing state down to the remote component, roughly like the sketch below.
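A rough sketch of that host app (reconstructed; the exposed module name 'remote1/SharedComponent' is an assumption, and the remote's shared component simply renders an h4 with the current count):

```jsx
// App.jsx in the host: passes its state into the federated shared component
import React, { useState } from 'react';
import SharedComponent from 'remote1/SharedComponent'; // assumed exposed name

export default function App() {
  const [counter, setCounter] = useState(1);

  return (
    <div>
      <p>Count in host: {counter}</p>
      <button onClick={() => setCounter((count) => count + 1)}>Increment</button>
      {/* The remote renders: <h4>Current count: {counter}</h4> */}
      <SharedComponent counter={counter} />
    </div>
  );
}
```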
So let's see how this works in the browser. Let's run it.
Let's run the remote first and then run the host. This one is on localhost 3002 and this one is on localhost 3001. Let's have a look at both. Let's look at the remote first: as you can see, it's a simple remote app with a current count of one. The inner box is the shared component that is available in the remote app. Now let's check our host app at 3001. Here, again, we have the same shared component, but it is being served from the remote. Let's quickly have a look at the network tab and reload.
If we look, we have a remoteEntry.js, which is initiated by main.js, and the remote entry in turn loads lots of modules here, for example the shared component that we need. Let's check the functionality: we increase the counter one by one. The upper one, the count in the host, is five, and the current count in the shared component is five. So it's seamlessly forwarding the state from the host app to the shared component. And this is the magic of Module Federation: it really works out of the box, just like a native React component, as if it were present in the scope of this host project. So with this we get a lot of power from Module Federation to make sure our React components still work the same way, even if they are not included in the project.
So let's get back to our slides.
Okay, so we have implemented micro frontends at Westwing in our shop, but it was not a piece of cake. We had quite a few challenges while developing these micro frontends, so I want to share some of them with you.
The first one was design consistency. When we were splitting out micro frontends, one of our major concerns was how to keep the design consistent across the micro frontends. Right now we have a consensus that, okay, we will do it this way; but when the team grows and the code grows, how will we make sure the design stays consistent throughout all the micro frontends? What we came up with was a design system. We have something we call the UI Kit. The UI Kit is basically the home of all the components that are available for the UI, for example buttons, search bars, dropdowns, checkboxes, etc. Whatever UI component we have, it should be in the UI Kit, and every micro frontend should use it from the UI Kit instead of implementing its own. Even if a team wants to implement its own, we first implement it in the UI Kit and then consume it from there. And we made a small Storybook for it, so that everyone knows which elements are available in the UI Kit. This actually helps new developers and also designers know what we have, so they can refer to our design system.
The next challenge we had was initializing a new micro frontend. With each micro frontend there is lots of setup: we have GitLab pipelines, CI setup, Docker setup, Helm charts, scripts for publishing to S3, and a lot of other stuff on the infra side. Doing this manually is pretty extensive work and also very error prone. So instead of doing it manually for every micro frontend, we use a templating engine; we call it Temporato. Temporato is a very smart templating engine: it maintains different templates for us, and with the help of these templates it spins up a new micro frontend, or whatever your project is, within a minute. So if we do something like this, as you can see on my screen, the template-create command reuses a template project where all the templates are present. We have several templates, but for this we are interested in the micro frontend template. When we select the micro frontend template and go ahead, we pass some variables. These variables are basically namespaces that get found and replaced while creating the project from the template. Once the project is created, out of the box we have, as you can see, a Dockerfile, Helm charts, CI pipelines, end-to-end tests, everything already set up.
One more advantage of using this template engine for us: we have lots of micro frontends, and we need to make sure they all stay in sync at the infra level. For example, if we change, let's say, the S3 script for how we publish, we need to forward that change to every micro frontend. Instead of doing that manually, we just update the template, the micro frontends update their part from the template, and it propagates uniformly throughout the micro frontends. So using a template engine really eased our lives as developers. Apart from these, we had some more challenges; for example, the biggest one was decoupling code from the app shell.
Okay, so what we did: when we decided to move to micro frontends, we said we have this monolith, and we will call it the app shell. Instead of creating a new app shell, we will use this as the app shell, and we will take components out one by one so that our app shell becomes leaner. In the end we will have only the header and footer and some business logic in the app shell, and everything else will be a micro frontend. To achieve that, the first thing we did was move components to the design system. Every UI component was moved into the design system, and once it was there, every reference to it went only through the UI Kit. There were no internal references from one place to another in the code, be it a micro frontend or the app shell; uniformly, anything related to UI has to be in the UI Kit and has to be referenced and used from there. Doing this actually did half the work for us, because it already enabled us to decouple lots of stuff, even in the app shell.
Now, sharing logic between micro frontends. The next thing we wanted to tackle was logic that is static and does not require data, for example price formatters. We want our website to have the same price formatting throughout, be it any micro frontend or any page; we want it the same. So instead of copying this logic into every micro frontend, we created some small utility npm packages, like price formatting helpers or date formatting helpers. These are small helpers that we use throughout our micro frontend system. Whenever we find a utility that is shared by more than two micro frontends, we create a small package out of it, and the logic is used from the package rather than existing twice or thrice. This makes sure that our code stays consistent and we are not using different logic in different parts of the website, which eventually means in different micro frontends.
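A price formatting helper, for instance, can be as small as this (an illustrative sketch, not the actual Westwing package):

```js
// e.g. a tiny shared package such as "@shop/format-price" (illustrative name)
export function formatPrice(amountInCents, locale = 'de-DE', currency = 'EUR') {
  return new Intl.NumberFormat(locale, { style: 'currency', currency })
    .format(amountInCents / 100);
}

// formatPrice(129900) -> "1.299,00 €"
```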
And last but not least, building an efficient developer experience. Once we had all these micro frontends ready, the biggest problem we had was: let's say we have one app shell and three micro frontends; to develop locally, we have to run all four together. So on ports 3001, 3002, 3003 and 3004 we were running the micro frontends and the app shell, and only then were we able to make a change. This was a pretty cumbersome process because it takes a lot of resources; my laptop fan sounded like I was mining coins or something. But eventually we got over that. In order to fix this, we followed a Docker-based approach.
We created a Docker image of everything that is deployed on staging, and we accept the micro frontend's URL via an environment variable in this Docker image. If no environment variable is present, we use the staging URL; otherwise, the environment variable. So when we develop on the local system, we run the micro frontend, let's say on localhost 3001, and pass it into the Docker image with an environment variable. When we run the Docker image, everything is used from the staging environment except the micro frontend, which comes from the local environment.
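The mechanism boils down to a remote URL that defaults to staging unless an environment variable overrides it, roughly like this (the variable name, URLs and webpack-based wiring are illustrative assumptions):

```js
// In the app shell's webpack config, read the remote's URL from the environment,
// falling back to the staging deployment when nothing is set, e.g.
//   docker run -e LISTING_REMOTE_URL=http://localhost:3001 app-shell
const { ModuleFederationPlugin } = require('webpack').container;

const listingRemote =
  process.env.LISTING_REMOTE_URL || 'https://staging.example.com/listing';

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        listing: `listing@${listingRemote}/remoteEntry.js`,
      },
    }),
  ],
};
```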
This helped us have a smoother and more productive developer experience. There are still things that can be improved here, but for now it's way better than running four Node processes. So yeah,
this was kind of a snapshot of what we did at Westwing, how we solved the problems, and how we broke our huge monolith into small micro frontends and an app shell. We still have a long way to go, but from the progress we have made over the past six months, I can totally say it is super worth it. With that, thank you everyone. Thanks for attending my talk.