Transcript
This transcript was autogenerated. To make changes, submit a PR.
Today, I'm going to talk about Service Weaver, a new programming framework for
writing distributed applications in Go. Based
on our experience of writing distributed applications at Google,
teams usually organize their applications into microservices,
where a microservice is a piece of code that exports an RPC service.
A team usually owns multiple microservices, and based
on our analysis, teams frequently add new microservices.
Finally, they use an internal tool to manage their
microservices. There are many reasons
why people split their applications into microservices,
for example, to achieve fault tolerance and
to scale their applications.
Another reason is to improve development velocity or
to make the application easier to maintain. However,
we did some analysis and found that teams often
split into microservices without valid reasons.
For example, they claim that they want to use different programming languages
for different microservices. However, we found
out that the vast majority of the teams use only one language.
Another argument for splitting is that the team wants to release microservices
on different rollout schedules. However,
our analysis shows that a significant fraction of teams release all their microservices
together. And finally, developers claim that
they want to release some microservices very often, like sub-daily.
However, we found that only a tiny fraction of
microservices are released that often.
So the takeaway here is that teams split their applications
into microservices for some good reasons,
but often also for reasons that are not valid.
However, splitting an application into microservices
is challenging. For example, the developers now
have to deal with multiple versions of microservices,
and they have to implement logic to ensure that different running instances
at different versions are backward compatible.
One implication of versioning is API hardening:
once a developer splits into microservices and deploys
them, it is incredibly difficult to change the APIs
because of versioning concerns. Also,
deploying a microservice requires complicated configuration files,
and the developer has to add logic to connect microservices together
and organize application code into low-level interactions through an IDL.
And finally, end-to-end testing and local testing become incredibly
difficult.
Another popular paradigm for writing distributed applications is to
organize your application into a monolith that consists of
a single binary, usually deployed with a single config.
With the monolithic architecture, it's easier to address many
of the challenges introduced by using microservices,
for example, versioning concerns.
However, monoliths suffer from the challenges
that microservices are designed to handle,
so Service Weaver bridges the gap between the
two models: it provides the programming model of
a monolith and enables the flexibility of using microservices.
If you remember only one thing about Service Weaver,
it should be that Service Weaver allows you to write your application as a
modular binary while deploying the application as a
set of connected microservices.
Finally, Service Weaver enables writing high-performance
applications and enables portability.
This means it allows you to deploy the same application binary
into different cloud environments, and it can
support applications written in multiple languages,
although we don't support that yet.
So at a glance, Service Weaver allows
you to develop applications using native language constructs
and organize your code around native language interfaces.
While writing your application's business logic, you don't have to worry about versioning,
and finally, you can use some embedded fields to
vivify the application, as I'm going to show slightly later.
To deploy your Service Weaver application, you will have a
single binary and a tiny config.
Service Weaver will run your application as a set of microservices at the
same code version, and it provides multiple deployment
environments. For now, you can deploy your application on the local
machine, on a cluster of machines, or in Google Cloud;
however, new deployers can be added for AWS, Azure, and
other cloud providers. To roll out a new application
version, Service Weaver will ensure your rollout is safe,
and it does blue/green deployments, a widely popular technique
for deploying new application versions in a distributed environment.
One nice thing about Service Weaver is that it provides
observability, like logging, metrics, and tracing, that
can be easily integrated with different monitoring tools.
Also, it allows easy testing and debugging.
Finally, the Service Weaver runtime implements various mechanisms
that enable high-performance applications.
In the following slides, we'll go into more detail on how Service Weaver
handles each of these topics.
So, development: a Service Weaver application consists
of a set of components that call each other, where a
component is somewhat similar to an actor, for those familiar with the
actor model. Under the hood,
Service Weaver uses a code generator to vivify the
application, for example, to generate
registration code and so on.
To write your Service Weaver application, you write a modular binary,
and then you can deploy it on your local machine, where components
can run in the same process or in multiple processes.
You can then run it distributed on different machines or pods;
components can be replicated, traffic is load balanced between replicas,
and so on.
To define a Service Weaver component, you simply define
a Go interface. Here we define a Cache
component with a Put method, and of course you can have other
methods as well. Next, to implement the
component, you write a Go struct, except that you have to add
a weaver.Implements embedding. This allows the Service Weaver
framework to identify that this is a component.
To instantiate a Service Weaver application in the main function,
you simply call weaver.Init, and to get a client to
a component, you call weaver.Get.
And finally, once you have a handle on the component,
you can interact with it by simply making method calls.
To deploy a Service Weaver application, you release a single binary
and a tiny config. You just
have to specify the name of the application binary.
Optionally, you can also choose to colocate certain components in the
same process, for performance or other reasons.
Also, you can specify how long the rollout
of a new application version should take.
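For concreteness, here is a sketch of what such a config might look like. The [serviceweaver] section name and the binary, colocate, and rollout fields follow the public docs as I recall them, but treat the exact schema as an assumption and check it against your Service Weaver version:

```toml
# weaver.toml -- a minimal sketch; field names may differ across versions.
[serviceweaver]
binary = "./app"                            # the single application binary
colocate = [["app/Cache", "app/Frontend"]]  # optional: run these in one process
rollout = "5m"                              # optional: how long a rollout takes
```

The same small file drives every deployment mode, which is what keeps the per-environment configuration burden low.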
Once you write this tiny config, you can simply execute go
run on your local machine to run the application in a single process.
Or you can call weaver multi deploy
to run the application on the same machine, but in multiple processes.
Now, if you want to deploy in a distributed environment, you can add per-deployment
information in the config file.
For example, to run on multiple machines via SSH,
you just need to specify a file that contains the names of the machines,
and then you can run weaver ssh deploy to
run the same application binary that you ran locally, but now on multiple machines.
If you want to run in Google Cloud, you have to specify the
regions where you want to run the application and some public listeners
that let you access the application. Then you
can simply run weaver gke deploy to run the application
in the cloud. Under the hood,
the deployer will automatically create containers,
place the components into pods, replicate them,
load balance the traffic among the different replicas, and so on.
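A sketch of how per-deployer sections might extend the same config file. The [ssh] and [gke] section names and every field below are illustrative assumptions, not verified schema; consult the deployer documentation for your version:

```toml
# Per-deployer sections added to the same weaver.toml.
[serviceweaver]
binary = "./app"

[ssh]
locations = "./machines.txt"   # file listing the machines to deploy to

[gke]
regions = ["us-west1"]
listeners.boutique = { public_hostname = "boutique.example.com" }
```

The point is that the application binary is unchanged between local, SSH, and cloud runs; only these small deployer sections vary.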
Let's look in a little more detail at how
a Service Weaver application is deployed. As mentioned before,
you write the application as a monolith that consists of a set of
components. Service Weaver will place these
components into processes. By default, it places each component
in a different process. However, if you specify any colocation
in the config, as we saw before, then Service Weaver
will respect those constraints.
Next, Service Weaver attaches corresponding libraries to the application
that are responsible for managing the interaction between the application and the
runtime environment. Finally, the Service Weaver
runtime can have different implementations. As mentioned before, for now we have
deployers for the local machine, for a set of machines,
and for Google Cloud. However, Service Weaver
makes it relatively easy to write new deployers, for example for AWS
or Azure. Now let's talk about telemetry
and testing. Service Weaver provides integrated
logging: with Service Weaver, each component
comes with an associated logger, and it's pretty straightforward to log
and to manipulate logs.
Also, it provides metrics like counters, gauges, and histograms.
One interesting observation is that Service Weaver generates some metrics by default
for your application, for example, metrics that capture the
throughput and latency between the various components.
For tracing, Service Weaver relies on OpenTelemetry,
and to enable tracing, you simply have to create an OTel handler
in your main function. Once tracing is enabled,
Service Weaver will trace for you all the HTTP requests and component method
calls. And finally, Service Weaver allows you,
with a single command (weaver gke profile, for example),
to capture the performance of your application as a whole
by profiling each individual process and aggregating the results into
a single profile.
In terms of monitoring, Service Weaver provides dashboards
for your application. One nice feature of Service Weaver is
that it can provide a bird's-eye view into your application.
For example, as you can see on this slide, it can
display the call graph and the interactions between all the components,
along with various metrics that reflect the interaction between these components
and provide more insight into the behavior
of each component. Service Weaver also provides integration
with various monitoring frameworks. For example,
you can run your application on multiple machines with the SSH deployer,
then open the dashboard on your local machine, click on tracing,
and see all the traces across all components
and all the machines in your local Chrome browser.
And here is a list of all the monitoring frameworks
Service Weaver integrates with so far.
For testing, Service Weaver provides a weavertest package
that allows you to write unit tests that can run in single-process and
multi-process modes. Writing a unit test is as
easy as writing a Service Weaver application. The only difference
is that instead of using weaver.Init to instantiate the
application, you use weavertest.Init in your unit
tests. For end-to-end
testing, Service Weaver provides status commands, and you also
can check the logs, metrics, traces, and the dashboards we provide.
For example, if you want to make a change to your application
and see if the application still runs, you can simply do go run.
If you want to test whether your changes make any assumptions
about the distributed nature of the application,
you can run weaver multi deploy.
And finally, if you want to make sure that the application still
works in the presence of multiple running application versions,
you can run weaver gke local.
Let's talk about performance.
Service Weaver provides a highly efficient runtime that
enables high-performance applications. For example,
it provides an efficient encoding/decoding mechanism that has no
versioning overheads. It uses an efficient
transport protocol built on top of TCP that embeds
custom load balancing. It provides colocation, which
gives you flexibility in how components are colocated. For example,
chatty components can be placed together in the same process, where they can use simple
local method calls to interact with each other.
And finally, Service Weaver provides routing that
helps balance the load across multiple component replicas and
also increases the cache hit ratio in case you
want to colocate caches with your components. To evaluate performance,
we benchmarked a popular application called Online Boutique
that contains eleven microservices.
We ran the application using the Google Cloud deployer and compared
the performance using three different approaches:
non-Weaver, which is the microservices version of the application,
except that we rewrote all the microservices in Go for a fair
comparison; Weaver split,
which is the application written with Service Weaver, with all
the components running in separate processes; and finally, Weaver
merged, which is the application written with Service Weaver,
with all the components running in a single process.
Our results show that with Service Weaver you
write less code in your application, up to
1.25x less. This is because you don't have to write
boilerplate code related to encoding and decoding,
you don't have to add service discovery,
you don't have to define protos, you don't have to integrate with
the cloud provider, and so on.
Also, with Service Weaver you just write a tiny config,
while if you deploy as microservices, there are many configurations in
very complicated YAML files.
Because of its high-performance runtime, Service Weaver can handle the
same throughput as the non-Weaver version of the application, but with
fewer resources. Hence it can reduce the cost by up to
4x. And finally, the application latency
is significantly reduced with Service Weaver. In our benchmarking,
it's up to 37x better at the 99th
percentile. Before I
conclude the talk, I want to briefly address some of the common questions we
get about Service Weaver. You write a single modular binary,
and you can postpone the decisions on how to split
into microservices until later. A nice
property of Service Weaver is that you don't have to worry about the underlying
network transport or the underlying serialization mechanisms.
Also, by decoupling the application code from the RPC
bindings, Service Weaver allows cross-component calls within
the same process to be optimized down to local method calls.
However, Service Weaver doesn't hide the network, and application
developers should treat method calls as remote by default.
Also, with Service Weaver, you don't have to organize
the application code into low-level interactions through an IDL.
And finally, you don't have to worry about code versioning issues and
rollouts; Service Weaver takes care of these things for you.
So, I presented Service Weaver, a framework
for writing distributed
applications. With Service Weaver, it's easy to develop,
deploy, and monitor high-performance applications.
We are looking for community contributions. We want people
to get involved, give us feedback, and contribute to the project.
So please don't hesitate to contact us.
With this, I conclude my talk. Thank you.