Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello guys, welcome back. Thanks for joining. My name is Gaston Cacheffo.
I'm a tech manager at Globant. Globant is a digitally native company:
we help organizations reinvent themselves and unleash their potential.
We are more than 26,000 employees and we are present in more than 20 countries.
We work for companies like Google, Electronic Arts and Santander, among others.
So thanks for having me today, and let's jump into our topic, which is
microservices from a DevOps point of view. Before we begin, I want to thank
Luis Pulido. He is a solution architect at Globant and helped me a lot
to elaborate these topics.
Next, setting your expectations. In this presentation we will introduce
the concepts of microservices and DevOps in order to discuss their challenges
and how to get the most from them, avoiding falling into highly complex
operability scenarios. Let's move on. Let's begin with the topics that we
are going to address: what are microservices and why use them; what is
DevOps and what is its relation to them; what are the challenges; where to
start, or CI/CD planning; what to consider and where to go.
Microservices: let's introduce the concept. According to Wikipedia,
a microservice architecture is an architectural pattern that arranges
an application as a collection of loosely coupled, fine-grained
services communicating through lightweight protocols.
Anyway, you can find many other definitions: some consider it an evolution
from monoliths, a method to break down software into smaller pieces,
et cetera. What is important here is not the definition but its benefits,
and, as with anything, benefits carry complex challenges, as we will see
later on. Then, why use them? Well, let's enumerate a series of advantages.
Fault isolation: as simple as the idea that in a distributed service
architecture, the failure of one of the services does not imply the total
loss of the whole service. Your application will probably still work.
Data isolation: probably the root idea for microservices architecture design.
I'm not a theorist on microservices; anyway, each of them will own
an information or data domain and its related logic.
You get benefits such as smaller schemas, isolated changes and risks,
and perhaps the base for the loosely coupled concept or expectation.
Then, scalability: the ability to independently scale different parts
of your infrastructure, either using shared scaling rules or customized
ones. This carries a significant increase in resource consumption efficiency.
Independent deployment: well, microservices are about having independent
lifecycles, so microservice modifications can be independently promoted to
higher environments without expecting a service disruption. We will rely
on a battery of policies and tests to prevent merging and
promoting faulty changes. What is DevOps?
You may have heard about this duality that exists; it is a fact: on one
side a tech industry position, and on the other a more accurate approach,
considering DevOps as a cultural movement. By definition, the cultural side
is a set of practices that combine software development and IT operations.
It aims to shorten the systems development lifecycle and provide continuous
delivery with high software quality. Again, there are many other definitions.
Then we have the tech industry position, which implies having this role
personified: a team member who will take care of integration and
deployment automations, among others. In terms of operational models,
we can find this position in a full DevOps team, or as a team member
inside an application development team. You may be used to being part of
one of these operational models. A center of excellence may also be there,
in order to centralize and standardize this discipline.
Well, microservices and DevOps:
its relation. Either as part of an application team in a DevOps position,
or as an app team which embraces DevOps as a culture, you will have to
deal with the software development lifecycle of many microservices, and
many could be a really big number. It will become a multiplying factor
for the architecture components: the more microservices, the more
components in your architecture your team will have to deal with.
Let's jump now into the challenges with microservices, and let's begin
with promoting code. And this is a very good question you have to ask
yourself: have you ever thought about how much time a developer dedicates
to exclusively code new features? I understand this is the Dev side of
the force, but from a cultural perspective everyone has to embrace DevOps,
mostly in multi-repository setups. App teams will deal with merging code
between many branches: feature, hotfix, dev, release, main, et cetera.
And depending on how deployments are triggered, this frequently
could consume an important amount of precious time.
Highly complex deployment scenarios: who hasn't been there? Well, this is
a situation you can easily fall into with microservices: packing smaller
changes by grouping them into a giant release, and those groups have to
consistently travel through different environments. Consider the time and
the effort this takes. Next, consider retro compatibility. This is a
challenge, but one worth having. Retro compatibility is a must: a way to
code your changes considering they are going to be promoted independently,
so that current service consumers will keep working.
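As an aside, a minimal sketch of this idea in Python (the payload and field names are hypothetical): the consumer reads only the fields it needs, ignores unknown ones and defaults missing ones, so independently promoted producers don't break it.

```python
import json

def parse_order(payload: str) -> dict:
    """Tolerant reader: keep only the fields we need, ignore unknown ones,
    and default newly added optional ones."""
    data = json.loads(payload)
    return {
        "id": data["id"],                        # required in every version
        "amount": data["amount"],                # required in every version
        "currency": data.get("currency", "USD"), # added later; the default keeps old producers working
    }

# An older producer (no "currency") and a newer one (extra field) both parse fine:
v1 = '{"id": 1, "amount": 9.99}'
v2 = '{"id": 2, "amount": 5.0, "currency": "EUR", "discount_code": "X1"}'
assert parse_order(v1)["currency"] == "USD"
assert parse_order(v2)["currency"] == "EUR"
```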
Next, quality assurance. The main challenge with this is constant
validation. This is not just about a single environment: you will need
your QA team and tools available on demand. The more manual the validation
and the more grouped the changes, the greater the QA bottleneck. Database
updates: well, microservices iterations may imply more schema and database
updates than desired. From another point of view, this shouldn't be a
problem to bring to the team; in any case, it usually converts into one
due to manual script executions and code deployments involving extra
coordination. Finally, debugging. With monoliths, all logs are in the same
place; with distributed architectures, logs are not, and obtaining the
root of a problem may become a slow process. The more components, the more
difficult to trace. Finally, here we are: the planning guide. Let's do
what psychologists don't, so just advice for you. We are going to describe
these topics until the end of this presentation; this is just a quick overview.
Branch strategy: the base for automation, something that you have to agree
on with your team. Lightweight microservices: remove from microservices
whatever role you can centralize. CI/CD strategy: tips to automate your
software delivery lifecycle. Configuration: establish policies, secure it
and get governance of it. Quality assurance: align your QA with
microservices objectives. E2E tracing (love this one): get a tool to
correlate everything related with monitoring your app. And finally,
feature flags: possibly the Holy Grail with microservices, at least
from my point of view, and Luis' also.
Well, let's begin with branch strategy. There is a great video you must
watch on this topic from the great Viktor Farcic; probably you know him.
He has a channel on YouTube called DevOps Toolkit, with a great talk about
branching strategy. Well, our first advice or topic here: trunk-based
development. The main message here is not getting into too complex
branching strategies, right? The more branches, the more automation
complexity, or worse, an extreme amount of dedication for getting your
changes going. Trunk-based development, or similar trunk-centric
strategies like GitHub flow, for instance, or the ones you want to
customize based on any of those, are fairly straightforward, and you'll
be able to quickly automate them into your pipelines. Spend the necessary
time with your team to decide, document the chosen process and, this is
important, invoke frequent talks to reinforce it.
Use git tags for versioning. A simple advice, probably going too far here,
is using git tags for versioning. This doesn't require commits into your
branches, and you'll be able to reference them everywhere, from pipeline
triggers to app configuration, environments, logs, traces and so on.
Build validation: this is a key feature you have to look into. It's a key
tool for code merge policies. When PRs are created, a set of actions can
be triggered, usually several pipelines which validate your code looking
for vulnerabilities, regressions, build integrity, performance behavior,
et cetera. This is the place where you want to deeply check your changes
in search of faulty or insecure code.
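As an illustration only, a PR-triggered validation pipeline might look like this (GitHub Actions syntax as an example; the job names and commands are placeholders for your own stack):

```yaml
# Hypothetical PR build-validation pipeline; runs before code can merge to main.
name: build-validation
on:
  pull_request:
    branches: [main]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build integrity
        run: ./gradlew build          # placeholder build command
      - name: Regressions
        run: ./gradlew test           # unit/regression tests
      - name: Vulnerabilities
        run: ./scripts/scan.sh        # placeholder security scan
```

Combined with a branch protection rule, a failing job blocks the merge, which is what makes this a merge policy rather than just a report.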
Finally, version your APIs. Another architecture design that converts into
a tool close to us. It's a tool that you need whenever a change doesn't
stay retro compatible, and it will happen: when you have to introduce a
major change, you may deploy a new version of your API, giving your
consumers enough space for asynchronous migrations. Next topic:
lightweight microservices. What is it all about? The first point is to
remove unnecessary roles from microservices. Keep your microservices,
as the word suggests, micro: remove whatever role you have the ability to
centralize. Authentication and app roles are a great place to start. You
can delegate this role to your API manager; most of them will have the
ability to handle the authentication layer, and if your microservices need
token information, you can export it to headers or parameters, right?
IDP: this one is going to be quite obvious for the majority. Do not waste
time coding an identity provider for your application. Besides the
invested amount of time, you'll probably stay at halfway with an insecure
or vulnerable one. Get an external IDP, which nowadays isn't costly at
all; pricing strategies based on traffic amount plus premium features are
totally affordable. You will obtain a considerably high security level
through continuous security updates from those providers, and a lot of
useful features you'll probably require further on for your application.
To name a few of them: Google, Facebook, Microsoft, Okta; there are many
big ones around.
Okay, another advice in the line of the previous one is to similarly deal
with application roles. Most applications come with this requirement by
design, and you can take advantage of core IDP features for it. JWT
tokens, related to the OAuth authorization protocol you are probably
using, can include app roles for a given user.
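For illustration, here is a stdlib-only Python sketch of what that looks like inside the token; the claim name "roles" varies by IDP, and real verification of the signature should stay in your IDP library or API manager, not in this snippet:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode a JWT's payload section (NO signature check; this only
    illustrates the structure of the claims your services would read)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample token with a hypothetical "roles" claim, as an IDP might issue:
claims = {"sub": "user-42", "roles": ["reader", "admin"]}
sample = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "",  # empty signature, acceptable only for this sketch
])
assert jwt_payload(sample)["roles"] == ["reader", "admin"]
```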
For instance, you can get user information into the token and consider its
roles from different sources, like directory groups or specifically
declared application roles. CI/CD strategy: let's begin by splitting them.
It refers to pipelines, of course. Your configuration strategy will have
different roles than your deployment strategy, so deal with them
separately. It has the advantage of isolating and simplifying their
configuration. You will rely on your triggers and merging policies to
invoke them. Promote artifacts:
well, trust me, during a production incident you don't want to deal with
the phrase "that code shouldn't be there" or "this is not what I tested".
As a counterpart, if you're not promoting, then you are over-integrating,
and this carries several critical disadvantages. On one side, as
introduced, you cannot guarantee that your artifacts will be the same;
worst case, when you are including environment configuration, and that
happens a lot. On the other side, the excessive time consumed, be it
short or large, will be multiplied by the number of microservices you are
dealing with. Next one: use templates.
Avoiding duplication as a cross rule for governance of a distributed
architecture is mandatory. Having said that, coding templates will
centralize the most common tasks and the expected CI/CD automation behavior.
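As a sketch of the idea (Azure DevOps template syntax as one example; the file names, parameter and commands are placeholders), each microservice pipeline references one shared template instead of duplicating the same steps:

```yaml
# templates/build.yml - hypothetical reusable CI template
parameters:
  - name: serviceName
    type: string
steps:
  - script: ./gradlew :${{ parameters.serviceName }}:build
    displayName: Build ${{ parameters.serviceName }}
  - script: ./gradlew :${{ parameters.serviceName }}:test
    displayName: Test ${{ parameters.serviceName }}

# azure-pipelines.yml of one microservice then becomes just:
# steps:
#   - template: templates/build.yml
#     parameters:
#       serviceName: orders
```

A change to the template then propagates to every microservice pipeline at once, which is the quick-iteration property mentioned next.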
When changes arrive, you can quickly iterate all your automation.
Consider additional triggers. This is related with the idea of automating
everything, or automating as much as possible: code your agreed software
delivery lifecycle policies, for instance, into your pipelines as much as
you can. Triggers will then increase your automation alternatives to
invoke your pipelines: consider git tagging, another pipeline succeeding,
commits over certain branches, artifact updates, et cetera. Automate your
schema iterations. Well, here, not having a tool to handle the schema
updates in an automated way inside pipelines implies getting expert teams,
probably external ones, involved during deployment planning and execution.
Another pain of this approach, with multiple environments and
low-frequency deployments, is forgetting changes, followed by deep
debugging during incidents until you realize it. To name a few of these
tools, you have Flyway or Liquibase; those are pretty common.
Automate your APIs' definition:
As code evolves, so do its endpoints or methods, and as you probably have
an API manager in front of your microservices, those iterations need to be
considered. Get a tool to extract an API definition during the build and
use it to update your API in your API manager instance. For instance,
for .NET you have Swashbuckle; for Spring Boot in Java you have the
springdoc-openapi-maven-plugin. Configuration: this could be another
chapter to discuss, but let's stick to some of the key ones.
Variable policies:
here you have an example. Do not commit sensitive information; group or
centralize common microservices variables. Take your time and agree with
your team on the list of rules for variable governability. Well, in this
example we mention four types of combinations: variables are either
sensitive or not, and either environment-related or not, meaning their
value changes when artifacts move from one environment to another. The
sensitive variables need to be hidden and should be stored in a vault.
Non-sensitive, environment-related ones can be stored in a library or
variable group, depending on your orchestration tool. And finally,
non-sensitive, not environment-related ones can remain in app property
files. Of course, if you have common microservices values, not
environment-related as a condition, you can address that by also grouping
them into a project-shared configuration file. Environment variables for
app properties: kind of obvious for most of us, but let me say it is not
obvious for everyone. Use environment variables for application
configuration. It will facilitate the promotion idea, and you will find
almost every hosting and orchestration tool is compatible with this approach.
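A minimal Python sketch of the approach (the variable names are made up): the artifact never changes, only the environment it runs in does, with defaults for local development.

```python
import os

def load_config() -> dict:
    """Read application configuration from environment variables
    (names are illustrative), with local-development defaults, so the
    same artifact can be promoted across environments unchanged."""
    return {
        "db_url": os.environ.get("APP_DB_URL", "postgres://localhost:5432/app"),
        "log_level": os.environ.get("APP_LOG_LEVEL", "INFO"),
        "base_url": os.environ.get("APP_BASE_URL", "http://localhost:8080"),
    }

os.environ["APP_LOG_LEVEL"] = "DEBUG"   # e.g. set by the orchestrator per environment
assert load_config()["log_level"] == "DEBUG"
assert load_config()["db_url"].startswith("postgres://")
```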
Common infrastructure environment variables in Kubernetes: well, sorry, I
don't want to pretend everyone is doing Kubernetes; in any case, I found
this quite interesting. Among other excellent features of Kubernetes,
besides being able to group applications' common configuration into an
independent lifecycle by using config maps, you can also consider them to
handle common infrastructure dependencies configuration. We can mention
base URLs, domains, credentials, external service domains, et cetera.
This allows you to massively update infrastructure configuration changes
independently of your microservices' software lifecycle.
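As a sketch (the names and values are examples), a shared ConfigMap can be referenced by many Deployments, so one update propagates independently of each microservice's release cycle:

```yaml
# Hypothetical ConfigMap holding infrastructure-wide settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-infra-config
data:
  BASE_URL: "https://api.example.com"
  PAYMENTS_DOMAIN: "payments.internal.example.com"
---
# In each Deployment's container spec, import every key as environment
# variables, which pairs with the env-var configuration advice above:
# envFrom:
#   - configMapRef:
#       name: shared-infra-config
```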
Quality assurance: understanding limitations. One of my favorites. If you
discuss with your team and application stakeholders, you may find not
everyone shares the same understanding about QA. If you have this role not
automated, then you will find a trend to avoid continuous deployment; it
kind of becomes a barrier. Distinguish between detailed business feature
validations and the need for code change validations; you'll probably find
space for improvement. Code your tests: the more frequent and unattended
your deployments, the more automated testing is needed. Stick with this:
it's part of the mentioned problem. So for your microservices, you don't
want a bottleneck within your quality assurance; then code the tests and
include them in build validation (we've covered this topic) and also, if
you can, in on-environment validation. Also, quality assurance versus user
acceptance environments: similar to the initial one, a kind of reminder
for a not-so-obvious situation that you may consider. I'm going to
literally read it: understand the purpose of each environment; don't mix
your roles during QA and UAT; keep QA as near to developers and as
test-automated as possible, and dedicate UAT to business user validation,
if needed. E2E tracing: use an application performance monitoring (APM) tool.
If you are not familiar with this type of tool, then you should be. I
totally find this more useful than plain logs with a distributed
architecture: you will get the features you need to understand how your
application is behaving. E2E tracing refers to the correlation of
different types of traces, allowing a near real-time, full visualization
of your microservices' interactions to resolve their functionality. You
may include your logs, business events, exceptions, et cetera. You will
get code insights to understand where something broke, or what part of the
response chain is consuming the highest amount of time. Real-time business
monitoring from the APM: a recall to something mentioned. Don't use your
database for operative business monitoring; don't compete with your
microservices for resources. Insert business traces into software like the
APM for executive business dashboards. You may anyway need access to the
database, right? But in those cases you can resort to your provided
database features, like replication. In any case, avoid any direct access
to the database; it could even be prohibited. Finally, limit the amount of
trace ingestion. Well, this is related with ingestion and indexing: this
is a performance and economic cost you want to efficiently delimit.
Add daily caps to your ingestion services, apply sampling on higher
environments, and avoid duplication. Sampling is a great way to deal with
production: your business events don't have to follow the sampling, so you
have full visibility of your business, and you will still have a great
sample of your application's behavior. You can even create rules, like
keeping 100% of the errors, et cetera.
Feature flags have the potential to be a kind of Holy Grail for
microservices, but that's my opinion and you are allowed not to take it
into consideration; you may have better ideas than feature flags. It is
something that has been used a lot by the industry; a new paradigm that
will help with this continuous deployment idea. Consider this as a
business tool: by having flags to control the availability of your
features, you will be able to separate business requirements from
continuous deployment. It becomes important to convert business features
into small, well-detailed technical user stories, to finally be able to
unattendedly and independently deploy strongly validated changes
automatically to production.
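A minimal Python sketch of the mechanic (the flag store is just a dict here; in practice it would be a flag service or configuration source, and all names are hypothetical): both code paths ship to production, and the flag, not the deployment, decides who sees the new one.

```python
# Flags deployed dark: code is in production, availability is a business decision.
FLAGS = {"new-checkout": {"enabled": True, "allowed_users": {"user-1"}}}

def is_enabled(flag: str, user: str) -> bool:
    """A flag is on for a user if it's globally enabled and either has no
    user restriction or explicitly includes that user."""
    cfg = FLAGS.get(flag, {})
    return bool(cfg.get("enabled")) and (
        not cfg.get("allowed_users") or user in cfg["allowed_users"]
    )

def checkout(user: str) -> str:
    # old and new code paths coexist in the same deployed artifact
    return "new flow" if is_enabled("new-checkout", user) else "old flow"

assert checkout("user-1") == "new flow"
assert checkout("user-2") == "old flow"
```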
Further considerations: here we have a list of enumerated features that we
may include or consider within our application. Canary deployments:
consider blue-green or rollout deployments as also valid alternatives,
right? In the canary case, you'll be able to add as many stages into your
deployment to production as isolated segments of users you may have
identified. The idea is to iteratively increment your code changes or
business features in a controlled and progressive way, not affecting all
your consumers or users with a single deploy. Configuration server: as
it's all about centralization and governance, this implementation will
decrease your application configuration changes, among other advantages.
You want to go faster with configuration changes; whether those
configurations come from application necessities or infrastructure
necessities, a configuration server could be a key tool there.
Infrastructure as code: another Holy Grail, another paradigm you should
get involved with. Coding your infrastructure will allow you to remove
human errors when updating or deploying new architecture components, and
also reduces the operational risk. You will have a real disaster recovery
plan and a tool for reusability.
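As a tiny sketch of the idea (Terraform syntax as an example; the resource names and the AWS provider choice are illustrative only), the same code both documents and recreates the component, and a variable makes it reusable across microservices:

```hcl
# Hypothetical infrastructure-as-code fragment: rerunning it rebuilds the
# same components, which is what backs the disaster recovery claim.
variable "service_name" {
  type = string
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "${var.service_name}-artifacts"
}

resource "aws_ecr_repository" "images" {
  name = var.service_name
}
```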
Finally, center of excellence, another favorite here. If you're part of
one application team among others in your organization with similar
functions, then you may form this office in order to standardize research,
support and training in a particular area of knowledge. By definition, it
is a cross-functional team which provides best practices, research,
support and training for a focus area. You may have different kinds of
knowledge areas: you can have an agile expert in order to promote your
agile adoption; you may have a .NET specialty with its improvements and
lifecycle. You can generate several of them, and you can split them so
they don't occupy the same projects, for instance. But a center of
excellence will be a helpful tool to standardize your organization.
So that's all for today. I want to again give thanks to Luis for his
collaboration. Here is my contact information, hopefully also in the
description. I also want to thank Conf42 and Globant for this invitation
and, finally, all of you for attending. Hope this will help in some way to
make your DevOps life easier. Bye guys.