Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, welcome to this talk. Today we're going to talk about code reviews and how to become efficient at this practice. My name is Israel. I'm Brazilian, living in the UK. This is my family. Two things I really like doing are games and traveling. And currently I'm a software engineer at Meta.
Just a brief introduction: code review, what is it? It's a process focused on examining and reading someone else's code. There are some good references that I've put up here. And why do we do code reviews? Well, there are plenty of reasons: to discover bugs earlier, improve code quality, enhance the security of the code, share knowledge across the team, mentor newer engineers, maintain compliance, and so on. But also, what is code review not? First, it's not about blaming or shaming someone for their code. It's also not an opportunity to show off your amazing engineering skills. And it's not about actually executing the code. It's about reading it, analyzing it, leaving comments, and helping the change get better.
There are three main approaches to code review. The first is pair programming: generally you have two engineers working side by side, one coding while the other observes and gives comments. The second is over-the-shoulder reviews, usually done after some change is implemented; you do it live, in person or maybe remotely. And the third is tool-assisted reviews: for example, you submit an online pull request and someone checks the code and replies with comments, approving or declining it, something like that. Most of the examples we'll be talking about are more related to the third one, but I believe almost everything also applies to any of these approaches.
With that said, before actually starting code reviews in a team, we need to do three things. We need to understand the context we are in. First, the team. What's the seniority of the team? Is it composed only of senior engineers who have been working in the codebase for, like, five years? Or do we have a lot of junior engineers, or newcomers who are just getting onboarded? How familiar are people with the code? What are the work dynamics: do people work more independently or more collaboratively when building features? What's the team's communication style? What's the global distribution, the time zones? There are many things about the team we should take into consideration before setting up the process or trying to establish rules around code reviewing.
Secondly, we need to look at the code, of course. What are the coding standards? What's the testability of what we already have? What are the riskier paths in the code? Maybe, I don't know, you work in a bank: what's the core banking feature that can't break in any way? Also, maybe you are working on legacy code, or on a completely new project. These things change the dynamics of code reviews in many ways.
And third, the goals. Are we trying to find bugs earlier? Are we trying to improve code quality or test coverage? Are we just trying to onboard new engineers and make sure they understand the codebase? What I want to say here is that the goals may change over time, the issues may change over time, and that's okay; we can move the focus of code reviews around. But I do believe code review is a process that's important, that can help any team in the long term, and that is usually here to stay. It's implemented to stay, not as something temporary.
That said, what are efficient code reviews? How can we do them? Well, I think almost everyone who works with code reviews has seen this one: you submit a ten-line pull request and you receive ten comments on how to improve it, but if you submit 500 lines of code, someone replies, yeah, looks good to me. That's not exactly the most efficient way to do code reviews, of course.

So what makes a review efficient? First, we have to review carefully: actually read the code. It's not about skimming through it and saying, yeah, looks directionally right. We are not trying to be fast, we are trying to be efficient and do it well. It's not a stamp competition. Some people say, hey, I did 200 reviews this month, or this week, or this year. It's not a competition of how many reviews you did; what matters is how much you improved the codebase and helped the team get better. Just to be clear, it's not a competition, it's about improving.
And when leaving comments and suggestions, we need to be clear and leave actionable comments. If you just say the code doesn't look good, without giving an alternative or some material to research, say, coding practices related to what you are seeing, you are not actually helping the code get better, you are just creating problems in your team.
Also, focus on what's important. I'm not here to say what is and isn't important, but usually we want to look at the functionality of the code. Let's say I want to change the behavior of A so that it does B, but in the code, instead of calling B, it's calling C. That's something I would expect to be caught in the code review, if of course it was not caught earlier by the developer with automated tests or something.
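Just to make that concrete, here's a rough sketch of the kind of automated test that would catch that mistake before it even reaches a reviewer; the function and collaborator names below are hypothetical, not from the talk.

```python
# Hypothetical sketch: a unit test that pins down which collaborator gets called.
# Names (send_invoice, email_gateway, sms_gateway) are made up for illustration.
# Run with pytest.
from unittest.mock import MagicMock


def send_invoice(customer, email_gateway, sms_gateway):
    # Intended behavior: the invoice goes out by email ("calling B").
    # A bug that called sms_gateway.send here instead ("calling C")
    # would make the assertions below fail.
    email_gateway.send(customer, "invoice")


def test_invoice_goes_out_by_email():
    email, sms = MagicMock(), MagicMock()
    send_invoice("alice@example.com", email_gateway=email, sms_gateway=sms)
    email.send.assert_called_once_with("alice@example.com", "invoice")
    sms.send.assert_not_called()
```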
Then, what about the design? Is it overly complex or simple enough? Is it over-engineered? What's the test coverage, is the change being well tested? Do we have evidence that this code does what it's supposed to do? Teams have different practices here. I know in some companies you can only submit code with automated tests backing it, some companies don't require you to submit anything, and some accept manual tests, like a video recording or a screenshot. So, anyway, use what works best for you; what's important is to have the evidence.
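If you want that expectation enforced automatically rather than policed by reviewers, a minimal sketch could look like the following, assuming the CI job can pass the script the list of changed files; the src/ layout and test-file naming here are assumptions for illustration.

```python
# Minimal sketch of a CI gate: fail if source files changed but no test files did.
# The path conventions (src/, filenames containing "test") are assumptions.
import sys


def needs_test_evidence(changed_files):
    touched_src = any(f.startswith("src/") and f.endswith(".py") for f in changed_files)
    touched_tests = any("test" in f.rsplit("/", 1)[-1] for f in changed_files)
    return touched_src and not touched_tests


if __name__ == "__main__":
    changed = sys.argv[1:]  # e.g. the changed-file list handed over by the CI job
    if needs_test_evidence(changed):
        print("Source changed without any test changes; please add or update tests.")
        sys.exit(1)
```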
It may also be important to look at code style, consistency, naming, and a lot of other things, as well as comments and documentation. The relevance of these things changes with the context, and we should be explicit about whether they are major blockers or the comments are just minor improvements. Nitpicking, if you don't know the term, is making small suggestions that could be applied but are optional. A common good practice is to write "nit:" followed by your suggestion, something like "nit: this variable name could be more descriptive", so the owner gets clear feedback: okay, this could be improved, but we are happy with the code as it is. This is an efficient way to reduce the back and forth on the changes being proposed.
Another very important thing: automate anything that's possible to automate. We have plenty of tools, we have auto-formatters, we have linters for almost any language used in modern software engineering, we have CI warnings and errors, so it's possible to automate a lot of the work. I remember one project I worked on many years ago where we sometimes had to send back pull requests because the engineer was using tabs instead of spaces, and that would be a problem later. There are many ways to catch that without a human reviewer checking which character was used. That is really not a good use of engineers' time.
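As a sketch of how little it takes to automate that particular check (this is illustrative, not a specific tool mentioned in the talk), a pre-commit hook or CI step could run something like this over the changed files:

```python
# Sketch of an automated whitespace check, so no human ever has to comment on it.
import sys


def find_tab_indentation(path):
    """Return the line numbers in `path` that are indented with tab characters."""
    bad_lines = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            if line.startswith("\t"):
                bad_lines.append(lineno)
    return bad_lines


if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:]:  # the hook/CI step passes in the changed files
        for lineno in find_tab_indentation(path):
            print(f"{path}:{lineno}: indented with tabs, use spaces")
            failed = True
    sys.exit(1 if failed else 0)
```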
Also, one very important thing: code review is not an individual task, an individual effort. It's not just you individually looking at the code and having sole responsibility over it. We should look at the other side of the table too, at who's writing the code. We should write code knowing it will be read, understood, and evolved by other people. Keep in mind that code is read far more often than it is written, which matters a lot, because it means readability should be a priority. Also, when you are submitting a pull request, if you submit something very big, the review will take longer. It's harder to review and harder to iterate on, because any change affects a handful of other files and related changes. So the best thing is to split it into smaller, meaningful changes. What counts as small or big will of course depend on the context, the project, et cetera, but this is a very important point.
And when reviewing the code, don't be bossy, don't be judgmental. The language we use can help, especially when we don't have a history of interacting with that person before. Instead of just saying, hey, you should have done that, you could say, hey, have you considered doing X instead of Y? Or maybe Z is another option. We should always be careful about how we say things. And lastly, don't forget testing scenarios and evidence. If you have automated testing, which is a good thing, add it and say, hey, I ran this test and it backs the change; that helps the reviewer, too.
And the last part: some things I want to talk about on how to kick off the process in a team, or how to make the process better when you start doing reviews in the team. The first thing is to keep engineers' autonomy. No one wants to feel that their work is not autonomous anymore, that they have no power over the change they are making, that everything is bureaucratic. So, first, avoid the gatekeeper. The gatekeeper is that person who, intentionally or not, becomes, like, the guardian of the code, so that every code review has to go through them and they have to approve it for some reason. This is bad for both the person and the team.
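One small way to avoid that pattern, sketched below with a made-up team list, is to rotate or randomize reviewer assignment instead of funnelling every change through the same person:

```python
# Sketch: spread reviews across the team instead of through one gatekeeper.
import random

TEAM = ["ana", "bruno", "carla", "diego"]  # hypothetical reviewers


def pick_reviewer(author, team=TEAM, rng=random):
    """Pick any teammate other than the author, uniformly at random."""
    candidates = [person for person in team if person != author]
    return rng.choice(candidates)


print(pick_reviewer("ana"))  # e.g. "carla"
```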
Also, map exceptional scenarios and workarounds. Let's say, for example, we implement a process that always requires an approval to land code in production. Okay, that sounds very nice. But what if someone is on call, gets paged about a problem in the middle of the night, and there's no one around to approve the code? We should understand which exception scenarios and workarounds we have in place to make sure people are still able to fix problems when they appear.
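As a sketch of what making such a workaround explicit could look like (the fields and the rule below are assumptions for illustration, not a policy from the talk):

```python
# Sketch: an explicit escape hatch so on-call fixes aren't blocked in the middle of the night.
from dataclasses import dataclass


@dataclass
class Change:
    approved: bool
    author_is_oncall: bool
    marked_emergency: bool


def can_land(change: Change) -> bool:
    """Normal path requires an approval; on-call emergencies may bypass it,
    with the expectation of a follow-up review afterwards."""
    if change.approved:
        return True
    return change.author_is_oncall and change.marked_emergency


print(can_land(Change(approved=False, author_is_oncall=True, marked_emergency=True)))  # True
```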
And lastly, related to what I said before: code review should not be a high-friction process. It should be part of the sequence of developing and putting code into production, but it definitely should not feel like high friction.
Okay, how do we do that? First, encourage open discussion in the team. We shouldn't start finger-pointing; we need to understand that this is a team process. As I said before, it's a good practice to regularly review pain points, evaluate results, see how it's going, especially if it's the first time the process is being implemented in the team, and actually iterate on the process. One good tip I have here is to recognize contributions. People who take the time to do good reviews, to help their teammates, to help the projects move forward: recognize that. Make it a good example, and it can motivate other people who maybe are not willing to spend more time doing code reviews.
Also, some ideas to unblock things if the team doesn't feel it can absorb this process all at once. We can start by selecting projects, or selecting critical code paths. For example, in the banking example I mentioned, maybe you start by reviewing only code that affects core banking instead of reviewing every single feature, and then, as the team grows into the process, incorporate more things. Another example is sampling reviews: instead of reviewing every single thing, we can sample changes, preferably randomly, so it doesn't look like you are only reviewing one specific person's code. Another example, which I don't see applied much, is non-blocking reviews. We could say, hey, every change should be reviewed, but after one, two, three days, whatever the right time window is, the author is allowed to land it without a review. These are just some tips. Ideally we want every single piece of code reviewed by someone to make sure it has good quality, but these ideas can help you get started and iterate on the process.
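To make the sampling idea concrete (a rough sketch; the ten percent rate and the hash-based bucketing are just one possible choice, not something prescribed in the talk):

```python
# Sketch: sample a fraction of changes for mandatory review; the rest land freely.
import hashlib

REVIEW_RATE = 0.10  # review roughly 10% of changes; tune for your team


def requires_review(change_id: str, rate: float = REVIEW_RATE) -> bool:
    """Deterministic, unbiased sampling: hash the change ID instead of
    singling out particular authors."""
    digest = hashlib.sha256(change_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate


print(requires_review("PR-1234"))
```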
So, to wrap up what we covered today: we talked about what code review is and what it is not, and some reasons to do it. We looked at understanding the context of the team, the code, and the goals, what we want to do code reviews for. Then we looked at what we should focus on when reviewing code and what to avoid, remembering that code review is a team process. And I shared some ideas to help kick off this process within a team. Thank you. Feel free to ping me on LinkedIn. This is also my GitHub, although I haven't populated it much. Thank you, I hope you enjoyed it.