Transcript
This transcript was autogenerated. To make changes, submit a PR.
In this session we are going to learn about test driven development using
Golang. We will be looking at some of its best practices and how we can
start working with TDD on our company's projects, or, if we're just
starting out, how to begin practicing it.
All right, so who am I? I'm Mohammad Quanit.
I'm working as a product engineering manager at Timegram IO, which is
basically a SaaS startup that calculates and manages
time for freelancer agencies.
I'm also an AWS community builder, I write technical content on dev.to,
which you may have heard of, and I do some public speaking as well.
And these are the hobbies that I mentioned.
Okay, so let's see our agenda for today's talk.
First we will look at test driven development: what test driven development
actually is, what we should do, what we should care about, and
what the motivation for it is. Then we will look at some ways to write
tests in Golang; there are some approaches that I've already mentioned in this
list. We'll discuss the Go testing package and HTTP server
REST API testing. We'll look at table driven testing,
which is basically an approach to writing test
cases in Golang. We will look at some open source testing
frameworks that you can use, and at the end of the talk
we will look at some TDD best practices.
And after that we will be wrapping up.
So TDD, what is TDD? You have probably heard
of TDD as a much-hyped term. TDD is
nothing but a software development process that involves repeatedly
writing test cases first and then writing the actual code.
Traditionally, what happens is that whenever developers
write a feature in a piece of software, they write some code
first, and only after that do they start implementing test cases.
But there's a high chance that developers or engineers miss
some of the cases that the feature actually contains.
For that problem, test driven development is an approach
introduced by some early engineers
in the early 90s, who asked: why
shouldn't you write test cases before the actual development?
So this is what test driven development is. It forces
developers to think in terms of implementers
or users. Basically, when developers write test cases
before writing the actual production code,
they know: okay, this is a feature, we need to cover
this set of cases.
And when all our cases are covered, we
can start writing the actual code. By writing
test cases first, developers can
catch errors early in the development process and ensure that
the code is easy to test, maintainable and
refactorable. So what does that mean? If developers
are required to write a feature for
their application, what they can do is first
assess all the test cases that could cover
the actual feature. Then they can start writing the test cases.
After writing the test cases and
then the production code, they can catch
errors early in the development process. If somehow things are breaking
in the production code, they can go back and check
the test cases: what did they miss, or what
could they have done wrong? That's how they can ensure the test cases are
easy to maintain, and once they are maintained,
they are easy to refactor as well. The test driven development
approach also helps engineers write better code and reduces time spent debugging.
As I said, if developers are able to catch errors
early in development, in the actual production code,
they are able to reduce debugging time, because they
wrote the test cases first and they
know where an error could occur or where an issue could
happen. So it eventually
reduces the time spent on the debugging part, and this can
lead to more predictable and reliable software.
And the last part is that TDD is not merely a testing
mechanism. It requires a lot of practice, to be honest,
to implement in a real world project, because it's
not just that you start writing test cases and then
write the production code afterwards. You need to be mentally prepared:
okay, if I need to write this feature, what are the test cases
it actually has to cover? So it requires a lot of analysis and assessment
of the feature that we are supposed to deliver, and then we
have to discuss with our leads what this feature should
or should not cover, and write the
cases accordingly. So it's basically a mental model.
At first it will be overwhelming,
but if you practice with your side projects or any
small scale projects, you will be good to go with TDD,
because this is a mental approach to writing proper
software. Okay, so TDD has some stages.
The first stage is writing a test case.
After writing the test case, we need
to check that it fails. Let's say
we are testing a function that takes numbers and adds them up;
we need to verify that if we provide
this list of numbers, it should produce this result.
We will see this in live code as well.
After writing our test cases, we then
write our production code so that we can actually run
our tests against it.
After writing the production code, we run all
our tests, and if things are good, we can refactor
our code where required,
both in the test code and in the production code.
All right, so these are the stages of TDD.
Okay, so why should we even care about TDD?
The first point is that it shortens the programming feedback loop.
What does that mean? It reduces
the feedback time. Let's say
there's a requirement for a feature and you have already written
a test suite for that feature,
and after writing the production code everything
works. But suddenly there's a change required
in that feature. You can easily go through your
test code and update it, because you
don't need to write the whole test case again for that feature; you just
need to make some tweaks in your test code.
So it actually shortens the feedback loop on the
basic requirements behind your test cases.
As I said, it also encourages engineers to write
modular, testable and maintainable code.
Modular in the sense that if I
have an XYZ feature and it consists of, let's say,
five steps, we can write five separate test
cases for that specific feature, group them
in a test suite, and test those five steps separately
in a modular manner. It is also testable,
and if we write our test cases in a modular form,
it is maintainable as well. It also catches errors
early in development and reduces debugging time,
because if you know what you are actually doing and some
issue happens in your production code, you know where
to look in your test cases, and then you can fix
your actual code. It also reduces the cost of change.
Let's say there's
some new requirement in that specific XYZ feature.
You can change the test cases along with the feature, and then
you can just add some
edge cases that the feature is supposed to cover.
It also boosts confidence, with a sense of continuous
reliability and success. As a developer, when you
are writing test cases, you know that
you have done your part in terms of testing
and you have covered all of your cases. So it
gives you a sense of reliability: okay, I already
wrote those test cases, now I'm good
to go with this feature, and if any issue
occurs, I know where I have to look
in my test code. And if you don't write test cases,
how do you know that your code is doing the right thing? Test cases
should be mandatory in every company.
In our company we are also doing some test
driven development. We just started
implementing TDD in our projects, and so far it's going well,
because we know that whatever feature we are supposed to ship
will work. And if any
issue comes up, we know where we have to make
changes in our test cases. So these are some motivations to
consider when implementing TDD, because
it helps you stay focused on your path, knowing
that you are doing TDD for a purpose, and
it will pay off in the long term.
Okay, so let's see some of the testing packages
in Golang. The first and foremost is the testing
package provided by Go itself. Whenever you install
a Golang environment on your computer,
you get the testing package along with it.
So the Go testing package is a built-in, default testing
package that comes with the Go environment. It is
basically a command line tool that automates the process of running tests.
Whenever you write test cases in Golang,
how do you run them? You run a command
called go test, and it will run
all the tests you
have written. We will
see this in an example as well. Test functions need a specific
signature: the function name must start with Test.
Okay, so let's say we are writing
a test for our feature xyz. The
file name should be xyz_test.go,
and in it we create a function that takes a pointer to testing.T,
which is basically a struct that provides
some methods and properties
that we can use to write and assert test cases. The
testing package comes along with a test coverage tool
as well, which can help you generate a coverage report that shows
how much of your code is covered by tests. So let's say
you have written 20 test cases in your project and you
want to see the coverage of your test cases; maybe it could be 80%
or 90%. You can use the Go
coverage tool, which can generate and visualize the
coverage of your actual test cases.
It also supports benchmarking, which is used to
measure the performance of your code. Let's say you are writing an HTTP
server and you want to benchmark the latency of
the REST server, or a client-to-database connection;
you can benchmark that as well.
Not just REST APIs, but in fact whatever you
are writing, so you get the point. And it also comes with
some flags: as you know, Go supports
flags, and we can provide some of them to see
more detailed logs for our test cases.
We mostly use the -v flag, which is basically the verbose
flag, to show
more logs about our test cases and how they behave.
The Go testing
package, as I said, runs on files ending with _test.go,
and every test function starts with the Test keyword
and takes a testing.T pointer as a parameter,
which is basically a struct. So let's see it in live code.
Okay, so as you can see here, we have a file
called main.go and we
have written two functions in it. One is hello world and the other is
sum. We will see the test cases for
both of these functions. Let's look at main_test.go,
which is our test cases file.
So let's start with the hello world test case. As
I said before, the convention for writing tests is that whatever
function we write to run
our test cases should start with the keyword Test.
We have two variables, one is got and one is want, and if for
some reason got and want are not equal, our test case will fail.
Same for the test sum function.
As you can see, we are
providing a parameter of type testing.T pointer, which is basically a struct
that provides some common methods,
interfaces and properties.
If you look at the sum function,
it takes an array
of numbers, runs a for loop over
it, and returns the sum of all the
numbers provided in the array. And we have written our test case for
test sum as well. The t.Run function is basically
supposed to run our test case as
a separate subtest. So if I write another t.Run
in our test sum function, it runs as a
separate subtest. For now, for the
sake of this example, I will just run a
single t.Run function, which basically tests the sum of the numbers in an array.
The numbers in the array we have provided here
add up to 15, so 15 will be the result when
the sum function returns, but the want value contains
16. So it should fail, right?
So let's see how we
can run our tests.
If I provide the -v flag,
it gives me enough information
to see the steps of our test cases. If I
run it, you can see here that
my hello world test function has
passed, because the function
returns exactly the same string as I provided in the want variable.
But my test for the sum function
has failed, because
the result of the function is 15 and I am asserting the value of
16. So it actually failed, and the
t.Error function is basically a logger used to
print a log saying that your actual test case has failed,
as you can see on the terminal. So that's basically how
you write test cases. Now, HTTP testing.
Testing an HTTP server in Go involves sending HTTP
requests to the server and verifying the responses that it returns.
So whenever we create an HTTP server
with a REST endpoint, what we need to check is whether
our data comes back in a proper shape, whether the length of the data
coming from the server is correct, or whether
a status code is what we expect. These are some
examples of what we can test on an HTTP server. So let's look at the
code. Okay, here you can see that we have a file
called endpoints.go, and I've already
written two functions in it. One is get posts, which
basically returns all one hundred posts coming from
this public API endpoint, along
with a header and a status
code. And we have another function called get single post,
which takes a parameter and returns a single
post on the basis of an id.
And if we look at the main function, HTTP example,
which we are calling from our main function
in main.go: in this function's implementation, you can see that
I have provided a port, and I'm using mux.NewRouter
to set up the routes for
my API. I have set up two routes,
one for posts and one for a single post with the id parameter.
And I have added some logging
to see the actual logs for our
server, and the server is already up and running. As you can see,
I already started the server on port 3001.
So let's see the test cases for it. Okay,
so I have created a function called test post endpoint,
which is supposed to test the posts endpoint.
As I said, it takes a testing.T
struct pointer as a parameter. We are creating a new request, which
will actually hit the posts endpoint.
The new request runs with a context
background, which helps us
generate a new GET request for the specific endpoint,
which is posts in our case,
and if there is an error, we can see the error. So what are we
actually doing here? There's a package called
httptest, which comes with the Go environment,
and we are using its NewRecorder function. This function
returns an initialized response recorder, which is
an enhanced implementation of the response writer. If
you have ever worked with REST APIs in Go, or with
test cases in Go, you have used response writers
almost every time, right? So this is something
on top of the response writer: it
is basically a response recorder that helps us record
the response. We then create another variable
called handler, where we simply wrap get posts with the handler function;
get posts is the function that we
have implemented for getting all the posts,
right. Then, back in our test
function, we call ServeHTTP,
because we need to exercise our
endpoint in such a way that it actually runs
the handler and gets the response from
the actual API. And then we are going to look at the
length of the response, which we need to verify. Okay,
so I then did all the
decoding and unmarshalling work to convert the
data into a struct. All right. And then we have created two variables,
gotExpectedLength and wantExpectedLength. Okay, so this
public posts API is supposed to return
an array of 100 objects, right?
So I want to verify that
my API is returning me a
length of 100 objects in the array.
We check gotExpectedLength, which is the actual
length of the data coming from the API endpoint,
with a simple if condition, and if somehow
our lengths do not match, the test will fail,
right? So if I run
the server in the terminal here
and then run go
test, you
can see that my tests passed. Why did they
pass? Because the length of the
data that came from the posts API endpoint is 100
items, and I am checking it against 100 as well.
If I want to see the failing version
of my test case, I can change it to 101.
If I run it again with the -v flag, you will see it
fail: unexpected length of data, got 100, want 101,
because I want the length of the data to be 101
and the length of the data I am getting is 100.
So that's how you can write HTTP test
cases. You can also check
status codes, for example whether the response
of this API is 200 OK from the server,
or some other status,
or whether the API returns data
with a specific field that
you are expecting. These are some of the things
we can cover in HTTP testing, but for the sake
of this session I am just showing you this one
example. The main point of httptest is that
we use the NewRecorder:
with the recorder we
get all the properties and methods that come with the
response writer when we create
an HTTP API. All right,
so let's move on to table
driven testing. Table driven testing basically allows
you to test your features or
functions with multiple inputs and expected outputs, right?
What does that mean? If you look at our
first test case, if you remember,
we only created the got and want variables. But if we want to test
our feature against multiple
use cases, then we can do
what I'll show you in this example. There's a simple
sum function which I've created that simply returns a plus b.
And if we go to the test function,
test sum, you can see
that I've created a simple struct for the cases,
with a description, the numbers and an expected value. What that
gives us is that I can create as many
use cases as I want for testing my sum function.
Look at the first two entries:
in the first I am providing the description "1 + 2",
num1 as 1 and num2 as 2, and expected will be 3.
So if we add 1 and 2, the expected result should
be 3. And if we provide 3 and 4 as inputs,
num1 as 3 and num2 as 4, the expected result should be 7.
So we have created a struct of cases,
and then we run a loop over the cases
using the range keyword. Now,
as I said before, t.Run is basically responsible
for running our test
cases, right? So here we call t.Run and pass the
description from the case, which is the text that
we are supposed to see on the terminal. Then we provide a function
that gets the testing.T, which we already discussed.
And inside the loop I am getting the result of
sum, passing in num1
and num2 from
the current case.
And for each of these cases, case number one
and case number two,
it will report whether our test case passed or failed.
So if I run my code:
let me clear the screen, cd
into the table directory,
and run go test.
Okay, with the verbose
flag you can see that we have run
a couple of test cases, 1 + 2 and 3 + 4, and all of
them passed. Okay, so now
you have the idea of what table driven testing is:
it's basically an approach where we provide
as many use cases, as many inputs, as we want to
get different outputs. So let me add another
input:
10 plus
45. I provide
num1 as 10 and num2 as 45, but I'm expecting,
let's say, 70, which is wrong, so it should fail. Right?
If I run this test again, we can see that our
test sum 1 + 2 passed, and test sum 3 + 4
also passed. But 10 + 45 has actually failed, because
we said we expected
70, but 10
plus 45 is 55. So that's how you can provide as many
inputs as you want and get different outputs according to your use
case. All right, so you got the idea of table driven
testing. There are also some testing frameworks provided
by the Go community. There is Gomega, which is basically a
matcher and assertion library. So if you
have some advanced level of assertions and you want to
test some complex use
cases that require matching features, then you can
go for the Gomega library.
Another one is GoCheck, basically a feature-rich
testing library for
more advanced and complex features. Testify is a toolkit
for mocks and assertions; it is similar to Gomega,
but it also provides mocking features so you can supply
fake data or fake responses,
I should say. GoMock is another dedicated mocking framework
that you can use to test your actual code base.
And there's another one called Ginkgo, which is basically a BDD testing
framework; BDD stands for behavior driven development. So it's
something where we describe the behavior
of our code in the form of specs.
So these are some testing frameworks already provided
by the Go community. Okay, so the main thing
is that whenever you are working with TDD, you need to follow
some best practices.
First and foremost, which I already discussed
at the start of my session: always
write the test cases before the actual code, because whenever you
write test cases first, you know what you are supposed to do
in your actual code, right? Write small and
focused tests. If you have a
feature that performs different sorts
of algorithms, you can write a small and focused test for
each part of that feature, like a test
for algorithm one and a test for algorithm two, and you
can set them up in a specific suite. So make sure
to write small and focused tests so that they are easily manageable
and maintainable. Use the go test command
to run your test cases, along with the -v flag for verbose logs, as
I showed you in the terminal in the live code: always
use go test for testing your Go test cases. Use
mock dependencies to simulate the actual behavior of a feature.
As we saw in our table driven testing
example, we used mock inputs; and not just inputs,
we can also mimic dependencies
to simulate the actual behavior our code has
to follow, right? Yes. So basically
you can use fake dependencies as well. Utilize the Go coverage tool: as I
said at the start of the session, if you want to see the coverage of your
tests, use go test -cover, which is
a good thing to do while you are writing your
test cases. Automate and refactor your
test cases using CI tools. Okay, so after writing your
test cases, make sure that from time to
time you are updating or refactoring them,
because at any given time that feature
can get changes, or the client can have a requirement like: we need to do
this and that, or we
need some replacement, or we need to add or
remove things. So we need to keep our test cases
in step with our code.
And if you decide to automate that,
it is a great practice. You can use different CI tools
like Travis CI, Jenkins, et cetera.
And always keep your test cases up to date. As
I said, always keep your test cases up to date:
you never know when requirements will change,
so make sure that your test
cases stay updated in the context of the feature. And even if you are using
some external framework or library, you need to update that as well,
so you don't get breakage at actual runtime.
All right guys, so thank you very much. This was my session,
and thank you to the Conf 42 Golang team for having me
here and inviting me to give this talk. You can follow me on
Twitter at mkhan, and on GitHub and LinkedIn, where I'm mostly active.
So let me know if you have any questions; you can reach out to me on
social media. And thank you.