Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, welcome to my talk about Testy, a minimal testing tool designed for teaching. A little bit about me: my name is Nahuel Garbezza. I'm a software engineer from Buenos Aires, Argentina. I like to build high quality software. I work at a company called 10Pines, and I'm also a teacher: I teach object-oriented programming and test-driven development.
Well, I'm going to share a tool with you. First of all, I want to share how the tool was born in the first place.
I had a challenge three years ago: I was about to teach object-oriented programming using JavaScript. I had been using Smalltalk for the previous six or seven years, and this was the first time I added JavaScript to the syllabus. The topics were mostly object-oriented design and test-driven development.
One thing that I always do in all my courses is make sure that I have the right tools, so that the concepts I'm presenting are easily adopted and there isn't too much complexity in the tools I'm using. That was the whole mindset, and this is how a tool like this was born.
One day in a class three years ago I ran out of exercises and I thought, well, what can be my next exercise to practice with my students? Inspired by a Kent Beck idea, I started to implement a testing tool with my students in class, in a mob programming style. I didn't know exactly what the result would look like, but we started from the very beginning, which is the assert. The most important part of a testing tool is the assert function.
I even made fun of that and tweeted, oh, I'm building a new testing framework. And that was just the assert function that you can see in the tweet: a function that takes two values, compares them, and if they are equal it reports ok; if not, it reports a failure. That was the walking skeleton for the tool, the seed. All the following functionality was built on top of that small function. We used it to do some sort of automated testing with our students. Of course it was very limited, but this is how we started.
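Roughly, that seed looked something like this; it is a reconstruction from the description above, not the original code from the tweet:

```javascript
// A reconstruction of the seed: compare two values and report the result.
function assert(actual, expected) {
  if (actual === expected) {
    console.log('ok');
  } else {
    console.log(`failure: expected ${expected} but got ${actual}`);
  }
}

assert(1 + 1, 2); // ok
assert(1 + 1, 3); // failure: expected 3 but got 2
```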
Then I continued developing this tool, and we started using it in our classes like any other testing tool you might imagine. As I went along, I started to build a vision: why do we need another testing tool? What do I want for this tool?
I knew from the very beginning that I wanted to build something simple. A source of inspiration is the article called Design Principles Behind Smalltalk, which I truly recommend you read. If you haven't read it yet, it's a great piece of computer science and software engineering history. If you don't know about Smalltalk, you should read about it too; I truly recommend the whole history to really understand object-oriented programming and simplicity in general. Most of the ideas that I try to reflect in this tool come from that article.
What do I mean by simplicity? I can identify three aspects that make software simple, and this tool in particular. First, this is a testing tool with no dependencies at all. Compare it to other testing tools like Jest: if you install Jest, right from the start you will have more than 300 dependencies and 42 megabytes in your node_modules. I didn't want that for my students, because we needed the tool for very simple exercises and code katas. I even wanted this tool to be installable in places where the Internet connection is not good. That's why the tool is just a few kilobytes, with no dependencies at all. It should be fast, and it should be easy to install and configure.
That was the spirit. Another thing that I wanted for this tool is that all the code should be understandable. There should be no dark magic behind it, which we sometimes see in frameworks and libraries, where we're even afraid to go and take a look at the code because we don't understand it; it's too complex. One decision that I made from the very beginning was to not use metaprogramming unless strictly necessary. If you look at the code today, there's no metaprogramming at all; there's no need for it. I wanted students to be able to take a look at the code, understand it, and even modify it if they have the knowledge to do that.
Another thing that I wanted for this tool is to not have too many features, just the set of features that I wanted. The idea is not to make a copy of every other testing tool. We started from the very beginning, as I showed you before, from an assert function, and then we built all the features on top of that.
As I mentioned earlier, my inspiration was Kent Beck in the Test-Driven Development by Example book, which is one of my favorite books and one I recommend you read. Kent starts by implementing a testing tool using TDD, so it's like implementing and testing the tool in the same process, which is very mind blowing, because how can you test something that you don't have yet? But if you read the book, it's brilliant. I tried to do the same with this tool: every feature that was added was implemented using test-driven development. That's why today the tool has a very high percentage of test coverage, beyond 95%.
That's the spirit. The tool was initially built three years ago, but I have kept developing it, adding more features, correcting bugs, and trying to make it usable for my students. I basically got feedback from them, and from myself using the tool, and built improvements on top of that. Today you can check out the code on GitHub, and it's downloadable from npm. The current version is 5.1.
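Assuming the npm package name is @pmoo/testy (check the GitHub repo for the exact name), installing it for a project is the usual one-liner:

```
npm install --save-dev @pmoo/testy
```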
I'm planning to make a new release soon, so stay tuned. It's very important for me to have this project built in a way that everyone can contribute, so the open source aspect of it matters a lot to me. That's why I try to do my best writing documentation about how to use the tool and how to contribute if you want to.
There is also documentation in English and Spanish, because my course is in Argentina and it's all in Spanish. I also added issue templates and PR templates, so if someone wants to report an issue, they will come across a template to fill in. Every change that is added to the tool is documented in a changelog. The project also uses semantic versioning, so you know which releases are breaking, which ones add new features, and which ones are just bug fixes. That's very important because, for me, this was the first project where I started getting contributions from other people.
I am very thankful to the 13 people who have contributed to this project as of today. I think that making the effort to build documentation and to write a nice backlog of issues, especially good first issues that are tagged in the project, helps people contribute to the project. I'm really grateful for that. That's the open source spirit of it.
One thing regarding contributors that I recommend if you're building an open source project is the all-contributors bot, which is a very good tool to recognize contributions. It generates a section in the readme where you can display all the contributors' faces and profiles. It's a way to build a community that right now is small, but maybe one day could be large.
Regarding the tool itself, I wanted it to have assertions, which is the most important thing, a way to describe different tests, and a way to group tests into test suites. That's all I wanted; I didn't want something more complex than that. The assertions are written in a fluent interface style that we are going to see in the demo later on. The idea is that you can check for object equality, for inclusion in an array, for exception raising, and for other common scenarios that we face during testing.
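As a rough sketch of that fluent style, assuming the @pmoo/testy package and method names like isEqualTo, includes and raises (double-check the exact names against the documentation):

```javascript
const { suite, test, assert } = require('@pmoo/testy');

suite('assertion styles', () => {
  test('the common checks described above', () => {
    assert.that(2 + 2).isEqualTo(4);    // object equality
    assert.that([1, 2, 3]).includes(2); // inclusion in an array
    // exception raising
    assert.that(() => { throw new Error('boom'); }).raises(new Error('boom'));
  });
});
```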
All the output from the tool is written to the console with colored output, where you can see errors, failures, and the things that were okay. It's also multilanguage, because I wanted the output to be available in other languages as well: right now English and Spanish, but it can be extended with more languages in the future. And there's a feature for marking tests as pending; many other testing tools have this idea of marking a work-in-progress kind of test. There are also two configuration parameters: fail fast mode and random order. The first one, if enabled, makes the whole process stop at the very first failure, instead of running all the tests and reporting all the potential failures. Random order, a setting that I recommend enabling by default, is the idea of running the tests in a different, random order every time. That makes sure that your tests are independent, which is a very good sign, and it's a good practice for your tests to be independent. So if you want to enforce that, you can enable the random order setting and all the tests will run each time in a different order.
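As a sketch of how those two settings can be enabled, assuming a .testyrc.json configuration file with failFast and randomOrder keys; the exact file name and keys are in the repository's documentation:

```json
{
  "failFast": true,
  "randomOrder": true
}
```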
So far, those are all features that every testing tool may have, so there is no difference between using Jest, Mocha, or this tool. But there are two things that I haven't seen in other testing tools and that I wanted for this tool, because I think they are beneficial for students. That's why I mark them as unique. Maybe there are other testing tools that do this, but I'm not aware of them.
The first one is that if you somehow run a test and the test doesn't run any assertions, that is an invalid scenario. It shouldn't happen; you should have at least one assertion. If that happens, most testing tools will report the test as a success, because there are no failures; that's the usual interpretation. But in this tool I tried to do something different: in that case I raise an error, and the error message says the test doesn't have any assertions, which is an invalid scenario. So if students make that mistake while they are learning to test or to do TDD, they get that feedback and say, oh, I need to write an assertion for this test.
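A minimal sketch of a test that would trigger that error, assuming the same @pmoo/testy imports as before (the exact error wording may differ):

```javascript
const { suite, test } = require('@pmoo/testy');

suite('learning to test', () => {
  // This test never calls assert, so instead of passing silently
  // it is reported as an error for having no assertions.
  test('I forgot the assertion', () => {
    const total = 2 + 2;
    // ...no assert here
  });
});
```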
That's one of the things I wanted for this tool. Another feature that I added, because I also saw a problem and wondered how I could make this tool give proper feedback to the student, is this idea of comparing undefined with undefined using an equality comparison. I'm going to show you in the demo why that is a problem, but the current behavior in the tool is that if you compare undefined with undefined, it will give you an error, because it's an undetermined scenario. In case you really want to test that a value is undefined, there's a special assertion for that, isUndefined, which is also built into the tool.
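A sketch of that scenario, assuming the same imports; the property typo is intentional, and the isUndefined name is my reading of the documentation:

```javascript
const { suite, test, assert } = require('@pmoo/testy');

suite('undefined comparisons', () => {
  // Both sides are undefined because of the typo, so Testy flags this
  // as undetermined instead of letting it pass.
  test('a typo should not produce a false positive', () => {
    const cat = { name: 'Tom' };
    const dog = { name: 'Rex' };
    assert.that(cat.nmae).isEqualTo(dog.nmae); // typo on purpose
  });

  // When undefined is really what you expect, use the dedicated assertion.
  test('a cat has no owner yet', () => {
    const cat = { name: 'Tom' };
    assert.that(cat.owner).isUndefined();
  });
});
```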
Going back a little bit to the simplicity aspect, I wanted to talk about how to control complexity, because once you keep adding features and code, eventually the tool might become complex. So can you control that somehow? Well, I don't have the right answer for that. What I do is look at different things: look for patterns in the code, and look at some quality metrics like code complexity, lines of code, or the technical debt ratio.
But one visual hint that helped me control the complexity is this idea of a module chart. In this chart I'm displaying all the modules that are in the tool, and as you can see there are fewer than 20, together with the dependencies between them. If I implement something, generate the graph for it, and it's too complex to understand, that means I'm adding complexity, so I try to keep this chart as simple as possible. It's a way to get a visual idea of how complex your software is. You can also use it to detect circular dependencies, which are usually a sign of something that is complex and might be written in a different way. By the way, the tool is called Madge, and I recommend it because it's interesting to see your software in a different way than just lines of code.
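If you want to try the same thing on your own project, a typical Madge invocation looks roughly like this; the paths are illustrative, and generating an image requires Graphviz to be installed:

```
# draw the module dependency graph for the lib/ folder
npx madge --image modules.svg lib/

# list circular dependencies, if there are any
npx madge --circular lib/
```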
All right, that's enough talking. I'm going to show you a little demo of how the tool works and what you can do with it. So I'm switching to my IDE. First of all, I have my testing tool checked out here. As I mentioned before, the tool is tested with itself, so all of Testy's tests are written in Testy. What I'm going to do is run my tests, and if I run them, I get 213 tests passed; that's the number of tests I have right now. This is the summary: you can see how many tests passed, and if there were failures we would see them here. At the very beginning you see a starting message with all the configuration values; in this case, I'm not using fail fast and I'm running the tests in random order.
Now I'm going to show you an example that I made for this conference: how to write tests using this tool. Writing a test is as easy as calling a test function that gets imported from the testing module. The test receives a name and a function that is the test body, pretty much like other testing tools; if you use Jest or Mocha, it's exactly the same, a name and a function representing the body. Tests are grouped inside a test suite, so inside the suite's function is the whole suite body, and you can write many tests in a test suite. As you can imagine, and as I mentioned before, a test should have at least one assertion. Here I'm writing a very, very simple test, which creates a cat and a dog that both have a name. This is the code for cat and dog; they are pretty simple, and they have the same property, called name. I am asserting that the name of the cat is equal to the name of the dog. I expect this test to pass, so I can do npm test and pass the name of this file, and in that case it runs just the file that I specified, like other testing tools do.
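A reconstruction of what that example file might look like; the Cat and Dog classes are illustrative, since the real demo code is not part of the transcript:

```javascript
const { suite, test, assert } = require('@pmoo/testy');

// Illustrative stand-ins for the demo's cat and dog objects.
class Cat { constructor(name) { this.name = name; } }
class Dog { constructor(name) { this.name = name; } }

suite('pets', () => {
  test('a cat and a dog can share a name', () => {
    const cat = new Cat('Toby');
    const dog = new Dog('Toby');
    assert.that(cat.name).isEqualTo(dog.name);
  });
});
```

You would then run it with npm test, passing the path to this file, as in the demo.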
Let's see some of the things that the tool provides. Let's say that I forget to write this assert and I run my test. If I run it, I get an error, and the error says this test doesn't have any assertions. This is feedback that students find very helpful when writing tests, so that is one of the unique features that I tried to build into this tool.
The other unique feature that I mentioned before is this idea of an undetermined result when both things are undefined. I guess you might have experienced this issue at some point in your career as a JavaScript developer, where you misspell a property. You have a typo, and that doesn't fail in JavaScript; it just returns undefined. If I end up with something like that, most testing tools will report it as a success, because this side will be undefined, that side will be undefined, and the equality is undefined versus undefined, so it's okay. Well, this tool doesn't do that. It reports a failure, and the failure says that the equality is an undetermination, because both parts are undefined. This is to prevent false positives, because this test is definitely a false positive: I'm not really testing that the names are equal. In fact, the names can be different and the test in regular testing tools will still pass. That's why I implemented this, because it's a problem that I see when people start writing tests; it can be confusing to have this type of false positive.
Regarding the fluent interface of the assertions, the basic structure is that you have an object called assert, where you specify that a particular subject matches a particular expectation. You can assert, for instance, that an array is empty, or that it includes a certain value; it always has the same structure. But you also have some shorter ways to do it, like the one right below: assert areEqual is an equivalent way of saying assert that something is equal to something else. Those are equivalent, and they are all documented. You can check for is null, you can check for is undefined, and there's a check for raising an exception, for instance; that's another useful one. Those are all available on the assert object, which you also import from the testing module. So you just need to import the module and grab the assert object to make assertions, along with suite and test. Now I will show you how the pending and fail things work.
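Roughly, the equivalence and the other built-in checks mentioned here look like this, with method names as I understand them from the documentation (areEqual, isEmpty, isNull, isUndefined), so verify them against the version you install:

```javascript
const { suite, test, assert } = require('@pmoo/testy');

suite('equivalent assertion forms', () => {
  test('fluent form and its shorthand', () => {
    assert.that('hello').isEqualTo('hello'); // fluent style
    assert.areEqual('hello', 'hello');       // shorthand for the same check
  });

  test('other built-in checks', () => {
    assert.that([]).isEmpty();
    assert.that(null).isNull();
    assert.that(undefined).isUndefined();
  });
});
```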
Let's say that I wanted to mark this test as work in progress because I didn't have time to fix it. I can mark it as pending, due to a particular reason. If I run that, you will see it in yellow. This is something that most testing tools do: it's a way to mark a test as neither a success nor an error; it's just pending, and it gets reported as pending. You can also explicitly fail a test if you reach a certain point where you know the test should fail. You can say fail with, and give an error message here, and this, as you might expect, will report a failure.
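A small sketch of both, assuming Testy exports pending and fail helpers next to suite and test, with dueTo and with as the method names described here (check the docs for the exact import):

```javascript
const { suite, test, fail, pending } = require('@pmoo/testy');

suite('work in progress and explicit failures', () => {
  // Reported in yellow as pending, with the given reason.
  test('something I did not have time to fix', () => {
    pending.dueTo('waiting to model the dog owner');
  });

  // Explicitly reported as a failure with the given message.
  test('a branch that should never be reached', () => {
    fail.with('this code path should not happen');
  });
});
```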
So in summary, you have five different states for a test. It can be a success. It can be a failure, if an assertion failed or fail was called explicitly. It can be an error, if something is invalid or if an unexpected exception was raised in the middle of the test execution. It can be pending. And there's also another state called skipped, which happens when you have fail fast enabled: let's say that you run ten tests and the second one fails, then the remaining eight tests are skipped and not executed at all.
That's pretty much the summary of the tool. There are more features, but this is the overview. I want to thank you for watching this talk. If you are interested in the tool, you can visit the repo on GitHub; there's plenty of documentation and issues. I would recommend it for small projects, like coding katas or exercises. If you are a teacher and want to use this tool for teaching, I would love your feedback on that. If you are interested in what I'm doing, you can follow me on Twitter and GitHub. That's it. Thank you so much.