Transcript
This transcript was autogenerated.
Good day everyone. Welcome to my talk on "Who broke the build?":
using Kuttl to improve end-to-end testing and release faster.
No one wants to be responsible for breaking the build,
but what can you as a developer do to avoid being the
bad guy? How can project leads enable their teams
to reduce the occurrence of broken builds?
Have you ever seen this kind of scenario in your company, where
a manager asks who has actually broken the build?
He's a little furious, and
the developer says, "I don't know who broke the build, but
I will find out. I'm not sure if I
could have broken it myself; let me check
with the team and come back." Okay. A quick
introduction about myself: I'm Ram Mohan Rao Chuka, senior software
engineer, R&D India, Bangalore. I'm passionate
about open source, and I love playing table
tennis. I'm reachable on both
LinkedIn and Twitter, newly called
X, at ichuka. Let me
give a quick introduction of my company so that
you understand the scenario
much better. JFrog was founded
in 2008. It's a publicly
listed company, around 1,100-plus employees
and growing, in nine locations, headquartered
in Netanya. We have around 7,000-plus,
nearly 8,000, customers. Most
of these customers are Fortune 100 companies;
of the Fortune 100 companies,
70% are our customers. We have around 62
products and expanding. Almost all
the products are hybrid, we have a
universal DevOps platform, and more importantly
we contribute to the community. So do
software upgrades matter? In the
current world, software runs everywhere.
It is used in healthcare, politics, social interaction
like Twitter, food and
water, transportation, and energy.
Do software updates really matter?
Yes. Every company is a software company, and
the quality of the software that is part of your company
or product gives you a differentiator.
So let's take a quick example of how software
upgrades really matter. Think of it this way: you
have a Tesla and you would like to do
an upgrade. The instructions clearly say
that during the upgrade
you will not be able to drive your vehicle.
So you would actually be stuck, unable to
drive, while the software upgrade is happening.
So, to summarize: JFrog's mission
is to power all the software updates in the world.
JFrog's mission and goal has always been to help customers
deliver upgrades faster, so
as to move at the speed of the business.
So we have a large
number of enterprises using our software,
primarily Internet and software technology
companies like LinkedIn, banking and finance, and,
you name it, we have it. And we actually
work with the open source and cloud
vendors as well: AWS, Azure, Google Cloud,
IBM Cloud, and all the logos that you see.
So let us switch back to the context of my talk.
So, around a decade
back, people used to follow a waterfall
model: they would have the requirements frozen,
then go with an architecture and a design, then development,
post that, QA and a release, and then production
and maintenance, which would take almost a year
or two for the entire set of features to be released
to the customers. Then we had something
like the agile model, where you do the
requirements and design and
roll out to production within
a couple of weeks or so. And then a new
model evolved, where dev and
operations collaborate with each other to
release faster. It's more like agile fast-forwarded,
where, if needed, releases can be
done multiple times in a day.
So DevOps actually means collaboration
between both devs and operations, no longer working in silos, to
give the desired output for the customers:
releasing much more stable
updates to the product, or software upgrades.
This means we have moved from a waterfall
model, from a yearly or monthly release, to a
daily release cycle in DevOps.
Let's start with the agenda of my talk,
just an overview of what I would cover in this session.
I would first start with what an ideal development environment would
look like, and this is more to do with developer productivity.
In this session I will cover a quick history of our testing
challenges and what led us to Kuttl, and
the benefits of our new testing approach, which is easy to
configure with a minimal investment,
and how we combine Kuttl and CI pipelines
for a more streamlined process and fewer broken builds.
So we were evaluating a couple of tools:
how can we leverage local testing in our
dev environments, so that developers don't need
to rely on a remote CI/CD server to run their tests?
So let's walk through each one
of them and see what
our testing challenges were and what led us to Kuttl.
Just an overview of how an ideal development environment
would look. It's more like a single-click
setup where everything is automated,
you develop and test locally, and, more
importantly, it's the same as the production environment, which
lets you reproduce production issues in a local environment.
If you can debug the issues locally, you can
fix them and release faster.
So say, for example,
a developer joins a new team. As
part of his onboarding process, he would be given manual
steps: look up a wiki page,
and do all the installations based on the setup, which would actually take
a day or two to set up his environment.
But using automation you
can have some scripts or modules written
that you can just run, so that the development environment is set up
within minutes. Instead
of taking hours or days, you can set up the environment using
some automation within minutes.
This also means no manual steps, which implies being error-free,
and, more importantly, quick
reload: when you can develop everything locally,
you can deploy and test it locally,
so the reload cycle is much faster.
And as I previously said, the dev environment should be the same
as the production environment. This helps save
us time and reproduce production issues with ease.
Okay, let's come to the main
problem that most of our developers face.
So in feature-branch development, a developer works
on a feature,
writes some tests, which may be unit tests,
based on the feature developed,
and then commits and pushes
to a remote git repository.
Ideally, in most of the scenarios what we
have seen is that these end-to-end tests are very difficult to
set up, most developers don't write end-to-end
tests, and these tests are actually executed on
a remote CI server. So the problem is that when
a pull request or a merge request is raised,
since these end-to-end tests are executed on a
remote CI server, the tests run for a couple of hours, maybe more
than that. Then, if a test fails, I
need to come back and read the logs on
the remote CI/CD server to find out what has failed.
Then, if I identify the issue, I would be able
to fix it and commit again. So this round trip
for fixing an issue is
huge. So is there any better
way to avoid running these
end-to-end tests on a remote server? We
had an idea of how these end-to-end
tests could be leveraged in a local environment.
So let me quickly walk
through the remote end-to-end test flow with a pictorial depiction.
So when a developer writes code, he writes the unit tests. If everything
passes, he raises a pull request or
a merge request on a remote CI/CD server.
The tests run, and if they fail, the cycle
continues until the merge request is ready. After a
successful end-to-end test run, with a code review,
you would be able to merge. But is there
any alternative to this problem that most
developers face, of not running end-to-end tests locally?
So we had an idea: how can we leverage
these end-to-end tests locally? Instead of
running the remote end-to-end tests, use local end-to-end
tests which are exactly similar to how these
tests run on a remote CI/CD server. That means an exact
replica of testing on a production instance, or an
environment very similar to it, in a dev environment. It also means
you don't need to push your changes directly to a git branch;
you can just commit locally, or have the changes in a
feature branch, and test with
that. Since the environment is entirely local,
if tests fail, and if you have
a good configuration of the
system, these end-to-end tests run much faster
and you can fix the failures
locally. If everything works, then you raise
a pull request or a merge request. So let me quickly give
you a pictorial depiction
of it. What we are saying is: instead of
running these end-to-end tests remotely,
move them locally, set up an infrastructure
very similar to a dev environment which sets
up your end-to-end environment, run those tests,
and see that everything works.
So while we were evaluating how we could run
these end-to-end tests locally, we were fortunate to
look out for some open source tooling, and we identified
a tool called Kuttl. In talking to our teams,
we discovered that most developers weren't running sufficient
integration and end-to-end tests in their local environments,
because it's too difficult to set up and administer
test environments in an efficient way. That's
why we decided to rethink our entire local testing, in
hopes of cutting down the headaches and
valuable time wasted. So we discovered a tool called
Kuttl. Connecting Kuttl to our CI builds has empowered our developers
to easily configure
a dev environment locally which accurately matches the
final test environment where the end-to-end tests run.
So let me give you a quick overview of Kuttl,
and then a quick demo of how
Kuttl can be used.
Using Kuttl, you can run end-to-end tests locally.
So Kuttl is a Kubernetes test tool.
It is a toolkit for writing tests,
mainly designed for testing operators; and testing
Kubernetes operators is not easy. The motivation here is
to leverage the existing Kubernetes ecosystem for
a resource-management way of doing
setup, and to easily assert state within the
cluster. What I mean by that is: instead of writing
Go code or Java code to get the
current status of a Kubernetes cluster
and do an assertion, you use a YAML-based
declarative way of testing. Kuttl
provides a YAML-based declarative way of doing testing,
which means you don't need to learn a new language.
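To make that concrete, here is a sketch of what such a declarative check could look like; the file name and label are illustrative assumptions, not from the talk. Instead of writing client code, you describe the state you expect, and Kuttl compares it against the cluster until it matches or times out:

```yaml
# my-assert.yaml (hypothetical sketch) -- Kuttl treats this as an assertion:
# the test passes only once a matching Pod reaches this state.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app        # illustrative label
status:
  phase: Running
```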
Even if you are not comfortable with
Java or Go, you can still write these end-to-end
tests using a YAML-based approach, which is
very easy. I will explain the different steps in the subsequent slides.
It also accelerates the ability to create
the end-to-end testing environment.
So let's get started with how to install Kuttl.
If you are a Mac user, you can use brew tap
kudobuilder/tap and then brew install kuttl. Kuttl is actually part of
kudobuilder; KUDO was a CNCF project for
developing operators, and they developed a tool called Kuttl for
testing the operators in a declarative way. If you are a Linux user,
you can use kubectl
krew install kuttl; krew is a kubectl
plugin manager. If you are
a Go developer, you can still do an API integration with
your code, very similar to how Selenium can
be done in Java: go get github.com/kudobuilder/kuttl.
So Kuttl is
both a CLI tool as well as an API for testing.
So who is Kuttl for? If you are a developer who wants to
test operators, or any Kubernetes resources for
that matter, without writing any Go code, and if
you want to test Kubernetes applications across different versions,
Kuttl provides a framework where tests are very easy
to create and execute as well.
And if you are an application admin who wants to automate the creation
of a Kubernetes cluster, you can use Ansible as well, but Kuttl
also provides a cloud-native way of doing testing.
So let me first start
with the three main parts of Kuttl.
First is the TestSuite.
You can see on line number two there, kind: TestSuite,
with an API version defined as kuttl.dev/v1beta1.
This is actually a CRD, a custom
resource definition. And you can see on
line number three, startKIND
set to false, which means that if you
don't want Kuttl to start a Kubernetes cluster,
you can use an existing cluster as well. kind, if you want to use it
locally, is also a Kubernetes cluster which you can
spin up using Docker Desktop.
For my demo I will be using startKIND: true,
because I don't want to use any external GCP, Azure,
or AWS clusters.
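Putting those pieces together, a minimal test suite configuration might look something like the following sketch; the directory name and chart repository URL are illustrative assumptions, not taken from the slides:

```yaml
# kuttl-test.yaml (sketch) -- the TestSuite configuration file
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
startKIND: true                # spin up a local kind cluster (requires Docker)
testDirs:
  - ./e2e                      # directory containing the test folders
timeout: 300                   # fail the suite if a step exceeds 300 seconds
commands:
  - command: helm repo add jfrog https://charts.jfrog.io   # prerequisite step
```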
I would like to demo the entire thing on my local environment, so I will be
using Docker Desktop. The prerequisite for this
demo, or for using Kuttl in conjunction
with kind, is to have Docker Desktop. And
the name is the end-to-end test name
that I've given it. Next are
the test directories where you want to run these tests,
and the commands you can see; I will explain each one
of them. So a TestSuite is a collection of tests,
and there is a YAML file, the test suite configuration file,
where how the suite looks is actually defined.
On line number eleven you can see a timeout
of 300 seconds. Say the test suite runs for more than 300
seconds: the suite would fail with an error saying
that it timed out. So you can have a timeout
on running those test suites as well. So the first part is
the TestSuite. Next is the TestStep, where
you define kind: TestStep.
This is also a CRD, and it is where you execute
commands. It's more like: if you have a bash script, or
anything else, you can use it directly.
For my example, I will start
with installing Artifactory, a stateful
application, and then check that the
install went through successfully by doing an assertion.
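As a sketch, such an install step could be written like this; the release name, chart, and file name are illustrative assumptions:

```yaml
# 00-install.yaml (sketch) -- a TestStep that runs a command;
# Kuttl substitutes $NAMESPACE with the test's generated namespace.
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  - command: helm install artifactory jfrog/artifactory --namespace $NAMESPACE
```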
So a test is a collection of test steps;
you can have n number of test
steps inside a
test. I will show an example
in my demo. Next is the assertion part.
So once I run a test step, I would like to do an assertion
on whether the test successfully achieved its goal. This is where the
declarative way of testing happens:
since Artifactory is a StatefulSet application, I expect
the default installation to come up with
a single replica, and I also check whether that
replica is ready, that is, readyReplicas equals one.
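A corresponding assert file could be sketched as follows; the StatefulSet name is an illustrative assumption. Kuttl keeps re-checking the cluster until the resource matches this state or the timeout is hit:

```yaml
# 00-assert.yaml (sketch) -- passes once the StatefulSet reports one ready replica
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artifactory        # illustrative name
status:
  readyReplicas: 1
```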
So the test directory structure would look something like that.
So I have a demo application where I have a test
directory with end-to-end tests; I have
three to four tests that I will run. More importantly,
Kuttl supports parallel execution of tests, where you can run
n number of tests in parallel, eight by default.
So you can create a test suite with eight tests and
run all of them in parallel.
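If you want to tune that concurrency, the test suite file accepts a parallel setting; this fragment is a sketch, with an illustrative directory name:

```yaml
# kuttl-test.yaml fragment (sketch) -- cap how many tests run concurrently
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./e2e
parallel: 8      # up to eight tests at a time (the default)
```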
So let me quickly walk through a
demo. Let me share my terminal
screen and
give you the structure. I have four
tests that I would like to run.
So let me quickly open the
kuttl-test.yaml. Here you can see the
test suite which I will be running, which has
four tests; I'm doing an installation of
Artifactory in four different tests.
Before I run, I will make sure that Docker
is running, which is not the case right now,
so let me wait for Docker to come up. Okay, then I
run kubectl
kuttl test, which runs the entire suite.
Since I've specified startKIND
as true, it creates a kind
cluster; we can actually get the kubeconfig
file and connect to that cluster as well.
So let me do an export of this kubeconfig
file. I use an open source
tool called k9s to view Kubernetes clusters.
You can do a brew install of k9s.
So what this kubectl
kuttl test has done is execute the test suite, and
you can see it has run the prerequisite
commands defined as part of the test suite: it
has added the JFrog repository with
helm repo add, and then it has
run four tests. You can see the end-to-end test has four tests.
What it does is create four namespaces,
run these install tests, and
then do the assertions for these tests,
which will probably take five to ten minutes.
So in the meantime, let me go back to the documentation and help
you understand how the Kuttl documentation
is laid out. As
I said, Kuttl is from kudobuilder;
KUDO is a CNCF-approved
project, and it is quite actively maintained.
Kuttl is a declarative way of testing Kubernetes operators,
but it can be used for any Kubernetes resources.
So there is a specific Slack channel associated with
this in the Kubernetes Slack, called kudo.
You can go and ask there if you have any questions,
and they have good documentation of how Kuttl works.
As I previously discussed, Kuttl has three things:
the test suite, the test step, and the assertion, and it has
something called collectors and commands; you can go through the documentation.
There is some CLI usage showing how Kuttl
can be used: you can see brew install kuttl,
and then you run kubectl
kuttl test, which runs the entire test suite.
There are a few examples of how you can create
your own test suite. It's very simple;
go through the documentation, and if you have any questions you
can always reach out on the kudo
Slack channel. There are also some tricks
for when you want to load
images: say you want to
run these tests very frequently, and
you don't want certain images to be downloaded every time.
kind provides a way of
loading images much faster; it's more like caching those images.
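With Kuttl's kind integration, one way to do that caching is to preload images when the cluster starts; this fragment is a sketch, and the image name is an illustrative assumption:

```yaml
# kuttl-test.yaml fragment (sketch) -- load images into the kind cluster
# so tests don't pull them from a registry on every run
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
startKIND: true
kindContainers:
  - docker.io/library/nginx:1.25   # illustrative image to preload
```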
Let's switch back to the demo and see where we
are with the installation.
So as I said,
it has created four namespaces and it is trying
to run the install of Artifactory.
It's a stateful application, and we will be doing an assertion.
So let me take a quick look.
So k9s provides
a UI way of managing the clusters,
as you can see. My
assertion in these test cases is for
the StatefulSet application
to have one ready replica. So once
Artifactory starts up in each namespace --
you can see here that Artifactory has started in one of the namespaces --
once that assertion is completed, the namespace
actually gets deleted by Kuttl. So it cleans up once
the test is done. So once it has run the four tests,
it will also give out a report of what
tests it has run, which tests were successful, and
what the failures were. You can have
the report in XML or JSON format,
which can be integrated with the CI pipeline to see
how the tests failed. And one more
good thing about Kuttl is that it provides detailed output
logging on the console, so you can see
what is actually happening while the installation is running.
So you can see: once
a test is actually done, it deletes the namespace.
Let me switch to the pods view. k9s
provides a good way of switching between pods,
replica sets, deployments, and other aspects as
well; it's a very easy and convenient way
of managing Kubernetes clusters. You can
see a few upgrade tests are still running.
We will wait for the final result, but you
can also see,
in the logging, how the namespaces are getting created:
it randomly creates a namespace and
runs the install command on top of it.
So I'm running a helm
install of this application with some
default parameters, and a scale test.
So all this test
output is provided on the console so you can see if something breaks.
Say, for example, a test fails: you would
get a detailed report of why the test has
failed. Before I switch to a failed example,
I would just like to see the final report of how
this test run went.
So it is deleting the final namespace of that test,
and you can see it has run all the tests, and all the tests have
actually passed. Say, for example,
there is some failure due to some issue in our code: it
would give a detailed report of why it has failed. Say, for example,
you have set the replica count to two instead of one. It would
report an assertion failure saying that your test
case has failed: the expected output is one
but you have given two. So it gives
a clear indication of why the test has actually failed.
I've given a simple example of an install test, but Kuttl can be used
for any API testing as well. Say, for
example, you want to test an endpoint: you can do a POST request,
get the response back, and do an assertion on that response
as well. So let
me go back to my presentation for a
quick summary of references for what we have learned so
far. The Kuttl documentation is at
kuttl.dev/docs, the GitHub page is github.com/kudobuilder/kuttl,
there is a Slack channel called kudo, and
another open source tool that we have used is k9s, for managing
Kubernetes clusters. So let's have
a quick summary of what we have learned so far. Kuttl is,
most importantly, an open source tool: anybody
can contribute and it's free to use. It can be used for local
end-to-end testing.
When you have local end-to-end
testing and you are able to fix most of the issues locally,
that means fewer broken builds, which would
also mean you can release much faster,
which would eventually mean happy developers and happy customers.
If you have any questions, I'm readily available on
LinkedIn or on Twitter with my handle ichuka.
Thanks for joining my session.