Transcript
Hello everyone. Thanks for attending my talk today.
My name is Andrew Knight and I'm the Automation Panda.
I work as a developer advocate at Applitools, where I help
folks get the most value out of their QA work with automated visual testing.
I'm also director of Test Automation University, which offers several
online courses to help you learn testing and automation, and it's
completely free. Today,
I'm super excited to introduce a somewhat new idea to you and to our
industry. Open testing. What if
we open our tests like we open our source?
I'm not merely talking about creating open source test frameworks,
I'm talking about opening the tests themselves.
What if it became normal to share tests and automation procedures?
What if it became normal for companies to publicly share their test results?
And what are the degrees of openness in testing
for which we should strive as an industry? I think that
anybody in software, whether they're testers, developer advocates,
reliability engineers, managers,
whoever, can greatly improve the quality of our testing
work if we adopt principles of openness in our testing practices.
To help me explain, I'd like to share how I learned about the main
benefits of open source software and how we can cross those benefits over
into testing work. So let's
go way back in time to when I first encountered open
source software. I first started programming
when I was in high school. At 13 years old, I was an incoming
freshman at Parkville High School in their magnet school for math, science,
and computer science in good old Baltimore,
Maryland. Fun fact,
Parkville's mascots were the Knights, which is my last name.
All students in the magnet program needed to have a TI-83 Plus
graphing calculator. Now, mind you,
this was back in the day, before smartphones existed. Flip phones
were the cool trend. The TI-83 Plus was cutting-edge
handheld technology at that time. It was so advanced
that when I first got it, it took me five minutes to figure out how
to turn it off. Pro tip: hit the 2nd key
and then the ON button. I quickly learned
that the TI-83 Plus was just a mini computer in disguise.
Did you know that this thing had a full programming language built into
it? TI-BASIC.
Within the first two weeks of my freshman intro to computer science class,
our teacher taught us how to program math formulas. Things like slope,
circle circumference and area, even the quadratic formula.
You name it, I programmed it. Even if it wasn't a homework assignment,
it felt awesome. It was more fun to me than
playing video games, and I was a huge Nintendo fan.
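As a rough illustration, here is that kind of formula program sketched in C# rather than TI-BASIC (the coefficients are just sample values for the demo):

    using System;

    // A quadratic-formula program in the spirit of those TI-BASIC ones,
    // sketched here in C#. Coefficients are illustrative sample values.
    class QuadraticFormula
    {
        static void Main()
        {
            double a = 1, b = -3, c = 2;             // solve x^2 - 3x + 2 = 0
            double discriminant = b * b - 4 * a * c; // b^2 - 4ac

            if (discriminant < 0)
            {
                Console.WriteLine("No real roots");
                return;
            }

            double root1 = (-b + Math.Sqrt(discriminant)) / (2 * a);
            double root2 = (-b - Math.Sqrt(discriminant)) / (2 * a);
            Console.WriteLine($"Roots: {root1} and {root2}"); // prints 2 and 1
        }
    }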
There were two extra features of the TI-83 Plus that made it
ideal for programming. First, it had a link cable
for sharing programs. Two people could connect their calculators and
copy programs from one to the other. Needless to say,
with all my formulas, I became quite popular around test
time. Second, anyone could open any
program file on the calculator and read its code. The TI-BASIC
source code could not be hidden by design.
It was open source. This is
how I learned my very first lesson about open source software.
Open source helps me learn. Whenever I
would copy programs from others, including games,
I would open the program and read the code to see how it worked.
Sometimes I would make changes to improve it.
More importantly, though, many times I would learn something new
that would help me write better programs in the future. This is how
I taught myself to code all on this tiny screen,
all through ripping open other people's code and learning it. And all because the
code was open to me. From the moment I wrote my
first calculator program, I knew I wanted to be a software engineer.
I had that spark. Let's fast forward
to college. I entered the computer science program at Rochester Institute
of Technology. Go tigers.
By my freshman year in college, I had learned Java,
C, a little Python, and of all things,
COBOL. All the code in all my projects until that
point had been written entirely by me. Sometimes I
would look at examples in books as a guide, but I never copied
other people's code. In fact, if I
had copied code and got caught by a professor, I would have failed the
assignment. Then, in my
first software engineering course, we learned how to write unit tests using
a library called JUnit.
We downloaded JUnit from somewhere online.
This was before Maven became big, and we hooked it into our
Java classpath. Then we started writing test
classes with test cases as methods, and somehow
it all ran magically in ways I couldn't figure out. At the time,
I was astounded that I could use software that I didn't write myself
in a project. Permission from a professor was one thing,
but the fact that somebody out there in the world was giving away good code
for free just blew my mind. I saw the value
in unit tests, and I immediately saw the value in a simple
free test framework like JUnit. That's when
I learned my second lesson about open source software.
Open source helps me become a better developer.
I could have written my own test framework, but that would have taken me a
lot of time. JUnit was ready to go
and free to use. Plus, since several individuals had
already spent years developing JUnit, it would have
had more features and fewer bugs than anything I could develop
on my own. For a college project, using a package
like JUnit helped me write and run my unit tests without needing to
become an expert in test automation frameworks.
I could build cool things without needing to build every single component.
That revelation felt empowering.
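For a flavor of what those first unit tests looked like: the originals were JUnit in Java, but the same idea reads like this in C# with NUnit, a JUnit-style framework mentioned later in this talk (the Slope method is a hypothetical example):

    using NUnit.Framework;

    // A minimal sketch of a unit test. The talk used JUnit in Java;
    // this is the same idea in C# with NUnit. Slope() is a hypothetical
    // method under test, echoing the calculator formulas from earlier.
    [TestFixture]
    public class SlopeTests
    {
        private static double Slope(double x1, double y1, double x2, double y2) =>
            (y2 - y1) / (x2 - x1);

        [Test]
        public void Slope_IsRiseOverRun()
        {
            double result = Slope(0, 0, 2, 4); // rise 4 over run 2
            Assert.That(result, Is.EqualTo(2.0));
        }
    }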
Within a few years of taking that software engineering course,
sites for hosting open source projects like GitHub became big.
Programming languages had package indexes like Maven,
NuGet, PyPI, and npm,
which all became mainstays of software development.
The running joke within Python was that you could import
anything. I was living in a future far
from swapping calculator games with link cables.
When I graduated college, I was zealous for open source software.
I believed in it. I was an ardent supporter,
but I was mostly a consumer.
As a software engineer in test, I used many major
test tools and frameworks: JUnit,
TestNG, Cucumber, NUnit, xUnit.net,
SpecFlow, pytest, Jasmine,
Mocha, Selenium WebDriver, RestSharp, REST Assured,
Playwright. The list rolls on and on.
As a Python developer, I used many modules and frameworks
aside from testing within the Python ecosystem, like
Django, requests, and Flask.
Then I got a chance to give back.
I launched an open source project called Boa Constrictor.
Boa Constrictor is a .NET implementation of the Screenplay
pattern. It helps you make better interactions for better automation.
Out of the box, it provides Web UI interactions using Selenium
WebDriver and REST API interactions using RestSharp,
but you can use it to implement any interactions that you want.
My company and I released Boa Constrictor publicly in October
of 2020. You can check out the code on GitHub.
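For a taste of the Screenplay pattern, here is a short sketch modeled on Boa Constrictor's README examples; treat the exact type and method names as approximate and check the GitHub repo for the current API:

    using Boa.Constrictor.Screenplay;   // Screenplay core: Actors, Abilities, Tasks
    using Boa.Constrictor.Selenium;     // Web UI interactions (namespace varies by version)
    using OpenQA.Selenium.Chrome;

    // Approximate Screenplay-style usage: an Actor gains an Ability
    // and attempts interactions instead of calling WebDriver directly.
    class ScreenplaySketch
    {
        static void Main()
        {
            IActor actor = new Actor(name: "Andy", logger: new ConsoleLogger());
            actor.Can(BrowseTheWeb.With(new ChromeDriver()));
            actor.AttemptsTo(Navigate.ToUrl("https://duckduckgo.com"));
            // Questions (via actor.AskingFor) read state back out for assertions.
        }
    }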
Originally, my team and I at Q2 developed all the code.
We released it as an open source project, hoping that it could help others in
the industry. But then something cool happened.
Folks in the industry helped us. We started receiving
pull requests for new features. In fact, we even
started using some new interactions developed by community members internally
in our company's test automation project. That's when I
learned my third lesson about open source software.
Open source helps me become a better maintainer.
Large projects need all the help they can get.
Even a team of core maintainers can't always handle all the work.
However, when a project is open source, anyone who uses it
can help out. Each little contribution can add value
for the whole user base. Maintaining software
then becomes easier, and the project can become more impactful.
As a software engineer in test, I found myself caught between two
worlds. In one world, I'm a developer at heart
who loves to write code and to solve problems. In the other world,
I'm a software quality professional who tests software and advocates
for improvements. These worlds come together primarily through
testing, automation, and continuous integration.
However, throughout my entire career, I keep hitting one major
problem. Software quality has a
problem with quality. Let that sink in.
Software quality has a big problem with quality.
Every manual test case repository and every
test automation project I've ever seen is riddled
with duplication. Duplication in the steps,
duplication in the patterns, and duplication in the flaws.
How many times have I seen the same 23 setup steps copy-pasted
across 149 test cases? How many
times have I seen automation code use static variables or singletons to
share things globally instead of dependency injection?
How many times have I seen a 90% success rate treated as
a good day with limited flakiness?
How many tests actually cover something valuable and meaningful?
And how can we call ourselves quality professionals when our own work
suffers from poor quality?
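To make one of those habits concrete, here is a hedged C# sketch, with hypothetical names, of globally shared state versus dependency injection:

    using OpenQA.Selenium;

    // The habit: a static singleton that every test reaches into.
    // Shared mutable state breeds ordering dependencies and flaky parallel runs.
    public static class GlobalDriver
    {
        public static IWebDriver Driver; // one browser shared by all tests
    }

    // The alternative: inject the dependency, so each consumer
    // gets its own collaborator and tests stay independent.
    public class LoginPage
    {
        private readonly IWebDriver _driver;

        public LoginPage(IWebDriver driver) => _driver = driver;
    }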
Why are all these problems so pervasive? I think they
build up over time. Things like copying and pasting
one time feel innocuous, a rogue variable won't be noticed,
or a flaky test is not a big deal.
Once this starts happening, teams incessantly keep repeating these
practices until they make a mess.
The developer in me desperately wants to solve these problems, but how?
I can fix these problems in my own projects, but because my
tests are sealed behind company doors, I can't use them to show
others how to do it at scale. And a lot of the articles, courses,
and tutorials out there are full of toy examples.
So how do we get teams to break bad habits? I think
our industry needs a culture change. If we could be more open with
testing, like we are open with our source, then perhaps we
could bring many of the benefits we see from open source into
testing. Things like helping people learn testing,
helping people become better testers, and helping people become better test
maintainers. If we cultivate a culture of
openness, then we can lead better practices by example.
Furthermore, if we become transparent about our quality, it could
bolster our users' confidence in our products while simultaneously keeping
us motivated to keep quality high.
For the rest of this talk, I'm going to suggest multiple ways to start pushing
for this idea of open testing. Not every
possibility may be applicable for every circumstance, but my
goal for today is to get you all thinking about it. Hopefully these
ideas can inspire better practices for better quality.
For a starting point of reference, let's consider the least open context
for testing. Imagine a team where testing work is
entirely siloed by role. In this
type of team, there is a harsh line between developers and testers.
Only the testers ever see the test cases, access test repositories,
or touch automation. Test cases and test plans are essentially
closed to outsiders due to access, readability, or
even apathy. The only outputs
from testers are failure percentages and bug reports.
Results are based more on trust than evidence.
This kind of team sounds pretty bleak. I hope
this isn't the kind of team you're on, but maybe it is.
Let's see how openness can make things better.
The first step towards openness is internal openness.
Let's break down some silos. Testers don't exclusively
own quality. Not everyone needs to be a tester
by title, but everyone on the team should be quality conscious.
In fact, any software development team has three
major roles: business, development, and
testing. Business looks
for what problems to solve. Development addresses
how to implement solutions, and testing provides feedback
on the solutions. These three roles together are
known as the three amigos, or sometimes also called the
three hats. Each role
offers a valuable perspective with unique expertise.
When the three amigos stay apart, features under development don't have
the benefit of multiple perspectives. They might have serious
design flaws, they might be unreasonable to implement, or they
might be difficult to test. Misunderstandings could also cause developers
to build the wrong things or testers to write useless tests.
On the other hand, when the three amigos get together,
they can jointly contribute to the design of product features.
Everyone can get on the same page. The team can
build quality into the product from the start.
They can do activities like question storming and example mapping
to help them define behaviors.
As part of this collaboration, not everyone may end up writing tests,
but everyone will be thinking about quality.
Testing then becomes easier because expected behaviors are well defined
and well understood. Testers get deeper insights
into what is important to cover. When testers
share results and open bugs, other team members are more receptive
because the feedback is more meaningful and valuable.
We practiced three amigos collaboration at my previous company,
Q2. I'd like you to meet my friend Steve.
Steve was a developer who saw value in example mapping.
Many times he'd pick up poorly defined user stories with
conflicting information or missing acceptance criteria.
Sometimes he'd burn a whole sprint just trying to figure
things out. Once he learned about example mapping, he started
setting up half-hour sessions with the other two amigos, one of whom was
me, to better understand user stories from the start. And he
got into it. Thanks to proactive collaboration,
he could develop the stories more smoothly.
One time I remember we stopped working on a story because
we couldn't justify its business value, which saved Steve about two
weeks' worth of pointless work. The story doesn't end there.
Steve is now a software engineer in test. He shifted
left so hard that he shifted into a whole new role.
He's become a champion for quality in our products and I was
blessed to work with him. Another step
toward open testing is living documentation through specification
by example. Collaboration like we saw
with the three amigos is great, but the value it provides
can be fleeting if it is not written down.
Teams need artifacts to record designs, examples,
and eventually test cases. One of the reasons
why I love example mapping is because it facilitates
teams to spell out stories, rules,
examples, and questions onto color-coded cards that they
can keep for future refinement.
Stories become work items, rules become acceptance criteria,
examples become test cases, and questions become
spikes or future stories. To learn
more about example mapping, check out this article after my talk.
During example mapping, folks typically write cards quickly.
An example card describes a behavior to test, but it might not
carefully design the scenario. It needs
further refinement. Defining behaviors using
a clear, concise format like Given-When-Then makes behaviors
easy to understand and easy to test.
For example, let's say we wanted to test a web search engine.
The example could be to search for a phrase like "panda".
We could write this example as the following scenario: given
the search engine page is displayed, when the user searches
for the phrase "panda", then the result page shows a
list of links for "panda". This special Given-When-Then
format is known as the Gherkin language.
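Laid out in Gherkin's own syntax, that scenario would look like this (the feature name is added just for illustration):

    Feature: Web Search

      Scenario: Search for a phrase
        Given the search engine page is displayed
        When the user searches for the phrase "panda"
        Then the result page shows a list of links for "panda"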
Gherkin comes from behavior-driven development tools like Cucumber,
but it can be helpful for any kind of testing.
Gherkin defines testable behaviors in a concise way that
follows the Arrange-Act-Assert pattern: you set things up,
you interact with the feature, and you verify the outcomes.
Furthermore, Gherkin encourages specification by example.
This scenario provides clear instructions on how to perform a
search. It has real data, which is the search phrase,
and clear results. Using real
world examples and specifications like this helps all three amigos
understand the precise behavior.
Behavior specifications are multifaceted artifacts.
They are requirements that define how a feature should behave.
They are acceptance criteria that must be met for a deliverable to be
complete. They are test cases with clear
instructions. They could become automated scripts
with the right kind of test framework. And ultimately, they are
living documentation for the product.
Living documentation is open and powerful.
Anyone on the team or outside the team can read it to learn about
the product. Refining ideas into example cards
into behavior specs becomes a pipeline that delivers
living doc as a byproduct of the software development cycle.
SpecFlow is one of the best frameworks that supports this type of openness,
with specification by example and living documentation.
SpecFlow is a free and open source automation framework for
.NET. In SpecFlow, you write your test cases as Gherkin
scenarios, and you automate each Given-When-Then step using
C# methods. One of SpecFlow's niftiest
features, however, is SpecFlow+ LivingDoc. Most test
frameworks focus exclusively on automation code. When a
test is automated, then only a programmer can read it and understand it.
Gherkin makes this easier because steps are written in plain language,
but Gherkin scenarios are nevertheless stored in the automation repository,
inaccessible to many team members.
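As a rough sketch, not code from the talk: binding the earlier search scenario's steps to C# methods in SpecFlow looks something like this, with placeholder bodies:

    using TechTalk.SpecFlow;

    // SpecFlow matches each Gherkin step to a C# method via these attributes.
    // Bodies are placeholders; real steps would drive a browser or an API.
    [Binding]
    public class WebSearchSteps
    {
        [Given(@"the search engine page is displayed")]
        public void GivenTheSearchEnginePageIsDisplayed()
        {
            // e.g., navigate the browser to the search page
        }

        [When(@"the user searches for the phrase ""(.*)""")]
        public void WhenTheUserSearchesForThePhrase(string phrase)
        {
            // e.g., type the phrase into the search box and submit
        }

        [Then(@"the result page shows a list of links for ""(.*)""")]
        public void ThenTheResultPageShowsAListOfLinksFor(string phrase)
        {
            // e.g., assert that result links for the phrase appear
        }
    }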
SpecFlow+ LivingDoc breaks that pattern. It turns
Gherkin scenarios into a searchable doc site accessible to all
three amigos. It makes test cases and test automation
much more open. Furthermore,
notice how living doc provides test results for each scenario.
Green check marks indicate passing tests, while Red X's indicate
failures. Historically, testers use reports like this
to provide feedback in house to their managers and developers.
Results indicate what works and what needs to be fixed.
However, test results can be useful to more people than just
internal team members. What if test results were shared with users
and customers? I'm going to pause and say that again because
it might seem shocking. What if users and customers
could see test results? Think about it.
Open test results have very positive effects.
Transparency with users builds trust.
If users can see that things are tested and working,
then they will gain confidence in the quality of the product.
If they could peer into the living documentation, then they
could learn how to use the product even better.
On the flip side, transparency holds development teams accountable
to keeping quality high both in the product and in
the testing. Open test results offer
these benefits only if the results can be trusted.
If test results are useless or failures are rampant, then public test
results could actually hurt the ones developing the product.
This type of radical transparency would require an enormous culture shift.
It may not be appropriate for every single company to create public dashboards
with all their test results, but it could be a strategic differentiator when used
wisely. For example, when I worked at Q2,
we shared this very living doc report with specific PrecisionLender
customers every two-week release.
It built trust and kept the contracts moving.
Plus, because the living doc report included only
high level behavior specifications with simple results,
even a vice president could read it. We could share these
tests without sharing automation code.
Let's keep extending open testing outward. In addition to
sharing test results in living documentation, folks can also
share test tools, frameworks, and other parts of their tests.
This is where open testing truly is open source.
We already saw a bunch of open source projects for test automation.
As an industry, we are truly blessed with so many incredible projects.
Every single one of these logos represents a team of testers who
not only solved a problem, but decided to share their solution with
the world. Each solution is abstract enough to
apply to many circumstances, but concrete enough to
provide a helpful implementation. Collectively,
the projects on this page have probably been downloaded more than a
billion times, and that's no joke. And if
you want, you could read the open source code for any of them.
So far, all the ways of approaching open testing are
things we could do today. Many of us are probably already doing these things,
even if we didn't think of them under the phrase open testing.
But where can these ideas go in the future? My mind
goes back to one of the big problems in testing that I mentioned earlier,
duplication. Opening up
collaboration fixes some bad habits, and sharing components
eliminates some duplication in the plumbing of test automation.
But so many of our tests across the industry repeat
the same kinds of steps and follow the same types
of patterns. For example,
think about anytime you've ordered something from an online store.
It could be Amazon, Walmart,
Target, whatever. Every single online store
has a virtual shopping cart. Whenever you want to buy something,
you add it to your cart. Then, when you're done
shopping, you proceed to pay for all the items in your cart.
If you decide you don't want something anymore, you remove it
from your cart. Easy peasy. As I
describe this type of shopping cart, I don't need to show you screenshots from the
store website to explain it. Y'all have done so much
online shopping that you intuitively know how it works,
regardless of the store. Heck, I recently ordered
a bunch of parts for an old Volkswagen Beetle from a site named JBugs,
and the shopping cart was the same.
If so many applications have the same parts like these,
why do we keep duplicating the same test in different places?
Think about it. Think about how many times different teams have
written nearly identical shopping cart tests.
Ouch. Think about how much time was wasted on
that duplication of effort. I think
this is something where artificial intelligence and machine learning could help.
What if we could develop machine learning models to learn common behaviors for
apps and services? The learning agents
would look for things like standard icons and typical workflows.
We could essentially create test suites for things like login,
search, shopping, and payment that could run successfully
on most apps. These types of tests probably
couldn't cover everything in a given application, but they could cover basic
common behaviors. Maybe that could cover a
quarter of all behaviors worth testing, maybe a third,
maybe even a half. Hey, every little bit helps.
Now imagine sharing those generic test suites publicly.
In the same way developers have open source projects to help expedite their
coding, and in the same way data scientists have open data sets used
for modeling. Testers could have open test suites
that they could pick up and run as applicable, not test
tools, but actual runnable tests
that could run against any application. If these kinds of test
suites proved to be valuable, then prominent ones could become
universally accepted bars of quality for software apps.
For example, in the future, companies could download
and execute run-on-any-system tests for
the apps that they're developing, in addition to the tests that they develop in
house. I think that could be a really cool opportunity.
So as we've covered, open testing could take
many forms. It could be openness in collaboration
to build better quality from the start. It could
be openness in specification by example and living documentation.
It could be openness in sharing tests and their results with customers and users.
It could be openness in sharing tools, frameworks and platforms.
It could be openness in building shared test suites for common application behaviors.
Some of these ideas may seem far-fetched or aspirational,
but quite honestly, I think each of them could add lots of value to
existing testing practices. I think every tester and every
test team should look at this list and ask,
hmm, could we try some of these?
Perhaps a team could take baby steps with better collaboration and better
specification. Perhaps a team has a cool project they
built in house that they could release as an open source project, like I
did with Boa Constrictor. Perhaps there's a
startup idea in using machine learning to generate tests. Who knows?
Could be cool. Perhaps there are other ways
to achieve open testing that aren't covered here.
What do y'all think? These are just my ideas.
I'm sure y'all out there have even better ones.
We should also consider the flip side. Are there certain aspects
of testing that should remain closed? My mind goes to
security. Could fully open testing inadvertently reveal
security vulnerabilities? Could lack of coverage in some
areas welcome expedited exploitation?
I don't know, but I think we should consider negative consequences like
these.
My goal in today's talk is to inspire conversations about open testing.
So here are three questions to jumpstart those conversations.
Number one, how is your testing today?
In what ways is it already open, and in what ways is
it closed?
Question two: how could you improve your testing
with incremental openness? We're talking baby steps
here. Small improvements that you could easily achieve today
could be as small as trying example mapping or joining
a mob programming session. Question three:
how could your testing improve with radical openness?
Shoot the moon. Dream big. Get creative.
In the world of software, anything is possible.
We should also remember that open testing isn't a goal unto itself,
it's a means to an end. And that end is higher quality,
quality in our practices, quality in our artifacts,
and ultimately, quality in the software we create.
We shouldn't seek openness in testing just because I'm up here on stage
or behind the Zoom session spouting a ton of buzzwords.
At the same time, don't be so quick to brush these ideas off.
We should always be seeking ways for perpetual improvement.
Remember that this whole idea of open testing came from the benefits
of open source code, and they have been tried and true.
So thank you for listening to my talk today. Again, my name
is Andy Knight and I'm the Automation Panda. I'm a developer advocate
at Applitools and director of Test Automation University. Be sure
to read my blog and follow me on Twitter at @AutomationPanda. I hope
you feel inspired to pursue open testing, and I'd love to
spark up a conversation. So enjoy the rest of Conf42 Site
Reliability Engineering. Bye.