Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, my name is Mesut Erkal.
I'm a software quality assurance engineer.
And like most of my colleagues, test automation is a very important part
of my daily tasks, because nowadays we no longer have big, seasonal releases.
Almost every day we push new commits and implement new features into the
working product in the production environment. And even when we are not
implementing new features, we are often changing or updating the behavior
of existing features, which means we are doing continuous delivery and
continuous integration.
Accordingly, this means we have to do continuous testing as well, because
we have to ensure that these frequent updates and development activities
do not break the working functionality.
So we have to continuously perform testing, verification,
and validation activities.
In this way, we can continuously ensure quality and improve our team's
confidence to deliver the best product to our customers.
In these terms, test automation is very important, because with solely
manual testing efforts it is not possible to keep up with all the
verification activities; the verification scope is huge and comprehensive.
Doing this frequently, almost every day, with manual effort is simply not
feasible with limited resources in limited time.
But if we already have implemented test code and automated test executions,
covering the scope becomes much easier, much faster, and more reliable.
But in these terms, reliability, robustness, and stability
are very important.
Even if we implement test cases, are we getting reliable
results from them?
And even if we automate all the test cases we have already designed,
does that mean we are catching all the possible or
potential vulnerabilities and bugs?
So in this session, we will talk about the potential risks, difficulties,
drawbacks, and test automation challenges.
And of course, if we are talking about challenges and problems,
we will also talk about solutions and some proposals to cope with
these kinds of challenges.
So, let's get started.
As we briefly discussed, test automation has great advantages and benefits.
First of all, it saves time. Executing test cases manually
can sometimes take hours or even days.
But with an automated run, we can execute several scripts to speed up the
execution, or even create virtual machines to divide the whole suite into
different subsets and utilize parallel runs
to reduce the total execution duration.
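The parallel-run idea above can be sketched in a few lines; the test functions and worker count here are hypothetical stand-ins for real spec files or suite subsets:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent test cases; in a real suite each
# worker would run a whole subset of spec files.
def test_login():
    return ("test_login", "passed")

def test_checkout():
    return ("test_checkout", "passed")

def test_search():
    return ("test_search", "passed")

def run_in_parallel(tests, workers=3):
    """Run the given test callables concurrently and collect their results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda t: t(), tests))

results = run_in_parallel([test_login, test_checkout, test_search])
```

With independent tests, the total duration approaches that of the slowest subset rather than the sum of all of them.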
And not only the execution duration: we can also improve the coverage.
For example, we can perform the same test scenario
with different data inputs.
That is, we can perform data-driven testing to apply not only the positive
scenarios but also the negative test cases, by providing
different negative data values or different data formats.
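A minimal data-driven sketch of this idea, with a hypothetical `validate_email` function exercised against both positive and negative inputs from a single data table:

```python
import re

# validate_email is a hypothetical feature under test, used here only to
# show the data-driven pattern.
def validate_email(value):
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", value))

# One scenario, many inputs: positive and negative cases in a single table.
test_data = [
    ("user@example.com", True),   # positive scenario
    ("user@example", False),      # negative: missing top-level domain
    ("", False),                  # negative: empty input
    ("no-at-sign.com", False),    # negative: wrong format
]

def run_data_driven(cases):
    """Return the (input, expected) pairs whose actual result disagrees."""
    return [(inp, exp) for inp, exp in cases if validate_email(inp) != exp]
```

Adding a new negative scenario is then just one more row in the table, not a new test implementation.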
All of this means we can reduce the manual effort.
But beyond that, there are some scenarios we cannot
cover with manual testing at all.
For example, when we do load testing or performance testing, we have
to send lots of concurrent requests: we have to create several requests
that will be sent to the servers, and we will be checking the responses
and the responsiveness of the servers.
For this purpose, we obviously cannot gather a lot of people who will
try to send different requests at the same time.
Instead, we can create those requests with our own machines and resources,
and then perform these kinds of load testing activities.
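A toy version of such a load test, generating concurrent requests from one machine; `handle_request` is a stand-in for a real HTTP call (e.g. via an HTTP client against your service):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for a real HTTP call against the server under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated network + server time
    return time.perf_counter() - start

def load_test(n_requests=50, concurrency=10):
    """Fire n_requests with the given concurrency and summarize latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(handle_request, range(n_requests)))
    return {
        "requests": len(latencies),
        "max_latency_s": max(latencies),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

stats = load_test()
```

Checking aggregate latency statistics against a budget is what turns this from a script into a performance test.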
So all these different aspects are great benefits and advantages
that we can gain by performing test automation.
But of course, there is a flip side to all these advantages, and
there are some difficulties we have to cope with.
So let's discuss them one by one.
The first difficulty we have to cope with is frequent updates and
changes. Since we are working in Agile environments, it is very
likely that behaviors will change.
Initially we design our features one way, but after going to the customers
and end users and getting feedback from them, we sometimes make changes
or updates and try to improve the usability or even the functionality
of some features.
This means that even if we implement the test scenarios to cover these use
cases once, it doesn't mean we can execute them forever without any
maintenance. We have to continuously maintain and pay attention
to these test cases.
Otherwise, as the famous cartoon on this slide shows, the customer's
expectation can turn out to be totally different from what we design
and provide to the end users.
So the proposal for coping with frequent updates is continuous maintenance.
In addition to changes and updates to working features, sometimes even
within a specific feature or use case we may have some instabilities.
For example, if we have A/B testing activities, we deploy different
instances. This is why I have a cute canary on the slide:
we may have canary deployments, including different instances
returning different results.
So sometimes the results coming from different instances will not match
our expected conditions, because in these cases we cannot know in advance
which instance or which version we will hit during our execution.
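One way to make assertions robust to this is to accept any currently deployed variant rather than a single hard-coded expectation; the version names and payloads below are invented for illustration:

```python
# The version names and payloads are invented; the point is that the check
# passes if the response matches ANY currently deployed variant.
ACCEPTED_RESPONSES = {
    "v1": {"greeting": "Hello"},
    "v2": {"greeting": "Hello", "locale": "en"},  # the canary adds a field
}

def matches_some_variant(response):
    """True if the response equals the expected payload of any known version."""
    return any(response == expected for expected in ACCEPTED_RESPONSES.values())
```

When the canary is promoted or rolled back, only the table of accepted variants needs updating, not every assertion.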
And there is another difficulty, namely testability.
Some features are not easy to test, especially if we are testing one
subsystem within a whole system: there are various interactions and
integrations with the other subsystems, and sometimes we don't have
access to those other parts of the system.
For example, let's assume a feature in which our subsystem under test
syncs itself after a piece of data is created
in another part of the system.
When a data object is created in the other subsystem, our subsystem
under test should sync itself by pulling the newly created object.
But if we don't have access to create an object in the other
subsystem, how can we test this scenario?
We have to simulate or trigger this scenario to be able to check that
the communication and the whole interaction are done properly.
First of all, we have to create the data; but if we don't have the access
or the rights to do that, then we have a testability issue.
We will not be able to trigger this condition, because we will not be
able to create the relevant instances in the other subsystems.
So what we can do is utilize some mocks, or develop fake modules that
stand in for the subsystems our system under test communicates with.
One further initiative we can take in this regard is to communicate with
the development teams and try to find ways to improve the testability.
For example, we may need some public APIs or other interfaces through
which we can create instances just for testing purposes; we know they will
not be used in the real production environment, but they let us trigger
the condition we need to perform this scenario.
We can request such additional interfaces, and then, with the help of the
development teams, if we have those interfaces through which we can trigger
our expected conditions, we will be able to overcome
these testability issues.
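The mocking approach can be sketched like this; `SyncingService` and its peer client are hypothetical names, not the speaker's actual system:

```python
from unittest.mock import Mock

class SyncingService:
    """Hypothetical subsystem under test: pulls newly created objects
    from a peer subsystem we cannot access directly."""
    def __init__(self, peer_client):
        self.peer_client = peer_client
        self.local_store = {}

    def sync(self):
        for obj in self.peer_client.fetch_new_objects():
            self.local_store[obj["id"]] = obj

# The mock plays the inaccessible subsystem and "creates" the object for us,
# triggering the condition we could not set up for real.
peer = Mock()
peer.fetch_new_objects.return_value = [{"id": "42", "name": "test-object"}]

service = SyncingService(peer)
service.sync()
```

The sync logic is exercised end to end, even though the real neighbouring subsystem was never touched.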
And one more thing that can make things a bit difficult for us is
hardware modules, because sometimes, to verify and validate the
interactions with hardware modules, we have to simulate the messages
or protocols by which the hardware modules communicate with the
software algorithms in our system.
And not only hardware modules: AI components are also very likely to be
seen in our products nowadays, because today AI and machine learning
algorithms are a very important part of our daily life.
So when we are developing products, features, or functionalities for our
customers, it is very likely that they include some machine learning
algorithms to provide the best user experience.
So when we are deploying or delivering our products, how can we ensure
the performance of these ML or AI components involved in them?
Of course, there are lots of ways to improve performance by developing
the algorithm itself.
But from a functional test automation point of view, how can we ensure
the best performance from these AI modules?
It is a little bit tricky.
What we can do is perform several experiments and lots of benchmark
studies; but from a functional test automation point of view alone, it is
almost impossible to guarantee the best performance
of these AI components.
And somewhat related to this issue, there are non-functional aspects of
the quality of the product we deliver to our customers.
It's not only about functionality: there are lots of other dimensions
of quality. For example, performance: how responsive are our services,
even under load?
Or usability: how easily can the end users use our services?
This is the user experience and usability aspect of quality.
All these aspects and dimensions are very important,
not only the functionality.
So how can we ensure them with test automation?
Of course, it is not possible to cover all these aspects, but on top of
all our test automation activities we can perform several studies:
sometimes manual testing activities, sometimes
exploratory testing activities.
This means test automation will never enable us to provide bug-free
software; even manual testing activities will never enable us to
deliver bug-free software, because bug-free software is not possible,
and infinite testing is not possible.
We will always have some issues and some bugs, but of course their
priority and severity are very important.
So the important thing is to try to minimize those kinds of issues as
much as possible, and to reduce the escaped bugs as much as possible.
On top of the test automation activities, manual testing
is very important.
But without test automation, we cannot cover a scope sufficient to
deliver our products with a certain quality.
So test automation is very important as the baseline, but on top of
that we should never ignore the manual testing activities.
Recently, as an initiative in our team, we performed a bug bash in which
everyone in the team tried to use the product.
It was a kind of dogfooding activity, and we tried to find all kinds
of functionality and usability issues.
And we found several issues.
So we were questioning within the team: if this is the case, what are
the automated test cases covering? What are they finding?
It doesn't mean that they don't find any issues; we have found several
issues reported by the automated test cases.
But beyond what the test automation activities caught, there were still
a lot of issues in the product that could be fixed or improved.
So both the test automation and the manual testing activities are very
important in these terms, and there will always be issues left
that can be improved.
What we can also do is some chaos testing activities, to test what
happens when something unexpected occurs in some part of the system:
if some parts go down, how will the other parts or systems react
to this unexpected situation?
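A toy chaos-style check of that idea: inject a failure in one dependency and assert that the caller degrades gracefully instead of crashing; all names here are invented:

```python
def fetch_recommendations():
    # Injected fault: pretend this dependency is down.
    raise ConnectionError("recommendation service is down")

def render_home_page():
    """The page should still render with a fallback when one part is down."""
    try:
        recs = fetch_recommendations()
    except ConnectionError:
        recs = []  # graceful degradation: show an empty section
    return {"page": "home", "recommendations": recs}

page = render_home_page()
```

Real chaos tooling injects faults at the infrastructure level, but the assertion is the same: the rest of the system keeps responding.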
After these kinds of difficulties stemming from the product or the way
the features and functionalities work, we can of course talk about some
difficulties regarding the implementation itself.
On this slide, I have an example showing a setup to test the
account, or tenant, creation feature.
One requirement of the feature was that after a tenant was created,
the user of this tenant should be notified by an email.
And the difficulty in this scenario was that each time we execute this
test scenario, we should go to the email account, log in with the correct
credentials, and verify that the email is there with the relevant
subject and the correct content.
Creating unique email IDs for each run and managing all the
credentials was super difficult.
So here is what we did: we set up an email relay, which captured emails
addressed to any account fitting a certain format and forwarded them
to one other email address managed by us.
In this way, whenever an email was prepared for an address matching
that format, the relay captured it and forwarded it to
an inbox under our control.
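The routing rule at the heart of such a relay might look like this; the address pattern and capture inbox are assumptions for illustration, not the actual setup:

```python
import re

# Assumed per-run recipient format and shared capture inbox; the real
# relay's pattern and target were of course different.
TEST_RECIPIENT_PATTERN = re.compile(r"qa-tenant-[\w.-]+@example\.com")
CAPTURE_INBOX = "qa-capture@example.com"

def route(recipient):
    """Return the address the relay should actually deliver to."""
    if TEST_RECIPIENT_PATTERN.fullmatch(recipient):
        return CAPTURE_INBOX  # captured: forward to the inbox we manage
    return recipient          # anything else passes through unchanged
```

With one controlled inbox, the test only ever needs one set of credentials, no matter how many unique tenant addresses each run generates.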
But of course, preparing this environment and utilizing these servers
for the email relay was itself very difficult, so we had to put in some
effort and spend some time preparing this test environment.
And it's not only the test environment: sometimes the test code itself is
difficult to implement or maintain. As we already discussed for front-end
automation, the element locators can be tricky to figure out; and even
the ordinary logic for testing backend services, like preparing
correctly formatted requests or parsing the responses,
can be challenging to implement.
So we have to cope with these kinds of difficulties, and we have to
improve the ease of coding in our test automation framework.
And not only the implementation but also the maintenance is very
important. One aspect of maintenance is the execution duration.
If our test executions take too much time, the features to be deployed
end up waiting for the test results, because, as we discussed in the
beginning, we normally want to integrate our test executions into our
pipelines to ensure that the frequent updates and changes do not break
the working functionality.
But if, after each commit and each merge request, the test cases take
too much time, then after a while the team members will start to
complain: "I'm raising an MR and waiting for hours to merge my updates;
the test cases are really blocking our development activities."
To avoid these kinds of situations, we should try to reduce the
execution time as much as possible.
As we already discussed, we can introduce parallel executions; but even
within individual test scenarios, we can analyze which steps or
operations take too much time and try to find the root causes.
And if there are redundant steps that can be eliminated while still
covering the same scope, we can remove them
and reduce the total execution duration.
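A simple way to find the expensive steps is to time each one and sort by cost; the step names and sleeps below are placeholders for real test steps:

```python
import time
from functools import wraps

step_timings = {}

def timed_step(func):
    """Record how long each decorated step takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        step_timings[func.__name__] = time.perf_counter() - start
        return result
    return wrapper

@timed_step
def open_login_page():
    time.sleep(0.05)   # stand-in for a genuinely slow step

@timed_step
def fill_credentials():
    time.sleep(0.001)  # stand-in for a cheap step

open_login_page()
fill_credentials()

# Sort step names by cost so the most expensive candidates surface first.
slowest_first = sorted(step_timings, key=step_timings.get, reverse=True)
```

The ranking makes it obvious where optimization or elimination effort will actually pay off.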
One more aspect that helps us improve the maintainability of the test
code, and implement the next test cases in a faster and easier way,
is removing duplication.
If we introduce helper classes that provide the methods duplicated
across lots of different test scenarios, then whenever we need to
implement such a scenario, we can simply call the methods implemented
in those helper classes. This improves the ease of coding, and it
also removes the duplication.
Because if, instead of having helper classes in which the repeated steps
are implemented, we implement those steps in each test case separately,
what do we do when a step that is repeatedly used in lots of different
test cases or spec files needs a fix? We would have to go to each spec
file and apply the fix in each of them separately.
But if we have a single source of truth, we can apply
our fix in a single place.
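A tiny sketch of that single point of change; `LoginHelper` and the URL scheme are hypothetical:

```python
class LoginHelper:
    """Hypothetical helper holding a step repeated across many spec files."""
    def __init__(self, base_url):
        self.base_url = base_url

    def login_url(self, username):
        # If the URL scheme ever changes, only this one method is updated.
        return f"{self.base_url}/login?user={username}"

helper = LoginHelper("https://app.example.com")

# Each test calls the helper instead of rebuilding the step itself.
def test_admin_login():
    return helper.login_url("admin")

def test_guest_login():
    return helper.login_url("guest")
```

Every spec file that uses the helper picks up a fix to `login_url` automatically.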
Another important aspect of the maintenance stage of the software
testing lifecycle is managing the bugs.
When we are doing automated test executions, some tests may fail,
and we have to understand the root causes.
Sometimes they may be false alarms, but sometimes they are real bugs
stemming from the product.
So we should find the root cause and report our bugs.
And in these cases, evidence collection is very important, because
we have to provide as much information as possible in our reports.
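Evidence collection can be automated around the test run itself; this sketch records the failure message and a timestamp, and a real UI suite might attach screenshots or log excerpts the same way (all names here are illustrative):

```python
import time

def run_with_evidence(test_func, report):
    """Run one test; on failure, attach evidence to the report dict."""
    try:
        test_func()
        report["status"] = "passed"
    except AssertionError as exc:
        report["status"] = "failed"
        report["evidence"] = {
            "error": str(exc),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            # A UI suite might also attach a screenshot path or log excerpt.
        }

def failing_test():
    assert 1 + 1 == 3, "arithmetic regression (deliberate demo failure)"

report = {}
run_with_evidence(failing_test, report)
```

Capturing the evidence at failure time, automatically, is what makes the resulting bug report immediately actionable.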
So let's briefly wrap up what we have discussed in terms of test
automation challenges and the proposals to cope with them.
First of all, we have to cope with frequent updates, because once the
tests are implemented, it doesn't mean we can execute them forever
without paying attention to them anymore.
To cope with Agile's limited time and limited resources, and to have
a minimum viable product in a very limited time, we have to encourage
early QA involvement.
And to cope with frequent updates and keep covering the correct scope,
we have to continuously maintain our test cases.
The next thing was the instabilities, sometimes stemming from A/B
testing activities and sometimes from the scripts or the nature of
how our product works.
So we have to improve the test robustness in lots of different ways,
so that we are ready for different actual results
and able to cover them properly.
The next thing was the various testability issues.
What we can do is collaborate with the development teams
and try to find different ways to improve the testability.
We may also have hardware dependencies, and similarly some AI components
in the products we are developing.
For these, we can prepare several simulation environments.
And it's not only test automation: we can also do manual testing
activities, maybe some experiments or exploratory testing activities,
to check that these components are performing well.
Of course, functionality is not the only aspect of the quality we are
trying to ensure: usability, performance, responsiveness,
maintainability, recoverability, all these non-functional aspects
are also very important.
So we can do a lot of different non-functional tests as well.
Chaos testing especially will help us ensure the responsiveness of the
product even in the worst cases, when some parts of the product are down.
One more difficulty we have to cope with was the implementation itself.
Sometimes we have to set up a test harness or test environment, and
sometimes the implementation itself can be tricky.
And in addition to the implementation, the maintenance, the reproduction
of bugs, and the test execution durations are all related difficulties
and challenges that we can improve.
What we can do is try to eliminate duplication, and improve and encourage
reusability in our test automation frameworks.
So we can categorize these kinds of difficulties.
First of all, the frequent updates and the instabilities are reasons
stemming from the nature of the product or from our ways of working,
like Agile methodologies.
The second category can be considered the testability
or the components of the product.
Thirdly, we can talk about the non-functional aspects, and then about
the implementation, maintenance, and execution-related difficulties.
So, we talked about lots of different difficulties, but of course,
where there is a difficulty or a problem, there should also be some
solutions that help us ease or solve it.
So the call to action is: list the problems and difficulties in your
test automation framework, because there is always something to improve;
there is always room for improvement.
Try to fix them, try a lot of things to improve and solve them as much
as possible, and adopt your solutions in your test automation frameworks.
Thanks for listening and enjoy the rest of the conference.