Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, and welcome to this talk about building automated quality gates into
your CI pipelines. I'm glad that you've decided to come and
attend this talk, because in my opinion, when it comes to building CI pipelines or
anything DevOps related, we tend to focus on all
the technical aspects of how to build the pipelines, the different sorts of code,
what coding principles we have, what tools we're using
to be able to better design our CI pipelines. But we
often forget about the quality aspect of it. And I'm not just talking about the
software testing part, but sometimes the deeper quality
gates: how do we ensure and maintain quality? We often
have these QA divisions or these software testers, and we've kind of
separated that out into its own thing, as opposed to looking at what we can actually
build and automate through our pipelines and use our pipelines as well to help control
our software quality. So that's what I want to talk about today, and I hope
that you really enjoy this conversation because it's something that I believe is
really important and really useful in the software testing and software
development world. But before I go any further,
I want to just firstly, briefly introduce myself. My name is Craig
Risi. I work for a company called REPL, which is part of Accenture,
where I work as a software architect focused primarily on
test automation and software testing.
I also do board games as well, so I have my
own board game company called Risky Games, where you can go and download some of
my board games. I also have a book out called Quality by Design,
where I write a lot about software testing, but also about how we
design and build software to be of better quality. So if
that's something that interests you, where you want to learn more about how to
design software from the ground up to be of high quality, it's something
I'd recommend you go and read. But yeah,
let's go back to my topic. But if you're wondering where
my accent comes from as well, I'd like to just say that I am
from Cape Town, South Africa. So I thought I'd just share a few pictures
with you from my beautiful city and my beautiful country, which I'm really proud to
be a part of and really would encourage you to visit this
part of the world if you really want to see something that's incredibly beautiful and
something different, really something which I'd encourage you to do. And all
of these pictures are from the city alone.
So this is not even something where I need to travel outside of my city
to be able to get to see all these things. So really grateful and really
thankful to be able to be a part of such a beautiful city.
But let's get back to the idea of quality gates. So what
is a quality gate? I really like to think of it as a kind of checkpoint or
control. In QA we often think of checkboxes and things that we need to tick off,
or some form of entry and exit criteria that we need to meet,
to be able to say that we're happy with the quality of this stage of the
software development and we're ready to sign it off. And now we want to
move it forward. And it's a really important aspect of
software development from a quality perspective is having sort of checks and
balances in place to make sure that we've covered certain angles that
we've tested enough here before we can move it on. And I really
like to think about it more as like passing on a ball. So I think
we're all familiar with the typical SDLC, whether it be waterfall,
whether it be agile. Most often it's quite iterative, which is why
in this particular picture I've shown a loop. But in everything, we kind of
have to say, okay, well, we've now analyzed the story, we're now going to start
planning and design and everything. We're kind of passing the ball on
to the next sort of phase. And yes, there are things where you can do
certain things at the same time. You can do aspects of your development and testing.
At the same time you can often do aspects of your deployment along
with some of your maintenance and your evaluation. And some of these things can bleed
into each other. But the point is that all move along and they might be
iterative, but you got to be able to say, at what point in time am
I finished with this analysis so that we're ready to start planning this function?
And at what point in time can we say that our design is done so
that we're happy to hand this over to our development team so that they can
start working with it? And we've got to pass this ball on.
And at each of these stages there is really a level of quality
that we want to be able to say is done, or we can call
it a definition of done in the agile world where we're like, now this particular
task has been done and I'm ready to pass it on to the next phase.
And that's really what we're trying to do with quality gates. It's really being able
to say that we've now checked that we're happy with the quality of
this work that's been delivered and we're ready to pass it on to
the next phase. The difference though, is that this is not something
where again, we want to have a manual checkpoint. What I want to talk about
now is really something that can actually be driven by
your CI CD pipelines.
Because we want software to work quickly. And typically, when we think of
all these different change controls, these different sort
of quality gates, these checkboxes,
we can often think this is over regulated. There is too much
process going on here and that's a bad thing. And we try and stay away
and move away from process because it slows us down.
But those processes and those sort of measures that we need are good things.
We need quality control. We need to have a measure
of being able to say that this really meets our quality expectations versus
it doesn't. But what I really want to talk about in today's talk is
how we can actually automate that so we don't have to think about it slowing
us down, but rather something where using our CI CD pipelines correctly,
we can build in the right controls that can help speed us up and
actually get us moving faster, because our testing
and quality control is no longer slowing us down, but it's actually an important
part of what it is that we do, an important aspect of how we're moving
forward with our software.
And that's really the why. So when I think about why quality gates, I think
it's really about how do we get things moving quicker.
So it's about how do we automatically measure acceptance criteria.
So we have the acceptance criteria, but how do we
create a way of being able to automate it so that we remove those human
checkpoints that can sometimes slow us down. It's about driving
a whole sort of shift left mindset. I think many
of us are familiar with the idea of shifting left, which is really how
do we start testing sooner? How do we get testing
sooner and sooner in the process? And by having quality gates and actually
automating those quality gates, you can drive that process because you can put measures
in place that kind of help the team and ensure that the team has actually
done these things. They have actually thought about certain quality measures before
you move it forward. And a really good way of being able to start driving
and moving tests into that shift left mindset is to actually utilize your CI CD
pipeline, so that even developers, when they're building things in their code,
have now got to hit certain standards before it can move on. So it really
encourages a team to adopt that proper shift left mindset.
It's definitely going to safeguard your quality and
prevent things further down the line. If you speak to any tester
and you ask them what's the most frustrating part of software development,
they'll often tell you the crunch when it gets to the end.
They've maybe estimated and planned so much work that needs to get done with any
given sprint within any sort of deliverable or project. But inevitably
that time gets eaten up for a variety of reasons. Things get delayed
and delayed, and now testers have kind of got to fit a
whole lot of testing into a much smaller space. And then they get even more
impacted by the fact that often they're now working with things and they're quickly
finding issues and defects on the software. And that's
just making things even more difficult. When you have
automated quality gates in place, you actually are
safeguarding the quality, you're preventing later defects from
getting introduced, so that when the testers do start taking over, things are in a
much better space. It prevents a lot of that sort of crunch time and
really helps the testing to actually move forward because you can start to
actually align a lot of what's needed from a testing perspective into even their
automation perspective. So you'll find that it's not just that the
quality of the software is getting better, it's that the testers are in a better
space, where they're able to automate at the same point in time. And so when
it comes to them doing their work, there's not this huge rush to
try and do both. It's really about being able to then focus on
properly doing quality assurance on the software, because they've already covered
a lot of the automation that needs to get done, a lot of the other
sort of technical work that is needed. And overall,
I think what it does is it really ensures proactive
quality. A lot of times when it comes to software
testing, we can be very reactive. The testers go,
they take the software, they deploy the software, they use
the software, they find issues, you got to go back, fix it. And it's this
iterative sort of cycle. It's often going
back and trying to fix things in hindsight, and then trying to say,
well, why did we make this mistake?
What do we do to try and fix it now, rather than trying to prevent
defects from happening. And when you have proper quality gates
within your software you can almost prevent those defects from happening
because you can build automated measures that can say, well look, last time we
had this issue because of this particular mistake, so you can actually go and build
a quality gate that can check for that. Then the
next time your software is getting deployed or built, it's able
to catch that and say, hang on, we can't allow that
because we need to ensure that something is being tested correctly or
that we have the right measures in place and it can really prevent quality issues
coming in. And that really helps you to ensure that you're actually preventing
and you're actually building quality along the way in the process as
opposed to this constant thing of throwing things
to testing. Testing is finding issues, sending it back and back and forth,
back and forth, and that slows teams down a lot. And I would say that's
probably where most software projects
overrun. It's that constant back and forth, whether it's between software
developers and the testers or maybe issues with
requirements where we haven't quite established those things. Those are the things that slow teams
down, because there's constant scope change, a constant need to fix
and adjust what you're working on, because you've got to go back and fix older
mistakes. And by having quality gates in place you can prevent
a lot of those things from happening. So it can actually speed you
up and really make you a more efficient team. So yeah,
there's really important reasons as to why we should be considering quality gates
in our software projects. So what does it need?
So it's one thing saying, okay, this is what quality gates are, this is why
we need them. And before we even get to the point of
what specifically are we trying to build quality around and
what do these gates look like? It's also important to understand that there are
some building blocks to put in place before you even want to
consider quality gates. I think one of those first things that we need to really
work towards is well-defined and clear completion
criteria. I think when we allow a
lot of ambiguity into our software development
processes, we're opening up our teams to develop
software that doesn't work as expected. We open up our teams to
build software where they're not so sure of
all the things that need to be tested, because it's not really clear what
this is supposed to do. And so when we don't have that in place,
it's very difficult to build anything that's automated because there's so much flux
that we're still trying to figure things out and we actually
don't know what is good enough quality. We don't even know what to test and
how to test these things because we're still figuring all these things out. And if
you're in a state with your software where that's kind of where you're at with
your team, it might be a little difficult to try and automate your quality gates,
because what you really then want to focus on is rather how do we get
better at defining things like our completion criteria
and saying, look, this is actually what our requirements look like, we've actually defined this
a little bit more clearly, and this is actually what's
needed to say that this step of the process is properly done,
and you really want to be able to have that in place. I think that
the next thing is to have a testable architecture.
We need to build software that can be clearly tested.
There's no point building software that might
work well in terms of being able to get something out the door as fast
as possible, but where there's a lot of things where you're like, we're not too sure
how to test this particular part of the application. And then you can't really
build a measure to be able to say: have we
tested this effectively?
Have we written the right amount of automation scripts for it to run within our
pipelines? Because we actually haven't made it testable enough to be
able to do all these things. And if we're not creating software that's testable,
we're really robbing our teams of the ability to
be able to build anything that can be really
effective within the CI CD pipelines anyway. But then we're also robbing the teams
of being able to utilize something like quality gates because we can't really
accurately test our architecture. So it's important to make
sure that when you're designing your software, you consider testing, and how this is
going to be tested. And when I say testing,
yes, there's the unit testing, yes, there's sort of your integration,
your UI testing or your API testing. It's the whole thing, it's all encompassing.
How do you test the thing across every different phase of your
software? Have you considered that? And so that's something to think about.
I'm not going to delve too much deeper into what testable architecture is.
It's really something I'd encourage you to work out and to
research yourself, but it's something that you need to have in place and I think
a good guideline while I'm at this point: if you ever want to know if
your software architecture is testable, try to write unit
tests for every single part of your software.
And if you can't write a really good unit test, that can give
you a good 90% to 100% code coverage on a particular part
of your architecture, there's a good chance that it's not testable enough and you've got
to rethink your design. So just something to keep in mind. With that,
obviously comes a strong focus on unit testing.
We can't have this thing where developers are pushing things through
our CI CD pipelines that haven't really been unit
tested properly. We can't think, we're going to push this through the pipelines, we've got
quality control for a reason, they're going to go and check the software, they've got
automation scripts that can run and pick things up. That's not
really helpful, because a lot of things are better covered
from a unit testing perspective. And again, this is not a talk about unit
testing and how to write
better and more detailed unit tests or things to cover in unit tests or not.
But if you're not going to have a strong focus of unit testing, where your
developers are writing a lot of unit tests to ensure that the individual aspects of
code are working correctly, you're going to push
a lot of your test automation requirements down the
line. And if you think about your CI CD pipelines, no one
wants their CI CD pipelines to be running for hours and hours
and hours before they get feedback on was this successful?
And the only way to really do that is to ensure that there is a
lot of unit testing in place, because then really, when you're building your software straight
away, the moment you're actually doing that code commit, those
first few minutes where it's actually building the code and running the unit tests,
it's really giving you immediate feedback. And if you can cover a lot of
testing in that space, you really reduce the need for the
more complicated integration and end-to-end automated tests,
which are very effective and needed, but take a long time to execute.
So you want to again reduce the cycles and reduce how long it takes you
to be able to develop the software, and that's a really good way of being
able to do that. So have a strong focus on unit tests.
Obviously, you're still going to have those automated integration tests. You need
to have automation throughout every aspect of your pipeline. We can't talk
about automating quality gates if we haven't automated our tests. So we need to think
about test automation through every process, but it's
important to be able to have that strong base of unit testing and then on
top of that, start having your other automated tests kick in.
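To make that concrete, here is a minimal sketch of what that very first gate can look like in a pipeline. It's written in Azure DevOps-style YAML purely as an illustration (the same idea works in GitLab, Jenkins or any other CI tool), and it assumes a Maven project; the commands are placeholders for whatever your own build and unit test steps are.

```yaml
# A minimal sketch of the first quality gate: build the code and run the unit
# tests on every commit, and stop everything right there if either fails.
# Assumes a Maven project; swap in your own build tool and commands.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  # "mvn test" compiles the code and runs the unit tests; a broken build or a
  # failing test returns a non-zero exit code, which fails this step and the run.
  - script: mvn -B clean test
    displayName: Build and run unit tests
```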
It also works best with small, frequent releases, which is really
what CI CD is designed for. If you're going
to be building software where it's only going to be released once
every year, 18 months, and you're not going to have a lot of iteration in
it, CI CD is really not for you anyway.
And to really make use of this and to make use of your quality gates
effectively, you really want to be able to have something where you have small,
frequent releases. You don't want to work with big software. Now you're trying to turn
over big software and big changes on
a regular basis, and then you're trying to rush the process. It can be very
difficult to automate something that's big, but it's a lot easier
when you're working with smaller, more componentized software where
you can make small iterative changes. It's a lot easier to implement
quality gates, be able to check your quality systems, and to be
able to move forward. And I think that last step is a very important thing
to get into next when it comes to thinking about software development.
And I'm going to diverge a little bit to really speak about the whole way
that I think we approach software development from a testing perspective.
And this is perhaps maybe more aimed at the testers than the developers.
But if you were to speak to most testers and you give them any application
and say, here's an application, test it, typically what they're going to do is
they're going to look at the application in its entirety.
You might have multiple different services running multiple different databases,
all with separate configurations that are storing things, but they're going to take the application
and they're going to run from it end to end, and they're going to devise
a lot of their test permutations
from that. Yes, they might be hitting APIs, but they might be wanting to
hit an API that goes all the way from service one to service five and
take it all the way back and see if everything's working together well.
And then often the mistake that's made is because your testers start off focusing from
that angle, they can tend to then focus their test automation
efforts from that angle. And again, that's big. When you're trying to focus your test
automation at these big overall,
end to end aspects of your software,
you're slowing things down because those tests become really
long to execute. They become really flaky because it's dealing with too many moving
parts and it's not really ideal. Those are not the type of tests you want
to be running in your pipelines. You need tests that are very responsive, giving you
quick feedback. And so you need to be thinking about
your software testing and your design not from this big perspective,
but really from a small perspective. So really thinking about
a single system. So we have like a system here, system one, where system one
has your service, it's got some
database that it interacts with, and then it's got its different
dependencies, whatever those might be. And you'd actually want to have those stubbed. You need
to have a lot of mocking in place. So those of you that are familiar
with your unit testing, this is an important part of unit testing and you'd need
to build that in to your unit test, but not just your unit test.
Also your integration test have a level of mocking where you can test
that system one in its entirety, completely isolated from
those other systems if you want to. If every single time you're building
and making improvements in your software you're running through these extensive end to
end tests, again, you're slowing down your pipelines.
It's not what CI CD was designed for. It was designed for
quick, immediate feedback. We need to think about testing from
a much smaller perspective and rather focus on trying
to reduce the scale of your software where you can isolate
things. And from a testing perspective you can just focus on this
one service and everything it needs. And then it's a lot easier to
not just write unit tests for your different aspects of code. But then it's a
lot easier to write tests just for this one service and making sure that
it's working well. And you can then instead of speaking
to the other sort of dependencies, you speak to mocks of those dependencies so
that you can test it and get really good coverage moving
forward across within that entire system. So your automated tests can run and you're just
running a portion of your automation scripts to be able to allow you to
get there. And then that's also useful because it allows
you to scale better because now all of a sudden we're like, okay, we're making
a change to system one and now we're going to make a change to system
two. We don't have to worry about, okay, we're now making a change to system
one, we've got to run through the entire integration test cycle, and now we're going to
do the same because we're now making a change to system two. You can now
effectively work independently, have a team work on system
one and another on system two, and those can be completely isolated.
And from a CI CD perspective, that's what you kind of want. You want sort
of isolation. You want to be able to run this job that can then eventually
merge into the bigger software process. But you want to be able
to work with things independently. So it's important to have that
building block in place as we move forward into actually now looking at quality gates.
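As a rough illustration of that isolation, a pipeline job for just "system one" might look something like the sketch below, with the downstream dependency replaced by a stub (WireMock is used here only as an example of a stubbing tool) so the service's tests can run entirely on their own. The property used to point the service at the stub is made up for this sketch.

```yaml
# Sketch: build and test system one in isolation, against a stubbed dependency
# rather than the real downstream systems. Names and the property used to point
# the service at the stub are illustrative, not from any real project.
jobs:
  - job: SystemOneIsolatedTests
    pool:
      vmImage: ubuntu-latest
    steps:
      # Stand up a stub of the downstream dependency (WireMock as an example).
      - script: docker run -d --rm --name dependency-stub -p 8081:8080 wiremock/wiremock
        displayName: Start stubbed dependency
      # Run only system one's own tests, pointed at the stub.
      - script: mvn -B verify -Ddependency.baseUrl=http://localhost:8081
        displayName: Test system one against the stub
```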
More specifically, because it's not just about building quality gates if your
software design is not right.
I'm really passionate about getting software right early and that starts at
a design phase and that's why I'm spending so much time at the start of
this talk talking about that design. Because you're never going to fix
or build high quality software out of poorly
designed software. Even the world's greatest testers
and the most fantastic development team with incredible
automated scripts, if your software is not designed right,
you're not going to put good quality software out no matter how effective and good
your teams are. And so it's important to start with this design and
get this right. And if you can really get this design right and you
can really focus on isolating and building software that's modular,
that works in a small way, that's very testable and very independent
from each other, that is what sets you up for success
moving forward. And if you have that layer and that foundation, it's a lot easier
to build your software and then to be able to put quality gates
and put checks and balances in place. But what
I will speak about can still apply. So what if you're working with software
where maybe not all of this is in place? Yes, you can still use quality gates.
And as we go through the next few slides, where we look at quality gates
in a little bit more detail, what they actually are and maybe
how you can apply them,
note that you can still do it. It does work with bigger pieces
of software. It does work with bigger sorts of applications, even things that haven't quite
been broken down. It's just that it works best with these types of things.
So I kind of want to give you the best case
of how it should work, how you should be designing your software
for CI CD to be able to get the best quality out of your CI
CD and your software. But if you haven't
quite got that in place and you're working with some legacy applications where you've
been running the software for years and you've got your CI CD systems
already in place, but your software application is maybe a little bit too
big. That's fine, we can still work around that. You can still build automated quality
gates in place. It might just look different.
And so with everything, I think it's important to understand that context, that it might
just look different for your application if you don't have this in
place, but you can still find those measures that work for
you. Very important to note.
And again, it's important to, when we're looking
at these gates, just to understand this automation test pyramid,
because again, anything quality related, there needs to be some sort of checks and balances
in place. And with any pyramid it's very
important to be able to have that strong foundation. A pyramid works well because
it's got a solid foundation and everything is scaling up to a point
where you can go a lot higher because you've got this firm base and you
can effectively build a pyramid high and strong. It won't fall over, it will
last a long time because it's got that solid base in
place and that's what you need. So again, it's just reiterating that whole thing of
you've got to have a solid base of unit tests and you've got to
have your component tests and then your
functional tests with APIs and sort of aspects of UI before heading
into anything non functional. And you'll still have that manual testing component,
but it can be greatly reduced. And with all of these things, this unit component,
functional, non functional. If you have these automated,
you can build quality gates to check each layer to make sure
before we even move on to executing our component tests, we've got our unit
test in place and we're running our unit testing code. Then we're going into our
components where we're working with those isolated and stubbed microservices
that we've built before. We then going into maybe a little bit more complicated API
and UI tests at a functional level that are testing things. And we can actually
go and make sure that all of these building blocks are in place before we
move up. And that's useful because again,
it's a lot quicker to run everything at the bottom, and all of these things,
as we go up the pyramid, take longer and longer to do.
Manual testing is very effective and we need it, because
we need to be using our software within our own spaces. But it's the
slowest form of quality control and not useful when you want to deploy on
a regular basis and you're running your CI CD pipelines and wanting to be able
to deploy into production on the same day. And to do that effectively
again, you need that base, that whole sort of having a solid base of unit
tests in place. And if you can build quality gates in between those things,
you can control the whole process better.
So very useful to have in place.
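In pipeline terms, the pyramid translates quite naturally into ordered stages, where each layer acts as a gate and the next layer only runs once the previous, cheaper one has passed. A skeleton of that idea might look like this; the stage names are just labels and the scripts are placeholders for your own test commands.

```yaml
# Skeleton of gates that mirror the test pyramid: each stage only runs once the
# previous, faster layer has passed. Script names are placeholders.
stages:
  - stage: UnitTests
    jobs:
      - job: Unit
        steps:
          - script: mvn -B clean test
  - stage: ComponentTests
    dependsOn: UnitTests           # gate: the unit layer must be green first
    jobs:
      - job: Component
        steps:
          - script: ./run-component-tests.sh    # isolated, stubbed services
  - stage: FunctionalTests
    dependsOn: ComponentTests      # gate: the component layer must be green first
    jobs:
      - job: ApiAndUiTests
        steps:
          - script: ./run-api-and-ui-tests.sh
  - stage: NonFunctional
    dependsOn: FunctionalTests     # gate: only then run the slowest checks
    jobs:
      - job: NonFunctionalChecks
        steps:
          - script: ./run-nonfunctional-checks.sh
```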
But let's get back to quality gates. We've had a look at the foundation and
how it is that you need to build your software, but what does a quality
gate check? So we now have a quality gate, and we're going to have different sorts
of quality gates across our software. What is it really looking at? And there's
a multitude of things that you can look at within a quality gate. And there
are some change controls that you can even put in before we even get
into CI CD pipelines in terms of your requirements and what they need to look
like and how a user story should be defined, how requirements
should be defined, how your acceptance criteria
should be defined before you even begin to work on it.
Those are all important quality gates and checks that you need to have in place.
But from an automated CI CD quality gate perspective,
these are the things that you'd probably want to look at. And these are not
the only things you can look at. There are other things that you can build
quality gates around. But essentially if you can understand
all these different things, you can start to build automated quality gates
around these different measures. I think the first thing is build health. When you're building
your software, and I think this is an easy one to do,
it needs to build correctly. So you want to be able to check the
health of your build, and maybe not necessarily always just the code that
you've submitted, but anything else that you
might be dependent on needs to also be checked as
well. So something around other pieces of software:
are they healthy? You might, for instance, be releasing
some code and you want to deploy and build your code and test it,
but you know that your code is dependent on something else that's maybe not quite
right and you can go and actually check and say, well, hang on,
this other piece of code, I'm not happy with it. It's not building right.
Let's not go in and push this any further because we're dependent on it.
It's looking at the infrastructure health. So we're deploying our software
onto a container or into some bigger system. What's that like? Is the infrastructure
healthy? Will it give me reliable results? Will it run reliably and
you can actually go and check the health of your infrastructure
to make sure that you're happy with it before you even build code onto it.
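A sketch of what a build and infrastructure health gate could look like: a small script step that checks the health of the thing you depend on and the environment you're about to build or deploy onto, before anything moves forward. The URLs below are placeholders, not real endpoints.

```yaml
# Sketch: check the health of a dependency and of the target infrastructure
# before going any further. The URLs are placeholders for your own endpoints.
steps:
  - script: |
      set -e
      # Is the other piece of software we depend on actually up and healthy?
      curl --fail --silent https://dependency.example.internal/health
      # Is the infrastructure we deploy onto responding?
      curl --fail --silent https://infrastructure.example.internal/ready
    displayName: Build and infrastructure health gate
```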
And very important, there's the obvious thing, and I think this is the
one that we all mostly will think of as test results.
We're running tests, whether it be unit tests, integration tests,
component tests. What are the results of those? So when you're building your
software, you should be running your
unit tests. Obviously, you want those to pass, and that should be a 100% pass
rate before you even move on to the next level to say,
we've passed all of our unit test, let's move it on. And then your integration
tests or your component tests run and move on and on. So again, you want
to check your test results. Very important, code coverage.
And I've had a lot of debate with people who say that code coverage is
a really poor metric and it's not a good
measure of the software quality. And I would say that
that's not completely accurate. I think code coverage can be a really
good measure of your testing effort. I just think that we use
it incorrectly. We don't understand code coverage and how it works,
and we should never place emphasis only on code
coverage when it comes to software testing. But if we're not using code coverage
at all, we are really crippling our software development,
because we really need to use code coverage as a way of being able to
effectively determine that we have written tests that are covering
the right amount of code. And particularly if we're aiming really high,
and we've pulled in really good stub systems and
mock systems, we can really get a high code coverage out of our
unit tests in particular. And so it's a really good measure to have and understand
that by code coverage. When I say code coverage,
yes, there is statement coverage, there is branch coverage, there is decision
coverage. There's a whole bunch of ways of measuring it. I'm saying all of it.
When we're doing code coverage on our software, we shouldn't just be isolating one
of those. We should be taking all of those different code coverage metrics and
saying, do we score high enough across all of them? Have we
really looked at all of our decisions? Have we really looked at all the different
sort of branches that our code is taking? Have we really looked at all of
the executable statements in our code? And is it then giving
us a high enough sort of coverage? And I would say a good thing to
actually aim for from a unit test perspective is 90% or above. For some
applications that might be a little difficult to achieve, in which
case it's often worthwhile then maybe lowering it, or maybe looking to
change that design to improve it. But I would say if you've
designed your software right and modular enough, a 90% code
coverage shouldn't be hard to achieve. I think it's just often we
don't put the effort into that early unit test phase to
warrant that type of thing, and we say it might be difficult to achieve,
our software is never going to get to 90% code
coverage, we're happy with all of our tests passing and the code coverage
being 75%. But if we really focus on it enough, we can get
there, particularly if the software is designed right. But it's important to
have that metric. You need to have some sort of quality gate in place to
know that you haven't just got all your tests passing, but that your tests have
actually covered the areas that you want them to. So it's a very important thing to have in place.
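As one possible sketch of that gate, assuming a Maven project where JaCoCo writes its CSV report to the usual location, a small script can add up the coverage figures and fail the build when they drop below the 90% bar. It only sums instruction coverage here; the branch columns in the same report could be checked in exactly the same way.

```yaml
# Sketch: a code coverage quality gate. Assumes a JaCoCo CSV report at
# target/site/jacoco/jacoco.csv; the 90 percent threshold is the target discussed above.
steps:
  - script: |
      set -e
      report=target/site/jacoco/jacoco.csv
      # Columns 4 and 5 of the JaCoCo CSV are instructions missed and covered.
      awk -F',' 'NR > 1 { missed += $4; covered += $5 }
        END {
          pct = 100 * covered / (covered + missed)
          printf "Instruction coverage: %.1f%%\n", pct
          if (pct < 90) { print "Coverage gate failed: below 90%"; exit 1 }
        }' "$report"
    displayName: Code coverage quality gate
```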
Alongside both of those, there are your security scans:
you need to be able to scan your software. Yes, you can run
a variety of security tests and you can automate a lot of your
security tests, but a very quick one is
having scans in place. You should have a scan in place.
There's a variety of tools that can do it, that you can go and just
ensure that your code is of a high quality.
That's something you should build in, and you should prevent your code from moving too
far along in the process with known security
issues. Now, I understand that when we're starting
out with a software project, you might not want to
put this in place because you're still starting out and you're still finding your feet
from a software perspective.
And quite a lot of teams tend to leave their software scanning
for very late in the cycle because they're focused on getting
things done before they bring that in. And I understand that
you really want to have a good foundation of
code and application written before you start introducing scans.
What I would just say is, don't leave that too late. I have worked on projects
where they've started to look at the security too late, where
they were building new applications and they've been building this application for three or four
months. Now they want to get it into production, and now they want to
introduce a level of security scanning to make sure everything's safe.
And now they're starting to get some really big failures. And I worked on
projects where those failures were identified in the code that was written
at the very start of the project where they weren't following proper standards and
protocols. So rather err on the side of bringing that in really early
and having a security scan in place from the very beginning of writing that first
code, if you can. And then there's something you can build into your pipeline so
that if a security scan fails for whatever reasons, and you can benchmark
it based on how severe you need to be on different parts of your
application, but you can bring those security scans in
place and make sure that the software does meet security criteria.
It won't even build and move on to the next phase until you fix that
gap. It might again sound like a lot of work to slow you down,
but you're preventing tech debt from building up
later in your project and you're getting it right early so that you don't have
to worry about these things later. And it actually helps you to move quicker if
you can do that right.
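To make that tangible, here is one way a scan gate might be wired in. Trivy is used below purely as an example of a scanning tool (and it's assumed to be installed on the build agent); the point is simply that the step returns a non-zero exit code, and therefore stops the pipeline, whenever findings at or above the chosen severity come back.

```yaml
# Sketch: a security scan gate. Trivy is only an example scanner; --exit-code 1
# makes the step fail when HIGH or CRITICAL findings are reported, which stops
# the pipeline until the gap is fixed.
steps:
  - script: trivy fs --exit-code 1 --severity HIGH,CRITICAL .
    displayName: Security scan quality gate
```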
The same applies with performance. Now here I'm not talking about the overall
software performance and performance testing, I'm talking about the
service performance: not of your whole software, but of just that piece of the application.
So when you're actually running your code and you're running your unit test,
how long should certain things take to execute?
How long should a function run? As you're testing
the individual services, it's worthwhile knowing these things and actually then
being able to benchmark how effective your service is
running. And because it's unit test, you can often make these
really tight and you can make them really strict criteria.
But the moment you start introducing code that's not efficient,
all of a sudden you'll notice your unit tests go from maybe executing within
a couple of milliseconds, or one or two seconds, into 10 or 20 seconds, because something's
not right with the code and it hasn't quite been optimized correctly.
We should flag that and we should say, hang on, we're not going to deploy
this any further, there's something wrong with the performance. And so if we have started
looking at the performance of our code and how it executes very
early, it's something that we can build into our quality gates,
and we can stop poorly performing
software from getting further on. Because we might think it's just
a small piece of code that's not running optimally, it's not a big thing,
but it doesn't scale well. If you start to think that that service
or that piece of code might be called on a regular basis and that might
then be scaled out around the world. If you're in the cloud and you're scaling
globally, you don't want to have any code that's not performant. And that's something
that you can consider building in as a gate.
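A very rough sketch of that kind of gate: time the unit test run and fail the step if it blows past the benchmark you've agreed for that service. The 120-second figure below is purely illustrative and would be tuned per service.

```yaml
# Sketch: a crude performance gate on the unit test run itself.
# The 120 second benchmark is illustrative; tune it per service.
steps:
  - script: |
      set -e
      start=$(date +%s)
      mvn -B test
      elapsed=$(( $(date +%s) - start ))
      echo "Unit tests took ${elapsed}s"
      if [ "$elapsed" -gt 120 ]; then
        echo "Unit test run exceeded the agreed benchmark, flagging for review"
        exit 1
      fi
    displayName: Unit test duration gate
```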
Then the last thing is incident and issue management. We can build
quality gates here too. Even if we might not think of defects
and incidents as part of our CI pipelines, you can still
build a quality gate that can check your incident or your
issue management systems and say, hang on, we can't deploy this service.
Let's say you need to deploy something into production,
but now you've got traceability in place and you know that there is a bug
being logged against a particular piece of code that's in your
release, and it can go and actually check and say, well hang on,
against this particular piece of functionality there is actually a major bug that's still logged.
It can stop you: hang on, we can't deploy. There's a major issue that's
still outstanding that we need to resolve. And again, you can
put those criteria in place on how strict you need to be with certain rules
and whether it warrants it based on the importance of the feature.
But again, you can build those measures and you can prevent the human
error part of accidentally
putting something into production when there was actually a major issue and someone forgot about
it, because your quality gate will catch that and say, hang on, our issue management
system is saying that there is a known issue in place. I'm not going to
let you deploy any further until we fix it. And so again,
we can use quality gates really well. And those are some of the things that
a quality gate can check.
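As a sketch of that last idea, a pipeline step could ask your issue management system whether any major bugs are still open against the release before a deployment is allowed. The example below queries Jira's search API, but the project key, fix version, priorities and credentials are all hypothetical placeholders; the same pattern applies to whatever tracker you use.

```yaml
# Sketch: block a deployment while major bugs are still open against the release.
# The Jira URL, JQL, project key and credentials are hypothetical placeholders.
steps:
  - script: |
      set -euo pipefail
      open_bugs=$(curl --silent --fail -G \
        -u "$JIRA_USER:$JIRA_TOKEN" \
        --data-urlencode 'jql=project = SHOP AND issuetype = Bug AND priority in (Blocker, Critical) AND resolution = Unresolved AND fixVersion = "1.4.0"' \
        "https://example.atlassian.net/rest/api/2/search?fields=key" | jq '.total')
      echo "Open major bugs against this release: ${open_bugs}"
      if [ "$open_bugs" -gt 0 ]; then
        echo "Deployment blocked: major issues are still outstanding"
        exit 1
      fi
    displayName: Issue management quality gate
```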
So what are the very important types of quality gates that we can get set up?
Checkout: whenever we're setting up environments and checking out our code, we can
build a quality gate there and check that out. Again, anytime code
is built, there should be a quality gate in terms of how well it builds.
Anytime tests are executed, whether it be unit tests, CI tests, whatever it might be,
we need to make sure that that's done and that we can measure those results.
The static analysis, those scans that we're running,
we can build a quality gate around that step. And every time those
scans run, are we happy with the results? If not, send it
back. If we're happy, move it on. Then an environment readiness
check: we're now deploying. If we've gone through this stage of our CI CD
pipeline, we've now got to deploy this into some sort of environment,
whether it be in the cloud, whether it be some sort of containerized
environment. So if you've got somewhere or some sort of bigger test environment,
whatever it is that you're deploying into, is that environment
ready for us? Have we actually gone and looked at it, can we actually spin up
some containers and say that we're happy with it: hang on,
this environment meets our needs, we're happy with it, let's go
on to the next thing. Then you can build your deployment steps in, so
you can actually test your deployment before and after
the deployment itself and say, okay, we're happy with the deployment. Once you've then deployed your
software, is everything up and running that's supposed to be up and running before you
even start running any further tests? We've actually deployed these systems.
Are these services up? Are we happy that they are operational
to a basic level and basic degree? There's no point even trying to run
any sort of automated test against something if it's
not running properly. If you want to now move to the next phase and maybe
run broader, bigger tests,
we need to be able to ensure that it's deployed correctly, and we can build
a quality gate around that. And so again, it's something very important that you can put
in place from there on, it's something like automated
integration test execution. We can now have bigger tests that
are now executing. And again, we've got that strong foundation of unit tests. So you
might have fewer of them, but you still want to have your integration tests that
are running and they're automated and it's able to run those checks. And you can
have a quality gate around that to ensure that those all pass before you move
on to the next thing. And I've put dynamic code analysis
as opposed to static code analysis. This is something that you can have in
place. Not all systems might need to
have this in place, but it's important to have it there as well. And these
are typically tools that, again, can give you a good feel around
security of
your software. So it's not just about the static analysis, where it's
actually now building your code and making sure that it's happy with
the quality of the code. You can actually run a dynamic sort of code analysis,
where it's actually taking the running code and trying to do certain things with
it and trying to break it. There are a lot of tools that can help
you to be able to assess the quality of your software at that level,
and you can build a quality gate around that. And then the last sort of
type of quality gate that we get is really around our non functional tests that
we have in place. Your performance tests, you can have a quality gate where last
step of deployment, you actually run a couple of very lightweight performance
tests to make sure that the software is performant, and you can stop it there
if it's not. You can do the same with your security, or
with any sort of visual checks. If your software needs
to visually look a certain way, you can use visual scans to quickly now run
through your software and ensure that, well, hang on, this software doesn't
quite visually render itself properly if that's important to us.
And you can flag certain pages to say this page has to
pass or not. From a visual regression perspective, we can now flag that
and say, well hang on, we're not happy with this result, stop it,
stop deployment. There's something that's not quite right. And so this,
again, if you understand these different types of quality gates and how they
fit within a pipeline, you can see that they can really add a lot of
value because it prevents us from making mistakes.
The end result is if you build a quality gate at every single one of
these steps, and again, you can automate this quality gate,
your end result will be good quality software,
unless you've really lowered your standards of your quality gates.
But if you've increased and you've got high standards for your quality gates in terms
of code coverage and test execution results,
you are going to have a really good quality software. And the
best thing is that all of these things are automated and
you can get to the end of your check and know that
you've got good quality software. Now, some examples of checks.
This is not an exhaustive list, this is very lightweight,
just some simple things that we'll look into, and I'll go through some code
examples towards the end of the talk. Things like linting standards to be
met: so when your code is building, are you happy with the linting?
Does the code look right and flow right? And again,
I've had discussions with people who criticize and say, but why are we focusing on
linting? Shouldn't we just focus on the functionality and execution of the code?
Code needs to be maintained and linting is an important part of that,
making sure that the code looks and meets certain standards.
You can have tools that can actually go and check: do all
the variables meet camel case, are we happy with the way that
everything is shaped and looks? You can actually automate all of those checks by utilizing
linting tools in the build process, and you can
make that a quality gate, where it actually checks that the code
passes our linting standards, because we need to write code that's maintainable and
meets the standards that we set as an organization. So have that there.
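A sketch of how that linting gate might look in the build, using Checkstyle as an example for a Maven and Java codebase (it assumes the Checkstyle plugin is already configured in the project). Any linter that returns a non-zero exit code on violations can act as the gate in exactly the same way.

```yaml
# Sketch: a linting gate as part of the build. Checkstyle is just an example;
# the check goal fails the build when the code doesn't meet the configured standards.
steps:
  - script: mvn -B checkstyle:check
    displayName: Linting standards quality gate
```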
Again, the most obvious one, you want to have something like 100% successful
completion of all tests with a 90% code coverage.
Obviously this is primarily focused on unit testing because when
it comes to later sort of tests, you can't do the code coverage
comparison. But again, you want to
make sure that all
your unit tests pass. Why are we releasing code into
later cycles if tests are failing? Why are we doing
that? Obviously there might be times when we know something's going to fail,
and those would be the odd sort of things that might happen from time
to time when software is still very early in development,
where we are aware that some things might not completely pass
because there is something that we haven't quite figured out yet.
But I would argue that that's very seldom the case early
on in your software development phase. It might get there, but as your software matures,
any software that's been around for a couple of months, you should already have 100%
successful unit test completion as a standard, because by that point in time
you should have built something that you know how to test effectively. So very important to
have in place. And 90% code coverage I've spoken about before
is a really good target to hit and very doable if you've designed your software
correctly. Obviously you want to make sure your scans all
pass and that you've actually covered all of your code within
your scans. And then something like a successful pass of all automated checks,
something that you want to make sure is done. And that's a quality check
that you can have in place when all of your automated tests are run at
every level, component integration, whatever you might have your UI API
tests, that's all passed. And again, we're quite strict on those criteria
that it's got to pass. Something you can build in place,
you don't have to do the 100%, you can do 90%, 80%.
Again, you can tailor this because you know your software product,
but obviously you want to aim as high as possible to be able to get
the right quality code out. So definitely something
you'd want to have in place. And so those are just some examples of things
that you can check. Again, not an exhaustive list, but if you understand the
whats of what needs to be tested, you can probably think of some good examples
of checks that you can put in place to build into each of those
quality gates. And you can put multiple checks into one quality
gate. For instance, you can have, while your code is building,
you can run some scans, do your linting,
and check your test results of your unit tests, and you can build all three
into one quality gate. And each of those checks can fail your quality
gate, or two out of three can fail it, whatever you want your standard to
be. But you can put those checks in place in each quality gate
to make sure that within this quality gate, we're going to measure these three things.
They're automated and it all comes together
in one place. So those are some good examples. I've put here
a high level example of a typical CI CD pipeline.
Or when I say typical, it might not be typical to every organization,
but something that shows
the typical process of things. And this
is one that I've used before. This is something where they're using maven
to be able to set up
their software. And so you can see as it does the setup and checkout,
the code has got to be checked out correctly and the code has to set
up correctly. Before it even goes to building the code, they've got a quality
check in place to make sure that we've checked out the code
correctly. We're happy with the setup of
the system before we move on. You then build your code.
It's got to build correctly. While it's building, you're running your unit tests; you might
be running some other CI tests that you have in place.
You're then going into static code analysis where you're looking at
your security scans. Maybe there's some sort of quality criteria that it's picking
up, or any sort of dependencies, where it's looking at other dependencies and how those
dependencies are covered within your code. All of that can be achieved through
static code analysis. And again, those are all check marks, things that you
can put an actual automated check against before
you can then deploy into, let's say, some sort of broader QA environment,
some sort of bigger environment where we can now run
a bigger set of tests where we're not just isolating everything into just this code
build, but we're now actually executing into some
broader environment. It doesn't have to be a fixed environment, it can be another
sort of CI environment. But it's not just that isolated code. It's a
series of containers that are maybe now spinning up and testing the code in a
bigger sort of way. But yeah, we're now deploying into some
sort of test environment where we're now going to do some
post deployment checks. So again, we're not just deploying the code and saying,
okay, let's run our tests. Let's actually go
back and make sure that the software is up. So whether we have
a smoke test or some sort of lightweight integration test, whether we have some sort
of monitoring in place, whatever it might be, but let's utilize
those smoke tests to say, well, actually, before we do anything further, we've deployed
the software. Is everything up and running? Are we happy
that the services are speaking to each other and that they're communicating effectively?
Can we do one or two lightweight tests with them?
Yes. Okay, great. Now let's run the rest of the test.
If we get a failure there, don't bother running the rest of the test.
Yes, you might think, but we still want to test everything else to see what
other failures are there. That's important sometimes,
but you'd rather just want to stop it there,
because a failure here can give you mixed
or often inaccurate test results further down the line anyway, because something's not quite
right. So rather fix it there and put quality checks in place.
And these are things that can be checked and measured. Your functional tests,
obviously testing locally, remote, whatever it might be, those are
things that can be measured. You can have your dynamic code analysis, your quality scan
that can be checked. You can then repeat a deployment to the
next stage and do post deployment checks at
the next level. Integration and smoke.
And again, that can all be checked. Those are checkboxes that can be checked before
getting into your non functional test. And those are really things around
accessibility. There are a lot of tools these days that can
do accessibility, that can check your security from a
dynamic perspective, your load tests and your performance tests.
I wouldn't typically put a load
test in a CI CD pipeline because you generally don't want to
put your system under load. So I've put
it here because I think it's important to understand that that's a quality control that
can be automated and a check. But I would only run
a load test on a very
limited basis. But it's important you can still build it into your pipeline,
but only run when needed. When you're making big changes, then you can maybe run
a bigger sort of load test where you actually put in your system under load
because a system under load will be stressed. It may
impact other operations,
because the system is under load and other things might not be working correctly, if
you've got multiple teams trying to push code at the same time. So it's not something you'd want
happening all the time, but there might be times when you're
working on a big feature before you push it into production. You can run
a load test and actually see how it loads and that can be automated.
And you can have a checkbox in place where your CI pipeline actually runs
it and then actually checks results and determines whether it's happy with it or not
based on the criteria you've set. And then same with your performance test. You can
measure performance at every aspect. I put performance test at the end
here, but you can even performance test much
earlier if you know how long it takes your unit test to be able to
run. And you've got sort of benchmarks in place for how long code should take
to execute and speak to each other. You can move performance tests even earlier
into the cycle. But important to note that all of these things can be
checked and all of these things can have an automated measure where
it's like, well, we actually know what pass means or what we consider
pass. And again, it doesn't have to be 100% pass. With your performance test,
you got your benchmarks in place. Are we happy that it met all the benchmarks?
And you can even prioritize parts where it's
absolutely essential. It's got to meet the benchmarks and parts where maybe it doesn't,
but you can prioritize that and set that up, and you can build a quality
gate that measures that.
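As an example of that kind of lightweight, benchmarked performance gate, a tool like k6 (used here purely as an illustration) lets you define pass and fail thresholds inside the test script, so the run exits with a non-zero code, and the pipeline stops, whenever a benchmark is missed. The script name and numbers below are placeholders.

```yaml
# Sketch: a lightweight performance gate. k6 is one example of a tool whose
# thresholds (defined in perf-smoke.js, a hypothetical script, e.g. "95th
# percentile response time under 500ms") make the run exit non-zero when missed.
steps:
  - script: k6 run --vus 5 --duration 30s perf-smoke.js
    displayName: Lightweight performance quality gate
```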
Here's a broader thing, if you want to just understand a little bit more in
terms of specific things that maybe you want to do and then introduce.
So this is again, looking at something a little bit bigger where maybe this
is not your typical sort of CI CD pipeline, but your quality
gates that you have in control, that you have in place across your entire
project to get you thinking about other things in terms of what are the
other sort of measures and things that we can check.
A quality gate around something like
analysis might not be something that we typically want automated, but that would be a
quality gate that we would have before we start going into the code development,
where we start getting into that whole sort of automation phase, before we then
get into our operations, where even things like incident management,
problem management and your acceptance testing
can be automated. Your user testing you wouldn't typically automate,
but it just shows you how quality gates
can work through every little process of the cycle. And they don't have
to necessarily follow the sequential thing, particularly in most sort
of software delivery cycles your testing might be interspersed
with different aspects of your code, but it's important to
have these things in place and know that you can use them. Things like mutation
testing can help. Things like your configuration
testing, all of these things can be automated. So I would say that whole range
from quality gate two to quality gate
six is really what you'd run in your pipeline.
And those are the things that you can automate. Some of the stuff in quality
gate seven and eight can be automated as well, but those are things that we
just want to make sure that your software is still going, and you can monitor
and build that in place. Those are things that you can still have, but it's important
to note that they don't necessarily need to be part of your pipeline and don't
necessarily need to be automated. But your quality gates two to six, and again,
it doesn't have to be just those, you can expand those to, like I have in the previous
slide, a lot more quality gates. But if you understand the different sort of stages
of typically how it would work, important to understand that those
things are there. And again, these are iterative things. You might have multiple different stages
where you can then iterate those processes and have a quality gates for each sort
of stage in that process, and very important to have.
So hopefully that gives you sort of a newfound respect for
the types of quality gates that you can have and the things that you can
actually check within your software to ensure that it actually meets the criteria of what
we need. Now, when it comes to building these things, again, there are multiple ways of being able to build this. This is an example of how you would typically want to build something when it comes to ADO, Azure DevOps.
But you can try something similar whether you're using GitLab, Jenkins or any one of the many other CI tools out there. These are the things that you typically want to do: you'd have your board, your developers have got to have some code sitting somewhere, and the moment your repo gets updated and someone pushes code to your git repo, it triggers something that can then start the CI build process. And that's sort of how you would do it. And within every step of that process, you can build checks in place that can run and check everything.
And so that's typically how you would go about looking to build your quality gates in the first place, to be able to make sure that it does what it should. And that trigger starts at the pushing-code level: the moment git has an update and someone's pushed code in, you can start triggering things the way you want them to run.
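Just as an illustrative sketch, and not the exact pipeline from my slides, that kind of trigger in Azure DevOps-style YAML could look something like this; the branch name is an assumption you'd adapt to your own repo.

```yaml
# Minimal sketch: a push to the main branch kicks off the CI build,
# and every later step in the pipeline can carry its own quality-gate checks.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: echo "Repo updated - starting CI build and quality gates"
    displayName: 'CI triggered by push'
```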
And then you can run through your
CI before you then start doing your CD
where you're actually taking things through some more automated checks. Your CI is just building your code correctly; your CD is actually deploying it and running a deeper level of tests against it. And you
can do that against multiple different environments. So important to have that in place
and understand that that's kind of the approach that you take to building
your quality gates. But let's go into some examples because
you might be thinking well how do we actually build this in our project?
It's great having these quality gates in place, but we're not always so sure what to do. Here's an example of some code that
can maybe help. And again, these are just high-level examples. You should find what works for you, but all of these examples can actually be written within your code, within your CI CD system, to be able to use them. This is all done within YAML for these particular things, and you can have a variety of scripts that need to run. But whatever it might be, whatever it is that you need to do, these are just some examples of things that you can do. So look at them and try to figure out what will work best for you. But a pre
deploy test is where you're actually running a job, looking at your server and your database, and you want to make sure that everything checks out. And you can run some quick bash scripts to just check that everything's up: you're checking that your db is up and that your server is up before you deploy anything. You can do a pre-deploy test, and then you can do a post-deploy test where, okay great, now we've done this and I want to run some smoke tests to make sure that everything's up. And in this particular example we're using Bash, where we're actually just going to run some shell scripts that can check it for us. But you can also run some more specific
examples. You don't have to use shell scripts. You can run some actual tests that you've scripted and just call those functions. You can put this within your actual YAML code to initiate this quality gate. And so that's really what you'd want to use when it comes to building your quality gates: use your YAML, create your task within YAML, and be able to define what it is that needs to be run. And then within that task you can determine the things that need to get done. But that doesn't mean that you always just need to run it.
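Just to make those pre- and post-deploy checks a bit more concrete, here's a rough sketch of how they could look. DB_HOST, SERVER_URL, deploy.sh and smoke-tests.sh are all placeholders I'm assuming for the example, and I'm assuming a Postgres database, so adapt them to your own setup.

```yaml
# Hypothetical pre- and post-deploy quality gates wrapped around a deployment step.
steps:
  - script: |
      # pre-deploy gate: is the database reachable and the server healthy?
      # (pg_isready assumes a Postgres database; swap in your own check)
      pg_isready -h "$DB_HOST" || exit 1
      curl --fail --silent "$SERVER_URL/health" || exit 1
    displayName: 'Pre-deploy checks'

  - script: ./deploy.sh
    displayName: 'Deploy'

  - script: |
      # post-deploy gate: quick smoke tests against the deployed environment
      ./smoke-tests.sh "$SERVER_URL"
    displayName: 'Post-deploy smoke tests'
```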
And then how do you know that it's passed and how do you put certain
criteria in place? And here's an example. And again,
I'm going to share these slides so you can have a look at the slides,
download them if you want to have a look at this code in more detail.
But this is, again, just a quick snapshot of what the code would look like within your system. But here we go, for instance, where we now want to go
and we want to be able to check things and we can utilize some tools.
We want to be able to execute some code. So for instance, we're using in
this particular example, Cobertura, as a code coverage tool where
we want to now go and actually measure these things. And we're running a task
to be able to run a whole bunch of different jobs. And we can
set a code coverage target of 90. So there is
something where you can actually set a target where it's got to hit. And so
we know that this code coverage target is an output that comes
from our tooling. And now we're saying, okay, great, well,
that output has got to hit 90. So most tooling will return a result. Even when you're executing your tests, they should be pushing some sort of result back, and you can set what that target needs to be. So in this case, we're saying the code coverage target needs to be 90%. And in this case all the tests need to pass, and then the code coverage needs to be 90% across whatever has passed. So we can specify a code coverage target of 90, and that the actual pass rate needs to be a full 100: you just go and create another line in there and specify that it needs to pass 100%. And that helps you with that.
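As one hedged way of expressing that kind of gate, here's a sketch. The run-tests.sh command, the coverage.cobertura.xml report path and the way the line-rate is parsed are all assumptions you'd adapt to your own tooling, but the idea of comparing the tool's output against a target stays the same.

```yaml
# Hypothetical quality gate: every test must pass and the Cobertura line
# coverage must reach the 90% target before the pipeline is allowed to continue.
steps:
  - script: ./run-tests.sh   # placeholder test command; a non-zero exit fails the gate
    displayName: 'Run tests (100% pass required)'

  - script: |
      # read the overall line-rate attribute from the Cobertura XML report
      rate=$(grep -o 'line-rate="[0-9.]*"' coverage.cobertura.xml | head -1 | cut -d'"' -f2)
      target=0.90
      echo "Line coverage: $rate (target: $target)"
      # fail the step, and therefore the gate, if coverage is below the target
      awk -v r="$rate" -v t="$target" 'BEGIN { if (r + 0 < t + 0) exit 1 }'
    displayName: 'Quality gate: code coverage >= 90%'
```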
And then you can publish those results and store them in a location so that we can still find them later. And all of this, within your CI CD tooling, you should be able to then go and view, and that helps us. And you can go and check it there. And then the same with just ensuring
your scans. You can actually go and check scans and see how successful your scans were. And so this is just another example of
things that we can do. When we're running any sort of scan policy, we can
actually go and have a look at the different sort of critical, high, medium,
low sort of risks that are raised and we can actually set some
standards in place and say, well,
do these scans bring me results that I'm happy with?
And am I happy that it's met the criteria that we want? And you can
use those variables and put in a variable to say that I'm
happy with what's been achieved and I'm happy
that there's no major severities.
And so therefore our code is safe. But if any major severities get picked up, we can stop the build right there and say, this is not a safe build, let's stop. So these are great things that we can do and put in place. And so this is some code that you can think of.
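As a small illustrative sketch of that severity check, something like the following could work. The scan-report.json name and its JSON shape are assumptions; your scanner's output format will differ.

```yaml
# Hypothetical quality gate: block the build when the security scan reports
# any critical or high severity findings.
steps:
  - script: |
      # count the critical/high findings in the scan report (report format is assumed)
      findings=$(jq '[.vulnerabilities[] | select(.severity == "critical" or .severity == "high")] | length' scan-report.json)
      echo "Critical/high findings: $findings"
      if [ "$findings" -gt 0 ]; then
        echo "Major severities found - this is not a safe build, stopping here"
        exit 1
      fi
    displayName: 'Quality gate: security scan severities'
```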
And again, these are things that you can play with. Have a look and see
how you could utilize it in your own space with your own tooling. It's just
an example of some code that you can write that can start building quality
gates within your CI CD tooling. It's a lot easier to do it when it comes to test results, where you can literally go and make sure that all the tests need to pass, and you can set that for any sort of thing. The process has to run completely, particularly when you're running your integration or end-to-end tests: you can set it up so that the whole thing needs to run, and a single failure will throw it and it won't deploy any further. These are things that you can put in place, but if you utilize something like YAML within your pipelines, it's very easy to write a YAML script that can add that step within that phase and build in that quality gate. So from a coding perspective, it's actually quite easy to do and
to build these quality gates. It's getting the processes and the actual systems around them correct and in place, so that you can make use of them properly and build in the effective measures that let us actually test our software more effectively. But yeah, very important. So those are some code examples for that.
And then the last thing I wanted to just talk about is observability.
Because I think with anything CI CD related, it's not just about it
passing through and moving on to the next step and the next step. It really becomes about us being able to make use of our software being observable. And if we don't have observability
in our CI CD pipelines, we are losing out on a lot of the value it can give us. So it's not just about it running through the process and then we just trust it. There's a lot of observability and things that need to happen, and things that we can track, to then also be able to build around our CI CD tooling in a way that gives us better quality. Things like
collecting data from multiple sources. And how do we collect that data?
Because your CI CD pipelines are typically running
and deploying a system, and then you've got your unit tests that are running and
then it will push that output somewhere. You might typically have
your integration test or your end to end test written by your
testing team, maybe outputting somewhere else in a different location.
Then you have your scan results pushing to another location.
You then have things like performance tests,
any sort of things where we're actually monitoring what's going on with
our sort of system, our server, what is
our server operation like? What's our CPU
usage while this thing is running? We can check all of that stuff if we
wanted to. All of that will then be stored in another sort of place. And so what happens is that there's a whole bunch of data that goes all over the place. And I think the biggest problem is that we have the CI CD tooling in place and then we allow that data to sit all over the place.
And yes, we can build quality gates around that which will stop and prevent it from moving forward. But we're not digging deeper into the software and understanding how it all works. And so the best thing to do is to take all the data, collect it from multiple sources and store it in a central location. And you can build APIs to go and automate that data gathering, where it can go and put everything in one location: this is our CI CD information that we need, and it's all in one location. And the reason why that's
important is because then we can use logging correctly.
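Just to sketch what that central gathering could look like, here's a rough, hypothetical step that ships a run's outputs to one place. The RESULTS_API_URL endpoint, the RUN_ID variable and the file paths are all assumptions, purely to show the idea.

```yaml
# Hypothetical step: ship this run's key outputs to a central store so that
# unit test results, coverage and scan reports all end up in one location.
steps:
  - script: |
      curl --fail -X POST "$RESULTS_API_URL/runs/$RUN_ID" \
        -F "unit_tests=@results/unit-tests.xml" \
        -F "coverage=@coverage.cobertura.xml" \
        -F "scan=@scan-report.json"
    displayName: 'Publish run data to central store'
```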
So there's failures during the process and things are going on.
But now we have logging and everything centralized and we can
now monitor this database and it's very
easy to now go, okay, this is the log, this is the issue. And then
we can go and actually delve deeper into the issue because our quality
gates will have failures and we're going to have a lot of failures during our
process. But what actually went wrong with the software and the application? If it's going to take you ages to figure out what went wrong with your application, we're not speeding up the process in any way. So again,
logging and monitoring become important, because then you're able to actually go in and see what was logged in the software. Now, because you've pulled everything together in a centralized way, you can follow: okay, well, this was logged here, this is what happened at the next phase, and it failed this quality gate here because of that. We can see what went on in the code at that level and we
can pull everything together and we can use logging to delve deeper, but we can also monitor the health of that. And so you can have triggers and alerting in place that can alert you when quality gates have failed and tell you why. And you can also put dashboards in place that can maybe showcase that
system health wasn't great there, or this test was failing, or this particular scan
kept on failing. And you can also track the lifecycle over time.
So your CI CD pipelines ran a thousand times this
month. And you can start saying, well, this quality
gate failed more often than any other quality gate. Why?
And you can also start to use your quality gates to get even better
where, okay, maybe your quality gate standards are too strict,
but also maybe we're just not doing something right and we need to change
the way that we're working and improve the way that we're building our software or
our processes in place so that we can hit that quality gate
more often. And so again, if you have all that data in one place and you've got the right sort of visualization,
you can see those things. And again, it helps you to build better quality
software as a result of it. Obviously, keep track of your data retention policies while you're doing this. That's important. I sort of put that point in because whenever you're working with data, you need to be aware of data retention policies. You don't want to store this information all
the time. You don't want to have years and years and years worth of CI
CD information. It's not worth your while. You often only want to track certain things. Some you might want to keep around for a while, but typically three to six months is enough for anything CI CD related; after that you can start flushing it out. And you don't want to store stuff, particularly if you're moving quite quickly with your development. You don't want to store stuff that's been around for a long time and then try to track why something happened six months ago in your CI CD tooling. You might not need that information, so get rid of it. But yeah,
I think an important part is to then continuously monitor and optimize. So it's not just about putting your quality gates in place and leaving them there. As with anything software related, continuously monitor the effectiveness of your quality gates and optimize.
See how you can do better, see how you can make your CI CD process
better, see how you can make your quality gates better, see how you can
design your software and change your software to better meet your quality gates. All of
that's important and I think that's why I wanted to talk about observability
before closing out the talk, because if we don't have some level of observability, we're not really understanding what's going on with our software and we can't improve it properly. So it's a very important thing to have in place: observability when it comes to your CI CD pipelines and your quality gates. And if you have that in place, you can start utilizing your quality gates to be able to build better software. So I've covered quite a lot in this talk, and
I really hope that if you've listened to this talk, you've really enjoyed it and learned a lot. If you do have any questions, please feel free to contact me in the Discord channel on the conference and we can talk about these ideas a little bit more. I'm really keen to hear your feedback and maybe answer some of your questions, and maybe some challenges that you're having in your space around quality gates or CI CD pipelines, or how to change your testing or your software design to better achieve this within your software, or any other sort of coding related questions. If you're not so sure how to build a quality gate, let's talk about that. But I look forward to hearing more about it from everyone. And thank
you so much for listening to this talk. I've really enjoyed being able to present
this and I really hope that it's going to be helpful to
many of you to be able to start making changes in your own
software delivery process by putting quality gates in place
that can deliver real results and allow you to deliver software that really is of a great quality. Thank you so much for listening.