Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi there. Thank you for joining my session.
Today we will talk about machine learning and security.
So the title of my talk is hacking and securing machine learning environments
and systems. So, to introduce myself, I am Joshua Arvin Lat,
and I am the chief technology officer of NuWorks
Interactive Labs. I'm also an AWS
machine learning hero, and I am the author of the book Machine
Learning with Amazon SageMaker Cookbook. So if you're interested
in doing some hands on work on performing machine learning experiments
and deployments in the AWS cloud, then this may
be the book for you. So, enough about
myself and enough about my book. Let's now go straight into
talking about machine learning and security. So let's start first
with machine learning and the machine learning process.
Let's say that we want to build and deploy
an image classification model. It's a machine learning model where
we pass in, let's say, an image of
a cat to the code. And what you want your model
to do is to check if
that image is a cat or not a cat.
So if it is a cat, the output should be one.
If it's not a cat, then the output would be a zero.
So your machine learning model is able to perform
some intelligent task that it's programmed to
do. So there are other applications of machine learning, let's say
sentiment analysis. Let's say you want to perform forecasting,
you want to perform some regression and classification,
then you would be able to do that with machine learning.
So in order to build models, you would of course, need data.
And in order to be able to perform and build that
model, you have to follow a process called the
machine learning process. So you can see in the screen a
simplified version of this process. And again,
you would need to start with data. So most of the time,
in order to have a model, you would need to have data in
order to train that model, especially for supervised classification
requirements. So here we can see that you
start with data collection. Next, you prepare and clean
the data, you visualize it, and then you perform feature engineering
in order to prepare your
data for model training. Once your
data is ready for model training,
you use certain algorithms and you
perform and provide specific parameters
and hyperparameters, to train and tune your model. So you
would produce a model which can then be
used to perform predictions or inference.
So at this point, now that you have a model, if you have
new data or a new set of records,
you can now use those records as input, and your model
can be used to perform specific predictions.
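The train-then-predict flow described above can be sketched in a few lines. This is an illustrative sketch only; the talk does not prescribe scikit-learn or any specific library, and the synthetic data stands in for a real collected data set.

```python
# Minimal sketch of the data -> train -> predict -> evaluate flow.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Data collection / preparation (synthetic stand-in for real data)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Model training with specific parameters and hyperparameters
model = LogisticRegression(C=1.0, max_iter=200)
model.fit(X_train, y_train)

# 3. Inference on new records, then model evaluation
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
```

Once the accuracy (or another evaluation metric) looks acceptable, the model artifact is what gets deployed in the later steps of the process.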
So after you have a model, you need to perform model evaluation,
because the goal of your model is to be as accurate as possible.
Let's say you have an image of a dog. You don't want your model to
tell you that it's a cat. So the goal is for your model to
make as many correct answers as possible,
and that's the goal of model evaluation. So once you are happy
with your model, you now deploy it. So that's the basic
machine learning process. That's the simplified machine learning process.
And in real life, you would encounter requirements where you
will have to perform redeployments, meaning that
you have an existing model in place, and then given
a new data set or an updated data set,
you would have to train a new model, compare it
with the previous one, and then you can replace it and ensure
that there's zero downtime. So, as you can see,
this entire process involves both machine learning and engineering.
That's why there's something called machine learning engineering,
where you have to take care of the infrastructure, you have
to take care of the application in order to deploy
and host and manage your models in production.
In order to make the lives of data scientists and machine
learning engineers easier, a lot of machine learning
practitioners make use of specific frameworks and
platforms and services in order to automate certain
parts of the work. For example, instead of building
your own custom solution using
custom scripts, custom formulas,
we can basically proceed with using scripts
or even services which can help automate the work for us. So as
you can see in the screen, this is an example of how a service like
SageMaker is used to compute certain metrics
like the class imbalance scores, DPPL, and
treatment equality, so that we can easily analyze
and understand our data and the model. So this is
very helpful when it comes to dealing with requirements
around fairness. And you can technically use SageMaker also to
analyze models and how they behave,
especially when you're dealing with, let's say, neural networks. If it's hard
to understand how your model works, maybe you can check its behavior
through different metrics and through different formulas.
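One of the metrics mentioned above, the class imbalance score, can also be computed by hand. SageMaker Clarify documents it as (n_a - n_d) / (n_a + n_d), where n_a and n_d are the counts of the advantaged and disadvantaged groups; a small stand-alone version, not taken from the talk:

```python
# Stand-alone computation of a class imbalance (CI) score, one of the
# fairness metrics mentioned above. A value near 0 means the two groups
# are balanced; values near +1 or -1 indicate strong imbalance.
def class_imbalance(n_advantaged: int, n_disadvantaged: int) -> float:
    total = n_advantaged + n_disadvantaged
    if total == 0:
        raise ValueError("at least one sample is required")
    return (n_advantaged - n_disadvantaged) / total

# Example: 900 samples in one group, 100 in the other
score = class_imbalance(900, 100)
```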
Now that we have a better understanding of the machine
learning process and the different approaches,
especially with the usage of services and platforms,
let's proceed with talking about security. When thinking
about security, usually people think of
the best practices. People think that when they follow the best
practices, they think that their application or their infrastructure
would be secure. Unfortunately, this is not the case,
because in order to know if your system is secure, you have to
test it. This is very similar to load
testing. If your client asks you, can your
system support 100 clients
or users at the same time? You cannot just say,
yes, it can support it without actually testing it.
So what we want to do here is if we want to test if something
is working or not, then we literally have to
use a certain set of tools and processes to test
and validate our own assumptions. So in the case of load
testing, we use a load testing tool. If we're talking about
security testing, we use the appropriate set of tools.
We may use the security scanners, we may use certain tools
to assess and check different vulnerabilities of
our application and infrastructure. And that's basically part of the
tool chain and toolkit needed to assess
your environments. At the same time,
it's important that we keep in mind that we are trying
to protect our system from attackers.
So we have to assume the role and
mindset and approach of these hackers, as this is
for them a game, this is for them a puzzle, and they
will be given a set of resources to attack,
and they will follow a certain set of protocols
and processes in order to understand the environment and perform
a sequence of attacks. For example,
if they're given, let's say, five servers, the first thing that they'll
check is: what's visible to me?
What can I easily access? Is this system configured
properly? Which systems can I attack right away?
If a hacker is able to attack a certain server,
if that hacker is able to, let's say,
get access by exploiting a
certain set of vulnerabilities, would they be
able to perform privilege escalation next? And so
on. So after exploiting one
of the vulnerabilities and having access to the servers,
they may try to use that server to access the other
servers in the infrastructure. So once they are
able to reach the inner ones,
the servers which have direct connection to the databases,
then that's the time they will try to extract the data.
They might try to steal the passwords, or they might try to extract
everything from that server and send it to
their own machines. And that's where they will use
other tools to basically maybe
get the passwords or maybe use the data
to attack other systems. Finally, they may
use the other servers they have already compromised to attack
other servers as well. So again, there are a lot of things which
can be performed by attackers. So from your
point of view, if you want to secure something, you have to
understand how your systems can be attacked. At the same time
you have to assume that this will be a
combination of attacks. Some attacks may be performed
and directed towards the human entities,
meaning the employees. So if an employee gets attacked,
maybe one of your colleagues may receive an
email, and then when that person opens a
link in the email, then his or her machine would
be compromised already. So that machine
may have access to the internal network, and that's where the attack may
happen. So again, there are different areas and different
things an attacker will do. So it's better that you are prepared
on what's possible whenever you're trying
to protect your system.
So a lot of companies know this, but in real life,
in practice, this is usually not the case, as a lot of companies
often deprioritize security. For one thing,
they would prioritize the short term financial objectives and the
long term financial objectives, because there's nothing to secure
if there's no business in the first place, right? That's how people
think. At the same time, it's already hard
to keep the client and the customers happy.
So in terms of the ratio and the focus areas,
your company may spend or may have 90% or
95% of the entire team
focused on the first two or three
objectives here, right? Short term financial objectives,
long term financial objectives, and client and customer happiness,
because that alone is already hard.
So if we add more things to this list, of course that would be deprioritized,
and that's what's happening usually to compliance, to security,
even to the operational excellence part.
A lot of these things get deprioritized and it's our duty to
remind everyone how important security is. Because if
your data gets hacked, if your systems get attacked
and your data is stolen, then there's a big chance that
your company might close. Because of course,
if the passwords and the credentials are stolen and
your customers get affected, then customers would lose trust
in your company. So be very careful about this
topic, because security is a complex field and
it's super hard to secure systems.
That said, you need to have the right mindset and the right strategies
when securing ML systems and environments.
So when talking about machine learning and machine learning engineering projects,
we have to understand what's available to us,
we have to understand what hackers would want.
So here we can see in the screen that we
have control of the custom implementation aspect.
If we have custom code, if we have custom scripts, then yes,
that's in our control, that's under our control, that's under our
responsibility. We have to make sure that everything we put inside
that platform or service is properly checked
and secured. At the same time, we have to ensure that the
service or tool that we're using is properly configured. A lot of attacks
are performed on misconfigured servers or systems,
so it is important that we properly secure this as well.
So the ML service that we will use,
or if we are not using an ML service, that's critical also,
because the type of security strategies that we will implement
depend whether we're using an ML service or not.
So let's say that we're using tensorflow or
PyTorch or MXNet on top of a server, like an EC2
instance. Then we're making use of an open
source framework to run training experiments
and deployments inside a server we can control.
So the hardware is managed by AWS,
so we don't need to worry about that, because that's being taken
care of by AWS themselves, at least when you're
running machine learning experiments on AWS. So what about the
operating system and the applications inside
that server? We need to take care of that, because that's under
our control. So there's something called a shared responsibility model,
and that model should be similar across
all the other different cloud platforms. So make
sure you know which ones you need to take care of. And you also
need to make sure that the different services and tools
are configured properly.
So now it's time we think like a hacker.
All right, so let's try to do a quick simulation,
and let's say that we have this network. So there's
a public subnet, and then there's a private subnet.
In the public subnet, we have the web servers.
And those web servers are accessible globally.
So the outside world can access everything inside the green
region. And in the blue region, the private
subnet, we have the database servers.
So of course, since the data should be protected,
those are stored inside the private subnet.
So if you were a hacker, of course, you cannot directly attack the ones in
the private subnet. You would probably have to look for the
servers in the public subnet first. So, as you can see,
we have already highlighted and put a box and tagged it
as high risk because anyone can access that,
including the attackers. They will try to run different scripts,
they will use different tools to assess the security of the server and
applications running there, and then they will perform an
attack. If they are able to compromise that, that server will
be used to attack the other servers. So what you need to
do is to add multiple layers of security,
to secure the different aspects of your application,
and you need to prioritize what's highlighted and
tagged as high risk. So that's the reason why the green
area, the public subnet, is tagged
and sometimes labeled as a DMZ, a demilitarized
zone. So you need to protect everything in that DMZ.
So now let's try to implement the same set of concepts
in our machine learning process and in our machine learning
environments. So here are some
of the applications and services and tools a machine
learning practitioner would use. Of course, if you're a data scientist or a
machine learning engineer, you may be using Jupyter notebooks.
And that Jupyter notebook will be running on infrastructure
such as servers or instances or
virtual machines. In some cases,
you will have an ML service to help you manage running
these notebook instances for you. So on top of
that Jupyter notebook, you will be running custom code and you will download
the data sets. So in terms of the color
convention here, you will see that the ones in white are the ones you
can control. And if you're not able to manage these things
properly, then hackers may be able to take advantage
of that and use it to compromise your system.
So how would that work? If your infrastructure is something that
is valuable to hackers, maybe they can run some bitcoin
mining stuff there. They may try to write some
malicious code or write some malicious scripts
inside some of the packages and dependencies that you may use.
So when you run your scripts, suddenly the
hacker's malicious code would run as well. And what will happen
is your hacker would be able to run
things inside your infrastructure, and out of nowhere,
the entire infrastructure has been compromised already.
So how are hackers able to do this? They are able
to do this by adding some
payload, which, let's say, opens a port.
So if there's a server, maybe before the
script is executed, let's say port 4000 is
closed. So when you run your script,
the malicious payload runs as well.
And then suddenly port 4000 is open.
So your servers, port 4000 has been opened
by the malicious payload. Now your hacker,
since they know that that port is open, they will now
connect to your server,
and they will be able to run commands as
if it were their own machine or computer.
So if your hacker wants to download scripts,
if your hacker wants to navigate through the files, if your hacker wants to
steal the passwords and the data, then that is now possible.
And if the machine that you have has excessive
permissions, then it may be able to do other things like delete
resources. It may be able to create resources and so on.
So be very careful because usually the cloud environments have
something called roles, which are
security entities that allow them to perform
operations without API keys. So be very
careful about that, because hackers may be able to
use them either for privilege escalation or for performing
malicious actions.
So this is one good example. So let's say that
you decided to use a pre built
model prepared by someone else, someone that
is outside your company.
So that means that instead of using your own
data to train a model, you would be making use of a model
trained by someone else. So that all you need to worry about is model deployment.
So when you're writing custom scripts, you would probably use
something like this where you use a library and then
you use the load method or function to load a pre
built model so that you can use that model for prediction or
inference, right? So what if that model had
a malicious payload or an instruction similar to
what I discussed earlier? And then when you run the code
which loads this model, then suddenly your
infrastructure and your application get compromised. So what you
need to do is you have to review each of the libraries that you're using
and how it impacts the security of your system,
especially if there are known vulnerabilities and issues
like this. So make sure to read and look
for the documentation sites because these are usually documented
either here or in the issues or security section.
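To make the earlier point concrete, here is a harmless, self-contained sketch (not from the talk) of how a pickled model file can carry a payload that runs the moment it is loaded, and how the standard library's pickletools can inspect the file without executing it:

```python
# Why loading a pre-built model from an untrusted source is dangerous:
# pickle files can embed instructions that run on load. This payload is
# harmless on purpose; a real one could open a port or exfiltrate data,
# as described in the talk.
import io
import pickle
import pickletools

class MaliciousModel:
    def __reduce__(self):
        # On unpickling, pickle is told to call eval(...) -- i.e. run code.
        return (eval, ("print('payload would run here')",))

payload = pickle.dumps(MaliciousModel())

# Inspect the opcodes *without* executing them. References to eval, exec,
# os.system, subprocess, etc. in the disassembly are red flags.
buffer = io.StringIO()
pickletools.dis(payload, out=buffer)
disassembly = buffer.getvalue()
suspicious = "eval" in disassembly
```

Note that the payload never runs here because `pickle.loads` is never called; calling it on an untrusted file is exactly the mistake the talk warns about.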
So now when you're talking about machine learning and machine learning services,
what you need to do is you need to know
what you're trying to do and what you're trying to prepare.
So when you're running training jobs in the cloud, you may use
a machine learning service which converts input into
output. And the output would be the model
artifacts which can then be loaded later on to
perform predictions. But before you are able to reach that point,
you will need to pass in different parameters. So that
includes the data set, that includes the custom code.
And then in some cases you are given the opportunity to update the
environment and prepare your own custom container image.
So if you were to use a library, let's say the transformers
library, you would be able to use the Hugging Face
stuff inside your training jobs.
And at the same time you would have to add and provide the
configuration parameters. So the one in the
black box, that's something that you usually are
not able to control and manage. So all you need to
worry about would be the inputs and outputs.
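The inputs just described (data set, custom code or container image, and configuration parameters) can be sketched as a training-job request. This is a hypothetical configuration in the shape of a SageMaker CreateTrainingJob call; every name and URI is a placeholder, not something from the talk:

```python
# Hypothetical cloud training-job configuration. The point: the container
# image, the data set location, and the hyperparameters are all inputs you
# control -- and therefore all places where a malicious payload could be
# introduced if you don't review them.
training_job_config = {
    "TrainingJobName": "example-training-job",
    "AlgorithmSpecification": {
        # Review this image carefully if it was built by another entity.
        "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/custom-image:latest",
        "TrainingInputMode": "File",
    },
    "HyperParameters": {"max_depth": "6", "eta": "0.3"},
    "InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/train/",
                }
            },
        }
    ],
    "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
}
```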
So in some cases you might think, hey, I'm going
to use a custom container image, I'll just use something which has been built
by another entity. You have to be very careful,
because if there's a malicious process or a malicious
application or script running inside a container from
that container image, then if you run the training job,
then the training job server might
get compromised. And if the permissions
and roles assigned to that server are a bit
too powerful, then the malicious script may
be able to perform actions like,
let's say deleting different resources in your account or maybe
creating new ones. It may be used to even perform privilege escalation
from an account level. So be very careful because these input
parameters are also the areas
where your hacker might insert a payload or a malicious
script. So when it comes to deployment, there are
different options in different cloud platforms. And what is
important is that the different types of attacks would also differ
depending on where you deploy your model. So now
that we have the model artifacts, now that we have already trained a model,
it's now time that we build and
deploy that model into its own inference endpoint.
So what's an inference endpoint? An inference endpoint is simply
maybe a web API where a model is
hosted. So what your custom code would do if
it's not automated yet by the ML service is your custom
code would load the model, and then when there's a new
request containing the input payload, the loaded model object
would be used to perform a prediction with the input payload
as the input, and then it would return an output back
to the caller. So let's say that your model is an image classification
model. If the input payload contains an image,
then the output should either be a one or a zero. So if it's a
cat, it's a one. If there's no cat there, it's a zero.
So that's basically the goal of your inference endpoint: to
just provide a web service accepting
the input and returning the output as the response.
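The per-request logic of such an inference endpoint can be sketched as below. The model here is a stub so the sketch stays self-contained; in a real endpoint it would be the artifact produced by the training job:

```python
# Minimal sketch of inference-endpoint logic: load a model once, then
# answer each request with 1 (cat) or 0 (not cat).
class StubModel:
    def predict(self, image_bytes: bytes) -> int:
        # Placeholder decision logic standing in for a real classifier.
        return 1 if b"cat" in image_bytes else 0

model = StubModel()  # loaded once, at endpoint start-up

def handle_request(input_payload: bytes) -> dict:
    """Per request: run the prediction, then return the response."""
    prediction = model.predict(input_payload)
    return {"prediction": prediction}
```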
So in some cases an ML service would provide
more flexibility, like allowing you to introduce
a custom container image. So if you are using certain
libraries and packages, this is very helpful. However, you should be
careful because similar to the previous example, you shouldn't use
a container image provided by other entities.
At the same time, your custom container image may include packages
and installed tools which may be vulnerable.
So let's say that at this point, your custom container image
and the libraries installed there may have no known vulnerabilities
yet. After one year, maybe new vulnerabilities
would be discovered. And yeah, your inference endpoint would
be vulnerable to different types of attacks. So make sure
that your custom container images are scanned and checked for
vulnerabilities, weaknesses, and risks.
So at this point you will be asking me, hey, how about machine learning pipelines?
We're running this automated pipeline. And would
the same set of concepts be
usable when it comes to ML pipelines? The answer is
yes, especially if your pipelines involve,
let's say, a controller, and then you have
a different set of resources running the jobs.
So let's say that your training job is running inside a server, and then your
deployment step would of course provision
a dedicated ML inference endpoint. So those would
involve resources, and those are areas for attacks.
So a pipeline simply just automates the entire process,
and you should be protecting the resources involved in
the ML flow and the ML pipelines. At the same time,
you also have to harden and configure the security
settings properly for the tool used to manage
the pipeline. For example, if you're using Kubernetes
and you're using some other open source tool to manage the ML flow
and the ML pipeline,
then you also have to configure and tweak
the different open source tools and the different services that you're using to
manage this entire workflow, because that will be an opportunity
or an area for attack as well.
So now we have talked about a lot of things when it
comes to attacking different ML environments and systems.
Let's now talk about the different solutions available in order to secure
this. So the first practical way to secure
a machine learning environment or system is
to enable network isolation.
So what do we mean by this? So network isolation is
very powerful, especially if you accidentally use
a script or maybe a container image which
has malicious code or a malicious payload,
usually when you're performing training with a specific algorithm.
So you have this training job, and then your training job
would be running a custom set of scripts or maybe a custom
container. Generally that script
or app or container would not need Internet connection.
So if network isolation is enabled, the malicious
code or the malicious payload or scripts would not
be able to connect to the outside world. So it will not
be able to send requests, it will not be able to
transfer your data to some other server, or it will
not be able to download additional malicious scripts.
So this is how powerful network isolation is. Because if
you're trying to train Xgboost, for example, I mean, why would your Xgboost
script download other things, right?
Ideally, it's already self contained.
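In SageMaker's training-job request, this is the EnableNetworkIsolation field; the sketch below shows the relevant settings in that request shape, with all identifiers as placeholders:

```python
# The network isolation idea above as training-job settings. With
# EnableNetworkIsolation on, the training container cannot reach the
# outside world, so a smuggled payload cannot phone home -- while nodes
# in the same training cluster can still talk to each other.
isolated_training_settings = {
    "EnableNetworkIsolation": True,
    # Optionally also pin the job inside your own private subnets:
    "VpcConfig": {
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "Subnets": ["subnet-0123456789abcdef0"],
    },
}
```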
Whenever you're running training jobs, if, let's say,
you're using something like distributed model training,
then instead of training a model in one server, you're going to
use multiple servers instead.
Ideally, network isolation still allows those different servers
to talk to each other while training the model, and then
those servers, that cluster is protected from the outside world.
So yeah, if this is properly configured, then yes,
this would help prevent different types of attacks, especially if
there's a script which tries to connect to the outside world without your
permission. At the same time,
if you have something like a CI CD pipeline,
then make sure that there's the manual approval step whenever
possible. Because if everything is automated and nobody is checking
it manually from time to time, then out of
nowhere maybe your application will already have code
that attacks other users or customers. You do not want everything to be so
automated that you're already forgetting about the manual processes
and audits required to check the stability and security of
your website. So in some cases your website
would be used to attack other customers. For example,
your website would be used to redirect users from your current website
to some other website which has a lot of
viruses or things that will automatically exploit the
browser's weaknesses. So there are a lot of other attacks like that.
And a simple redirect by a malicious payload
can already cause a lot of harm. So make sure that your
CI CD pipelines, if it exists, would have this manual approval
step. In addition to this,
it's better if you can automate vulnerability management.
When talking about vulnerability management, you might probably be thinking of
a security scanner. So you have a web endpoint.
It may be your machine learning inference endpoint. So you run
a scanner there and then your scanner would then list down all the different
vulnerabilities and weaknesses of that endpoint.
If there are misconfigured parameters and so on, your scanner
may be able to detect it. But there may be a better way to
do this. For example, what if you have a vulnerability
assessment tool which not only scans a
system from the outside, but it also scans the system
from the inside. So for example, if you use something like Amazon
Inspector, it would automatically scan the servers
and the container images. So if you were to use a
container image for machine learning training or machine learning deployment,
then if, let's say your custom container image contains
a vulnerability or some other risk, or maybe
a library which has a vulnerability, then your vulnerability
assessment tool would be able to detect that even before it gets
deployed. So that's pretty powerful because a tool
like Amazon Inspector would be able to run automatically
every time there's a change. So if your server changes,
Amazon inspector would run. If a new container
image gets pushed then it would run again. So all you need to do is
check the list of vulnerabilities. Here, you need to process
each one, one at a time, and then you
have to list down the different solutions and remediation steps.
It is important to note that this is something that
you need to spend time on because, for one thing, you
might see 1,000 or 5,000 vulnerabilities.
So you need to sort them first. You need to assess
which ones may be exploitable. And you also have to check
if your application will break if you were to remediate some
of these risks and vulnerabilities.
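The sorting step just described can be sketched as below. The findings are stubbed so the sketch stays self-contained; wiring it to a real source such as boto3's "inspector2" client is an assumption, not something the talk prescribes:

```python
# Triage sketch: sort vulnerability findings by severity so the most
# likely exploitable ones surface first for remediation planning.
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def triage_findings(findings: list) -> list:
    """Sort findings so critical items come first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER.get(f["severity"], 99))

sample_findings = [
    {"title": "Outdated OpenSSL in container image", "severity": "MEDIUM"},
    {"title": "Remote code execution in web framework", "severity": "CRITICAL"},
    {"title": "Verbose error pages", "severity": "LOW"},
]
prioritized = triage_findings(sample_findings)
```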
Next, let's talk about infrastructure as code. So here
we have our infrastructure. Instead of us
trying to deploy things manually, one resource at a time and updating
these resources, we can simply convert
our resources into code.
So what's happening here is that we try to divide our
infrastructure into layers. So the resources, the security
configuration, the network configuration and
so on. And what we want to do here is we want to convert it
into layers of templates. And these templates can be
used to generate automatically different types of environments.
For example, if you want to have a staging environment instead
of manually creating those environments, we use that
template as a reference to automatically
build this environment. So there's of course
a service which converts a template into real resources.
If you need the production environment created or updated,
we use a template as reference as well.
So if you need a new environment for manual penetration testing,
instead of the manual penetration testers attacking
your production environment or even a staging environment that your developers
are using, you can create a dedicated environment
for them. If you need an environment for load testing, then yes,
you can create a dedicated environment for this as well.
Once you're done, you delete it. So again, this is a very powerful
tool. And if you're going to deploy
security configuration updates or upgrades, it's better
to do it this way so that you may be able to roll
out things properly. So let's say that
you have improved the security configuration of your network
infrastructure. Instead of deploying it first in the
production environment, maybe update the template
first and have it used to update the staging environment.
So now if your application is still stable and working, that's the
time you update the production environment and so on.
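The "layers of templates" idea can be sketched as below, in a CloudFormation-style shape kept in Python dict form for illustration; all names and ranges are placeholders, not from the talk:

```python
# Infrastructure-as-code sketch: network and security configuration
# captured as a template, so the same definition can stamp out staging,
# production, or a dedicated pentest environment.
security_layer_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "EnvironmentName": {"Type": "String", "Default": "staging"},
    },
    "Resources": {
        "DatabaseSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Database tier security group",
                "SecurityGroupIngress": [
                    {   # only the app subnet may reach the database port
                        "IpProtocol": "tcp",
                        "FromPort": 5432,
                        "ToPort": 5432,
                        "CidrIp": "10.0.1.0/24",
                    }
                ],
            },
        }
    },
}

def render_for_environment(template: dict, env_name: str) -> dict:
    """Reuse one template for staging, production, or a pentest environment."""
    rendered = dict(template)
    rendered["Parameters"] = {
        "EnvironmentName": {"Type": "String", "Default": env_name},
    }
    return rendered
```

A security configuration change is then made once, in the template, and rolled out environment by environment instead of being hand-edited in production.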
So this is something that you can use, and you can
even create and manage the security resources and IAM
configuration using infrastructure as code concepts.
Next, let's talk about account activity monitoring. It's hard
to prevent attacks, and it's hard
to troubleshoot problems if we're not able
to monitor things properly. For example, let's say
that we have this serverless machine
learning environment and project.
So this machine learning endpoint, this ML inference endpoint, makes use of
an API gateway, a serverless API gateway,
and a lambda function. So a lambda function is
where you can write custom code. And even if you think that
it's serverless, there's no server to attack, it can still be attacked because
your code may be vulnerable to different types of attacks.
So what happens here is that your hacker
may input some payloads and instructions, and they will go
straight into your lambda function. So what you need to
do is you have to ensure that you're able to collect the logs and
you're able to analyze the logs quickly.
If you're able to introduce a tool or
a logging mechanism or system to help process the different logs,
then that would help you detect security attacks earlier,
right? So you need something in place, because it's best
to not assume that your system is secure. It is best to
assume that somebody will always try to attack it from the outside
or from the inside.
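The logging idea can be sketched with a Lambda-style handler. The (event, context) signature follows AWS Lambda's convention; the model call is a stub, so this is an illustration rather than the talk's own code:

```python
# Log every incoming payload before processing, so suspicious inputs can
# be detected later when the logs are analyzed.
import json
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Log the request first -- if an attacker sends a malicious payload,
    # the evidence ends up in the logs for later analysis.
    logger.info("inference request received: %s", json.dumps(event))

    payload = event.get("payload", "")
    prediction = 1 if "cat" in payload else 0  # stub for the real model

    logger.info("inference response: %s", prediction)
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```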
In addition to this, we have to restrict the
IAM permissions. We have to ensure that we limit
what different types of resources are capable of doing. For example,
there are different types of resources, real humans and infrastructure
resources. So from an infrastructure
standpoint, you need to take care of the IAM permissions on
what these resources can do. And what we need to implement
is the principle of least privilege,
because, for example, you have hardened your entire infrastructure. What if
the password of one user is not secure, in the
sense that maybe the password is just 12345678?
If that account has been attacked,
then your entire infrastructure has been compromised because of poor
permission management and poor security guidelines
and enforcement. So you need to ensure this because the weakest
link would always be attacked.
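The principle of least privilege can be sketched as an IAM-style policy, shown here in Python dict form with placeholder ARNs (an illustration, not a policy from the talk):

```python
# Least privilege: instead of granting "s3:*" on "*", the training job's
# role is limited to reading one data bucket and writing model artifacts
# to one output prefix -- nothing else.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-training-data/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-model-artifacts/output/*",
        },
    ],
}

# No statement grants wildcard actions, so a compromised job cannot, for
# example, delete resources or create new ones.
wildcard_statements = [
    s for s in least_privilege_policy["Statement"]
    if any("*" in action for action in s["Action"])
]
```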
At the same time, you need to train the professionals
working in your team, because professionals are
generally trained to build something or to perform their task at hand.
However, you also need to worry about the
last 50%. So what do I mean by the last 50%? The first
50% would be doing what you're supposed to do.
The last 50% would be configuring and properly
using a tool in a production setting, meaning it
needs to be scalable, it needs to be something you can
easily troubleshoot, and it's something that's hardened and ready
for security attacks. So if you're using a
tool, developers would generally just assume that
the thing that they have built in their local machine can be
directly deployed to a production environment. That's completely
wrong, because a production configuration is more
or less super different compared to how it
looks like when you're running it locally. So make sure that you
have specialists and experts in your team to properly configure
these security settings and implementation before
going into production or staging. So before we end
the talk, let's talk about cost. Because we have talked about a
lot of things, a lot of best practice. We have to
divide it into maybe two or three components or two or three buckets.
The first bucket is the infrastructure
cost, the additional infrastructure cost. And the second bucket would
be, of course, the manpower required to work
on these types of security requirements.
For the first bucket, you need to prioritize
the free ones. Sometimes you always think, oh,
we need the firewall, we need to spend this, we need to purchase this subscription
or so on. In some cases, some of the
small tweaks, let's say network isolation,
you may be able to get that for free. A properly configured
IAM setting or IAM configuration implementation
that can be free as well. So list down all the
things you can do for free. And then list down all the
other moves you can do, which may involve additional
fees. For example, additional fees
may be involved when it comes to using another service.
Let's say you're using a service which automatically encrypts data
at rest, or maybe it automatically encrypts it on the fly.
For example, if you were to use that service,
that would add maybe 3% or 4% on top
of the infrastructure cost. But if it makes sense,
then yeah, proceed with that and have it approved.
On the other hand, on the other bucket, when you're talking about manpower,
you have to check on where you will get the resources
focused to take care of the security. And you also need to
make sure that there's a proper ratio. Maybe 80% would
be builders and then 20% would be the maintainers.
So the maintainers may be composed
of the analysts, the ones taking
charge of the processes and the management of
the resources. So you have to follow a certain ratio. And you
would also have to check if you're going to utilize
internal team members or if you're going to work with
other companies to help you, or maybe a combination
of both. So always plan ahead, because once you're able
to identify what you need, it is usually
a long term contract. So that would basically affect
the overall cost. Because if you were to hire a company for
one year just to audit your system, of course,
you have to check, hey, how much is my system contributing
to the final financial numbers in the
first place? Is it worth it? Is it much cheaper to
hire someone or to train someone from the team?
So, yeah, so there are a lot of options. And,
yeah, you have to assess and choose which is the best
collection of solutions, which is best for your team.
So there. So we were able to talk about a lot of things.
We're able to start with discussing how to
attack different scenarios and how to attack the
different components and process of the machine learning process
especially, and how they are applied in a real life setting.
Towards the second half, we were able to talk about the different security measures
because if we're not able to secure our systems properly,
then the hackers and attackers would be able to take advantage of misconfiguration,
and they would be able to steal our data and
use our resources in ways that would
either harm others or would harm us. So thank you
very much. Thank you again for watching my talk and hope you learned something
new. Bye bye.