Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello and welcome to "Covid-19 as a cloud security catalyst." Thank you for joining this session. In the next 40 to 45 minutes, we'll talk about cloud security and how the last year, year and a half impacted the way cloud and security are taken into consideration by organizations that are moving to the cloud.
Before starting to talk about security and the current status, I think we should first talk about where we are now and why security, and cloud security in particular, is so important.
The rules of the game changed in the last 18 months, and organizations needed to scale in a very different way. Just think about retailers. Retailers that had a small online presence realized overnight that, instead of a small shop or a small online revenue stream, they needed to handle almost all of their orders through their online platform. And we're not even talking about logistics and warehouses; let's just talk about infrastructure, about IT, about their IT solutions.
Things were not easy, and cloud was, and still is, a good option. But we need to be aware of the things we have to configure and set up to ensure that the cloud environment is safe, and that in the case of a data breach, or any kind of breach, we can track and know exactly what that person did, what kind of data they accessed and for how long, and maybe even track, after they took the content, where they opened it or whether they resold it. Now, some figures, especially for the last 18 months, because this is what we want to target.
Take into account that the global adoption of digitalization has increased by around 55% in the last seven years. It might not look like much, but in terms of how you do business and how you sell your products and services, it's very important that more and more companies decided to go online.
Until, let's say, February or March 2020, the strategy was slow adoption. But with the pandemic, the lockdowns and far fewer in-person interactions, guess what? 48% of organizations had to accelerate their cloud migration. Some companies were thinking about adopting cloud, but they had been planning it over two or three years. Last year in March, a lot of them decided to do the migration in the current financial year or calendar year. So they had problems. Not only finding the best vendor, the best CSP to run their systems on, but also finding the right people with the right skills in a very short period of time. Some of them had to build their own capabilities and educate their teams very fast, and with everyone working from home that was not an easy job, because it's not the same as in-person classes or in-person training. The third thing they had to do was build their cloud center of excellence. If you consider that the cloud center of excellence in an organization should contain all the best practices, recommendations and blueprints the organization can use to adopt cloud, it was not an easy thing to build in a few months. Of course they might have forgotten some things, they might have rushed. These are concerns we need to be aware of now, because now is a perfect moment in time: part of these organizations are already running in the cloud, and they have enough time and budget to do the consolidation, especially from the security point of view, to review and identify where they could make improvements, and not only that, but also from the operational and management point of view.
It's very important to ensure that you have the right services and systems in place that are able to track and provide you insights into user behavior, into the actions users are taking on the cloud services, and to track as much as possible all the steps each user takes.
Now think about this: 60% of organizations adjusted their cloud cybersecurity posture as a result of the distributed workforce. What does this mean? That a large number of organizations in the last 18 months decided to invest more in their cloud cybersecurity, especially because a lot of their people started to work remotely from home and, at the same time, started to use cloud-native services. And the priorities changed. From 2020 to 2021, cloud security became very important, and organizations started to look more at the platform-as-a-service and software-as-a-service features and services that CSPs offer versus the standard services. This means that the adoption took the next step, and that's the right direction. Now, what was the impact of Covid-19 from the security point of view? How did it change the rules of the game?
Take into account that in the EU alone, the number of cyber attacks increased by 250%. That's huge for only twelve months. And that's crazy, because it means that a lot of organizations were attacked through online tools, online systems and so on, while people were still working from home. Many of them already had problems, because not all organizations accepted working from home or were ready to adopt it for all their employees.
Also, last year the number of large breaches increased by 273%. It means that if in 2019 we usually had one large breach on large systems, now we had around, let's say, three or three and a half, even four in some cases. And this means more effort for the security team and the IT team to ensure that they are patching, fixing and protecting their solutions in the right way.
Now, working from home gave us the opportunity not only to work from home and stay close to our friends and family, but also, when you have some free time, you don't go out anymore for a coffee with your colleague from another floor in front of the office. Maybe you just browse the Internet, reading news, finding out different things, and that means you are more exposed to phishing scams. And yes, working from home increased phishing scams by 45%. It might not sound like a big number, but think about it: 45% of people fell for different phishing scams. And it's very easy.
Just this morning, when I opened my email account, I saw in my spam folder, because from time to time I also check my spam, an email from PayPal saying that my account was blocked because of a transaction I had made this week, and that I needed to log in to PayPal to confirm that I had made it. And that's funny, because two days ago I did make a PayPal payment to somebody, and I used a Revolut card that was locked. When Revolut notified me that I was using PayPal, I unlocked it and completed the payment. It was a very interesting coincidence: the moment I made that payment through PayPal, and I think I hadn't used PayPal for the last two years, the Revolut card being blocked, and in my mind I said, okay, it might block me again in the future when I use PayPal. And guess what? I had in my inbox that mail saying my PayPal account was locked. Very easily, other people like me or like you could fall for it, could click, and their PayPal account might be stolen. So we have the increase in phishing attacks of 350%, and half of the people were falling for different phishing scams.
Talking about security breaches, think about this: just from February to May 2020, around half a million people globally were affected by different security breaches. And I'm not referring to breaches in general; I'm referring to the ones where you are in a video conference, any kind of video conference, and your own data was stolen, your name, your email address, maybe your phone number and so on, and sold on the dark web. How they did that is another story. But keep in mind that every time you join an online video conference, you are sharing some data. You're sharing data with the platform, but sometimes other attendees can also see your data. So a lot of things can be done to get your data, to steal your personal data.
Now, before jumping to security, I also want to share with you some insight into how cloud adoption and cloud workloads increased in the last year. The increase in cloud workloads in different regions is around 70% for APAC, around 65% for EMEA, and around 58% for Japan. That is still a lot of increase in just one year; I would say it's huge and was a big step forward. I also noticed it in the number of cloud projects I'm involved in, which kept getting higher and higher. Now, if I look at different industries where cloud workloads started to increase drastically, 83% was for chemical manufacturers. When I saw this, I was shocked: chemical manufacturing? How come? Because they do a lot of simulations, and simulations need a lot of computation. Also, chemical manufacturers had not yet, let's say, adopted cloud, so this was a good opportunity, more or less forced, to go to cloud. Secondly, as everybody would expect, came retail with 60%. Why only 60%? Because a lot of large retailers already owned large data centers, or were already working with, let's say, private cloud providers or large providers of IT services and IT infrastructure in different countries, so basically they just ramped up there. Also, for insurance, which you could say was indirectly affected by Covid, the increase was around 74%. More and more people were handling their insurance and things like that from home. They were not going to an agent or a branch to talk about insurance options and so on, so insurers also had to adopt cloud.
This is an old story; maybe you are aware of it. It happened before the pandemic, in August 2018, when one of the big players on the market, Veeam, had configured one of their AWS systems to do automatic backups. That was fine. The backups were done on AWS S3 storage; still fine, nothing outrageous there. The problem was that the backups were done on an S3 bucket that was publicly available, meaning that anybody on the Internet could access it. The timeframe when that storage was publicly available on the Internet was only a few hours, a very short period of time, but still there were some downloads, full download operations of the data backups. Now, the backups were encrypted, that's fine, and protected with a username and password.
Nevertheless, take into account that even if you protect your data with a username and password, they can be broken very easily, especially passwords. Just to give an example, I think it was 2014, sometime in the last decade, in Chicago: I was at a security session where, in a 90-minute session, I saw how an encrypted backup of an SQL database was broken. Basically, the password token was replaced inside the backup, and after that they were able to do a full restore of the database on their own server. All the tools they used, and I think they used around eight or nine different tools, were more or less publicly available on the Internet. I'm more than sure that you could find all of them in one day. And anybody, even a twelve-year-old, following all the steps, could do it very easily.
So cloud vendors like Microsoft, AWS and GCP offer us a lot of tools that we can use to secure our systems, our workloads and our data. But still, we need to be aware of how we do this configuration, how we protect our data and which security features we actually need, because some of them are pretty expensive. You need to find the right balance between the premium security that each cloud vendor offers versus the free and default one, or bring the security system that you are already using on-premises, because in the end you can have a virtual appliance inside any cloud vendor where you deploy your own firewall, your own security monitoring system and so on.
Okay, we've talked a lot about facts until now. I expect that most of you are developers or technical people who are working from home for different companies. You might have a repository, let's say a GitHub repository, a CI/CD pipeline, and you are the dev hero on the project, doing a great job. Your system is using, let's say, Azure Storage, and to access the storage you use the account key, the full account key. And of course you push it into the infrastructure repository, which is very secure, not publicly available on the Internet and so on; everything there is super secure, nobody can access it, and that's perfectly fine. Your dev machine is very secure too, everything is fine over there. Even so, you are also using the machine for your own needs, and somebody pushes malware onto your system, is able to get your Windows credentials and gets access to your machine. From then on, they have access not only to the repository, but they also get their hands on the storage account key, with which they can go to the different environments where that key is used and get access to customer data. This is why it's very important to always remember that segregation of teams, roles and responsibilities is very, very important. Developers should never have direct access for long periods of time to environments, testing or pre-production, where customer data or sensitive data are used.
There are some situations where you would want to provide temporary access, but only temporary access, meaning one hour or two hours, and be very cautious with that: don't just grant access and then forget about it. So let's jump to the subject: what can we do to protect our systems? There are a lot of things. If you take a look at Azure, AWS or GCP, you will see a lot of recommendations for each service, for each use case, and even for each industry you will find a list of recommendations, a list of services, things like that. But there are some common things that I've found people forget about, and what I will try to cover in the next 20 to 25 minutes are the things that I've seen forgotten most of the time and that are very simple to fix. Even so, they open big security holes that could be exploited by anybody. So the first thing we can do is limit the use of cloud preview services and features.
What does that mean? In the case of Microsoft Azure, for example, a service or feature can be in private preview, in public preview or in general availability. What is the difference between them? In private preview there's no SLA and there's no formal support. In public preview there's some support, but there's no SLA. And in GA, general availability, you have a full SLA, formal support and so on. Now, the recommendation that the cloud vendors themselves make, and that you need to take into account for production environments, for environments where you use sensitive customer data: use only general availability services. A very good example here, if you remember, is the Cosmos DB security issue that happened at the end of August. What a lot of people forgot to mention is that it happened on a feature of Cosmos DB, an integration of a service with Cosmos DB, that was in public preview. Meaning that if you followed the recommendations Microsoft provides, you should never have gone to production with something like that, because it was in public preview. Sometimes it's very hard to do that, but still, you need to be aware of it and you need to be very cautious.
Now, how can you find out, or how can you identify, whether a service is in preview or not? There are two ways to do that. First of all, when you have the full directory of services, let's take this example, the full directory of Microsoft Azure services, what you notice is that in the dashboard there is a preview tag on the services that are in preview. This can be used very easily just to know which services are in preview, so you either don't go to production with them or get a clear timeline for when they will be live in GA. This is very easy to do. The tricky part is with the features of each service, because some of them might be in preview and some might not, especially if you have teams that are using infrastructure as code, for example Azure Resource Manager templates; they might not always be aware of what is in preview, what is not, the current state of the preview, and so on. For each Azure service you will find pages under the documentation of that service specifying which features are in preview, what their current status is and what they cover, so you'll be able to know exactly whether the features you are planning to use are in a preview phase or not.
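For infrastructure-as-code teams, one possible guardrail is to scan the templates themselves for preview API versions, since an apiVersion ending in "-preview" is a hint that preview capabilities are in play. The snippet below is a rough sketch of that idea, not an official tool; the file layout it assumes (top-level "resources" in plain ARM JSON files) is illustrative.

```python
# Sketch: flag ARM template resources that declare a preview apiVersion.
import json
import sys
from pathlib import Path

def find_preview_api_versions(template_path: Path) -> list[tuple[str, str]]:
    """Return (resource type, apiVersion) pairs using a '-preview' API version."""
    try:
        doc = json.loads(template_path.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        return []  # not a readable JSON template, skip it
    hits = []
    for resource in doc.get("resources", []):
        api_version = resource.get("apiVersion", "")
        if api_version.endswith("-preview"):
            hits.append((resource.get("type", "<unknown>"), api_version))
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    exit_code = 0
    for path in root.rglob("*.json"):
        for resource_type, api_version in find_preview_api_versions(path):
            print(f"{path}: {resource_type} uses preview API {api_version}")
            exit_code = 1
    sys.exit(exit_code)  # a non-zero exit fails the CI step
```

Run against the repository root in a pipeline step, this gives the team an explicit signal before a preview feature quietly reaches production.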
What else can we do? And this is, I think, the strongest thing we can do: educate. Educate the whole organization about cloud and cloud security, not only the IT team. Try to educate all the people in the organization, HR, finance and so on, at a minimal level, at the foundation level, or, as AWS calls it, at the practitioner level. Why? Because each of them is using some cloud services, directly or indirectly. They might very easily end up, not in the cloud portal or console, no, but in some specific situation where this knowledge is very important to avoid a security issue.
You don't need to invest a lot in this kind of training. On each CSP you will find free training, like the example I'm showing from Microsoft, which offers different types of training for different roles: for business roles, for sellers, for pre-sales, for data science. You also have, for example, the Azure Fundamentals and AWS Cloud Practitioner certifications. Not the certification itself, but the content, which is very generic, very basic. You ensure that people understand the base concepts, and by covering the content in just four or five hours they will already have the core information that allows them to understand how cloud works, where their application is running, and why they might sometimes have different issues.
What more can we do? First of all, we can use identity and access management more. The IAM part on AWS and on Azure is very mature. It might not always be easy to use, but at least it gives you the power to have full control and to rely less on the master user: for example, for database access, that admin username and password, that kind of user that is a master and has full access to all the databases or to a specific database. You shouldn't use them. Also try to use account master keys and tokens less. Think about an Azure storage account, for example: the account key, you should never use it. You have the shared access signature, you have identity and access management. You can give different services and systems access to the storage, to the database and to other resources very easily, without even having to provide a token, because inside the cloud, inside Azure for example, all the services can do almost full authentication and authorization based on their service principal. And that's awesome. Externally, you have limited tokens, like the shared access signature, that can be used to give a third party access to a storage account or to other types of data repositories.
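To make this concrete, here is a minimal sketch in Python that assumes the azure-identity and azure-storage-blob packages; the account name and container name are placeholders. Inside Azure, the workload authenticates with its own identity instead of the account key, and for an external party it hands out a short-lived, read-only shared access signature.

```python
# Sketch: access storage without the account key, and issue a 1-hour SAS.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobServiceClient,
    ContainerSasPermissions,
    generate_container_sas,
)

ACCOUNT_URL = "https://mystorageaccount.blob.core.windows.net"  # placeholder

# Inside Azure: authenticate as the app's managed identity / service principal,
# so no account key ever lands in configuration or a repository.
credential = DefaultAzureCredential()
service = BlobServiceClient(account_url=ACCOUNT_URL, credential=credential)

# For an external party: a short-lived, read-only SAS scoped to one container,
# signed with a user delegation key instead of the all-powerful account key.
now = datetime.now(timezone.utc)
delegation_key = service.get_user_delegation_key(
    key_start_time=now,
    key_expiry_time=now + timedelta(hours=1),
)
sas_token = generate_container_sas(
    account_name="mystorageaccount",          # placeholder
    container_name="reports",                 # placeholder
    user_delegation_key=delegation_key,
    permission=ContainerSasPermissions(read=True),
    expiry=now + timedelta(hours=1),
)
print(f"{ACCOUNT_URL}/reports?{sas_token}")
```

Note how the expiry enforces the "one or two hours of temporary access" rule mentioned earlier: when the hour is up, the link simply stops working, even if somebody forgets to revoke it.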
The problem with these kinds of keys and tokens is that sometimes the team can forget about them and push them into configuration that ends up in a repository. Even if it's a private repository, it's still not safe enough, especially when you have a lot of people or multiple teams working on the same repository, on the same project. Teams change, and some people who leave the project and move to another one might still have access to the old repository. Even if that's not right, it still happens. So you need ways to ensure that sensitive data is not pushed to that repository.
A very easy thing you can do is use different scanning tools, like the one I'm sharing now, that basically scan a repository fully to ensure that there are no secrets, no Azure, AWS or GCP secrets, pushed to that repository. If any kind of secret is identified, it automatically alerts you, or you can take an action, and you can do pretty nice things. For example, you can run this kind of scan before a push and reject commits where secrets are detected. You can integrate them in the pipeline so that the build fails, and even automatically remove the secrets if a secret or sensitive data is identified; during your nightly scan you can remove all the secrets. You can even freeze pipelines or the repository if you find issues, or freeze the access of the user who committed and pushed that secret to the main repository. So there are lots of things you can do. For example, a nice thing that I've seen is that, the moment a secret is identified in a push or during the nightly scan, the system automatically regenerates the access tokens and all the credentials for that service, to ensure that the keys were not left publicly available to a third party.
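As a very rough illustration of the idea, here is a sketch of a pre-commit style check; real scanners such as gitleaks or GitHub secret scanning do this far more thoroughly, and the patterns below are illustrative only.

```python
# Sketch: scan files staged for commit for a few well-known credential shapes.
import re
import subprocess
import sys
from pathlib import Path

PATTERNS = {
    "Azure storage account key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{40,}"),
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def staged_files() -> list[str]:
    """Files staged for the next commit (suitable for a pre-commit hook)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def scan(paths: list[str]) -> int:
    findings = 0
    for path in paths:
        try:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # deleted or unreadable file, skip
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"possible secret ({label}) in {path}")
                findings += 1
    return findings

if __name__ == "__main__":
    # A non-zero exit code rejects the commit (or fails the pipeline step).
    sys.exit(1 if scan(staged_files()) else 0)
```

The same script can run in the nightly scan over the whole repository; the important part is that it blocks the secret before it ever reaches the main branch.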
We talked about keys; now let's talk about IAM policies and Azure AD role-based access control, because they're pretty important. Why? Because using Azure shared access signatures or AWS signatures together with them, we have the power to limit anonymous users' access to our storage, to our systems. And combine this with firewall rules. Please remember the firewall rules, because almost all storage-type services, and also workload services, now have the ability to specify very simple firewall rules, at least a whitelist of which IPs are allowed to access them. It's just a small step towards creating a system that is more robust, more secure.
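As a hedged sketch of that whitelist idea on an Azure storage account, assuming the azure-mgmt-storage and azure-identity packages are available (all resource names below are placeholders): deny everything by default and allow only one IP range.

```python
# Sketch: restrict a storage account's network access to a whitelisted range.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    IPRule,
    NetworkRuleSet,
    StorageAccountUpdateParameters,
)

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-demo"              # placeholder
ACCOUNT_NAME = "mystorageaccount"       # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

client.storage_accounts.update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    StorageAccountUpdateParameters(
        network_rule_set=NetworkRuleSet(
            default_action="Deny",                                    # block by default
            ip_rules=[IPRule(ip_address_or_range="203.0.113.0/24")],  # whitelist
        )
    ),
)
```

Even this one rule means a leaked key or SAS token is useless from an attacker's own network, which is exactly the "small step" mentioned above.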
Now, talking about Azure role-based access control: I think it's very important the moment you're using Azure and you want to secure an environment, because, as I said before, try to avoid using that master username and password with full access, and also try to limit how much access a user or a specific service has to other services. If I, Radu, should have access only to storage and only to a specific Lambda or Azure Function, then I should have access only to that one. How can I do that? Well, when we're talking about Azure role-based access control, there are three elements we need to be aware of. The first one is the security principal. A security principal is basically the entity that gets access; it can be a person, a group, or a service principal. And what I'd like to mention here about the service principal: when we're talking about a service, an Azure Function, an App Service, a VM, what we call a managed identity is an identity that can be managed inside Azure itself. Then you have the role: you specify what kind of operations can be done, read, write, delete, update and so on. And finally the scope: it can be at subscription level, at resource group level or at resource level, only the storage account or only the VM. What you're doing, basically, is a role assignment: you assign to a security principal, let's say a user that is part of a specific group like the development group, a role, let's say Contributor, on a scope, a specific resource. By managing things this way, in the long term you have better visibility over who has access, how you can manage it, and, if you need to revoke access, how to review who has access to what. It's much simpler to analyze how a role was assigned to a group than to look at each user and what kind of actions they were able to perform.
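To make the three building blocks concrete, here is a small conceptual model in Python. It is not the Azure SDK, just an illustration of how a role assignment ties a security principal, a role and a scope together, and why reasoning per group is simpler than per user.

```python
# Conceptual sketch of RBAC: principal + role + scope = role assignment.
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityPrincipal:
    name: str   # user, group, or service principal / managed identity
    kind: str   # "user" | "group" | "service_principal"

@dataclass(frozen=True)
class Role:
    name: str
    allowed_actions: frozenset[str]   # e.g. {"read", "write"}

@dataclass(frozen=True)
class RoleAssignment:
    principal: SecurityPrincipal
    role: Role
    scope: str   # e.g. "/subscriptions/xxx/resourceGroups/rg-dev"

def is_allowed(assignments: list[RoleAssignment],
               principal: SecurityPrincipal,
               action: str,
               resource_id: str) -> bool:
    """Granted only if some assignment covers the action and the resource
    (i.e. the assignment scope is a prefix of the resource id)."""
    return any(
        a.principal == principal
        and action in a.role.allowed_actions
        and resource_id.startswith(a.scope)
        for a in assignments
    )

# The "development" group is Contributor on one resource group only.
dev_group = SecurityPrincipal("development", "group")
contributor = Role("Contributor", frozenset({"read", "write", "delete"}))
assignments = [
    RoleAssignment(dev_group, contributor,
                   "/subscriptions/xxx/resourceGroups/rg-dev"),
]
print(is_allowed(assignments, dev_group, "write",
                 "/subscriptions/xxx/resourceGroups/rg-dev/storage/devstore"))   # True
print(is_allowed(assignments, dev_group, "write",
                 "/subscriptions/xxx/resourceGroups/rg-prod/storage/prodstore")) # False
```

Because access is reviewed per group and per scope, removing someone from the development group, or removing one assignment, immediately answers "who can still touch production?" without auditing every individual user.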
So always remember to do duty segregation within your team, not just at the team level but inside the team, per the different roles that people might have. Always scope the permissions, don't grant more than people should have, and try to avoid building up the legacy permissions they had in the past, for example "I was on project A, so I should still have access to project A," anything like that. Also avoid granting access to specific resources to specific people; try to create blocks, to group permissions and resources in such a way that they are logically grouped per, if we can call them that, lines of business.
What else can we do? We can secure our public endpoints and protect them with a firewall. And sometimes a firewall is not enough; we can do more than that. For example, Azure Application Gateway has the ability to activate WAF, the web application firewall. The WAF gives us very fine, granular control over who can access, but not only that: we also get off-the-shelf protection against the OWASP attacks at the firewall level, at the application level. It means that any kind of attack will be stopped there, will be audited, will be logged, and our workloads can focus on their business. The load on the CPUs during this kind of attack will stay the same, because everything is stopped at the Application Gateway level, and we know that it is kept up to date and follows all the recommendations and best practices on the market.
Before we get to how we can track and monitor, one more thing we can do: we need to ensure that the default security features of cloud services are active all the time. Sometimes, even if they might be annoying, we shouldn't disable them. What we should do is look more at the best practices and recommendations and try to follow them, even if that sometimes means the investment, the cost, is higher, because implementing all the recommendations and best practices costs us money. One of the most common reasons, from what I've seen, that different security recommendations are not implemented is the cost in time and money, but especially time.
Okay, how can we track, how can we monitor? The first thing we should do is use Azure Security Center, even if only the free tier. Why? Because it offers us a secure score, so we know exactly what our score is and where we could make improvements, and it gives us compliance information. Then there is Azure Defender, the premium tier, which is very useful, especially when we integrate with on-prem systems, and also an inventory. But the most important thing is that it provides recommendations and identifies where we could have different security issues. I don't have too much time, but I just want to show you what Security Center can do for us. It can identify exactly where we have issues, where we might have a data breach, and what we can do to improve that. Sometimes it even comes up with recommendations that can be fixed automatically, by triggering for example a Logic App behind the scenes, things like that. So with just a few clicks, Security Center and Azure Defender not only protect us but offer us mechanisms to fix the problems that we have. On top of Azure Security Center, try to take a look at Azure Advisor, because besides security, Azure Advisor comes up with recommendations about cost and cost optimization, about how we can make systems more reliable, and about how we can improve our operational excellence.
Also, we need to track and monitor all the time. We need to have the logs, we need to have the metrics, we need to keep them and ensure that we are looking at them, analyzing them. Now, the tricky part is: where should I look when I have a problem? First of all, remember that we have information about all the activity a user performs inside Azure. And I'm not referring here to the activity when they access a website and things like that; I'm referring more to the activity the user performs when changing different configurations. So for example, if I went here and added a new policy, let's say this policy, that action would appear automatically here once the policy was created, and nobody would be able to delete it, nobody from your team, not the subscription owner, not even Azure, and it will be kept there for a specific period of time during which you can go and access the data. Another thing you can do is ensure that you are also logging the events that happen inside the system, the process events, and all this information can be used to track exactly what happened. But remember: you need to have the logs and you need to keep them in a secure way.
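As one hedged example of actually looking at those logs, here is a sketch that assumes the azure-monitor-query package and that the activity log has been exported to a Log Analytics workspace via diagnostic settings; the workspace id is a placeholder. It pulls the administrative actions from the last day, the "who changed which configuration" trail discussed above.

```python
# Sketch: query the last 24 hours of Azure activity (administrative actions).
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"   # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# KQL: who did what, on which resource group, and whether it succeeded.
query = """
AzureActivity
| where TimeGenerated > ago(1d)
| project TimeGenerated, Caller, OperationNameValue, ResourceGroup, ActivityStatusValue
| order by TimeGenerated desc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

# Assumes a fully successful query for simplicity; partial results would need
# extra handling in a real job.
for table in response.tables:
    for row in table.rows:
        print(list(row))
```

A small scheduled job like this, combined with keeping the raw logs in a workspace your users cannot modify, gives you exactly the "who did what, when" answer you need after an incident.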
Now, for final thoughts. Well, there are a lot of things that we can do to improve our systems, and there will always be more things we could do. But don't forget that Microsoft offers the Cloud Adoption Framework for Azure, which has four different stages, define strategy, plan, ready and adopt, and also offers very good support for governance, compliance, control and security. That is the place where we should analyze exactly what actions we should take to be aligned with all the security recommendations at the moment we do a cloud adoption. In addition to this, we have the Well-Architected Framework, which comes with a list of recommendations, best practices and ways of doing things specific to particular industries and business domains. These can help us build a secure environment that can be used without any kind of problem by our customers and can be managed easily by our teams, without being afraid that they could open a security hole or anything like that. Thank you for joining this session. My name is Radu Vunvulea. You can find me very easily on Twitter and LinkedIn, and if you have any kind of question related to cloud security or cloud in general, please reach out anytime on Twitter or LinkedIn. Thank you very much and have a great day.