Transcript
So this is the agenda for secure AI model sharing and deployment.
First, we're going to look at the usage of Gen AI in platform engineering.
Next, we're going to have a look at the different security risks in Gen AI.
After that, we will discuss the different security techniques for
Gen AI models, and what the best practices for sharing Gen AI models
internally and externally would be.
We will then go ahead with understanding secure Gen AI deployment strategies, which
essentially relates to how Gen AI models can be deployed securely and
efficiently in a corporate environment.
Next, we will look at something specific to security with respect to Gen AI models:
trusted execution environments, which generally
means how isolated processes can be run.
We will then go ahead with discussing automated security
in Gen AI operations, which covers the different security techniques that
can be applied in Gen AI operations.
Finally, we will conclude by looking at a case study
of how an organization can use these tactics to apply Gen AI to
their corporate environment, and at the future trends of
Gen AI security as a whole.
So coming to Gen AI in platform engineering: Gen AI is a field of
artificial intelligence that is able to generate content.
It's able to generate images, text, videos, et cetera.
Over the past one to two years, the scope of Gen AI applications
has been booming.
It has been able to generate code and system configurations, as well as look at large
chunks of logs and summarize that information, which makes it much easier for
the person at the user end to understand the logs.
But the main security risk here is that Gen AI inherits a lot of
the security risks of AI models.
Essentially, it inherits the bias, the privacy-related challenges, as well as
the hallucinations and explainability issues, all of which depend on the data
the specific AI model is trained on.
This is the specific reason why security is such an important
aspect of Gen AI as a whole.
Security is important because it protects the intellectual
property of enterprise organizations, guarding them against
different kinds of attacks, as well as helping continuously monitor these models
and their outputs to make sure the data being fed in is appropriate
and is not biased or corrupted in any way.
Let's discuss the different security risks in Gen AI model sharing.
The first risk we're going to discuss is the data poisoning attack.
For each of these attacks, we're going to look at
risk detection as well as the general mitigation.
A data poisoning attack basically relates to manipulation or corruption of the
training data, potentially leading to skewing of the outputs,
drifting away from what the correct output should be.
A good example of this: let's say I ask a Gen AI model for the addition of
two numbers, maybe one plus one. Instead of answering two, it's going to answer
four, because the data it was trained on is skewed and corrupted.
The way to detect these kinds of attacks is to employ data validation at
the start, before training the model, as well as provenance checks, which
are basically validation checks to make sure these kinds of corruptions are detected
and nipped in the bud appropriately.
It's very important to examine the outputs of the Gen AI models so
that one can understand if they are emitting hallucinations, bias, or any
other kind of fabrication.
The mitigation for this kind of attack would be to regularly retrain the
model with a clean set of data, making sure there are validation checks and
no tampering with the data.
The integrity of the data must not be compromised here.
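For illustration, here is a minimal Python sketch of such pre-training validation and provenance checks; the file name, expected hash, and label schema are hypothetical placeholders, not a prescribed pipeline.

```python
import hashlib

def verify_provenance(dataset_path: str, expected_sha256: str) -> bool:
    """Provenance check: confirm the training data matches the hash recorded
    by the data owner, so silent tampering is detected before training."""
    digest = hashlib.sha256()
    with open(dataset_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

def validate_records(records: list[dict]) -> list[dict]:
    """Basic validation: drop records with missing fields or out-of-range
    labels instead of letting skewed data reach the training job."""
    return [
        r for r in records
        if "text" in r and r.get("label") in {0, 1}
    ]

# Hypothetical usage: refuse to train if the provenance check fails.
if not verify_provenance("train.jsonl", expected_sha256="<hash from data owner>"):
    raise RuntimeError("Training data failed provenance check; aborting training.")
```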
The next attack we're going to look at is called a model inversion attack.
This attack basically exploits the
model output to infer sensitive details from the training data,
which risks privacy breaches.
An example of this would be emitting private information or
healthcare information belonging to a person.
This has the potential to breach regulatory standards like HIPAA in the U.S.
If organizations don't take care of this kind of attack, it could lead
to violations, reputational damage, as well as
big financial penalties.
The way to detect and defend against these kinds of attacks would be to implement
differential privacy and output scrubbing, adding randomness
to restrict sensitive data leakage.
It's important to make sure appropriate policies and
standards are in place before the models are trained and after the
models are trained, to make sure the models follow those policies.
And adding randomness to the outputs, making sure the model
does not emit any kind of sensitive information, is of paramount importance.
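As a hedged illustration of the "add randomness" idea, one could add calibrated Laplace noise to aggregate outputs before releasing them; the epsilon value and the statistic below are placeholders for illustration, not recommendations.

```python
import numpy as np

def private_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise so the presence or absence of any
    single record cannot be confidently inferred from the output."""
    # A counting query changes by at most 1 when one record is added or
    # removed, so its sensitivity is 1; the noise scale is sensitivity/epsilon.
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical usage: publish a noisy statistic instead of the exact value.
print(private_count(true_count=10412, epsilon=0.5))
```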
Because once the confidential information of a specific person is out,
there's nothing one can do to scrub that information off the Internet.
The way to mitigate this would be to enhance security
with strict access controls.
Essentially, keep an audit trail of people who have created the model
or who have tried to feed inappropriate or invalid
input, trying to make the model emit something it's not supposed to.
Looking at these queries, it's easy to understand how the
model is trying to behave, as well as whether it's trying to emit any sensitive data.
So next let's go to different security principles that can
be employed in Gen AI models.
The first principle we're going to look at is Zero Trust.
Zero Trust is a principle based on the model of never
trust anything, always verify.
This essentially means that even if a request is within a corporate
demilitarized zone, which is essentially a sectioned-off zone of the corporate network,
it's not trusted; it's still verified.
I'm going to discuss three methods here: the first
is continuous verification,
the second is minimizing insider threats, and the third is
adaptability to dynamic environments.
Let's go into detail on each one of these, starting
with continuous verification.
Continuous validation of the data inputs is essential because the
relationship between the data inputs and outputs happens at a rapid pace
in Gen AI models, often in real time.
A zero trust architecture here ensures that the validation of
data is constant, to make sure there is no unauthorized access and the
integrity of the data the model is trained with is not compromised.
Next, let's look at minimizing insider threats.
Gen AI systems are also susceptible to insider threats.
There might be people trying to internally corrupt the data,
or leak the model configurations or training data of a specific model.
Zero trust can help mitigate insider threats by
limiting access to only the people who need it, on an as-needed basis.
This makes sure there's a specific risk profile and they're not able
to access content other than what they are authorized to access.
Next, let's go to adaptability to dynamic environments.
These days, with every company moving to the cloud, zero trust is the way to go
in terms of security, because zero trust has a way to continuously monitor changing
requirements for different services.
For example, in a cloud system like AWS, one could
use something like an API gateway that monitors all the logging
information on who accessed what, and retains these logs for compliance
purposes, which is helpful in understanding who accessed what and when.
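As a rough sketch of "never trust, always verify", each request could carry a short-lived signed token that is re-checked and logged on every call, not just at login. The signing key, token format, and log destination below are assumptions for illustration; in practice the secret would live in a vault or KMS.

```python
import hashlib
import hmac
import json
import logging
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical; keep in a vault/KMS
logging.basicConfig(level=logging.INFO)

def issue_token(user: str, ttl_seconds: int = 300) -> tuple[str, float]:
    """Issue a short-lived token so access expires automatically."""
    expires_at = time.time() + ttl_seconds
    token = hmac.new(SIGNING_KEY, f"{user}:{expires_at}".encode(), hashlib.sha256).hexdigest()
    return token, expires_at

def verify_request(user: str, token: str, expires_at: float, resource: str) -> bool:
    """Zero-trust check performed on every request: validate the token
    signature and expiry, then log the access attempt for auditing."""
    expected = hmac.new(SIGNING_KEY, f"{user}:{expires_at}".encode(), hashlib.sha256).hexdigest()
    allowed = hmac.compare_digest(expected, token) and time.time() < expires_at
    logging.info(json.dumps({"user": user, "resource": resource, "allowed": allowed}))
    return allowed

# Hypothetical usage: every call to the model service re-verifies the caller.
token, exp = issue_token("alice")
assert verify_request("alice", token, exp, resource="/models/diagnostic-v2")
```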
The next set of security principles we're going to look at
is API-based principles.
Basically, even to develop a Gen AI application, one has to use one of these APIs
provided by one of the huge vendors, like maybe ChatGPT, which belongs to OpenAI,
Anthropic, or maybe Hugging Face, et cetera.
Some of these techniques for secure Gen AI model sharing involve
homomorphic encryption.
Homomorphic encryption is basically computation
carried out on a ciphertext, generating an encrypted result which, when
decrypted, matches the result of the operations performed on the plaintext.
So homomorphic encryption enables the processing of sensitive
data without exposing it.
For example, a Gen AI model could train on an encrypted data set, ensuring
the underlying data remains confidential throughout the computation process.
Organizations can leverage homomorphic encryption
to train Gen AI models on encrypted data from multiple sources without
accessing the raw data directly.
This makes sure any confidential data is not accessed directly.
We'll look at an example down the line.
This is crucial in industries like finance or healthcare, where
privacy is of paramount importance.
There are also regulations in terms of what data can be accessed within
a country or outside a country, and for this specific reason,
homomorphic encryption is especially important.
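Fully homomorphic schemes are heavyweight, but as a small hedged example of the idea, the open-source `phe` library implements the Paillier scheme, which is additively homomorphic: a service can sum encrypted values it can never read. This assumes `pip install phe`; the values are arbitrary.

```python
from phe import paillier

# The data owner generates the keypair and keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair()

# Sensitive values (e.g., patient measurements) are encrypted before sharing.
encrypted_values = [public_key.encrypt(x) for x in [4.5, 3.2, 7.8]]

# An untrusted aggregation service computes on ciphertexts only.
encrypted_sum = encrypted_values[0] + encrypted_values[1] + encrypted_values[2]

# Only the data owner can decrypt the result.
print(private_key.decrypt(encrypted_sum))  # approximately 15.5
```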
Looking at secure API strategies, robust
authentication and authorization are of paramount importance here.
Using a modern authentication mechanism like OAuth2 or OpenID Connect to
verify the identity of users and services is essential.
There need to be fine-grained authorization controls to make sure
users only have appropriate access to data and actions based on their roles.
One could employ something like privileged access management or
just-in-time access, which essentially provides access only for a specific
window to specific data, so that access is automatically removed
without anyone having to monitor whether a person has held access longer
than intended.
APIs also need to be protected from denial of service attacks. This
can be done by using features like rate limiting, which essentially times out
or locks out the user depending on the number of requests being sent on
a per-minute or other periodic basis.
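Managed gateways usually provide this, but as a minimal sketch of per-client rate limiting (the limits and client ID are arbitrary placeholders):

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window rate limiter: allow at most `limit` requests per client
    per `window` seconds, rejecting the rest."""

    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.requests = defaultdict(list)  # client_id -> recent request timestamps

    def allow(self, client_id: str) -> bool:
        now = time.time()
        recent = [t for t in self.requests[client_id] if now - t < self.window]
        self.requests[client_id] = recent
        if len(recent) >= self.limit:
            return False  # caller would typically return HTTP 429 here
        recent.append(now)
        return True

# Hypothetical usage in front of a Gen AI inference endpoint.
limiter = RateLimiter(limit=60, window=60.0)
if not limiter.allow("client-123"):
    raise RuntimeError("Too many requests; try again later.")
```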
One could also use CDN systems like Cloudflare or Amazon AWS,
which have custom algorithms to protect against such attacks.
One should basically utilize transport layer security (TLS) to encrypt data
transmitted between clients and Gen AI APIs.
This makes sure that an attacker or hacker is not able to sniff
this data or tamper with sensitive data during transmission.
As we discussed before, using API gateways to manage, monitor, and secure traffic to
and from Gen AI services is also of paramount importance.
Gateways basically provide additional layers of security,
such as IP whitelisting, threat detection, and logging for forensic analysis.
Next, let's look at different deployment strategies and how one
could securely deploy Gen AI models.
One could use digital signatures, real-time monitoring,
and automated response protocols when trying to securely deploy Gen AI
models.
In terms of digital signature verification, one could use it to
verify the model's authenticity, and implement secure key management and
version control to ensure that any changes made are properly documented
and the original models are not altered.
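One hedged way to sketch this uses the widely available `cryptography` package with Ed25519 signatures; the artifact file name and in-memory key handling here are simplifications for illustration, since real keys would sit in an HSM or KMS.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key lives in an HSM/KMS; generated here for illustration.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The model publisher signs the serialized model artifact.
model_bytes = open("model.safetensors", "rb").read()  # hypothetical artifact
signature = private_key.sign(model_bytes)

# Before deployment, the consumer verifies the artifact against the publisher's
# public key; verify() raises InvalidSignature if the model was altered.
public_key.verify(signature, model_bytes)
print("Model artifact signature verified.")
```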
Real-time monitoring plays a crucial role here.
Advanced tools can be used to detect anomalies in Gen AI
model behavior, utilizing sophisticated anomaly detection algorithms.
One could also use automated response protocols, integrating
automated responses to swiftly address detected anomalies.
One could enhance security with feedback loops that refine and improve the
detection process based on those insights.
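A rough sketch of this monitoring-plus-automated-response loop follows; the metric, z-score threshold, and response action are placeholders standing in for whatever anomaly detector and playbook an organization actually uses.

```python
import statistics

def is_anomalous(latest: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a per-request metric (e.g., prompt length or request rate) that
    deviates strongly from its recent history."""
    if len(history) < 30:
        return False  # not enough baseline data yet
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history) or 1e-9
    return abs(latest - mean) / stdev > z_threshold

def automated_response(client_id: str) -> None:
    """Placeholder response: quarantine the client and alert the on-call team."""
    print(f"Blocking {client_id} pending review and paging security on-call.")

# Hypothetical usage inside the serving loop.
history = [110.0] * 40   # baseline of a per-request metric
latest_value = 910.0     # suspicious spike
if is_anomalous(latest_value, history):
    automated_response("client-123")
```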
Next, let's look at trusted execution environments, which
is a specific security concept as it relates to Gen AI models.
A trusted execution environment essentially means
there's an isolated execution environment in a processor that provides
a secure computing environment for particular data, isolating any
sensitive data and model processing from the main hardware or operating
system to make sure no unauthorized person is able to access it.
A real-world application of this is the processors used in phones.
Much of the sensitive data we use, like calls or messages or anything
we discuss with a Google Assistant or an Alexa assistant, is executed in
a trusted execution environment, so this data is not accessible even if a
person has administrative privileges,
and the processor performing the normal tasks for the phone is
not able to access this data.
Trusted execution environments offer enhanced security features.
They basically ensure data integrity and confidentiality through
encryption and integrity checks, along with strict access controls.
They are also versatile in computing; that is to say, trusted execution environments
are used mostly in cloud and edge computing.
They are crucial for Gen AI applications because, given that the interaction between
the input and the output happens almost in real time, the amount of computing power
it takes and the speed of the response are of paramount importance, which is
why industries like healthcare and finance can use something like a trusted
execution environment to make sure that confidential data is processed separately.
Next, let's look at the different security operations one could
automate in Gen AI operations.
One could perform automated vulnerability scanning.
These tools basically identify and address vulnerabilities within the AI
models and data processes, integrating them into a CI/CD pipeline for early
detection and continuous compliance.
One could use behavioral analytics and code analysis.
These essentially monitor the AI system's behavior
and analyze the code to flag anomalies and security risks, thereby enhancing real-
time security and developer awareness.
One could also achieve proactive security and compliance through continuous
integration of security assessments,
making sure that a proactive security posture is maintained.
This reduces security debt and the amount of work developer teams have to
do, while also maintaining compliance and giving visibility to the entire development
and IT operations team, making sure bugs and security issues are fixed before
these models are deployed for use.
Now let's look at an example case study of how an organization would employ
some of the techniques described. Let's take the example of a healthcare
company that's trying to modernize its security process, and
trying to use Gen AI to do this.
The healthcare company basically needs to abide by HIPAA regulations in the
U.S., and they're trying to use Gen AI systems to improve diagnostic accuracy
by training the model on a vast set of anonymized patient image data.
Here, as you can infer from what we previously discussed, they're
using homomorphic encryption and the concept of a
trusted execution environment to process the data separately.
This application basically uses homomorphic encryption to allow the
Gen AI model to process encrypted images directly, ensuring the data
remains secure even during computation.
They can then use digital signatures
to verify the model integrity before each use, and a trusted execution environment
can be used for secure model execution, isolating the confidential data, making
sure that the data is accessed only within the specific computation period
and not accessed outside that environment.
Let's look at the future trends of how Gen AI will affect security.
There's a lot of research going on in terms of quantum cryptography,
proactive security models, and compliance and privacy regulations.
In terms of quantum-resistant encryption, one could look at
developments such as post-quantum cryptography, which
aims to secure systems against quantum computing threats, with standards
developed by a big institution like NIST.
In terms of predictive security models with Gen AI, one could use Gen AI to
analyze extensive data sets and build predictive security models, which are able
to foresee vulnerabilities, mitigate them,
and enable proactive defense strategies.
Integration and standardization, for adaptability of these Gen AI
models, are of paramount importance.
This emphasizes the need for integration of new cryptographic solutions into
existing systems, and standardization for widespread adoption.
Coming to the conclusion: in today's evolving world, the role of security
is of paramount importance, especially in the application of
artificial intelligence. It's not only important to protect against threats,
but also to maintain the trust and
confidence of users and stakeholders.
As AI technologies keep improving at a rapid pace, so will the sophistication of
threats against these kinds of AI models.
It's crucial to have security in mind from the start, at the outset of product
development, and to integrate security across each stage of the product
life cycle rather than at the end.
This is crucial in making sure security bugs are squashed and teams are able
to manage their workload efficiently.
The complex landscape of AI and security demonstrates the necessity of
a proactive approach where security is not an afterthought, but a foundational
component of all AI initiatives.
Thank you for this opportunity. For any questions or feedback,
please reach out to me via the LinkedIn username listed on the slide.
Thank you.