Transcript
This transcript was autogenerated. To make changes, submit a PR.
We are going to go through an introduction to GenAI
and what it is, along with applications of GenAI across
different facets of life. Then we will try to understand the
key security risks in GenAI models. After
that, we will go through real-world case studies and see how
GenAI is used in governance, risk management, and
compliance. We will then conclude by understanding the
ethical and legal considerations of AI as a
technology. So let's start by understanding what
generative AI is. So, unlike traditional AI
systems, which primarily analyze data to make decisions,
like suggesting what movie you might enjoy next
based on your viewing history, generative AI goes a step
further. It uses what it has learned from
large amounts of data to create new, original content.
This could be anything from a piece of music to a realistic
image of a person who doesn't exist. So imagine
a painter who learns by studying thousands of paintings,
and then he or she starts creating unique art.
That's similar to how generative AI works. It uses
techniques based on advanced algorithms, specifically something
called neural networks. So these neural networks are designed
to mimic the way human brains operate.
Another important technique involves what we call
generative adversarial networks, or GANs,
where two neural networks contest with each other to
improve the final output.
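To make that contest concrete, here is a minimal sketch, assuming PyTorch, of a toy GAN whose generator learns to mimic a one-dimensional Gaussian; the network sizes, learning rates, and data distribution are illustrative assumptions, not a production recipe.

```python
import torch
import torch.nn as nn

# Generator maps random noise to a candidate sample; the discriminator
# scores how "real" a sample looks (1 = real, 0 = generated).
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0      # "real" data drawn from N(4, 1.5)
    fake = gen(torch.randn(64, 8))             # generator's current attempt

    # Discriminator step: push scores for real toward 1, for fake toward 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator score fakes as real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(gen(torch.randn(256, 8)).mean())         # should drift toward ~4.0
```

The two losses pull against each other, which is exactly the contest described above: as the discriminator gets better at spotting fakes, the generator is forced to produce more realistic output.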
Now, think of how this technology can be applied across different sectors.
In healthcare, generative AI can be used to analyze
data from thousands of patients to propose customized treatments,
or simulate how drugs might work.
Consider a scenario where there is a sudden epidemic.
In the absence of lab animals that might otherwise be
used for testing, this software can be used to minimize the go-to-market
time of a specific drug. This might be very helpful in
saving thousands or millions of lives.
Now let's consider the application of GenAI in media.
In media, GenAI can be used to automatically generate new
content for games or special effects in movies,
thereby drastically reducing the time and cost involved
in creative processes. Just think of the amount of time
and the number of people involved in creating an animated movie.
With GenAI, these processes become more efficient and
streamlined, helping studios bring movies to the fans who
expect releases at a faster pace. But why is security so
crucial in GenAI? As we depend more on AI
to make decisions or create content, ensuring these systems
are not only effective, but also secure and unbiased
becomes paramount. If the training data is flawed,
say it's biased against certain groups, those biases
will definitely be reflected in the AI's output.
Ensuring security in AI systems means maintaining
the integrity and fairness of what AI creates,
which is essential for gaining and keeping
public trust. With that, let's go through different security
attacks in GenAI. First off, let's start
with data poisoning attacks. Now, what is
a data poisoning attack in GenAI? These kinds
of attacks basically target the training data of AI models,
subtly inserting or manipulating information to affect
the model's learning process, thereby causing it to make
errors or biased judgments post-deployment.
So what are the mechanics of a data poisoning attack?
By introducing corrupted, misleading, or specifically
designed data into a training dataset, attackers can
skew an AI model's output, making it unreliable or
incorrect. For example, altering street sign images
used in training an autonomous vehicle's AI could cause the
vehicle to misrecognize stop signs as other signs.
What would be the impacts and examples of data
poisoning attacks? The consequences of such attacks can
be severe, especially in applications
like finance, where predictive models determine
creditworthiness, or in healthcare, where diagnostic accuracy
is critical. We previously went through the
traffic example, where self-driving cars use the data to train
themselves. The stakes there are even higher because human life
is involved. But what are the prevention and mitigation techniques for
such an attack? Solutions include robust
data validation before training begins, making sure
we use only trusted data sources to train the AI model,
as well as implementing anomaly detection during
the training phase to identify and mitigate suspicious
data patterns.
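As a concrete illustration of that last idea, here is a minimal sketch, assuming scikit-learn, of anomaly detection applied to a dataset before training; the synthetic data and the contamination rate are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(1000, 8))   # trusted-looking samples
poisoned = rng.normal(6.0, 0.5, size=(20, 8))  # injected, out-of-distribution rows
X = np.vstack([clean, poisoned])

# Flag statistically unusual rows for human review before training starts.
detector = IsolationForest(contamination=0.05, random_state=0)
flags = detector.fit_predict(X)                # -1 = anomaly, 1 = looks normal

print(f"flagged {int((flags == -1).sum())} of {len(X)} samples for review")
X_trusted = X[flags == 1]                      # train only on the remainder
```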
The next type of attack we are going to look at is the model inversion attack.
So what is a model inversion attack? These kinds of attacks
focus on exploiting a model to reveal
sensitive information about its training data. This is
particularly concerning where models handle private data like personal
identifiers or medical records. As for the mechanics
of such an attack: by querying an AI system with
crafted inputs, attackers can reconstruct or
infer data about individuals that were used in the model's
training dataset, effectively inverting the model's
output to reveal private information. By knowing
certain private information, it's possible for the attacker
to decipher the relationship and connections between different pieces
of information. It doesn't take long to recognize that
most of our information is on social media. Getting even
one person's identifying information can be dangerous
and can expose people's personal information.
So what would be the impact of a model
inversion attack? In healthcare, for example,
reconstructing patients' faces from a model trained on
medical images can be very dangerous,
because a misclassified or misreconstructed face
could lead a doctor to wrongly diagnose a patient.
Imagine somebody being diagnosed
with another person's disease,
which could be life-threatening for that specific person.
So what would be
the techniques for prevention and mitigation here?
In this case, the techniques used to counter these attacks include limiting
the amount of information in the model's output, as well as
applying differential privacy methods to obscure
the training data.
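Here is a minimal sketch of the first countermeasure, limiting what the model's API exposes; the Laplace noise scale and the confidence buckets are illustrative assumptions rather than a formally vetted privacy mechanism.

```python
import numpy as np

def hardened_predict(probs: np.ndarray, rng: np.random.Generator) -> dict:
    """Return only a top label and a coarse, noisy confidence bucket,
    instead of the raw probability vector an attacker could invert."""
    top = int(np.argmax(probs))
    conf = float(probs[top]) + rng.laplace(scale=0.05)   # small output noise
    conf = min(max(conf, 0.0), 1.0)
    bucket = "high" if conf > 0.9 else "medium" if conf > 0.6 else "low"
    return {"label": top, "confidence": bucket}          # no raw scores leak

rng = np.random.default_rng(1)
print(hardened_predict(np.array([0.02, 0.95, 0.03]), rng))
```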
The next type of attack we are going to go through is the
adversarial attack in GenAI. Adversarial attacks involve
manipulating the input to an AI system in a certain way that induces the model
to make errors. So these attacks are often difficult
to predict as they exploit the model's inherent weaknesses.
The mechanics of such an attack involve
small, calculated changes to the input data:
adding noise to image files
that is imperceptible to humans can
lead the AI to misclassify these images or make erroneous
decisions. An example here would be subtly
altering images to trick a facial recognition system into
misclassifying individuals, compromising security systems.
Imagine what could happen if a facial recognition algorithm could
classify an innocent person as a criminal and the
criminal as an innocent person. This could have life-changing consequences
in the lives of both people. So what would be the prevention
of such attacks? So, defenses for such attacks include adversarial
training, where models are trained with adversarial examples to
learn to withstand them, and implementing input validation to
detect and filter out manipulated inputs.
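To ground both the attack and the defense, here is a minimal sketch of the fast gradient sign method (FGSM), a classic way to craft such perturbed inputs; the same perturbed batches can be mixed into training for adversarial training. The tiny stand-in model and the epsilon value are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """Nudge each pixel slightly in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(4, 1, 28, 28)              # stand-in for a batch of images
y = torch.randint(0, 10, (4,))
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())            # perturbation stays within eps
```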
The last type of attack we are going to go through is the backdoor attack.
In backdoor attacks, attackers secretly embed malicious
functionality during the AI training phase,
which later becomes active in response
to a specific input trigger. The attacker
introduces a trigger during the training phase that causes
the model to act in a predefined malicious way when it
encounters that trigger during operation. An example
of a backdoor attack would be a surveillance system
manipulated to ignore specific individuals, effectively creating a
security loophole. The prevention and mitigation of
such attacks include rigorous inspection of training
data, continuous monitoring of model behavior, and applying
techniques to detect anomalies in the model's responses.
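One simple behavioral check is sketched below: stamp a candidate trigger onto clean inputs and flag the model if its predictions collapse toward a single class. The stand-in model, the patch location, and the 0.8 threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in model

def backdoor_suspicion(x: torch.Tensor, threshold: float = 0.8) -> bool:
    stamped = x.clone()
    stamped[:, :, -4:, -4:] = 1.0          # candidate trigger: white corner patch
    preds = model(stamped).argmax(dim=1)
    top_share = preds.bincount(minlength=10).max().item() / len(preds)
    return top_share >= threshold          # do most inputs map to one class?

x = torch.rand(256, 1, 28, 28)
print("suspicious:", backdoor_suspicion(x))
```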
Now, let's go through different case studies involving GenAI.
So in case study one, we have a deepfake
misuse. So the situation here is that somebody is
using deepfake technology to create realistic yet
fabricated videos of public figures making controversial
statements. In this case, it's a
news anchor from a popular news channel
whose likeness was created using AI. The impact of such
a deepfake is that it can spread misinformation, influence
public opinion, and cause political instability.
Along with this, it basically undermines the trust the
public basically has in the media. The mitigation of
these attacks would be to carefully analyze
minute details in the videos. In deepfake
videos, it's possible that the audio
and the lip movements are not always in sync.
These are very minute details one would not notice unless keenly
observed. The development and application of deepfake detection tools
is also helpful, and public awareness is crucial
in understanding these types of attacks. The second
case study is identity-driven theft,
where somebody is stealing someone's identity to perform a
malicious task. So criminals use AI voice
synthesis tools, in this case to impersonate a CEO,
basically directing a subordinate to transfer
funds to an unauthorized account. This leads to a
huge financial loss, as well as a reputational hit.
This case study shows how criminals can use voice
synthesis software to commit fraudulent activities,
thereby exploiting the trust placed in human voice authentication.
It also shows the importance of requiring
multiple factors of authentication. Some
examples are biometrics such as fingerprints,
hardware keys such as YubiKeys, as well
as two-factor authentication techniques like phone-based one-time codes,
etcetera. So the need for setting up governance frameworks
for this kind of AI authentication system is
particularly important in this case, so that it's easy
to audit internal processes, prevent this type of fraud,
and make sure that the organization itself
adheres to different regulatory standards.
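As a small illustration of one such second factor, here is a minimal sketch using the pyotp package: a time-based one-time password that a convincing synthetic voice alone cannot supply. The secret here is generated on the spot for demonstration; real deployments provision and store secrets securely per user.

```python
import pyotp

secret = pyotp.random_base32()      # enrolled once in the user's authenticator app
totp = pyotp.TOTP(secret)

code_from_user = totp.now()         # stand-in for the code the caller reads back
if totp.verify(code_from_user):
    print("second factor OK: continue the transfer approval workflow")
else:
    print("second factor failed: block the request and escalate")
```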
So the last case study here is AI manipulation.
In this case, an AI system used for
trading was manipulated by poisoning the data
it used for training. This caused
the AI algorithm to make bad trades
from which the cybercriminals made a hefty profit,
while the organization using the trading algorithm faced
an irrecoverable loss. The problem with these
kinds of attacks is that because they involve minor changes to the
training data, they are very difficult to analyze and
detect. This shows the need for continuous
auditing, along with anomaly detection
systems, in order to maintain the integrity and reliability of
AI systems such as trading algorithms. With
that being said, let's go to the application of GenAI in
governance, risk, and compliance. In governance,
generative AI is transforming how corporate governance
is administered by automating and optimizing decision-making
processes, thereby ensuring transparent and accountable
governance practices. A prime application example
of this would be AI tools increasingly being used to
analyze vast amounts of corporate data to identify
different trends, forecast potential governance issues,
and provide data driven insights to board members and
executives. For instance, AI can automate the
monitoring of compliance with corporate policies and
regulatory requirements, alerting the management to potential non
compliance and governance failures. The benefits of
using GenAI in governance include enhancement of corporate
efficiency and effectiveness. Basically, AI allows
real time decision making with a higher degree of accuracy and less
bias than traditional methods, provided the data
that was used in training is itself fair.
So it also supports a dynamic governance environment where strategic
decisions are informed by up to the minute data analysis,
enhancing responsiveness to market or internal changes.
Let's go to the application of GenAI in risk management.
In risk management, GenAI is used to predict and
mitigate potential risks before they materialize,
using advanced analytical technologies to model risk
scenarios and their potential impacts. Applications and
examples of GenAI in risk management include
AI systems that are adept at identifying
patterns and anomalies that may indicate risk,
such as fraudulent activities or cybersecurity threats.
For example, let's consider a financial services organization.
AI algorithms can predict credit risk by analyzing
transaction patterns and customer behavior more
accurately than traditional models. The same can be
said of banks. Banks use machine learning
algorithms to analyze the
transactions we make, and if a transaction passes
above a certain threshold, they treat it as potentially fraudulent activity,
and many banks have the ability to block such activity.
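Here is a minimal sketch of that threshold idea: flag a transaction when it sits far outside the customer's usual spending pattern. The three-sigma rule and the sample history are illustrative assumptions; real systems combine many more signals.

```python
import statistics

history = [42.0, 18.5, 60.0, 35.0, 51.2, 27.9, 44.3]    # past amounts (USD)

def is_suspicious(amount: float, past: list[float], k: float = 3.0) -> bool:
    mu = statistics.mean(past)
    sigma = statistics.stdev(past)
    return amount > mu + k * sigma          # far above the usual spend?

print(is_suspicious(49.99, history))        # False: within the normal range
print(is_suspicious(2500.0, history))       # True: flag and hold for review
```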
The benefits of using GenAI in risk management include
allowing companies to anticipate and mitigate risks more effectively,
thereby reducing the costs associated with losses and
insurance. By enabling predictive risk management,
organizations can allocate resources more efficiently and
improve their overall risk posture. Lastly, let's look at GenAI
in compliance. The usage of GenAI in compliance
is one of the key areas where AI can have
a significant impact, especially in regulated sectors
like finance, healthcare, and pharmaceuticals.
AI systems here help
ensure that compliance is maintained
by monitoring regulations and automatically implementing changes.
Applications and examples of GenAI in compliance include
AI tools automatically tracking and analyzing
changes in regulations to help companies adjust their operations
accordingly. In the healthcare sector, for example,
AI systems ensure that patient data handling
complies with HIPAA regulations
by automatically encrypting data and controlling access
based on user roles.
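Here is a minimal sketch of that pattern, role-based access on top of encryption at rest, using the cryptography package; the roles, the policy table, and the in-memory key handling are illustrative assumptions, not a compliance-certified design.

```python
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()                 # in practice: a managed key service
ROLE_POLICY = {"doctor": True, "nurse": True, "billing": False}

def store_record(plaintext: str) -> bytes:
    return Fernet(KEY).encrypt(plaintext.encode())       # encrypted at rest

def read_record(ciphertext: bytes, role: str) -> str:
    if not ROLE_POLICY.get(role, False):                 # deny unknown roles too
        raise PermissionError(f"role {role!r} may not view patient data")
    return Fernet(KEY).decrypt(ciphertext).decode()

blob = store_record("patient: J. Doe, dx: ...")
print(read_record(blob, "doctor"))          # allowed by the role policy
# read_record(blob, "billing")              # would raise PermissionError
```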
So what are the benefits of using AI in compliance?
It basically reduces the likelihood of human error
and the risk of non compliance, which could otherwise lead to hefty
fines and legal issues. It also streamlines the documentation
process, making audits more straightforward and
less frequent. With that being said,
let's go to risk management in AI. As far as risk
management in AI is concerned, it is a very crucial aspect
of safe and effective use. Effective and
safe use of AI begins with identifying what can
go wrong: understanding potential threats like
data poisoning or adversarial attacks, as well as performing
regular security assessments such as vulnerability assessments
and threat modeling, helps anticipate and prepare
for these kinds of threats. As long as security is
included early in the development phase of a GenAI model,
it's safe to say that the model is likely to be more secure
than models that try to incorporate security as an afterthought.
Once the model has been trained, it has
the potential to face the
different kinds of attacks we saw earlier. Employing tools
like real-time monitoring systems can alert us to unusual AI
behavior that might indicate a security issue.
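Here is a minimal sketch of such a monitor: keep a rolling window of the model's predicted-class mix and alert when it drifts far from a known-good baseline. The window size, the baseline mix, and the drift threshold are illustrative assumptions.

```python
from collections import Counter, deque

BASELINE = {"approve": 0.7, "review": 0.2, "deny": 0.1}   # known-good mix
window = deque(maxlen=500)

def observe(prediction: str) -> None:
    window.append(prediction)
    if len(window) == window.maxlen:        # only judge a full window
        counts = Counter(window)
        drift = sum(abs(counts[k] / len(window) - p) for k, p in BASELINE.items())
        if drift > 0.3:
            print(f"ALERT: prediction mix drifted (score={drift:.2f})")

for _ in range(500):
    observe("deny")   # simulated attack: everything suddenly denied -> alert
```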
Encryption is another security technology here:
it protects data by encrypting it with a strong algorithm,
making sure that only people who are authorized to access the
specific data will be able to interact with it.
So keeping AI systems and their components up to date
with the latest security patches is also of paramount
importance. Regular updates help protect against newly
discovered vulnerabilities. Adversarial training basically teaches
AI systems to recognize and understand disruptive inputs,
enhancing their robustness.
Utilizing AI security platforms to continuously
monitor for real-time attacks is of paramount importance.
Here we have different example security controls, such as encryption
and access controls. An
incident response plan is also of paramount importance,
because one needs to be ready and understand what
path to take if a specific attack happens.
And as we saw, incorporating adversarial examples
during training is really helpful in letting the AI model
distinguish between real data patterns and
attack patterns where
cybercriminals try to infiltrate the data.
Let's go to ethical considerations for AI.
So the rapid adoption of AI technologies brings
to light numerous ethical considerations.
Ethical AI basically involves ensuring that these systems
operate in a manner that reflects societal values,
protects individual rights, and promotes fairness and
justice. Some of the key concerns for ethical considerations
include bias and fairness. So AI systems
can perpetuate or even amplify biases present
in their training data, leading to unfair outcomes
that can discriminate against certain groups. For example,
facial recognition technologies have shown lower accuracy rates
for women and people of color, raising significant concerns
about fairness and equality. The next point would be
transparency and accountability. There is a growing demand for AI
systems to be transparent about how decisions
are made and what kind of data is used to train them,
especially during the training phase. Ensuring
accountability when things go wrong is important,
which requires clear guidelines on who is responsible. Is it going
to be the developers, the users, or the AI
itself? Privacy is also a key concern
here. AI systems often require vast amounts of
data, which can include sensitive personal information.
Ensuring that this data is handled securely is of paramount
importance, so that privacy is maintained as
a fundamental ethical requirement. Some of the strategies
for addressing ethical concerns include implementing
robust data handling and processing protocols to make
sure there is fairness and reduce bias in the
data that's used for training. AI systems also need
to be developed with explainable
AI features that allow users to understand and
test the decisions made by the AI. Basically, this is a way
of saying the user should be able to understand how the AI makes a
decision, so that it is not biased against
people of color, women, or any other group, etcetera.
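Here is a minimal sketch of one such check, the demographic parity gap: the difference in positive-outcome rates between two groups. The sample decisions and the 0.1 tolerance are illustrative assumptions; real audits use several metrics over real cohorts.

```python
def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]      # model decisions for group A
group_b = [0, 0, 1, 0, 0, 1, 0, 0]      # model decisions for group B

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:
    print("WARNING: outcome rates differ materially between groups")
```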
Regular ethical audits of AI systems are also helpful here,
as they help ensure adherence to ethical standards throughout the life cycle.
Let's go to the next section of the presentation, which is legal
considerations for AI. So,
as AI technologies become more integral to business operations
and daily life, they also encounter a complex web of legal
challenges. These involve compliance with existing laws
and regulations and development of new laws to
address emerging issues. Some example frameworks
include the GDPR (General Data Protection Regulation),
which imposes strict guidelines on data privacy
and security, including for AI systems that process the personal
data of EU (European Union) citizens.
The other regulation we're going to talk about is the California
Consumer Privacy Act, which is similar to GDPR
but provides consumers with specific rights regarding their personal
data used by AI systems;
this applies to the state of California only. There is also the
Biometric Information Privacy Act, which
regulates the collection and storage of biometric data
and is crucial for AI systems that use facial recognition and other
biometric technologies. Some of the key
legal challenges include intellectual
property. Determining the ownership of AI-generated
content or inventions poses significant challenges.
Traditional laws here are not well suited to address whether
an AI can be a copyright or patent holder.
Traditional laws were defined for publications,
and AI output itself cannot be defined as a publication or
a magazine or anything else, because it's being generated
by software. As technologies evolve,
these kinds of laws and regulations need to evolve to
adapt as well. The next challenge is liability.
As we saw
before, when AI systems cause harm, whether physically,
financially or emotionally, determining the liability
of a specific system can be complex. For instance,
if an autonomous vehicle is involved in an accident,
questions arise about whether the manufacturer, the software
developer who provides the navigation and self-driving system,
or the vehicle owner should be held responsible for such an
accident. The next part would be regulatory compliance.
Different countries may have different regulations regarding AI,
such as GDPR in Europe, as we saw,
which imposes strict rules on data protection and privacy.
AI systems operating across borders must navigate these
different legal landscapes. There is also a
new AI Act that has recently been developed
by the EU. As these regulations keep evolving
to meet the evolving technology, it's
up to organizations that operate across different
borders to navigate and adhere to
these legal landscapes. Some of the strategies for navigating
legal challenges include engaging with legal experts
to make sure that AI applications comply with relevant laws
and regulations, participating in industry groups and
standards organizations to help shape laws and
regulations that are fair and practical, as well as implementing
rigorous testing and compliance checks within the development process
to mitigate potential legal issues before they arise.
With that being said, let us look at some of the challenges currently facing
AI systems. Even when AI systems are
compliant with different regulatory frameworks,
there are still challenges, especially for organizations
that perform cross-border transactions.
Some of these challenges include regular audits, which
may happen at a state, federal, or global
level. Next come data management strategies. As
different nations have different regulations,
data management becomes very important. Some regulations
require data to be stored in-state or within the country,
or require certain rules to be followed before the data can be accessed
from outside the country. Next, there are cross-functional compliance
teams: organizations that operate across borders
need to work with cross-functional compliance teams
to make sure they adhere to the
different regulatory frameworks,
and to make sure they evolve their
systems to meet such regulations and keep in
touch with what is needed for these technologies
to stay safe and compliant. The last part is
going to be continuous education and awareness. Of course, as technology
keeps evolving, there's going to be the challenge of continuous education
and awareness. It's up to these organizations to devise
a training plan and work with their different partners
and employees to make sure each and every one of them
keeps up with the evolving technologies.
With that being said, let's conclude this session by
summarizing what we saw
and how AI systems
can be beneficial to humans. To conclude, the security
of generative AI is not just a technical issue, but rather a broad
concern that impacts public trust and the ethical deployment
of technology. One must be proactive,
embedding security at every stage of AI development and
deployment: security needs to be
considered as early as possible, rather than as an afterthought.
The commitment to robust security measures must evolve as new threats
emerge. This requires vigilance, continuous innovation,
and the commitment to ethical practices. By doing so,
one can ensure generative AI technologies are not only powerful and
effective, but also safe and trustworthy.
Let's take forward the understanding that securing generative
AI is integral to leveraging its full potential.
We need to work collaboratively across industries and disciplines
to uphold the highest standards of security and ethics,
ensuring that it benefits the entire society without
compromising the safety or integrity of the public. Thank you.