Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, everybody.
My name is Shraddha Kohli.
Today I will be talking about building user trust in conversational AI: the role of explainable AI in chatbot transparency.
Today's presentation explores the application of explainable AI
techniques to enhance transparency and trust in chatbot decision making.
As chatbots have become increasingly sophisticated and popular, understanding their internal reasoning remains a significant challenge.
Let's first try to understand how chatbot technologies have actually evolved.
The rapid advancement of chatbot technologies has revolutionized human-computer interaction, offering increasingly sophisticated conversational experiences across various domains.
However, as these AI-driven systems become more complex, they often operate as black boxes, making it challenging for both users and developers to understand the rationale behind their responses.
This opacity can lead to issues of trust, accountability, and difficulty in improving system performance.
The early chatbots came around the 1960s and primarily relied on pattern matching and predefined responses.
Then came the machine learning era.
The advent of machine learning and NLP techniques in the 21st century led to more advanced chatbots capable of understanding context and generating human-like responses.
Yet even with all these advances, current chatbots still face challenges such as a lack of transparency, leading to issues such as unexpected responses and biased decision making.
In order to address these challenges, we did a study.
Let's first take a look at an overview of explainable AI techniques.
Explainable AI techniques aim to demystify the decision-making processes of complex AI systems.
Three key methods are explored in this study.
The first one is called LIME, local interpretable model-agnostic explanations, which helps create local linear approximations of the model's behavior by perturbing inputs and observing outputs.
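To make that concrete, here is a minimal sketch of LIME applied to a text classifier. The intent labels and the keyword-based predict_proba function are illustrative stand-ins, not the actual chatbot models evaluated in the study.

```python
# Minimal LIME sketch: perturb an input sentence and fit a local linear model.
# The labels and the toy classifier below are illustrative assumptions only.
import numpy as np
from lime.lime_text import LimeTextExplainer

class_names = ["refund_request", "other"]  # assumed intent labels

def predict_proba(texts):
    # Stand-in for the chatbot's intent classifier; returns class probabilities
    # with shape (len(texts), len(class_names)).
    probs = []
    for t in texts:
        p = 0.9 if "refund" in t.lower() or "money back" in t.lower() else 0.1
        probs.append([p, 1.0 - p])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    "I want my money back for order 1234",
    predict_proba,
    num_features=6,    # top tokens in the local linear approximation
    num_samples=500,   # number of perturbed variants of the input
)
print(explanation.as_list())  # (token, weight) pairs explaining this one response
```

The resulting (token, weight) list is the local linear approximation mentioned above: it only claims to describe the model's behavior in the neighborhood of this one input.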
The second technique that we used was SHAP, Shapley additive explanations, which uses game theory to provide a unified measure of feature importance, offering both global as well as local interpretability.
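As a rough sketch of how SHAP can be applied to a conversational text model, the example below explains a Hugging Face sentiment pipeline; that pipeline is an assumed stand-in for the chatbot's scoring model, not the study's actual setup.

```python
# Minimal SHAP sketch on a text model; the pipeline below is an assumed
# stand-in for the chatbot's response-scoring model.
import shap
import transformers

model = transformers.pipeline("sentiment-analysis", return_all_scores=True)

explainer = shap.Explainer(model)  # SHAP selects a text masker for pipelines
shap_values = explainer(["The chatbot resolved my billing issue quickly."])

# Per-token Shapley values give local attributions for this one response;
# aggregating them across many interactions gives the global view of feature importance.
print(shap_values[0].data)    # the tokens
print(shap_values[0].values)  # their additive contributions per output class
```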
Lastly, we used counterfactual explanations, which focus on finding minimal changes to the input that would result in a different output, helping users understand key decision factors.
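Here is a toy sketch of the counterfactual idea: greedily search for a single-token change that flips a classifier's prediction. The classifier and candidate vocabulary are illustrative assumptions, not the exact method used in the study.

```python
# Toy counterfactual search: find a minimal (single-token) edit that flips the label.
from typing import Callable, List, Optional

def find_counterfactual(
    tokens: List[str],
    predict: Callable[[List[str]], str],  # maps a token list to a predicted label
    candidate_words: List[str],
) -> Optional[List[str]]:
    original_label = predict(tokens)
    for i in range(len(tokens)):
        for word in candidate_words:
            edited = tokens[:i] + [word] + tokens[i + 1:]
            if predict(edited) != original_label:
                return edited  # smallest change found that alters the output
    return None  # no single-token counterfactual exists for these candidates

# Trivial keyword "classifier" standing in for the chatbot's intent model.
def toy_predict(tokens: List[str]) -> str:
    return "refund_request" if "refund" in tokens else "other"

print(find_counterfactual(["please", "refund", "my", "order"], toy_predict, ["track", "cancel"]))
# -> ['please', 'track', 'my', 'order'], showing which word drove the decision
```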
In our research, we recruited around 150 users across different demographics, and we employed three state-of-the-art chatbot models: a retrieval-based model, a generative model based on transformer architecture, and a hybrid model.
We applied the three techniques that I just mentioned to these models, assessing their effectiveness using metrics such as faithfulness, stability, and comprehensibility.
The three stages were: we first did model selection, then we applied one of the explainable AI methods, and then we evaluated the results.
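Schematically, that three-stage loop looks something like the sketch below; the model names, method names, and metric values are placeholders to show the structure, not the study's actual code or results.

```python
# Schematic of the evaluation loop: model selection -> explanation -> evaluation.
models = ["retrieval_based", "transformer_generative", "hybrid"]
xai_methods = ["LIME", "SHAP", "counterfactual"]

def explain(model_name: str, method: str, user_input: str) -> dict:
    # Placeholder: would run the chosen explainable AI method on the chosen model.
    return {"model": model_name, "method": method, "input": user_input}

def evaluate(explanation: dict) -> dict:
    # Placeholder metrics; the study scored faithfulness, stability,
    # and comprehensibility for every model/method pair.
    return {"faithfulness": None, "stability": None, "comprehensibility": None}

for model_name in models:
    for method in xai_methods:
        exp = explain(model_name, method, "Where is my order?")
        print(model_name, method, evaluate(exp))
```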
Our study revealed that each of the explainable AI techniques offered unique
insights into the chatbot decision making.
LIME effectively provided local explanations for individual responses, while SHAP offered a more comprehensive view of feature importance across multiple interactions.
Counterfactual explanations were particularly useful in highlighting
the sensitivity of chatbot responses to specific input changes.
As for the impact of our study on user trust and understanding, it demonstrated a significant improvement in trust and understanding when interacting with explainable-AI-enhanced chatbots.
Participants reported that they felt more confident in the chatbot's ability and were more likely to forgive occasional errors when provided with explanations.
This whole black box sort of became a white box to them.
The ability to see the reasoning behind responses led to increased
user engagement and willingness to use the chatbot for more complex tasks.
Without explainable AI, they had very limited understanding of the chatbot's decisions, and they had low forgiveness of errors.
With explainable AI, their comprehension improved by 48 percent, their error tolerance improved by 40 percent, and their use of chatbots for complex tasks such as healthcare or finance increased by 27 percent.
Overall, user trust increased by 35 percent, and user engagement increased by 31 percent.
These results were very promising.
However, there are still some challenges in implementing these explainable AI techniques for chatbots, such as the real-time nature of chatbot interactions, which sometimes makes it difficult to generate comprehensive explanations without introducing significant latency.
Additionally, balancing the level of detail in our explanations with user comprehension is also a challenge, as we don't want our explanations to be so technical that non-technical users cannot understand them.
Model compatibility is another issue: adapting the right explainable AI technique to various chatbot architectures, and deciding which model to select, is also one of the challenges.
The integration of explainable AI techniques in chatbot systems
has far reaching implications for their development and deployment.
Developers can use the insights gained from explainable AI to refine chatbot models, address biases, and improve response accuracy.
Furthermore, the explainable AI can facilitate easier debugging
and maintenance of chatbot systems, potentially reducing
long term development costs.
So it definitely has an impact on the developer users.
Now, on to the ethical considerations in transparent AI-driven chatbots.
This raises important ethical considerations because while we want
transparency to build trust, we also want to make sure that we are not exposing
any sensitive information about the underlying models or the training data.
So striking that balance between transparency and privacy is crucial.
Furthermore, there is a need to ensure that explanations are presented without bias and do not inadvertently reinforce societal prejudices or stereotypes.
One of the future research directions is that, as our language models become increasingly sophisticated, there is a pressing need for more advanced explainable AI techniques that can effectively interpret and explain their decision-making processes.
Future research should focus more on developing methods that can handle the complexity of transformer-based architectures and other state-of-the-art language models.
This may involve exploring hierarchical explanation approaches that can provide insights at different levels of abstraction, from individual attention weights to higher-level semantic representations.
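As one concrete building block for such hierarchical explanations, the sketch below pulls the individual attention weights out of a small transformer; the model name and the input sentence are assumptions for illustration, not part of the study.

```python
# Sketch: inspect individual attention weights, the lowest level of abstraction
# a hierarchical explanation could build on. Model and input are assumed examples.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased", output_attentions=True)

inputs = tokenizer("Why was my claim denied?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple (one tensor per layer) of shape
# (batch, heads, seq_len, seq_len); average over heads in the last layer.
attn = outputs.attentions[-1].mean(dim=1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, row in zip(tokens, attn):
    print(f"{token:>10}", [round(float(w), 2) for w in row])
```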
Overall, with this study, we were able to demonstrate the significant
potential of explainable AI techniques in enhancing the transparency and
trustworthiness of chatbot systems through the application of methods such as LIME,
SHAP, and counterfactual explanations.
We have gained invaluable insights into chatbot decision making processes.
Our research has shown that integrating explainable AI into chatbots
not only improves user trust and understanding, but also provides
developers with powerful tools for refining and improving these systems.
As chatbots continue to evolve, the need for transparency and
accountability also becomes more crucial.
And there is a lot of future potential in this area.
Thank you.