Transcript
This transcript was autogenerated. To make changes, submit a PR.
Welcome to this talk on design thinking before you develop your next AI product.
Before we go further, let me tell you a bit about myself and
my background in this field. I have been in software consulting,
design and development for the last 20 years.
After doing some Java J2EE development at the start of my career, I got several opportunities to lead emerging technologies research, design, development, and now deployment.
So right from big data around twelve years ago, I have been with
these technologies like IoT, gamification,
enterprise social, blockchain, data science,
and now AI and ML. So now in my current
role, I actually lead a fantastic
team of designers, developers and
solution architects who design,
deploy and scale these products for large enterprise
grade customers. And that really gives me
some of the real world experience in this field which I'm going
to share with you all today. In today's talk,
we will dive deeper into the current state of AI,
where things are, what are some of the issues that
haunt AI applications in the absence of approaches
like design thinking? And then how can design thinking
really help? And how
do you design things for AI applications?
So with that, let's start the talk today.
To start, let's look at the big picture.
What is the ultimate aim of AI products?
In my opinion, it is to be a force multiplier
for people and organizations taking over repetitive tasks
or very complex tasks that take time for
humans to do and that can complement human
effort. Things that are right now based on
subjective knowledge and experience
and which often result in inconsistent outcomes,
can be changed with AI. They can be made to be encompassing
of all the possible data and all the possible context
around that data to result in the best possible decisions
with consistency and reliability.
And the data that we are talking about changes with time.
In my opinion, the biggest promise of AI
is to ensure that the technology supporting our
society keeps updating itself continuously
with the changes that come with time, with place,
with the surrounding context, and it
keeps updating this knowledge and understanding with this continuous learning.
So lofty goals for sure, but I would say not too far off. And in fact, these goals are not at all new.
Over the last 70 or so years we have made incremental
progress in AI. After coining the term and making
some early experiments, there was a long gap in which we did not really
do much, and it is often called the AI winter.
But since the 1980s, focused academic
research as well as development of surrounding ecosystem of
technologies has led to tremendous investments.
But nothing has been as big as the last two or three years, where we saw the first big use case of generative AI through ChatGPT, and that has rightly deserved its place in every other LinkedIn post that you see, in conference talks, in newspaper headlines, and more often than not on the minds of worried policymakers.
That said, it is not just OpenAI and ChatGPT.
AI startups are sprouting in every major industry vertical. Cross-industry applications such as vision, natural language processing and search are still ruling the charts, but it is use-case-focused development, the application of these cross-industry capabilities in a given business context, such as machine vision on a manufacturing plant floor or, say, traffic condition analysis on a highway, that is actually resulting in real business value for businesses. But none of them are as feasible or even deployable without the supporting platforms and processes, where we are seeing a lot of development recently, such as those needed for MLOps or AIOps. But amid all this buzz
that we are hearing about AI, there are a few things which it is really good at, and some things where it is still very niche, very new. Among the things it is good at, first I would say are NLP-based chatbots; AI has really shined in these areas. Or inculcating knowledge from
heterogeneous data sources, detecting anomalies and
fraud in say, insurance claims,
etcetera, and then predicting based on
historical trends and creating better personalized recommendations.
Something that you see when you shop on Amazon or Instacart, etcetera. It is not just based on your historic information, but also on the context, on people like you, and on where your interests could lead you to spend your next big buck. Right, but where are
we heading with all these developments? Around 90%
of AI experts, when surveyed, felt that AI advancements will lead to human-like intelligence in the next 100 years. I don't know what
will happen decades from now,
but one thing we can say for sure is that it
is here to get better, with over 80% of businesses
adopting it by 2025. But some of the
most prominent areas of challenge in the current state of AI still haunt us, and one of the biggest is the scaling of AI applications. Currently, the kind
of energy requirements that large scale AI
applications have are simply not sustainable
and feasible for most of the businesses and people.
That is something that still needs to be worked out.
And there are some very good advancements being made which tell us that this is something that will be solved in the next few years.
The next is about adoption of AI. Yes, it is
related closely to the cost of AI right now, but there are other issues
that are stopping AI from being adopted.
Generally they are related to the quality of the outcomes. And then the biggest thing, which is again closely related to adoption and scaling, is trust. Users and businesses are not able to put their complete trust in the
AI solutions. We will look at all these issues in detail in
a little bit, but having said that,
I think there are some things that you can solve
with approaches like design thinking, and there are of course,
a whole gamut of other frameworks and approaches that
need to be applied in order to solve the others. That said,
here are some of the most prominent challenges in scaling and adoption of AI, from a report by Everest Group. I'm not
going to go through all the numbers here, but the key point is that organizations
do not really have the talent and skills to make informed decisions
about their AI investments. The way they manage their data
and the computational infrastructure right now is just
not suitable for enterprise grade implementation of AI,
and everything that would be useful for them
is extremely expensive right now, at least at the current
state of technology, and the skills to maintain that
are just not there. There is also inadequate policy
and compliance infrastructure, so businesses do not know if
what they are going to do is going to be acceptable in the regulatory framework
or not. And then there is simply unclear ROI: organizations do want to do these new things, but whether the kind of costs they take is really going to result in the ROI they are expecting is unclear. But that happens with every new technology, doesn't it? Having talked about scaling and adoption,
let us look at the last pillar, which was really trusting
AI and why businesses and users are
having trust issues with it. So the first one that
is most talked about is bias. It occurs when an
algorithm produces results that are systemically prejudiced
due to either erroneous assumptions in the machine learning process
or the lack of context, lack of
data sources, lack of all the possible parameters
that can inform better decisions. And algorithms
can have these built-in biases because they're
created by individuals who have conscious or unconscious preferences
themselves that may go completely undiscovered
until the algorithms are actually used.
Bias is a big issue which is haunting AI right now.
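To make this bias concrete, one simple check a team can run is to compare the algorithm's rate of positive outcomes across demographic groups. Here is a minimal sketch in that spirit; the claim decisions, group labels, and the 0.8 threshold (the common "four-fifths rule" heuristic) are all invented for illustration, not from the talk.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += 1 if ok else 0
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs on insurance claims: (group label, approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(decisions)
biased = ratio < 0.8  # the "four-fifths rule" heuristic flags ratios below 0.8
```

In a real system you would run a check like this on model outputs per protected attribute, and a low ratio would trigger a review of the training data and the assumptions baked into the model.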
The second one is accuracy and real time intelligence.
All the AI systems that have been created have either supervised or unsupervised learning in them. The data sources that they are considering keep changing all the time, and depending on how the AI system has been implemented, it may or may not be updating its learning sources continuously, which greatly affects the accuracy of outcomes. The third one is a big one, as it
is about explainability: the AI system, the kind of recommendation it has given you, on what basis has it said so? Again, coming back to the example of
online shopping, your new recommendations,
have they been shown just based on your past history,
or has it also considered people like you? Has it
considered what is the next big fashion trend that is going
to happen? Or has it considered the weather implications
in your area in the next few months?
So, there are a lot of factors that AI algorithms can consider, but communicating them to the end customers is extremely important, and that is where we are failing right now. That is one part of it, the things that can be easily explained. There is also
the inherent unexplainability, or the lack of explainability, within
the AI models themselves, because they have been built
upon the understanding that we do not understand our
brain very much. We created these neural networks which mimic
the outcomes that our brain creates, but we do not
know how the process itself works. The data scientists have a very hard
time explaining why their model chose outcome A versus outcome B in two
different scenarios. And that is a little bit harder
problem to solve than the first one, where there are parameters
that just need to be explained to the users.
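For that first, easier kind of explainability, where the parameters just need to be communicated, a product can at least surface which factors weighed most in a recommendation. A minimal sketch, assuming a hypothetical linear scoring model; the feature names and weights are made up for illustration and are not from the talk:

```python
def explain_recommendation(features, weights, top_n=3):
    """Rank features by their contribution (value * weight) to the score."""
    contributions = {name: value * weights[name] for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

# Hypothetical signals behind a shopping recommendation
features = {"past_purchases": 0.9, "similar_shoppers": 0.7,
            "seasonal_trend": 0.4, "local_weather": 0.1}
weights = {"past_purchases": 0.5, "similar_shoppers": 0.8,
           "seasonal_trend": 0.3, "local_weather": 0.2}

# The top entries become the "because you..." explanation shown to the user
top_factors = explain_recommendation(features, weights)
```

For deep neural models this kind of per-feature attribution is exactly what is hard to produce, which is the second, harder kind of explainability.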
The next one is about the security,
copyright and IP infringement. As we have all seen with some of the image generation or video generation software, generative AI collects millions of data points, entire data sets, right? And where has it taken them from? What has influenced, what has affected the result? It is simply not humanly possible to cite all of them with every outcome that is created. And it is important for it to look at all the data sources. But that also heavily weighs in on some of the creative ideas that original creators have put in, and they are simply not getting credited for it, forget the reimbursement for the well-thought-out work they put in. So that is, again, one of the things with which the creative community has a lot of problems with AI. The next one is quality.
A lot of you who may have used ChatGPT must have seen this, right? I have seen this myself. If I ask it to create the script for this talk, I would see a lot of things being repeated, which is known as parroting. The answer that I get today for a talk on design thinking would be very different from what I may have got a year ago, and may be very different the next year, depending on what data sources and sets it has considered. And sometimes it will be very outdated as well, creating these drifts in the outcomes.
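These drifts can at least be measured. A common technique is to bucket a model input or output score for a reference period and compare the live distribution against it with the population stability index (PSI). This sketch is illustrative: the bucket counts and the 0.2 alert threshold are conventions I am assuming, not numbers from the talk.

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between two bucketed distributions."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    value = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)  # avoid log(0) on empty buckets
        a_pct = max(a / a_total, 1e-6)
        value += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return value

# Bucketed score distributions: last year's reference vs. today's live traffic
reference = [100, 300, 400, 200]
current   = [300, 300, 250, 150]

drifted = psi(reference, current) > 0.2  # >0.2 is a common "significant shift" heuristic
```

A check like this, run on a schedule, is part of what the MLOps platforms mentioned earlier exist to automate.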
The last one is hallucinations. Sometimes when
AI is not able to bridge the gap in what it knows, it puts in things which are totally out of context. Right? Just the other day I was hearing on NPR an example where they said a newspaper ad agency or a product review company was using AI, and they were writing about gym belts. And suddenly the AI hallucinated and started putting in text which was talking about waist belts or fashion belts, the ones you wear. And it doesn't take a human long to understand when the AI has done so, and it just creates that cringeworthy experience, where you wonder whether you would really want to consider it for any serious discussion or not. So, all these issues
are prominent challenges, because of which users are
not able to trust AI completely. But what is the
root cause? We've talked about some of them, the lack
of explainability, etcetera. But one of the key reasons this is happening is that during the development of this technology, people were extremely focused on the technology itself and on how to make the algorithm work; they were not really taking the end user into consideration, or society into consideration, etcetera. But now, when the business applications are out there in the world, this tech-focused tunnel vision is not working at all. So that is one of the key reasons. Another one
is that when people think from a business applications perspective,
they think about people, process and technology.
And now data is a new participant in it.
It is more than the technology itself, it is
more than the process, it is more than the people. It is also an
active participant in an AI ecosystem, in an
AI application. So now, thinking from the
data's point of view is extremely important.
What data sources should you consider? What should be the quality of those data sources? These are things that have to be rethought when you are developing an AI application, which right now are not being considered by every product, and which is where, again, some of these biases, drifts, etcetera, get created. Lack of explainability.
I think we've talked about it enough. There are, of course, two kinds: the explaining that is simply not done by the developers and implementers, who do not inform the users enough, and then the inherent lack of explainability in the implementation of these models. The last, but something that
is very true for any new technology, is that
it is just simply new. We have not lived in an AI
world before. Some of the problems that we are seeing right now
will not exist in 2040. So there will be new problems.
Of course, I think the problems that we would be talking about ten or 15 years from now would be about supervised learning, the sustainability of quantum computing, etcetera. But probably we as a society would have found solutions to the problems related to bias, drift, etcetera. So remember these four reasons
for the rest of the talk: that there is an extreme tech focus; that the new participant, data, is not being considered in designing, when the user experience people look at AI systems; that there is a lack of explainability, both from the technology side as well as from the design side; and that, overall, it is a new world for all, not only the implementers and designers, but also the users and customers, so nobody really knows what to expect from the ultimate system.
Why do I think design thinking can really help? Because if you look at the key tenets of design thinking, it always keeps its focus on the user at the center of it all. Everything that you design is for a specific persona, and that is extremely important for AI systems, because when you consider the feelings and emotions of a person who is going to interact with the system that you have implemented, you are always going to do a better job than by throwing technology at them and merely perceiving what they might or might not like. And then the next is that it
also relies very much on cross-functional collaboration. You can avoid some or many of the biases and tech-only-vision problems by involving various groups, diverse backgrounds, diverse data sets, and diverse contexts in your AI solution design. There is also a focus
on iterative development, which can really help you to solve problems incrementally and thereby reduce risks. The way it does this is by taking the solution back to the user sooner than traditional system implementations do.
You cannot solve all the issues that plague AI
currently, such as the talent and skills gap,
or the hardware sustainability and inefficiencies,
etcetera. But there are definitely these application
adoption and trust issues that could benefit
from the approach of design thinking. So before we
go deeper into how, let's see what design thinking is. What design thinking actually does is make sure that whatever product or service you're trying to implement is actually viable for your business, that it is feasible with the technology at hand and, most important, that it is desirable by the users that are going to use it. It has been at the center of most of the innovation that has happened recently.
Many of the successful apps, companies, products and services
have benefited from it because of these key
qualities of design thinking approach. But again,
it is not new at all. It is not as if it has only recently arrived and no products or services have become widely popular because of it. Design thinking has always been there at the center. Everything has been said about this. Take up any industry: take a successful book, take a successful movie, take a successful song. It has always had these qualities at its center.
But the reason why we need to talk about this again now is because AI is new, and it is showing some of the problems that other products and services have shown in the past, products which, when they used design thinking, got better. That is why AI could really benefit from some of the key tenets of design thinking. For those of you who go by the definition,
here is the definition for you. It is an approach to solving wicked problems by understanding users' needs and developing insights to solve those needs, resulting in an "aha" experience for not only the users, but creators and stakeholders
as well. Now, before we move on to when you should use design thinking or not, if you just look at the highlighted words here, they would give you a very simple guide on when to use design thinking and why it is important. Right?
Wicked problems: problems which are not easily solvable by simple if-then-else statements and for loops, right? When you have to consider users' needs: again, think of the trust and adoption issues. When you develop deeper insights: think about all the data and context that we have been talking about till now. And when you create an "aha" experience: something that people have not experienced before. That is what your AI system is supposed to do; we all saw that aha experience when we used ChatGPT for the first time, right? So coming
back to when to use it, right? Whenever you need to
understand user needs and develop insights into
those needs, that is when design thinking should be used.
When the problems are really wicked, that means they are extremely complex and the answer is not straightforward. And when you need to create a unique experience. Now let me give an example
here. For example, in the healthcare industry vertical,
there is a lot of talk about analyzing the electronic
health records and creating these use cases
which come up with recommended treatment plans.
The basis of that is that there are certain conditions
which only involve looking at certain parameters and depending
on the context of the patient, the recommended treatment
plans are few and you cannot
really go wrong. It is really just a few options
that a provider has to consider.
So here you need to understand the
user needs. When you're creating the system, you have to think about the patient
trust issues, right? What if you give the information of these recommended treatment plans to the customers or to the patients directly? Are you ready for that kind of world where people are getting their treatment plans from an AI solution? Would they be able to trust it? Or could it create distrust in the physicians and providers themselves, that, oh, these are such simple things? There is also a possibility of misuse of that information by the patients themselves, if they have just considered two or three parameters, come up with their own treatment plans, and maybe ignored a larger health condition which a physician would have been able to look at in much more detail. So maybe for this particular use case,
patients are not the user group that you should be focusing on.
Maybe it still needs physicians, or the human supervision of certified nurse practitioners or any other provider persona type, depending on what one group is permitted to see versus another. For example, nurses should get only these aspects of the recommendations; doctors can probably look at the higher-level recommendations, etcetera. So you probably need to rethink your user groups and the needs of those user groups. Maybe patients are not a user group at all in this particular use case at this time, right? The next one is why is it
a wicked problem? No one knows the right answer here.
No long-term study has yet been done on automated recommendations of treatment plans in various conditions. Maybe a recommended treatment plan may be good for general weight management issues, but maybe not for a diabetic's weight management, maybe not for weight management in pregnant people, etcetera, right? The correctness of this has not yet been studied over the long term, so it is a fairly wicked problem. And the same goes for what happens to the experience of the end customers, and in the provider spaces as well.
You have to create a unique experience here. No one has lived
in the true AI world and you do not know the repercussions
of this, right? So you have to create an experience that fits their requirements right now, one that increases
their productivity without compromising on the quality of
care that the end customer is getting. It is
a perfect use case where design
thinking should come in before any actual AI
application is launched in this particular area.
There are several schools of thought in traditional design thinking now. Literally, they come from schools, many universities. The one that is most popular, and which we will be going into in detail today, has come from the Stanford d.school's framework for design thinking, and we will look at it in a bit more detail.
But across all these different frameworks that exist out there, the key goals remain the same. They all focus on understanding the user, they all rely on radical brainstorming in cross-functional groups, they all promote rapid experimentation and going back to the users with the results of those tests, and they all believe in co-creation and collaboration between user groups, different teams, different sets of data, etcetera. So it doesn't matter which framework you are picking up, as long as it has these qualities in it.
We've talked about what design thinking is.
Let us also see what it is not. It is
definitely not a quick fix or a band aid type of approach,
right? Where you are seeing certain issues in production
and there is a big user trust problem which you need to fix
in a week or so. Design thinking is not the answer.
You need to do something else about it. Design thinking is generally
needed when you are starting a new product or service
because it takes time to do the iterations. It takes
time to come up with the final solution. It is also
not an approach for when you have the technology at hand and you are looking for where to apply it. Right? What is happening with most organizations right now? They have AI and machine learning: oh, how can we use generative AI in insurance right now? How can we use generative AI here? That, again, is a hammer looking for nails. This is not where design thinking would help you.
Design thinking would really help you. If you have a problem space,
if you have a user group in mind and you want to solve
a specific problem, it is also not a quick response
to competition. Again, enterprises these days are saying,
oh, our competitor A is not yet using generative AI; maybe if we do, we will have a better edge over them. Maybe, maybe not, right? Nobody knows the answer. But definitely design thinking
is not a way to get that competitive edge.
You probably need to look at the strategy a little differently.
You need to look at your industry specific use cases.
You need to look at where your business is differentiated,
etcetera. Again, that is a whole side of the business that
cannot be solved by design thinking. And then it is definitely
not a foolproof formula for sure-shot business success. Again, related to the third point here, it's not that you can solve every problem that your business
is currently facing or your use case has with just design thinking.
It is really about creating new products
or experiences or services for known user
groups. So let's look at traditional
design thinking as well as what is emerging for AI in
the market right now. So as we talked about, in this Stanford d.school design thinking approach, there are five modes, right?
You go from understanding the user, which is the empathize
mode, and we will go into the depth of these modes again
in a little bit. But there are these five modes where you understand
the user, you define the problem space, you collect the ideas,
then you try to solve the problem with rapid prototyping,
and then you test it with the user. Again, this is how
traditional design thinking works. Again, it may look like a waterfall approach,
but there are several iterations that can happen between
the modes themselves. For example, between Empathize and Define, you might go back to the user to understand the problem space again; you might want to refine the problem statement again by talking to the users again. And you can do several such cycles back and forth, similarly between Prototype and Test, or between Ideate and Prototype, or across the whole cycle itself.
It is a very iterative approach, although it may look like waterfall in
this diagram. So this is what the traditional design thinking approach looks like. If you look at what is out there in the world right now, this is an example. It has come from IBM's Design for AI framework. Now, again, they are looking at design thinking from AI's point of view: how AI is making you think about the business, about the data, about the understanding of the users. How should you prototype it? What knowledge have you created? What knowledge would you continuously learn about, etcetera? Again, it is looking at the problem space through AI's lens. Now I want
to explain this for the benefit of those of you who have worked
with the design thinking approach, as well as for those who
are doing it for the first time. The traditional design thinking approach
has been there for some time now. People are comfortable with it,
they have deployed it, they've seen it succeed multiple
times. So it is very much possible for people
to just fall back on the traditional approach without really considering
these new frameworks. And then this new framework that
you have seen, for example, that of IBM, requires you
to again practice it, learn it, and without really
implementing it in the real world, it is difficult to be
an expert in these new frameworks and approaches.
So that's why what I want to talk about today is
how you can take the traditional approach and apply some AI considerations to it, so that your learning curve is not as steep. So that is the approach
I'm going to take for today's talk. If you all have different ideas,
you have any discussion point about it,
we can talk about it later through questions as well.
But right now, for the purposes of this discussion,
let's just take traditional approaches and then apply
AI considerations on them. So again,
I'm going back to the five-step, or five-mode, framework, the Stanford d.school's, and for
each of the steps that we would talk about, I would talk about the goal,
the process that is generally taken, and the tools that are available in
traditional approach. So first of all, mode one: Empathize.
The aim is to understand the users within the
context of your design challenge.
The process is basically to observe, engage,
and immerse with these users. Some of the tools that are available
for you are interviews, empathy maps, and in-context immersion.
Now, let me give you a quick example of how this traditionally works.
So, for example, if you are creating
an application for kindergarteners,
generally what the designers would do is
be in the classroom at the level of kindergarteners,
sit in that space, see what are their physical constraints,
what do the kids need, what frustrates them, what excites them
in the classroom. And then they look at all of that, and see that all these considerations are met when they are designing the final solution. So they would actually immerse
themselves, they will interview their end users, and they
would create the empathy maps of saying, okay, what do they say?
What do they do, what do they feel, what do they like, what do they
not like, etcetera. And this in-context immersion becomes
extremely important, even for
non-AI applications. So now, having said that, let's look at the AI considerations that each of these modes needs to take care of. First we'll go to Empathize.
So it is extremely important to observe the user in the non-AI world before, again, going deeper into it. If you are creating an AI application, it is always very much possible that you would just interview a user virtually and say, okay, this is how you are going to use a treatment plan, right? But think of somebody who is considering using a treatment plan, somebody who is going through the diagnosis right now, or somebody who is going through the treatment plan itself right now. Unless you sit with the user when the diagnosis is communicated to them, what do they go through during that time? What kind of questions come to their mind? What do they ask their provider? What does the provider explain to them? Unless you sit in that context and observe all your user groups properly, it is quite possible to leave out certain data, certain key considerations, the point of explainability, and all of this is extremely important for your AI applications to create trust.
Also, you have to select the data sources based on their authenticity and accuracy, right? Again, considering the patient's questions, considering what the providers or the doctors look at most and what they believe in, and making sure you prioritize that in your AI application is important. And then you
have to align the AI solution to the user's context. Something that may work very well in, say, a general-purpose generative AI chatbot may not be right for a diabetic patient who is looking at it to consider their next treatment plan, right? You cannot give them the same kind of disclaimer, that AI results may be wrong, here. And maybe, again, as we said, the patient may not be the right user group at all. If it is the doctors and physicians that are your right user group, you might consider giving them a confidence score, saying that this is a recommendation the algorithm thinks is 90% accurate, or 60% accurate, etcetera, so that it increases their productivity but also tells them how much weight they should give to a particular recommendation. Again, it is extremely important to align a general-purpose or cross-functional AI application to the user's context. Going to the second mode, Define: generally
in traditional design thinking, you capture the findings of your Empathize mode and you create a deeper understanding by creating a persona definition of your users. You craft a meaningful statement, an actionable problem statement really, which generally reads like this: a user needs something in a way that they are able to do something. For example, it may be something like: a diabetes specialist needs to provide treatment plans in a way that increases their efficiency and productivity. So that could be our high-level problem statement. And then you might be able to come back with more "how might we" statements: how might we be able to increase their productivity through AI? How might we be able to increase the trust of patients in the solution through AI? So there can be several "how might we" statements that are possible.
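Since the template is so mechanical, it can even be captured in code. A small sketch; the helper functions are my own invention, and the strings reuse the diabetes example from above:

```python
def problem_statement(user, need, insight):
    """Stanford-style actionable problem statement: user + need + insight."""
    return f"{user} needs {need} in a way that {insight}."

def how_might_we(goal):
    """Turn a goal into a 'how might we' question."""
    return f"How might we {goal}?"

statement = problem_statement(
    "a diabetes specialist",
    "AI-assisted treatment plan recommendations",
    "increases their efficiency and productivity",
)
hmw = [how_might_we(g) for g in (
    "increase providers' productivity through AI",
    "increase patients' trust in the solution",
)]
```

Keeping the statements in one structured place makes it easy to revisit and refine them with users during the later modes.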
The next part is really about storytelling, journey mapping, and personas. These are the tools that are available, that people often use to create the problem statements or these "how might we" statements; these are the supporting tools that they use. Now, these are the considerations that people
should keep in mind. They should go back to the user for
validation of these problem statements. Are we considering the
right things for this solution? Is this what you
would really like to see in the final how
would you act if you were able to solve this particular problem
for you, etcetera? And you have to develop these insight
questions for non functional requirements also.
And this is where cross collaboration happens between teams. You need to bring in the security and privacy teams, the technology team, the infrastructure team, and so on. So when we are talking about efficiency and productivity of the provider, you also have to look at the security and privacy concerns you need to keep in mind, and some of the technology concerns, investment concerns, etcetera. So again, you have to cross collaborate, not just with the end user, but also with the internal business teams. You have to define the
values of your solution that should be tested with each phase.
So this is also extremely important: especially for AI, you have to put in the values. For example, quality: what is the targeted accuracy of the solution? How much drift should be allowed? What is the continuous learning approach for the overall solution? What are some of the privacy guidelines that the solution should always follow?
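To make this concrete, here is a minimal sketch of how such values could be written down as explicit, testable quality gates that every phase of the project checks against. The metric names and thresholds are illustrative assumptions, not recommendations:

```python
# Solution "values" written down as explicit, testable quality gates.
# All metric names and numbers here are illustrative placeholders.
QUALITY_GATES = {
    "min_accuracy": 0.90,      # targeted accuracy of the solution
    "max_drift": 0.05,         # how much distribution drift is tolerated
    "max_pii_leak_rate": 0.0,  # privacy guideline: no PII in outputs
}

def failed_gates(metrics: dict, gates: dict = QUALITY_GATES) -> list[str]:
    """Return the name of every gate the measured metrics violate.

    Missing metrics are treated as worst-case, so an untested value
    fails its gate rather than silently passing.
    """
    failures = []
    if metrics.get("accuracy", 0.0) < gates["min_accuracy"]:
        failures.append("min_accuracy")
    if metrics.get("drift", 1.0) > gates["max_drift"]:
        failures.append("max_drift")
    if metrics.get("pii_leak_rate", 1.0) > gates["max_pii_leak_rate"]:
        failures.append("max_pii_leak_rate")
    return failures
```

The point is less the code than the habit: once the values from the define mode live in one place, every prototype and release can be tested against the same list.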
And you would see, if you do that in this particular mode,
it's really going to help you when you are coming up
with ideas. It is also going to help you when
you are coming up with prototypes, the final solutions,
etcetera. And this is where, again, design thinking can greatly help AI solutions avoid quality issues, as well as increase consumers' trust in the overall solution.
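As a small illustration, the confidence-score idea from the empathize discussion could be prototyped like this. The banding thresholds and the wording are hypothetical assumptions you would validate with the physicians themselves:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A clinician-facing suggestion that always carries its confidence."""
    treatment: str
    confidence: float  # model's estimated probability, 0.0 to 1.0

    def render(self) -> str:
        # Hypothetical presentation rule: label by confidence band so the
        # physician knows how much weight to give the suggestion.
        band = ("high" if self.confidence >= 0.9
                else "moderate" if self.confidence >= 0.6
                else "low")
        return (f"Suggested plan: {self.treatment} "
                f"(model confidence: {self.confidence:.0%}, {band})")
```

So instead of a bare recommendation, the output itself tells the doctor how much relevance to give it.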
Coming to mode three, ideate. In this mode, you basically try to get as many ideas for the solution as possible. The main aim is to create both volume and variety. You are totally non judgmental about the ideas that you are collecting. Usually people use brainstorming sessions and look at existing solutions with cross functional teams. They use mind maps and note cloud tools to collect as many ideas as possible. Now, again, from an AI perspective, think about it: if we really actively seek out alternative sources of data, and perhaps even conflicting points of view, to include in our models of the world, perhaps the algorithms will be less biased, right? They will be less open to
manipulation. And this is all true for supervised
learning. We are not yet talking about the future where AI is taking decisions of its own. Remember, the singularity is still not here, and humans still have the unique ability to engage in the decision making. So this is where, from an AI point of
view, you have to involve cross functional teams,
get their context, their alternative views of the world.
You do not have to really solve the problem here. You're just collecting
the data sources, the data sets, the values that you
should consider, things that are important for other teams and groups,
and you're just collecting those ideas,
those solutions. And here you have to surface all
the AI opportunities and pitfalls. Again, this is important.
Traditional design thinkers do not do this, but AI has made it all the more important that you consider the security aspects, the privacy aspects, and the infrastructure constraints that you may have on your solution. And you involve technology people, not just business people, in coming up with the ideas for the solutions. You also
have to consider the pitfalls that your AI solutions may have,
right, the risks that are associated with
getting the solution in the hands of the end user.
And then, and this is really a tool, I would say, which works for anything, AI or not, when you're doing such brainstorming and collecting ideas: draw some inferences and some motivations from parallel universes. In our context that means, for example, if you are developing something for healthcare, you may look at retail, or you may look at finance, to see how people have used AI solutions in those contexts. There may be a few ideas hidden there that are applicable in your context as well. The next mode is prototype.
Here, people create the physical form of the best ideas,
the prioritized ideas, and then they allow people to experience and interact
with them. And that is where they record how people would react if a service or product, as hypothesized, were presented to them. In this process, people generally learn and explore. They resolve any disagreements that could be there between ideas or between teams; this is a great opportunity for resolving those. They start conversations about things that have not yet been talked about between teams or inside teams. And for example, in the AI world,
we can talk about policies that have not yet been considered,
the fears of the users that have not yet been taken into account, and how technologists could look at solving those. Breaking the larger problem into smaller components is again a key tenet here, which really
means that if you are creating, say, generative AI
based chatbot, what are the different aspects
that it would have? What are the different modules that it is going to
have? You can create a prototype for each different subset of the problem and test it individually before actually bringing everything together, to see whether it works as a whole.
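As a sketch of what that decomposition might look like for a toy chatbot: each module below is a trivial stand-in (simple keyword matching, no real model) that can be tested on its own before the pieces are wired together. Every name here is hypothetical:

```python
# A generative chatbot split into small, independently testable modules.
# Retrieval and generation are toy stand-ins, not real models.

def retrieve(query: str, corpus: dict) -> str:
    """Module 1: find the most relevant snippet (toy keyword match)."""
    for keyword, text in corpus.items():
        if keyword in query.lower():
            return text
    return ""

def generate(query: str, context: str) -> str:
    """Module 2: draft an answer from the retrieved context."""
    return context if context else "I don't know."

def safety_filter(answer: str, banned: set) -> str:
    """Module 3: block anything that violates the solution's values."""
    if any(word in answer.lower() for word in banned):
        return "I can't help with that."
    return answer

def chatbot(query: str, corpus: dict, banned: set) -> str:
    """Only after each module passes its own tests do we wire them up."""
    return safety_filter(generate(query, retrieve(query, corpus)), banned)
```

Each function can fail fast on its own prototype tests, which is exactly the risk reduction we are after.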
And again, this is to reduce your risk and your future investments. You can fail quickly if people do not like something at the prototype stage itself. Generally, people use sketches, physical mock ups, wireframes, interaction flows, storyboards and prototypes, which are things that you can very quickly create and test with your end users. For AI,
you have to now think of your prototypes as a little more advanced than they have been before. You have to focus on the technology a little bit more. You have to set clear test goals for each prototype. Now, remember the define mode: you are deriving your test goals from that mode. Again, the values that your system should consider: are your prototypes honoring those values? What should and should not be presented to the user in the solution? Have you considered all that? So test for those goals with each prototype. Think about how the user will test this when you are presenting an AI solution to them, right? What are the things that they could break? What are the lines outside which they are going to color? Then test the values again and again. Let me emphasize this, rather: you have to think of explainability, you have to think of bias. You have to test for those values in your prototypes as well as in the real solution. Next is the
test mode, the final mode of traditional design thinking where you
solicit feedback on prototypes by putting them into the context
of use. You refine these prototypes and solutions and learn more about the user. You continue to ask the why questions and refine your point of view. Sometimes, by the time you have created something, there is additional insight available to you which might ask you to pivot totally or change your initial goals again. So sometimes the iteration can happen after you reach this end state as well.
So be ready for that. Continue to ask those why
questions. They would really help you to do that. Some of the things
that people use here are again, desirability testing,
field studies, feasibility testing. They do cost analysis, they do SWOT analysis, etcetera. Again, some of your
MBA friends can actually help you with that, but this is where
you actually test the feasibility, desirability and viability of
your solution alongside your competition, right? An extremely
important stage. This is what you do before you actually go
ahead and code in a real prototype.
And then some of the AI considerations for this mode
are again, show the user something
that they could test. Don't just tell them an idea and
try to gain their feedback from that verbal idea.
Record their reactions when they see something for the first time and how they use it, right? Not just yes, they like the solution and yes, it has passed, right? Test with a new set of demographics. Till now you may have collected your requirements or ideas from, and tested with, a certain type of demographic and user group; consider alternative sources. Going back to our healthcare treatment plan recommendations use case, suppose you have tested it with older demographics till now. Think of what happens when you go to teens, or when you go to young mothers, etcetera, right? So that is where this can really help, when your test mode expands. And this is where again those
trust issues could be avoided, because till now you might have been considering a view of the world that was not encompassing of certain perspectives. The last one is: test with
newer versions of data sets. Given how time boxed things generally are in our industry, you may not have the freedom and ability to test all the data sets at every stage, even at the prototype stage.
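One lightweight way to run such a check is to compare the distribution of a feature (or of model scores) between the data you built on and a newer batch, for example with a population-stability-index style metric. This is a minimal sketch; the "above 0.2 means a big shift" reading is a common rule of thumb, not a standard:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between two samples of one value.

    Buckets come from the expected sample's range; values outside it
    fall into the edge buckets, and a small epsilon avoids log(0).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant sample

    def frequencies(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]

    p, q = frequencies(expected), frequencies(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

If the index stays near zero the newer data looks stable; a clearly larger value is a signal to investigate drift before users ever see the difference.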
But when you finally are in test mode with a real prototype, open it up to newer data sources and then see how it fares against them. See whether it suffers from hallucinations, drift, or those parroting issues that we talked about. So this is a place where I've put all
the considerations together for those of you taking screenshots, so that when you are practicing traditional design thinking, you have something to go back to, with all the recommendations in one place. Beyond the design stages too, this work does not stop, and you have to keep these things in mind when you are actually designing the user interface, the final solution for your customers. You have to keep transparency in
mind, you have to keep explainability in mind,
and you have to keep testing for alternatives more than ever, right? Users, data sets, how the different releases of your software are behaving. And you will keep doing that, not just during the development of your product or service, but even after it has gone into production. And when can you do this? That is a question I often get when I introduce these concepts to senior leaders, CxOs, etcetera.
They are like, okay, we have already started a pilot, or we do not know anything about AI, or we are already on this journey; are we too late? Well, you can start it before you start prototyping your next idea. You can also employ it in your current project as a parallel stream, and you can always look at any weird, wicked problem and apply it there. Start wherever you are and you will be fine.
Again, thank you so much for your time today. I hope
you liked this session. Let me know if you have any questions.
You can always shoot me a note at arushi dot shivastava
mail.com or arushi dot shivaswapntdata.com and
I would be happy to discuss this further, especially if you have
any alternate views about this because just like this
approach, I would like to consider diverse data sets in
my thinking as well. Thank you so much again and
hope you enjoy the rest of the conference. Thank you.