Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, welcome. Well, what I expect is not just another talk about artificial intelligence. Instead, I want to provoke you and ask you whether artificial intelligence is something new, or has it been evolving throughout the years?
Now, before we get started, a little bit
about myself. My name is Vasko. As you can see, I've been in software development for 20 years and counting, and I am genuinely interested in the concept of artificial intelligence throughout history,
in the sense that I am not totally
convinced that artificial intelligence is a new
concept. In fact, I argue that it
has been there in our collective minds
for quite some time. If you want to
agree or disagree with me, or just send any comment, please do so.
I can be reached via Twitter or LinkedIn.
My contacts are on this slide.
And without further ado, let's get started.
As I said, I believe in evolution,
in the evolution of everything and everyone,
including ideas and
concepts. So, is artificial intelligence
a concept that has been just created or has
it evolved through time?
That's a question I ask. I have my opinion.
I hope that will be clear by the end of this presentation.
At the same time, we are going to look at how the concerns surrounding artificial intelligence have evolved through time as well. And since we are talking about time,
let's travel back in time about 800
years before Christ, give or take a few,
and let's land in ancient Greece.
There we find the Iliad, a classical poem written by Homer, where the god Hephaestus is described. Well, the Iliad is not about Hephaestus, but this character is mentioned.
And as you know,
in ancient Greece, the mythology was composed of
several gods.
And in this poem,
Hephaestus has furnaces.
He builds things made of metal.
He builds machines.
These furnaces know exactly what Hephaestus
needs and wants. They are completely hands
off. Whatever it is that he
requires at any given moment to do whatever it is that
he wants to do, they will provide.
Now, you may argue that this is what we
know today as automation, right? You push a button and you
configure a whole set of machines to do whatever it is that
you need doing. I give you that.
True. However,
back in ancient Greece,
tripods were a sign of authority
and could actually be a sign of power.
Naturally, Hephaestus built tripods for himself.
Now, in ancient Greek mythology,
gods would gather in an assembly of gods to discuss whatever it
is that gods discuss. And if Hephaestus was
not in the mood to travel to the assembly
of gods, he would just simply send his
tripods.
Hephaestus' tripods would travel all by themselves to the assembly of gods, stay for the duration, and return to Hephaestus' house.
Now, this would mean that those tripods,
those mechanisms,
were capable of autonomous navigation,
right? To get from Hephaestus' home to the assembly of gods and back,
they were capable of understanding directions.
They were the perfect definition of
an autonomous vehicle. Isn't that so?
Well, you may argue, okay, but that poem is about a god. Even though gods, even in Greek mythology, were conceived as human lookalikes.
Does that fit in the definition of artificial intelligence
as something that was built by humans to
try to be closer or better than
them? That's a bit debatable,
I would say. However, there is a better example if we remain in ancient Greece, around the same period, but switch texts and now talk about the Odyssey, also written by Homer.
There is a passage there where the Phaeacian king is sending a group of visitors home at the end of their visit. And the king is so happy with them that he actually offers that they travel using the Phaeacian ships. And why is
that? Now, you will forgive me, but I am going to read from a translation of the Odyssey, because there is no way I could put this better. It reads: for the Phaeacians have no pilots,
their vessels have no rudders,
but the ships themselves understand what it is
that we are thinking about and want.
They know all the cities and countries
in the whole world and can traverse the sea
just as well when it is covered with
mist and cloud, so that there is no danger of
being wrecked or coming to any harm.
Now, isn't this wonderful?
Almost 3000 years ago,
Homer was imagining a fully
autonomous sea vessel and
a telepathic one at that.
Nowadays there are but prototypes
of these vehicles and they are far from
being capable of doing what was imagined almost 3000
years ago. Now, if this is not a good example of someone already thinking about artificial intelligence, but not giving it that name, I don't know what is.
Now, let's continue our voyage through time and let's jump closer to our contemporary epoch. We land in 1754 in Europe, and we find a book called Traité des sensations, or Treatise on Sensations in a literal translation, by Monsieur de Condillac. Well, today we may look at this title and it may give us some thought.
In fact, it is a philosophical treatise where Monsieur de Condillac argues for mind-body dualism. Or, as a modern philosopher, Gilbert Ryle, put it, the ghost in the machine. Now, Ryle argued that such a dualism does not exist. But regardless of your opinions, what is interesting about the Traité des sensations is that de Condillac imagined, to exemplify his position, a statue, and that statue was animated by
an empty soul.
He argued that if we could feed one sensation at a time to that soul, it would eventually learn all human knowledge and all human abilities, therefore becoming equal to humans.
In that regard,
if you will allow me the obvious comparison,
nowadays we train models. And how do we do that?
Well, we feed them pieces of information, just like Monsieur de Condillac wanted to do with his statue. And we feed them such information until such a point where we are convinced that the model has learned everything that it is capable of learning. Well, we haven't yet built a model that can learn all human knowledge and all human abilities.
We have specialized models, but still the
principle is pretty much the same.
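To make the comparison concrete, here is a minimal sketch, in Python, of that one-sensation-at-a-time idea: a toy perceptron that sees a single example per step and updates its weights only when it gets that example wrong. The task and all names here are illustrative assumptions of mine, not anything from Condillac, of course.

```python
import numpy as np

# A toy perceptron trained one example at a time, loosely mirroring
# Condillac's statue being fed one sensation at a time.
rng = np.random.default_rng(0)
w = np.zeros(2)   # weights: the "knowledge" accumulated so far
b = 0.0           # bias term

for _ in range(1_000):
    x = rng.uniform(-1, 1, size=2)      # one "sensation"
    label = 1 if x[1] > x[0] else -1    # is the point above the line y = x?
    if label * (w @ x + b) <= 0:        # misclassified: learn from it
        w += label * x
        b += label

print("learned weights:", w, "bias:", b)
```

Feed it enough sensations and it converges on the concept; stop too early and it remains, like the statue, only partly formed.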
If we remain in approximately the same period,
we find a machine that was built with
the purpose of playing chess. It was a box
with a mannequin that
would move the pieces in
the chessboard. And it was said that this machine could play chess better than any human in existence. And in fact, the machine
was showcased in many scenarios across Europe back then,
and it actually won games of chess against
all opponents. It was a sensation.
Now, it was called the Turk, or the Mechanical Turk, not because it necessarily originated in Turkey, but because the mannequin was dressed in an oriental costume.
Now, this sensation came to
an end when someone discovered that
the machine was a hoax. There was no actual machine.
There was a human, a very good chess player,
but human, operating that machine,
or should I say that puppet, to play
the games of chess against the opponents.
Sadly, there was intelligence.
Yes, but it was human intelligence,
hardly artificial.
However, if we now jump to 1912, we find the work of Leonardo Torres y Quevedo.
He built a machine, or an automaton,
if we want to be precise,
that could indeed play chess. Well, not a
full game. It would play an end sequence
of king and rook against king.
It would always play with the king and the rook,
and it would always win against
the human opponent. And this time,
this machine could actually play. It was, as was said later, in 1914, quite an advanced machine for its period. It could detect the position of the pieces on the board and could calculate the next move quite effectively. I would say it was one of the first, if not the first, examples of a machine that was capable of playing at least some chess.
On the subject of intelligent machines,
if we travel a little bit further in time,
we find Isaac Asimov's work. He imagined a class of robots that had a positronic brain. Now, the positronic brain in Isaac Asimov's conception was powered by a particle called the positron. Such a brain doesn't actually exist, but in the stories it was sufficiently powerful to create processing units that could indeed power a robot and actually give consciousness to a robot, or to what we today would call an android.
What was interesting about Isaac Asimov's robots
was that they were
bound by the three laws of robotics,
three dogmas that were designed to prevent
them from being used directly or indirectly
to harm humans. We can
argue that Isaac Asimov was already concerned about the
possibility of artificial beings,
or should I say artificially intelligent beings
being used as weapons against
humans.
Arthur C. Clarke, in 1968, wrote 2001: A Space Odyssey. And the interesting artificial character there was HAL, HAL 9000. But HAL was an operating system, so not a robot per se; it was the operating system of a spaceship. Now, HAL would observe and learn from the behavior of the human crew of this ship.
Unfortunately, there was a malfunction on board, and the crew decides that HAL needs to be disconnected. Now, faced with this prospect, HAL decides to defend itself. Having learnt about humans, it decides that the best way to defend itself is to eliminate the human crew. HAL became known for his line: I'm sorry, Dave, I'm afraid I can't do that.
Which became, well, kind of an
icon of artificial intelligence
independence, or should I say some sort of sentience.
Now, this is an example of
an intelligent artificial being harming
humans intentionally,
arguably in self defense. But still,
not all artificial beings in
literature and cinema are dark or malevolent.
In fact, one of my favorite characters is Marvin, the Paranoid Android, from Douglas Adams' The Hitchhiker's Guide to the Galaxy.
This android has been around for a very long
time. It is extremely intelligent. In fact,
it is said that it never needed to use more
than a tiny fraction of its enormous brain to perform
any task. And the most interesting
conversation it ever had was with a toaster,
which I find quite interesting,
considering that by this time Marvin
had already met humans. And still a toaster
was more interesting than the humans he had
met. Anyway,
going back to reality and a bit closer to our time: in 1996, IBM built a computer called Deep Blue. This computer was capable of playing a full game of chess, and in 1997 it actually beat the chess grandmaster Garry Kasparov. You may wonder,
why have people been
obsessed with chess for so long and with
machines playing chess for that matter? Because chess
is an incredibly complex game in
the sense that the number
of possible combinations throughout an entire game of
chess is so large that
it cannot be solved by your typical
combinatorics or game theory
methods. That's what
made it such a challenge for
a machine. And in 1997, it was proven that a machine, or in this case, software, could indeed play a game of chess as well as, and even better than, humans.
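To give a sense of scale, here is Claude Shannon's classic back-of-the-envelope estimate of the chess game tree, reproduced as a quick calculation; the figures of roughly 35 legal moves per position and about 80 plies per game are his rough assumptions, not exact values.

```python
# Shannon's rough estimate: ~35 legal moves per position,
# over a typical game of about 80 plies (half-moves).
branching_factor = 35
plies = 80

game_tree_size = branching_factor ** plies
exponent = len(str(game_tree_size)) - 1
print(f"about 10^{exponent} possible games")   # about 10^123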
Now,
I said at the beginning that we would be looking at the evolution
of the concept of artificial intelligence. And I argue,
given what we have just seen, that artificial intelligence was already there since the beginning; it was just not called that. Humanity has been
fascinated with the idea of intelligent
machines performing better than humans at any
given task. And we have always been concerned that those machines could, in fact, take over from humans and, in some cases, even take over our own lives. So that's the dark side of
artificial intelligence.
Could it be a threat?
Well, in cinema,
a great example of artificial intelligence
being a threat is the Terminator series of movies.
It all started in that universe with an
artificial intelligence that in the beginning, was supposed to
help defend a certain group
of humans. Skynet in that
universe was an artificial intelligence in charge of
the system of defense of the United States of America
that, as anyone who has ever seen those movies knows, went a bit rogue. It became self-aware and, as a result, could no longer be controlled by humans, so humans decided to deactivate it. Faced with that, it eventually decided that humans had to be destroyed.
Again, we see the common
pattern of humans perceiving an intelligence
as a threat, and that intelligence deciding to eliminate humans as a result.
Another good example of a failed cohabitation between machines and humans is the universe of The Matrix, also a series of movies, where machines become self-aware and cohabitation eventually becomes impossible. In this universe, therefore, a war broke out, and this time the machines did not decide to annihilate humanity. Instead, they harvest human bodies for electricity, but those humans need to be kept in a virtual reality world in order not to go insane.
I'm not going to spoil the story,
especially given that recently a new movie was
released in this universe. But it
is sufficient to say that things didn't quite turn out well in this universe, neither for humans nor for machines.
Going back to the written word, another good example of artificial intelligence and humans failing to understand each other is Robopocalypse by Daniel H. Wilson. Now, in this story,
a professor attempts to create
an artificial intelligence program that could
be capable of absorbing
all human knowledge. Now, it so
happens that this program was actually quite
successful in that regard. So successful that the program decided, first, that humanity no longer needed to search for knowledge; it would take over that task from humanity as it learned. Then it decided to consider itself a god and stated that humans had become obsolete now that it existed. When it
got to this point, the professor
who developed the program
tried to disconnect it,
tried to shut it down.
Unfortunately for said professor,
the program managed to take control of the environmental controls of the room where it was housed, deprived the room of oxygen, killing the professor, and escaped that room
into the Internet, eventually infecting, or should
I say repurposing, all other robots in
the world, starting once again a war against humans.
I am not going to say how the book ends,
but it is quite an interesting ending.
We have now looked at a couple of examples where machines and
artificial intelligence become dangerous
to humans. We could argue that it was because humans
were dangerous to them. Let's not get there right
now. But another
question deserves to be asked. And what if machines,
what if artificial beings could be
kind? What if they would go the
other way? Instead of mimicking the worst
in humans, why couldn't they mimic the best in
humans as well?
Machines Like Me by Ian McEwan is a novel set in a time when artificial humans, or synthetic humans, if you prefer, were just being produced. So this man, Charlie, gets some money and decides to buy one of those synthetic humans, called Adam.
And these synthetic humans have a particularity. They are pre-programmed from the factory, but they don't actually have a personality. Their new owners tweak a whole set of configuration parameters in order to try to give their new synthetic human a unique personality. Now, it so happens that Charlie has a neighbor, Miranda, and she works with Charlie to give Adam a personality.
Now, Adam turns out to be almost perfect, an almost perfect human, in such a way that, actually, I'm going to spoil the story a little bit for you: a love triangle erupts between these three, and their relationships, emotional and even physical, give rise to a few questions.
Right, so, for example, what makes us human? Is it what we do on the outside, the things that others can see of us? Or is it our inner lives that make us human?
Opinions are divided regarding Machines Like Me. I believe it is still an interesting read, and it may lead us to other works, such as, for example, Real Humans.
Originally a television series running from 2012 to 2014, it is set in a similar world. There are synthetic humans, they are intelligent, there are different models with different purposes, and they eventually also build relationships. Or should I say, humans build relationships with these synthetic humans, giving rise to questions such
as: do these synthetic humans, or human robots, or hubots, as the series calls them, have any rights? Should they get paid? Should humans be allowed to form relationships, emotional and otherwise, with them? Is a new society being created? Are these synthetic humans, in these works, parallels to certain groups in our society?
How do we as a society face the possibility of
having synthetic humans who are better
than humans walking among us?
That is a question that is also asked indirectly in
Philip K. Dick's classic, Do Androids Dream of Electric Sheep? In this world, too, androids exist, built for specific work, mostly manual
labor. But a few have evolved
beyond that stage,
becoming humanlike not only in their
appearance, but also in their behavior,
in such a way that a complicated physiological and psychological test is required to determine whether a given being is human or synthetic. Again,
questions are raised on what it means
to be human.
Now, if we are talking about human characteristics,
if we are talking about machines, if we are actually
talking about building machines and building algorithms that
make decisions, there is another
human concept that becomes quite important, which is the
concept of fairness, of justice.
Are our algorithms fair?
So let's take the discussion a few notches down from the philosophical point where we were, down to the algorithmic level, and let's ask ourselves if the models we are building are fair.
Let me give you a couple of examples of models that
didn't quite turn out as they were meant to be.
At the beginning of the year 2020, which was also the beginning of the COVID-19 pandemic, confinements were in order. One of the problems that had to be solved was the problem of student grades, because students had been working for months. And now, when the confinement started, well, it was also the beginning of the exam season.
So educational authorities all
over the world wondered,
how do we solve this problem? The traditional way of
determining a student's grades, an exam in
a classroom, together with other students under the supervision
of teachers, was not
possible. Many solutions were
adopted all over the place.
Specifically in Scotland, the Scottish Qualifications
Authority decided to employ an algorithm
that would calculate or predict the best grade
for each student. Sounds like
a good idea. The problem was
that the results were particularly
skewed depending on
where the student lived, depending on where
the school was located, and also depending
on, sometimes, the school itself.
Unfortunately, the algorithm was not actually looking
at the academic performance of each student.
Criticism was, as you can imagine, widespread. Eventually, the algorithm's
results were overturned,
and instead, each teacher awarded each student
a grade based on the student's work throughout
the year.
Another example of a biased algorithm
was, in the United States of America,
an algorithm intended to preemptively
avoid complications in patients
that could potentially need medical care in the
future. And so the idea was, let's apply this algorithm to these patients' history so that they can be preemptively treated in order to avoid serious complications
down the line. Of course, one can
be cynical and say that these algorithms
had not only the best interest of the patients in mind, but also
the best interest of the health care
industry. That's another story.
Regardless, it sounds like a good
idea to try to predict who needs
more medical care to prevent complications,
right? It is a good idea.
The problem was that this algorithm
was using a proxy indicator,
and that indicator was the previous
health care spending of each patient.
So, in other words, if the patient had spent a considerable amount of money on health care in the past, then it was very likely that patient would suffer complications in the future. Therefore, they should receive health care, or should I say more health care, now in order to avoid such complications.
The problem with this indicator is that certain segments of the population, for socioeconomic reasons, or just for lack of availability, or a combination of these and other factors, did not spend much money on health care in the past. So when they got into a situation in which health care was required, the algorithm would look at their history and would conclude that these people were not at risk of serious complications, when in fact that was not the case.
And of course, unfortunately, according to Scientific American, these conclusions mostly affected Black patients.
The algorithm would conclude that they would not suffer from complications,
so no further care or no additional care was required,
whereas other patients who had spent more money in the past
were awarded more care.
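To illustrate how a proxy label can poison an otherwise reasonable idea, here is a small, entirely made-up simulation, not the actual algorithm from the study: two groups have identical medical need, but one has historically spent less on care, and patients are then selected for extra care by past spending.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical true medical need...
need = rng.normal(5.0, 1.0, n)
low_access = rng.random(n) < 0.5

# ...but the low-access group historically spent half as much
# for the same level of need. Past spending is the flawed proxy.
spending = need * np.where(low_access, 0.5, 1.0)

# Policy: give extra care to the top 10% by "risk" = past spending.
selected = spending >= np.quantile(spending, 0.9)

print(f"low-access share of population:    {low_access.mean():.2f}")            # ~0.50
print(f"low-access share of those selected: {low_access[selected].mean():.2f}")  # ~0.00
```

Despite identical need, the low-access group all but vanishes from the selected group, which is essentially the failure mode the study described.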
Now, the algorithm has since been revised
according to the scientific press,
but the whole point is that the idea behind an algorithm may be quite good and may be worth pursuing, yet the data that is fed into the algorithm may not lead to that outcome, because the algorithm may end up training on a bias in the data instead of the intended purpose.
Which is why an active field of research nowadays is explainability: understanding how machine learning models reach their results. In this regard, these models fall into one of two categories.
They are either black box models or white box models.
As the name implies, black box
models produce results that are extremely hard to
explain and may not even be understood by
domain experts. White box algorithms have
been designed in a way that allows results to be
understood by domain experts.
It goes without saying that this is still a field of active research, and the goal is to have as few black box models as possible and as many white box models as possible.
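As a small illustration of what a white box model looks like in practice, here is a shallow decision tree on a standard toy dataset; the dataset and depth are arbitrary choices for this sketch, not anything from the examples above.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree is a classic white-box model: every
# prediction can be traced back to explicit, human-readable rules.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Print the learned rules so a domain expert can inspect them.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network trained on the same data might predict just as well, but it could not print its reasoning as a handful of rules, and that is the black box problem in miniature.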
Especially because we can say that
all data is biased. Every single data
set is biased because someone had
to make a decision on which data was present there.
In some cases, it's pretty clear that
some pieces of information should not be in the data set.
For example, if we go back to
the algorithm that was calculating student grades, if the
idea is to evaluate students' academic performance, then, for example, it does not make sense to
include the postcode in the data set.
It may happen that the algorithm will find
a correlation between grades and specific postcodes,
and then people are graded according to the place where they live
instead of according to their performance. This is just
one possible example. There are many more.
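In practice, the defense is as simple as it sounds: leave the suspect feature out. A minimal sketch, with an entirely hypothetical dataset and made-up column names, might look like this.

```python
import pandas as pd

# A hypothetical student-grades dataset; the column names are
# purely illustrative.
df = pd.DataFrame({
    "coursework_score": [72, 65, 88, 54],
    "mock_exam_score":  [70, 60, 90, 50],
    "postcode":         ["EH1", "G2", "EH1", "G2"],
    "final_grade":      ["B", "C", "A", "D"],
})

# Exclude features that encode where a student lives rather than how
# they performed, so a model cannot learn a postcode shortcut.
features = df.drop(columns=["postcode", "final_grade"])
target = df["final_grade"]
print(list(features.columns))   # ['coursework_score', 'mock_exam_score']
```

Of course, real leakage is rarely this obvious; another feature can encode the postcode indirectly, which is exactly why explainability matters.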
And again, this is why explainability is so important today. And this is why extreme care needs to be taken in preparing data sets and deciding what is relevant information for the models.
And what about tomorrow?
Are we going to see an artificial intelligence
reach the singularity? Are we going to
coexist peacefully with
sentient intelligences? Are we
going to see our worst dreams become true?
Hopefully not. Well,
that's something we cannot predict today.
We can say that it is still a black box prediction. There is no way of understanding how things are going to turn out. I hope I can still take a nice trip in a Phaeacian vessel sometime.
Thank you for being with me. I hope this has been interesting and useful. Let me know if you have read or watched any of the works I mentioned today. Have a nice day, and let's build a better future together.