Transcript
This transcript was autogenerated. To make changes, submit a PR.
Everyone, welcome to this session on quantum-inspired algorithms as a means of harnessing quantum advantage in the NISQ era. Before we go ahead, I'd like to explain how the session will be structured. There will be two components: the presentation deck and the notes deck. The presentation deck serves as the major bullet points, the main points I'll be covering in this session, and the notes contain an in-detail explanation of those points. The two go hand in hand, and both will be provided to you after the session, so you can go through them and use them as complementary material to each other. So, without any further delay, let's get started.
So here we have the presentation deck. Today, as I mentioned, we'll be talking about quantum-inspired algorithms as a means of harnessing quantum advantage in the NISQ era of quantum computing. Before we get started with the presentation, it's important that I explain what NISQ is, because I'll be using this terminology a lot throughout. NISQ stands for noisy intermediate-scale quantum. It refers to the era of quantum computing we are in right now, in which the computers, the hardware we have, are prone to error, are not very scalable, and have their own limitations, as we'll discuss going ahead with the presentation.
So the first section, as always, is the introduction, in which I'll emphasize why we are talking about quantum computing at all and what its potential is for solving large-scale problems. Then I'll cover the limitations of current hardware; that is, I'll talk in more detail about the NISQ era of quantum computing. Then we'll talk about some concepts of quantum mechanics that we particularly exploit when designing quantum-inspired algorithms. And with that, I'll now switch to the notes part of the presentation and we can get started.
Yeah. So, as mentioned, before we get started, it's important to talk about why we are even having this session. The answer is the capacity of quantum computers to solve complex problems. There will already have been a lot of sessions highlighting this aspect of quantum computing before you get to this one, so I'll keep it brief, just for context and for continuity of explanation.
So, in quantum computing we have a concept called qubits. To put it in a very abstract sense: say we have some n qubits that we are using to solve a problem, call it P(x). Every time we add one more qubit to the system, we are basically doubling the computing capacity, the potential of the system to solve the problem, because an n-qubit register has 2^n basis states. That is what you will have heard by now from previous presentations or from your own knowledge of the domain, and it is one of the core reasons we even want to go toward quantum computing. Some problems, such as the traveling salesman problem, topology optimization problems, and your usual combinatorial optimization problems, have no known efficient exact solution; we attack them with approximate algorithms. Quantum computers are a very promising way to approach these problems, and this, in some sense, is where we see the potential of quantum computers to solve complex problems.
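Just as an illustration from my side, not something on the slides: if you try to simulate an n-qubit register classically, the state vector needs 2^n complex amplitudes, and it doubles with every qubit you add. A minimal sketch:

```python
import numpy as np

# state vector of an n-qubit register: 2**n complex amplitudes
for n in range(1, 6):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                     # the all-zeros state |00...0>
    print(n, "qubit(s) ->", state.size, "amplitudes")
```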
But again, this was just the basics of why we even go toward quantum computing. I didn't go into much depth for this section, because, having seen the lineup, I know there are sessions that cover exactly this aspect of quantum computing. Now I'll come to the part that actually matters for what we are trying to achieve here: the limitations of current hardware.
See, the very first limitation is scalability. Whenever we are talking about solving problems of industrial scale, the number of variables, or the number of potential candidates you have to consider as solutions, grows exponentially. Take a very simple example: say I have to solve the traveling salesman problem, but instead of a city, I have to solve it for a whole country. It would make sense to you that the number of possible paths the salesman can take explodes; every city we add multiplies the number of possible routes. So it's easy to see that scalability is an important aspect of any kind of problem, and it is one of the major limitations of current hardware. There are multiple architectures in the quantum computing domain right now: annealing architectures (we'll talk about annealing in detail later), gate-based architectures, and topological qubits. The physical implementations, superconducting qubits, diamond NV-center qubits, electron-spin qubits, are all realizations within these major architectures; here I'm talking about the architecture of the hardware, not the exact type of qubit used. Now, given these architectures, there are hard limits on the number of qubits available. And as we proceed, we'll also talk about quantum error correction as a sort of extra overhead we need to carry whenever we try to solve anything on a quantum computer. So, coming back to this.
So scalability becomes an important aspect because, due to things like quantum error correction, we are already losing some qubits. Say we had a total of N qubits available from a hardware vendor, the hardware being the quantum hardware. Out of these capital-N qubits, only some small n are usable, while some m have to be reserved for other purposes; as we'll discuss, there are things like quantum error correction and ancilla (support) qubits, and those are not usable qubits. So even if we have N qubits on the chip, not every qubit is usable, which further reduces scalability. And this value of N is already not that large: we are talking about roughly 120 usable qubits as more or less the upper limit we have right now. And 120 qubits is not much; you'll have to take my word for it, but if I went into detail about how many qubits different kinds of algorithms require, it comes out to be a large number. Any algorithm you develop right now on quantum hardware will have certain limits, those limits are hard, and they do not scale well when you want to turn the problem into something that can be used directly in industry.
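Just to illustrate the kind of budget arithmetic involved, here is a hedged sketch of my own. The overhead figure is purely an assumption for illustration, roughly 2·d² physical qubits per logical qubit, a surface-code-like estimate at code distance d; it is not a number from any specific device or from the slides:

```python
def usable_logical_qubits(N, d):
    """Hypothetical budget: assume ~2*d**2 physical qubits per logical
    qubit (a surface-code-like overhead at code distance d)."""
    per_logical = 2 * d**2
    return N // per_logical

print(usable_logical_qubits(120, 5))   # 120 physical -> only 2 logical qubits
```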
The second issue we have in quantum computing is noisy qubits. Even out of the small n usable qubits I mentioned, the qubits are not high quality. What do I mean by quality? Think of it this way: if this whole box were my quantum system, then a qubit is a particle in that system. In an ideal situation, we would want this particle to retain its state, as we call it in the quantum realm, for an arbitrarily long time. But there are losses: the particle is not able to maintain its state. These are what we call noisy qubits, because over time, through decoherence and outside interference, they lose their state. What that means is: say this particle's state holds some important information. Even if I leave the qubit in an isolated system, the isolated system here being the quantum hardware, the state will be lost over time, and the qubit will relax back to its original state, zero, zero being your usual qubit basis state. This is called decoherence, and what it leads to is loss of information: you basically cannot rely on a qubit to maintain its state. So one aspect of the noisy-qubit problem is, as I mentioned, decoherence. The other is error propagation.
What error propagation means, I'll explain with the example of a gate-based architecture. In a gate-based architecture, we have a qubit, which usually starts as |0⟩, and it passes through gates, whether X, Y, Z, Hadamard, or whatever gate it is. (For this talk, I'm assuming you have a basic understanding of quantum computing; that is sort of assumed from the beginning.) So say it passes through an X gate, then a Y gate, then an H gate. Every time a qubit passes through a gate, in the mathematical sense it's very easy for us: if you've read the basics of quantum computing, you already know that a qubit state is basically a vector, say (0, 1), which is the state |1⟩, and passing it through a gate means multiplying it by the matrix associated with that gate, in the gate-based architecture at least. At the end of the circuit we get some solution state. But in actuality, in physical, hardware terms, we use different physical mechanisms, microwaves, photonic pulses, actual physical properties of these particles, to store the state and to modulate it. So what I'm saying is: if I started with a state ψ1, then after passing it through each gate I change it to some state ψ2, then ψ3, and so on. But in the course of this, because the qubits we have right now are not of high quality, we lose a little of the state of that particle each time. Some error e1 gets introduced the first time we pass through a gate, some error e2 the second time, and so on; and the error after the second gate is effectively the error of the previous gate plus the error of this gate. What I'm trying to convey is that error propagates and accumulates as we keep passing the qubit through gates.
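Here is a toy numerical illustration of that accumulation, entirely my own sketch: the gates are the ideal real matrices, and the "noise" is just an assumed small random rotation folded into every gate, not any real device's error model:

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def noisy(gate, eps=0.02):
    # model a small coherent error as a random rotation mixed into the gate
    theta = eps * rng.standard_normal()
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ gate

ideal = np.array([1.0, 0.0])   # |0>
real = ideal.copy()
for g in [X, H, X, H] * 5:     # a depth-20 toy circuit
    ideal = g @ ideal          # error-free evolution
    real = noisy(g) @ real     # every gate leaks a little error in

fidelity = abs(ideal @ real) ** 2
print(f"fidelity after 20 gates: {fidelity:.4f}")   # drifts below 1 with depth
```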
What this means in terms of consequences is that we have a limit on the number of gates, the circuit depth, as we call it in quantum computing. The circuit depth is limited, and a limited circuit depth means you cannot apply certain algorithms, because certain algorithms require deep circuits for larger numbers of qubits. One intuition to take away: when we talk about scalability, we are usually talking about the number of qubits, but it's also important to understand that increasing the number of qubits in some sense also increases the circuit depth, because more qubits require more gates. So circuit depth and number of qubits become equally important problems when you're talking about the scalability of current hardware.
Okay, then on errors: it's important to know that there are quantum error correction codes, or QEC for short. But these error-correcting codes are, right now, not of high quality; this is a very high research priority, and a lot of research is being done in exactly this domain. What does it mean in a physical sense? It means that even if we get N physical qubits, we will only be able to use a small n of them, because we have to give up some m qubits to these quantum error correction codes. This m can get large if you really want high-fidelity results; making m large makes n small. So you will be limited in the number of usable qubits you have for solving any kind of problem. This becomes an issue.
Those were the major limitations with direct physical implications, but there are other, smaller aspects as well. There are calibration issues: how do you calibrate these qubits? That, again, becomes an issue. And then there is variability as a factor. As I stated, in an ideal world all qubits should behave identically. But it's important to understand that a qubit is basically a particle that follows certain principles. Just for context: any particle with two possible end states, which we name zero and one (they could equally be plus and minus; we have different bases, the computational basis, the plus/minus basis, and others), and which can exist in a state ψ that is a combination of both, α·(zero) + β·(one), is basically a qubit. So a qubit is a concept, not an actual physical thing; a physical particle can behave like a qubit, and that physical particle will have its own limitations. But how do we ensure that different particles all behave exactly the same? Say I tell you that particle p1 is a qubit, and particle p2 also behaves like a qubit: how do I make sure the two are calibrated identically and don't show any variability? That, again, becomes a sort of issue on the hardware side of things, to be honest. It doesn't impose a lot of limitations directly, but it matters when you look at things from a more in-depth hardware perspective.
Another important limitation is qubit connectivity. This is one of those issues you won't run into directly in the initial phases of planning your algorithm, but it becomes an issue later. As I said, every piece of quantum hardware is basically a quantum system, a closed system containing particles that we have verified, by some physical method, fulfill the properties of qubits. The issue is that there is a limit on the connectivity between the qubits: each qubit is directly coupled only to certain neighbors. If you're interested, you can just Google the connectivity graph of any quantum hardware; IBM, famously, publishes the topology, the connectivity map, of all their hardware, so you can go look it up. So say this is some quantum hardware Q1 and this is its connectivity map, with particles p1, p2, p3, and a direct link from p1 to p3. Now say I want to run a three-qubit algorithm, so I take p1, p2, and p3, all initialized to zero. Say I need to use a CNOT gate, which is a two-qubit gate, with the control qubit at p1 and the NOT (target) at p3. It's easy to see that this operation is possible, because my connectivity map tells me there is a direct connection between p1 and p3. But what if I now need to perform a CNOT between p1 and p2? There is no direct path between the two. I mean, there is a path, but it goes through p3, and say p4 and p5 and p6 and so on. There is no one-step path from p1 to p2; they are not directly connected. This becomes a limitation.
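As a small illustration, my own sketch with a made-up toy coupling map, not any real device's: you can represent the connectivity map as a graph and find the chain of physical qubits a two-qubit gate would have to be routed through.

```python
from collections import deque

# hypothetical connectivity map of a toy device: qubit -> directly coupled qubits
coupling = {1: [3], 2: [3], 3: [1, 2, 4], 4: [3]}

def swap_path(src, dst):
    """Shortest chain of physical qubits linking src to dst (plain BFS)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in coupling[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

path = swap_path(1, 2)          # e.g. [1, 3, 2]: no direct p1-p2 edge
print(path, "->", len(path) - 2, "swap(s) needed before the CNOT")
```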
Okay, I mean, there are ways to do it. The operation I want is not a wrong operation; it is possible. We basically use SWAP gates. Just for context, in simple circuit notation a SWAP gate is represented by two X symbols, but in actuality a SWAP gate is three CNOT gates. The SWAP gate has a matrix associated with it, and if you multiply together the matrices of three alternating CNOT gates, you get that same matrix. So in theory we just use the two-X notation, but in hardware, the three CNOTs are what happens in the background. What the swapping actually means is: if I want p1 to interact with p2, I keep swapping states along the path, p3 with p4, p4 with p5, p5 with p6, and so on, until the state I need sits next to its partner. So for every swap I am basically adding three CNOT gates, and each CNOT involves two qubits, a control and a target; counting loosely the way I did on the board, every step adds around six physical operations to the circuit. For a path of, say, eight steps, that is eight times six, about 48 gate operations, just to do a single operation, a single logical operation. One logical operation, but roughly 48 physical operations needed to carry it out. So it's easy to see that this hits the scalability of our hardware.
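You can actually verify the "SWAP equals three CNOTs" identity numerically; a quick sketch of mine using plain NumPy matrices (note it is a matrix product of the three gates, applied right to left):

```python
import numpy as np

# CNOT with control on the first qubit, and CNOT with control on the second
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
XC = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

# three alternating CNOTs compose into a SWAP
print(np.array_equal(CX @ XC @ CX, SWAP))   # True
```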
So again, connectivity becomes an issue in the broader sense of the limitations of NISQ devices. These were some of the major limitations of NISQ-era devices, and these are the factors that stop us from simply running all the algorithms we have today on quantum hardware. These are major research points; I'll be honest with you, I won't say these things will limit us forever. People are diligently working on exactly these problems to improve the state of the current hardware. But this is what I mean when I say we are in a NISQ era of quantum computing, where we are limited by the current hardware. So, to wrap this part up: I talked about NISQ, and by now every word of the abbreviation should make sense to you. Noisy, because we talked about noisy qubits. Intermediate, because we are at an intermediate stage right now. Scale, because scalability is a huge issue; we do not have scalable hardware. Quantum, because, again, we are talking about quantum hardware. So the term should make sense. I spent a lot of time on this because it's important that you understand the limitations, why we are having this session at all, and why we are even talking about quantum-inspired algorithms when we already have algorithms for quantum hardware. Now, coming to the main subject: quantum-inspired algorithms.
Okay, so what are quantum-inspired algorithms? Let's go by the wording. Quantum refers to quantum mechanics principles. Inspired means we are inspired by those principles rather than executing them natively: we don't have hardware of that kind, and we are not operating in the quantum mechanical realm itself, but we are simulating those properties. And algorithms, again, means a series of steps. So when we say quantum-inspired algorithms, we are talking about algorithms that simulate some quantum mechanics principles to get a performance benefit. I mean, it wouldn't make sense otherwise, right? So we are talking about some performance benefit that goes along with these algorithms.
coming back to the presentation now,
so, again, we talked about, in an introduction, we talked about the
potential of quantum computing to solve complex problems.
We talked about niscara, basically, the whole Nisq thing,
and we introduced the concepts.
Now we'll go in detail in the whole concept thing.
Again, this is something which we have already covered, basically, understanding the
niscara of quantum computing. It's important to understand the challenges and
the limitations which we face with the current hardware. And again,
we could talk about, you can see basically the third point, which means
leveraging the use of quantum AsPAD algorithms to still get some
advantage, but not exactly be limited by the hardware of
current time. As you will see, that basically, what, when we say, when we talk
about quantum aspired algorithms, which we
are basically saying, as I stated, that we are simulating the properties of
quantum computer, we are simulating some quantum mechanics principles. Those principles
I'll talk about in the next few seconds.
So we simulate certain properties of quantum computing to get a benefit. This simulation happens on classical hardware, and thanks to a lot of work done by those before us, classical hardware is at its very best right now, in some sense. We have great hardware: GPUs, CPUs, GPU-CPU collaborations. We have classical hardware that can scale. One thing I would particularly point out: in no sense, by simulating the properties of quantum mechanics, will we get the whole benefit or the full potential these algorithms have. These algorithms are, in the end, meant to run on quantum hardware. But because the hardware is still at a budding stage, we simulate them on classical hardware and still get some performance benefit. The thing of essence here is a trade-off: on real quantum hardware we expect a huge performance bonus, but scalability is an issue; in quantum-inspired computing, the performance might not be as big as on pure quantum hardware, but the scalability is there; we are potentially able to scale these methods to solve industrial problems. That is where the essence of quantum-inspired comes into play. Okay, so we have covered what quantum-inspired algorithms are, and I've talked about the motivation: we consider quantum-inspired methods for their scalability while still being able to deliver some performance benefit, because, again, a performance boost is what we are actually after.
Now I'll talk about the properties. I've been saying that we use certain properties of quantum mechanics; what exact properties am I talking about? The very first is parallelism, or, as I like to call it, superposition. Before getting into an exact, detailed discussion of the concept, I'll take a very simple example. Say I have a coin. Any coin has two possible end states, heads and tails. Now, for the sake of this discussion, let's say my coin is a qubit. Whenever I flip this coin and it falls into my fist, and I close my fist, then until I open my fist there is a 50% chance it is heads and a 50% chance it is tails. So we can say this coin is in a state of superposition: it is equally likely to be heads and equally likely to be tails. And since I'm treating the coin as a qubit, it should follow the principle I touched on earlier: any qubit has a state defined by an α and a β component, ψ = α|0⟩ + β|1⟩, where the square of α and the square of β give the probability that measuring the state ψ yields one outcome or the other. This is some basic understanding which, as previously mentioned, I'm assuming you have; if you don't, don't worry, you can pause this session, go back, and search for an introduction to quantum computing. You might even find some of my videos back there. My point is: treating the coin as a qubit, the coin should also have a state, and since it has two possible end states, heads or tails, we should be able to write the state of the coin as α|heads⟩ + β|tails⟩, where the square of α and the square of β define the probability of the coin being heads or tails, respectively. In our case the coin is in my closed fist, so by common sense the squares of α and β are both 0.5; right now it's in a 50-50 state. The actual amplitudes would be 1/√2, because (1/√2) squared is 1/2. So α and β, to be precise, are each 1/√2, and the square of each is 1/2, or 0.5, or 50%.
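Just to make that concrete, a tiny sketch of my own: store the two amplitudes, square them into probabilities, and sample "opening the fist" many times.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = beta = 1 / np.sqrt(2)          # the "coin in the closed fist" state
probs = [alpha**2, beta**2]            # |alpha|^2, |beta|^2 = 0.5, 0.5

# "opening the fist" = measurement: sample discrete outcomes from the amplitudes
tosses = rng.choice(["heads", "tails"], size=10_000, p=probs)
print((tosses == "heads").mean())      # ~0.5
```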
Right. This is what we call superposition. Now, how does this come into play in actual quantum computing? In our example there were only two possible states, and I'm saying that this one single coin c1, with state ψ1, while it is in the fist, is a combination of heads and tails: two states existing simultaneously until we open the fist. Just for context, opening my fist is what we call measurement, which, again, is a concept you will have heard of when studying introductory quantum computing. Measurement is nothing but this: after I open my fist, the coin has to be either heads or tails; it cannot remain in that in-between state. It has to decide. That fall into a definite, discrete state is, in some sense, measurement. Measurement in actuality is a more complex concept; to treat it properly I'd have to talk about computational bases and so on. The best analogy I can give is the Schrödinger's cat experiment: until you open the box, the cat is both dead and alive, but as soon as you open it and observe the cat, it has to be either dead or alive, not in between. Exactly like the coin: as soon as I open my fist and observe it, it has to be heads or tails. And in actual terms, for a qubit with two possible states, zero and one, as soon as I perform a measurement and observe the particle, it has to be either one or zero. But in quantum computing, what we do is take advantage of the particle's ability to exist in a state in between zero and one, while it is still in superposition. What we do, and I'm talking in the pure quantum computing sense right now, is modulate the values of the alphas and betas, and by doing that, we modulate the final result we get when we perform the measurement of this state ψ.
Let's take a very simple example. Say we have a black box, a function f(x), and we need to find the minimum of this black box. For the sake of convenience, say the minimum is at zero: x = 0 is what gives the minimum value of f(x). And we are using only one qubit, starting in |1⟩, so α is zero and β is one. Now, we know that for the particle to give the discrete value zero after measurement, at the end α should become one and β should become zero. As the algorithm designer, in this toy case, we know what we need to do. In general, we have a fitness function, some function or methodology that tells us how close we are to a solution, so we can keep telling the algorithm: you are approaching the solution, you are getting farther from it, make changes accordingly. Here, because we are the observer and already know the answer, the algorithm is: put the qubit into superposition, say using a Hadamard gate, then perform operations that ensure α ends up at one and β at zero. With one qubit there was no real sense of parallelism. But now say we have n qubits: then we are talking about 2^n potential states existing simultaneously. When n was one, we had two states existing simultaneously, zero and one, so we were basically checking the values zero and one against the black box at once. In our toy example we already knew the answer had to be zero, so we pushed α to one; but suppose we did not know. Then, in some sense, we are using both values simultaneously to probe the black box. This is exactly what happens in the Deutsch algorithm, and in the Deutsch-Jozsa algorithm when you take multiple qubits; for a single input qubit it's called the Deutsch algorithm. You can look it up; it's a pure quantum algorithm.
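For the curious, here is a compact simulation of Deutsch's algorithm, a sketch of my own in plain NumPy rather than anything from the deck. It decides with a single oracle query whether a one-bit function f is constant or balanced, which classically takes two queries:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])   # |0>|1>
    state = np.kron(H, H) @ state     # put both qubits in superposition
    state = oracle(f) @ state         # one single query to the black box
    state = np.kron(H, I) @ state     # interfere the input register
    p_one = state[2]**2 + state[3]**2 # probability first qubit measures 1
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: 0))   # constant  -> "constant"
print(deutsch(lambda x: x))   # balanced  -> "balanced"
```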
Okay. That, in some sense, is what superposition is: using 2^n possible states simultaneously against a given function f(x), to find which of them is closest to the solution, or sometimes which of them is the solution. The Grover search algorithm, a comparatively popular algorithm, also takes advantage of quantum superposition. Now, this principle is one of the most important ones we make use of in quantum-inspired algorithms. Speaking purely from the simulation side: what we do is maintain the α and β parameters for each simulated state ψ, since the state is a function of those two amplitudes. If we modulate α and β, we can, in some sense, simulate this quantum property: initialize them to something, then over time keep modulating them according to how close we are getting to the solution state. That is, in some sense, how we simulate quantum parallelism in quantum-inspired algorithms, and it is an important concept. The second concept we usually use in the quantum-inspired realm is quantum tunneling.
If quantum superposition was the more general concept, quantum tunneling is more focused toward quantum annealing concepts; quantum annealing, as I previously mentioned, is one type of architecture we use. I think you have an idea of annealing in general: if you heat a metal, over time it cools down. Every particle in this universe has a tendency to settle into a minimum-energy state; if you draw the energy-versus-time graph, every particle will eventually reach a global minimum of energy. Annealing utilizes this property to find the global minimum of some function f(x). And when I say f(x), it doesn't mean this is limited to function optimization: in theory, any kind of problem you are trying to solve can always be mapped onto a function, so every problem in the end becomes an optimization problem if you think about it. This f(x) is basically the problem we are trying to solve. Now, say the landscape of f(x) over x looks like a curve with a dip here and a deeper dip over there. Classical annealing, which is what I'm talking about first, works on a simple principle: it keeps going down until it reaches a minimum. But you can see that the first dip is a local minimum, and the one over there is the global minimum. Here is the limitation of classical annealing: to reach the global minimum, it needs to climb over the whole steep hill in between, until it reaches the global valley of the landscape. A classical method would depend on randomness, on something like a miracle, to cross that barrier. In quantum mechanics, we have the concept of tunneling.
What exactly is tunneling? I'll give you a brief picture. Interestingly enough, quantum tunneling became a practical concern while trying to improve classical hardware. Per Moore's law, you already know that the number of transistors on an integrated circuit doubles roughly every two years. If you're doubling the number of transistors, there are two ways to do it: make the IC chip larger, or make the transistors smaller. It's obviously not acceptable to make the IC larger, because we want our devices smaller, not bigger, so we make the transistors smaller. But there is a limit to how far you can shrink a transistor: past a certain scale, quantum mechanical principles come into play, they start interfering with the results, and we don't get the idealized behavior. When that happened, engineers noticed this phenomenon of quantum tunneling: the barriers become so small that electrons simply pass through them; they don't have to interact with them, they just pass through. That is what they call tunneling. In our context, what happens in quantum annealing is that there is a possibility the quantum particle can tunnel straight through the steep barrier and directly reach the other side, and from there, ordinary annealing finds the global minimum. That is, more or less, the concept of quantum tunneling, and it is the concept we use when designing quantum-inspired annealing algorithms, which I believe you have a separate session on, so I won't go into much detail about it here.
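As a caricature of the idea, my own sketch, and purely classical code: the "tunneling" here is just an occasional long-range jump added to ordinary simulated annealing, standing in for barrier crossing; it is not real quantum dynamics, and the landscape and rates are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # double-well landscape: local minimum near x = -1, global near x = +1
    return (x**2 - 1)**2 - 0.3 * x

x, T = -1.0, 1.0                  # start trapped in the local valley
for step in range(5000):
    if rng.random() < 0.05:
        x_new = x + rng.normal(scale=2.0)   # rare "tunneling-style" hop
    else:
        x_new = x + rng.normal(scale=0.05)  # ordinary thermal move
    # standard Metropolis acceptance, cooled over time
    if f(x_new) < f(x) or rng.random() < np.exp((f(x) - f(x_new)) / T):
        x = x_new
    T *= 0.999

print(round(x, 2))                # typically near +1, the global minimum
```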
Okay, so those were the two major principles. I mean, we do also have the concept of quantum entanglement, but that is a purely quantum concept, and it is very hard, close to impossible, to simulate efficiently on classical hardware. Entanglement is very advantageous if you have quantum hardware, but in the quantum-inspired world it becomes much harder to simulate, so we don't particularly rely on it. So these were some of the basic, important principles of quantum mechanics that we try to simulate whenever we work on designing quantum-inspired algorithms. That covers this slide and the previous slide, where I talked about the principles we usually employ when designing quantum-inspired algorithms.
This slide talks about some of the major quantum-inspired algorithms we have in operation. Surprisingly enough, not many people know this, but algorithms like QAOA, the quantum approximate optimization algorithm, the variational quantum eigensolver, and even quantum neural networks are all, in some sense, examples of quantum-inspired algorithms. How does that come into play? What these algorithms usually do is take a problem statement, map it onto a function, and then try to optimize that function; and while optimizing it, as I previously mentioned, they use principles like quantum parallelism to improve the result. These algorithms, as I like to call them, are also quantum-ready: you can run them on actual quantum hardware as well. What they try to guarantee is that in the future, when you have better hardware, you should be able to take these algorithms as they are, put them on real quantum hardware, and get the full benefits of quantum computers too.
Okay, now I'll talk in a bit more detail about some use cases of these quantum-inspired algorithms, and one of my personal favorites: quantum-inspired optimization algorithms, which are sort of my specialty, what I particularly work on. One of the most famous and common families is QIO, quantum-inspired optimization. In QIO we have evolutionary algorithms and annealing algorithms; those are the two types we usually see. Among evolutionary algorithms you have a number of options: QGA, the quantum-inspired genetic algorithm; quantum-inspired particle swarm optimization; quantum-inspired grey wolf optimization; and so on. On the annealing side you have your usual annealing algorithms.
I'll not go into detail about each and every algorithm, but I'll try to give you a glimpse of how they function. Any optimization problem is basically optimizing a given function, and this f(x) is the mapping of a problem P(x). Every function has a landscape. Within the landscape there is one global minimum, and there can be multiple local minima; picture a 3D surface viewed from above, or a cross-section curve, with several dips, one of which is the deepest. Now, classical optimization algorithms approach this in different ways. I'll start with simple gradient-based algorithms, like stochastic gradient descent. They start at some point x and follow the gradient of the landscape. In our picture, it's easy to see that such an algorithm is likely to get stuck in a local minimum, because it won't be possible for a gradient-based method to make the jump over a barrier. We also have stochastic methods that can make such jumps, but they too have limitations, and for complex, non-convex landscapes they fail. That is where evolutionary algorithms come into play; I'm talking about classical evolutionary algorithms right now, and they do solve these problems. If you've ever worked with genetic algorithms, you must have heard that, in theory, a genetic algorithm can solve any kind of problem, given, first, that you can map the problem onto a fitness function f(x), and second, that you have infinite computational resources; genetic algorithms and their relatives are very computationally expensive. How a genetic algorithm, or any evolutionary algorithm in general, works is that it prepares candidates c1, c2, c3, c4, essentially random points on the landscape, and tests every candidate on the function f(x). Out of all the candidates, one performs best, say c3 in our case, and one performs worst, say c1. The worst gets eliminated; it follows the basic Darwinian principle of survival of the fittest. Then we perform certain operations, depending on the kind of evolutionary algorithm we are working with, to ensure that more candidates in the next generation are near c3. We keep repeating this process until we finally reach the global minimum. That is how an evolutionary algorithm works.
So where does quantum come into play? As I said, the biggest bottleneck for genetic algorithms, for evolutionary algorithms, is computational resources. If you remember, I talked about the principle of quantum superposition. For any state ψ there are two parameters, α and β, and if I modulate them using suitable operations (I won't go into the details), I should be able to check multiple candidates on the landscape simultaneously in a single operation. So I use this advantage of quantum-inspired superposition to reduce the number of generations it takes to converge: I improve the convergence rate and reduce the computational resources required. I'm effectively saying that even with a smaller number of candidates, I can still be reasonably assured of converging to the global minimum. That, in some sense, is how we look at quantum-inspired algorithms from a practical point of view: we keep scalability and still get some performance boost over the classical counterpart. And that is one of the reasons we are even moving toward these kinds of algorithms.
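To make this concrete, here is a minimal quantum-inspired evolutionary loop of my own; the fitness function, rates, and sizes are all arbitrary illustrations, not any published algorithm's settings. Each bit of each candidate is stored as an angle encoding the amplitudes (α = cos θ, β = sin θ), candidates are "measured" out of those amplitudes, and a rotation-style update nudges the amplitudes toward the best candidate found:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(bits):               # toy objective: count of ones (maximize)
    return bits.sum()

n_bits, pop, iters = 16, 8, 40
theta = np.full((pop, n_bits), np.pi / 4)   # 50/50 superposition per bit

for _ in range(iters):
    # "measure" each individual: P(bit = 1) = sin(theta)^2
    samples = (rng.random((pop, n_bits)) < np.sin(theta) ** 2).astype(int)
    best = samples[np.argmax([fitness(s) for s in samples])]
    # rotation-gate-style update: nudge amplitudes toward the best candidate
    theta += 0.05 * np.where(best == 1, 1, -1)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)

print(fitness(best), "ones out of", n_bits)   # converges toward all ones
```

The point of the amplitude representation is that a small population keeps a probabilistic "memory" over the whole search space, which is the scalability-for-performance trade I keep mentioning.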
So I personally, just for context, have worked on QGA, quantum-inspired genetic algorithms; QPSO, quantum-inspired particle swarm optimization; quantum-inspired sampling algorithms; and some quantum-inspired annealing algorithms as well. For quantum-inspired annealing we use the same concepts, but because annealing is about reaching a minimum-energy state, we convert our function f(x) into a Hamiltonian. What is a Hamiltonian? You can look it up; for our purposes, it is basically an energy function that we try to minimize.
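A tiny illustration of "function as energy", my own toy example: in the common QUBO formulation, the energy of a binary configuration x is x·Q·x, and minimizing that energy is solving the problem.

```python
import numpy as np

# toy QUBO: minimize x^T Q x over binary x -- the "Hamiltonian" here is just
# an energy function scoring each candidate configuration
Q = np.array([[-1.0, 2.0],
              [ 0.0, -1.0]])

def energy(x):
    return x @ Q @ x

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, energy(np.array(bits)))
# (0,1) and (1,0) score -1: the ground states of this tiny Hamiltonian
```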
Some of the practical use cases I have personally worked on with these evolutionary algorithms are large industrial problems: airfoil optimization, topology optimization, and more. If you look them up, these are very industrial, very large-scale problems, and they are computationally very expensive: the function f(x) associated with each of them is costly, and it is not feasible to evaluate f(x) many times continuously. So it becomes very important that the fewer times I need to evaluate f(x), the better my algorithm is. I was able to get better results on these two kinds of problems. There are a tremendous number of use cases. As I said earlier, any problem P(x) that takes an argument x can, if you can map it onto a function f(x), be treated as function optimization over f(x): you just need to improve the fitness, the value, of that function. So basically, any combinatorial optimization problem can be mapped to a function f(x), which can then be solved using quantum-inspired optimization algorithms. That, speaking from a very abstract, blueprint point of view, is how we use quantum-inspired algorithms for industrial and practical applications.
So, again, this is what I particularly highlighted on the slide: the benefit of quantum-inspired algorithms is that they give you a certain performance boost over your classical algorithms, your classical counterparts, while still, in some sense, maintaining scalability. But there are some limitations to quantum-inspired methods as well, because, as I said, we are not using the full potential of quantum computers; we are still simulating the properties. And whenever we simulate the properties, it's important to know that we are restricting the performance we can get. How much that restriction matters depends, in actuality, on you: how much performance can you give up for scalability, and how much performance do you actually need? To be honest, in the problems I mentioned, the airfoil problem, the topology problem, even a slight performance boost means a lot, because the industry is so sensitive and the function being optimized is so computationally expensive that cutting even ten generations off the convergence means a lot. So it totally depends on the kind of problem or use case you are targeting.
Okay, so, taking all of this into perspective, it's easy to see that quantum-inspired algorithms provide an immediate way of harnessing the advantages of quantum computers, the principles of quantum mechanics, without being directly dependent on the hardware side and its limitations. They provide a key opportunity to dive deep into quantum computing in a practical, application-oriented sense while not being bound by your hardware limitations. But this is, in a sense, a backup plan, an immediate plan of action, until we eventually get hardware of high enough quality, on which we can actually run our quantum algorithms and get the so-called quantum advantage, or quantum supremacy as it used to be called; we don't really use that term anymore, so I might get some backlash for using it, but in some sense I'm talking about pure quantum advantage. That is something we can only achieve after we have proper quantum hardware. For the time being, until that happens, quantum-inspired algorithms are a good way of still harnessing the advantages of quantum mechanical principles by simulating them on classical hardware and getting some kind of performance benefit for your use cases and applications.
So, yeah, that is more or less the whole session. I had to keep it a bit short and could not dive deep into a lot of concepts, so in the future I may well do separate sessions on those. Or, if you're interested, you can reach out to me in person and ask any in-depth or detailed questions, or have a conversation about the concepts, use cases, or applications I pointed out but could not discuss in detail because of the shortage of time. You are always welcome to reach out to me. That is pretty much it for the whole session. I hope you learned something new.
One of the questions I'm used to hearing, from a commercial and industrial point of view, is: if this is not that scalable, why are we even going with it? Why are we investing time in this whole thing? Well, my answer to that is quantum-inspired algorithms, exactly the immediate answer. I'm still using the principles of quantum mechanics, my knowledge of quantum mechanics, quantum computing, and quantum information, but I'm simulating those, in some sense, on classical hardware, so I'm still able to get some advantage and still able to maintain some scalability. Quantum-inspired is one of my favorite things, because I get to enjoy quantum computing and yet I don't have to worry too much about scalability. It does get a bit complex, because sometimes you'll have to decide on that trade-off, and it's not always rainbows; there are some limitations to the quantum-inspired approach as well. But overall, as I said, it's one of the most promising methodologies for harnessing the advantages of quantum computing in the NISQ era. So, yeah, that would be it. Thank you for your time and patience. And, as I stated, I'll add another slide to the deck I share with you, containing my contact info, so you can reach out to me and ask any questions you have, or have a detailed discussion of any of the topics. I'm always here; feel free to reach out. So thank you.