Transcript
This transcript was autogenerated. To make changes, submit a PR.
Welcome everyone.
Today we're going to discuss Azure AI, Azure Prompt Flow, Vectors, and Appendix.
And we're going to see our end-to-end chatbot that we can interact with using our own data.
I have chosen this presentation and this demonstration to show how easy it is to create our own generative AI deployment and integrate it into our web app without touching a single line of code.
It is very easy to start with, and it is very user friendly and beginner friendly.
So if you are just starting your journey with AI technologies, Azure, and prompt engineering, this presentation may be very beneficial for you.
But enough about that.
Let's see.
Who am I?
I am Konstantinos; Pasalis is my last name, and I'm calling from Athens, Greece.
You can find a lot of my projects, including this one, on my GitHub, which is displayed on the screen.
I'm a Microsoft Azure MVP and certified trainer.
I pursue a lot of certifications when I have the time.
I work as a solutions architect here in Athens for a multinational company.
I enjoy solving complex problems, especially when it comes to hybrid
technologies, multi cloud, data and analytics, and of course,
artificial intelligence, AI.
Let's see what we're going to discuss today.
We're going to see how we can create our own prompt flow and deploy it into our Azure subscription using Azure AI Studio.
There are some basic building blocks that make up our solution.
As for the tools and services we need: an Azure subscription, of course.
You can create a free one, with around 200 dollars in credits for 30 days, during which you will have access to all services, including AI Studio and OpenAI.
We're going to need Azure AI Search, so that we have a vector database to store our data for indexing, and we're going to publish our prompt flow as an Azure web app, which will in fact be our endpoint.
It will be an API endpoint at the end of the day.
So let's see.
We need, of course, Visual Studio Code for our developer actions; our developer workstation would be that.
I'm using VS Code, but you can use whatever IDE you prefer.
What we're going to do is that once we have our Azure subscription at
hand, we're going to deploy our index.
We're going to utilize AI Studio, machine learning, and our OpenAI model, and finally publish our prompt flow as a web app endpoint.
This is going to be a Flask web app.
We need the basic tier for Azure AI search.
We're going to use machine learning workspace, and we're going to see
how easy it is to interact with it.
You don't need to create anything else; everything will be created for us in the process of this prompt flow creation.
When you create your first Azure AI Hub, everything will be created for you, including the storage accounts.
The serverless compute is something that we're going to select when we want to build our prompt flow.
And of course, as platform as a service, we can scale out, we can scale up, we can use our custom domain, and it supports TLS end to end.
So what is the use case of this prompt flow?
There are a lot of examples, but the one that I have chosen is what you see here on your screen.
It's actually a bunch of PDF documents, which is our data.
We're going to throw them into our index; they actually get stored in an Azure storage account, but we don't care.
We just throw them in through the user interface and they are stored in that storage account.
The index creates the vectors for this data, and we then have at hand the ability to create this prompt flow based on our data.
When we ultimately start interacting with the endpoint, it will be able to answer any question about this data.
We also have control over the system prompt, and we can warn users in case they ask anything irrelevant.
So if you want a deeper look, the prompt flow deployment guide has a diagram where you first identify your business use case.
You then collect your sample data and learn how to build a basic prompt.
You develop the flow based on the prompt to extend its capability, and then you start experimenting; that's the tuning phase.
After the experimentation, you start evaluating and refining your final version of it, and then you are ready to deploy.
Let's go to our demo.
Our demo consists of several parts, let's say.
Okay, so let's see what this means for us.
What we need is our portal: portal.azure.com is where we get access to our resources.
We're going to need a resource group, which I have already created.
Resource groups are containers that hold our resources in a logical separation.
Okay, so we have here our resource group that contains our resources.
I have already built the Prompt Flow, but we can build it together again.
Some important remarks here.
First of all, we need an Azure AI Hub.
An Azure AI Hub is a resource in Azure AI Studio where you have your Azure IAM and all your resources in one place, with one networking setup, and where you can create different projects under this hub.
So you can see here the parent resource, which is an AI hub.
And into this hub, you deploy your different projects.
For example, you can have projects that are dealing only with prompt flows.
You can have another project that deals only with chat completions and so on.
The hub is the infrastructure, let's say, that holds all our projects.
You can apply different network configurations and so on.
So once we create our AI hub, then we create our project.
We can select our project and start creating.
As you can see, everything will be done through the interface.
Again, it's a very user friendly deployment, especially for people that are just starting to deal with prompt engineering and prompts in general, and want a deeper look into Microsoft's technology, being Azure, and of course Azure AI and Azure Machine Learning.
When we want to create a new AI hub, all we have to do is go to Azure AI Studio; the address is ai.azure.com, and from there we can create a new hub.
The hub, as we said earlier, is the main Azure resource for AI Studio, and we have access to a number of models.
We have access to fine tuning and evaluation.
We can also include OpenAI resources and extend our capabilities, because Azure OpenAI is a different type of resource, limited to OpenAI models, while Azure AI Studio and Azure AI in general include models from other vendors as well.
So we have created our hub and then we create our project.
If I click here on the project, you will see that I have these flows which I have created, but also an OpenAI resource, another deployment here, and another flow here.
The underlying infrastructure is more than this.
And let's have a quick look.
The underlying infrastructure includes storage.
I have connected an AI Search service, which is needed when we want to vectorize our data, create embeddings, and store them.
There is also an Azure Key Vault; for those of you who don't know, Key Vault is a resource where you can store your connection strings, secrets, certificates, and so on.
Instead of including these details in your code for the integration between different services, you just reference them from the Key Vault service.
There is a container registry, which is needed by Azure to store the images we create, because when we create a new prompt flow, it actually creates a container.
And there is the OpenAI resource, as you saw earlier, plus the hub and the project.
That simple.
Let's return to AI Studio.
So here we are in ai.azure.com; we have created a hub and a new project.
Let's say that we want to start using and creating our first prompt flow.
We need to start from the indexes.
So if we don't have an index, we need to create it, of course.
So if I click here, new index,
you need to select your data.
Where is your data?
Is it already in Azure AI Search?
Have you brought it over in AI Studio?
Is it another blob storage?
Is it a storage URL?
Or you can upload your files.
I want to upload my files right now.
So let's go and upload our files.
For your information, the files that I have selected are files that refer to Microsoft licensing.
I'm going to explain what I mean.
So we know what we are going to deal with.
Let me upload this data and I'm going to show you what this data is about.
So this data is a number of PDF files, as you saw.
Let me share my screen from this point.
And let's open
one or two PDFs.
Let's see this one.
Licensing Windows 10.
So this is what this type of data looks like.
All of these are one- or two-page PDF files that explain the licensing options of different Microsoft services.
Here we have Windows 10, for example, and here we have Office 2019, right?
So we're going to ask questions through our prompt flow and see how
the model responds based on the data that it has extracted from these PDFs.
All right.
So I have uploaded my data and I'm clicking next.
In case you forgot to create your AI Search service, you have the ability to create a new AI Search resource here.
In our case, we have it ready and it's this one.
So I can create the vector index, I can give it my own name.
So let's say that this one is files-glr001.
This is going to be our vector index.
Now, the virtual machine is the computing power needed to run the job: extract the data, break it into chunks, and store it into our AI Search.
We can leave it on auto select unless we have specific requirements, and I'm going to leave it on auto select for now.
We need, of course, to add vector search to this resource, and you can see here that in order to create this vector search, it needs an OpenAI connection with the relevant embeddings deployment, so it can create the embeddings from the data and store them back into the search service that we have.
So I have already here my OpenAI connection.
And I'm just going to click next.
We are ready to create our index.
Let's say, okay, create the vector index.
And it started creating it.
Now I will change my screen to ml.azure.com; this is the Machine Learning Studio workspace.
We can select different workspaces based on which ones are assigned to our account, and so on.
I have already opened ml.azure.com and selected the specific workspace; it has the same name as our AI project.
You can see here that I also have this Prompt Flow option, which is actually the same one as if I go here and select Prompt Flow, this one.
Let's give it a moment to finalize the indexing.
If I click here to job details, you would see that I'm taken directly
to the machine learning workspace.
This is where everything happens.
Okay.
You can say that the underlying mechanisms of indexing, the prompt flow creation, and the deployment are all done in the machine learning workspace.
We don't need a separate workspace; when we create things with the method of creating the Azure AI Hub and the Azure AI project, we're also given this machine learning workspace for our operations.
If you want a quick look at what's going on here, you will understand from these selections that we are in the process of cracking the documents, creating chunks out of them, and generating the embeddings.
Once the embeddings are generated, they are stored back into the Azure AI Search resource, and it is registered as a new vector index.
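As a side note, the cracking and chunking the job is doing can be pictured with a tiny sketch. This is purely illustrative: the function name, chunk size, and overlap are my own assumptions, not the parameters the Azure indexing job actually uses, and the embedding step is only hinted at in a comment.

```python
def split_into_chunks(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split a document's text into overlapping chunks.

    The overlap keeps some shared context across the cut points, which
    is the usual trick before generating embeddings for a vector index.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap
    return chunks

# After chunking, each chunk would be sent to the embeddings deployment
# (for example an Azure OpenAI text-embedding model) and the resulting
# vectors stored in the Azure AI Search index.
```

In the real pipeline all of this is done for us by the vector index job; the sketch is only there to demystify what "cracking, chunking, and embedding" means.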
It's going to take some moments, but in the meantime, we can
start creating our prompt flow.
So prompt flow and create.
From here, there are already templates for us.
So our life is much, much easier, especially when we want to build something
that is not ready for production.
We just want to understand the service.
We just want to see how that goes, how it works.
So the easiest thing for me, and my suggestion, is to select one of the available ones.
So: Q&A on your data.
What better than this?
Let's clone this one and name it, say, conf42-prompt-flow-pf.
Perfect.
Now, when I clone this one, it is provided to us with the default options from the template.
We're going to work on this, and don't imagine very hard work; it's very easy to change it to fit our own case.
So we just cloned a template for our prompt flow from the available templates.
Let's see also how our job goes.
It's continuing cracking.
I have selected a number of documents, so it's going to take some time.
So let's return to our flow now.
You can see that the interface in Azure AI Studio is the same as here in machine learning.
Okay?
And you see here in machine learning that the new prompt flow is already available; it's the same underlying offering.
Everything is here.
So no matter whether you like to work from Azure AI Studio or machine learning, it doesn't matter; it's simply the same.
So what do we need to do?
Our demand here is first about the question: we need dynamic questions, not just one fixed question.
In our case, this one will be changed to this.
I will continue with the next steps, but first of all, I need to start a compute session so I can save my flow, execute it once or twice, and see how it works for me.
It is just a compute session for the purpose of running and testing; it's not a compute session that will stay here forever.
So I can just click here, and machine learning will select the serverless compute for me and facilitate my testing process and my prompt flow creation process, because this needs computing power, and that's what the compute session is for.
So we have two jobs running right now.
One is the job that creates the vector index from our data.
And the other one is the compute session that is starting right now.
One very important detail, especially when you are working with the free offerings of Azure, the free trial for example: as you start creating more and more resources, there is a very high probability that you may hit the quota limits.
Azure has some limitations regarding quotas, as they are named, which mean, for example, that I cannot have more than 16 CPUs per region, or more than, say, five virtual machines in certain regions.
So when you are experimenting, remember to delete everything after you finish your experiments.
In case you want to keep things, make sure that you request a stronger quota so it can fit all of your workloads.
So be careful.
Our pipeline is coming to an end, and the vector index will be created.
Let's see here: we are waiting for the compute session to start.
In the meantime, let's move on by editing our prompt flow.
We started by removing this question and adding the quotes; this is going to be a dynamic question when we deploy it, so we want the ability to provide any question we want.
You can see here that it's broken into different sections, right?
The first section is the inputs.
Then we have the outputs, and then we have the lookup.
Here in the lookup section we can, of course, integrate other LLMs, extend the prompt flow, or add our own code.
We have the ability to extend the whole template to our own liking, but we're going to keep this one for our case.
So in the lookup, we need to provide a mechanism for the flow to be able to understand the context.
Okay, so the lookup function will set the index, that is, which index we are going to use.
We're waiting for the compute session to start, so let's give it a moment here.
Okay.
The index is completed.
So that's a good start.
And let's wait for the compute session.
Yes, it is starting.
Let me see.
We can go here to the Prompt Flow menu; you can see other prompt flows as well.
The compute session used to be this runtime, but that has been removed and renamed to compute session.
So all we have to do is go to the prompt flow and just click on Start Compute Session.
Of course, we can create our own here; as you saw from the dropdown, we can select different details for our compute session.
All right, let's continue.
Now, here you can see that we already have the Python code that will generate the context for the prompt.
Again, we don't need to intervene anywhere in this process; it's already there for us.
Now, you're noticing this activate config thing.
Our compute session is ready, by the way.
This activate config is like a conditional for our prompt flow: it says that when this condition holds, for example when this value is a certain string or number, then this section is activated.
And if we look around, you can see that it is the same for every section; so we can say that if the condition stands, then that section runs.
I'm just highlighting this because you're going to find it here and wonder what this activate config is.
For now, we don't need this conditional activation; every section is needed.
So let's continue.
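For reference, in the flow's YAML definition (flow.dag.yaml) that conditional looks roughly like this; the node name and the input it inspects are illustrative, not taken from this demo:

```yaml
# Illustrative fragment: this node runs only when the "when" value
# resolves to the value given in "is"; otherwise the node is skipped.
- name: answer_the_question_with_context
  type: llm
  activate:
    when: ${inputs.answer_mode}   # hypothetical flow input
    is: chat
```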
Because the compute session started, it reset our initial question.
Again, let's set it to double quotes with an empty string.
The output remains the same.
We need to select our index.
Let's go.
What's the type of your index? That's the question: it's an Azure AI Search index.
Which service are you using? This one.
Which one is your index? It was created a few moments ago, so it needs to search for it.
How did we name it? I don't remember the name; let me find out.
Yeah, we can leave it: the files-glr one.
Okay, our index is the files-glr index; this is what we created.
The content, the vector: I think we are good to go here.
Semantic configuration: default.
It's Azure OpenAI.
And here we are setting the configuration for the index.
We also have the ability to just say: okay, I need you to select this index for me, and I've seen that it works.
Yes, we can just select it, as you can see here, and nothing else matters; everything is there for us.
So the easiest thing is to select the registered index.
I was not sure if this worked every time, but it works now.
So registered index is the easiest selection: you're just selecting the index we just created, and save.
Let's go back and see what we have selected.
We configured the lookup function, where the flow needs to look up the index to be able to answer questions; because we used the Q&A-on-your-data template, it needs an index, right?
So we selected the registered index, and it is registered because we already created it, and it is already registered in the machine learning workspace for us.
The inputs: the query will be the input question, and for the query type we can say hybrid (vector and keyword).
We leave the default top-k setting.
The code: no need to touch anything, as I told you earlier, nor the activate config.
The search results, of course, are the object from the lookup section that the next step is going to utilize.
And now the system message; let me show you some details here.
We have the system message: "You are an AI assistant that helps users answer questions given a specific context. You will be given a context and asked a question based on that context. Your answer should be as precise as possible and should only come from the context. Please add a citation after each sentence when possible, in the form (source: citation)."
So with every answer that we get back, we will also be given the citations showing where this data came from, right?
And there we have the variants, so let me go slowly.
You can see here that this section, where we read the system message, has the name variants, because we can use three different variants and have three different flavors for our prompt flow.
If I go to show variants, you're going to see that I have variant zero, variant one, and variant two.
Now I can change the system message here, play a little bit with the context, or provide something else if I want to.
This allows me to have three different flavors, either for testing or for a more flexible deployment of my prompt flow.
I don't want to use variants; I would like to use only the default one, but for testing purposes we can use them.
You will see that I have the ability to say: okay, in the testing, just use the first variant and that's all.
So let's ask a question here for testing purposes: what are the licensing options for Windows 10?
Let me save, and let me run it.
Ah, we need the question: we need to configure the question and the answer-the-question-with-context node.
Yes, of course.
The question; and here, the connection with our OpenAI.
Okay, I have the deployment here.
I can use GPT-4 or GPT-4o; let me use GPT-4.
I can use chat or completion; chat is what the latest deployments use.
And I can use text for the response format, or JSON; it depends on how my application is going to consume the output.
Again, let me save.
So in the answer-the-question-with-context node, for each variant we need to select our OpenAI connection and select our model, right?
And now you can see that, do you want to use all three
variants or just use the default?
So since we didn't went that far to have different variants for our
flow, we can use just the default.
So submit.
It's going to take some moments to run.
You can see here that every section is activated.
I hope the question is there; yes, it is.
It is going through each phase: the lookup is completed, the generate prompt is completed, the variant selection is completed, and the answer is generated.
Let's see what we have here.
Yes, you can see that we already have our answer.
The output is that Windows 10 can be licensed through several options, and we also have the citations, which are just references to the PDF files that we have.
Here is the output; I can just click it and be taken there on the screen.
That was it.
We already tested it, and we are very happy with it.
The answer is very well constructed, as you can see.
And yeah, we can view the full output for you to be able to understand.
And here it is: "may be assigned to any device with no requirements".
Yes, everything is there.
So let's go back and see everything in action.
Let's say we're happy with our prompt flow, since we tested it and it returned our output here.
And let's see the output again.
This is the output: Windows 10 can be licensed through several options; one option is, and so on.
Okay.
Now, what's next?
The next phase is to deploy this prompt flow.
Because it's going to take a little bit of time and we may not be able to see it end to end, I have already deployed it; but let's go and see how we would deploy it.
So let me change this question again to an empty string, so that we have the ability to pass the data through our web app as dynamic content, and save it.
And then I can press deploy.
It needs an endpoint, a deployment, and of course a virtual machine.
Now, I can select a very strong virtual machine, or I can select something that is a little bit cheaper, for example.
But we want to be a little bit fast here, so I will leave this one with 10 cores.
I just need one instance; I don't need three of them.
Okay, that's it.
So let's say that we have the endpoint, which is licenses-qna, and the deployment name will be licenses-qna-1.
Next: authentication is key based, and enforce access, yes.
If I want, I can create a description here that will be added as metadata, along with any tags that I also want to have.
I can also use deployment tags for my deployments, and I can use a customized environment or the environment of the current flow definition.
If I am utilizing another LLM, or I want to utilize my own model, I can create another environment and have that environment serve my prompt flow per my requirements.
In our case, we are happy with the default and what we already have in our hands.
The Application Insights diagnostics option requires that you have already connected this resource with your AI resource; Application Insights is a resource in Azure that provides monitoring.
You can have deep monitoring of everything that happens in your deployment, your functions, your code, the user interactions, everything.
It's a very structured service for deep insights into your application.
We don't have this need right now, and I don't already have this option enabled, so I can't do it right now; it just warns me that you have to do it beforehand.
Let's go.
Specify which flow outputs are to be included in your endpoint response, and which connections are to be used.
Yes, I want the output to be included in the endpoint response: answer the question with context.
Everything is as we created it in our prompt flow, and create.
Now you can see here that we have a new job that will create this endpoint; let me show you.
I already have my first endpoint here, and now it's going to create this one; these are the real-time endpoints, as we say.
If I click on this, you will see that the provisioning state is "creating" and we don't have any other data here; it needs to finish creating, and then we're going to have data here.
Okay.
Now, yes, the endpoint creation is completed and the state is "succeeded"; let me refresh.
Very well.
What we are waiting for now is the deployment, which is going to take some time.
It's going to allocate 100 percent of the traffic here.
And now you have a wide range of options.
You can have, for example, two prompt flows dealing with different data, and based on how you build your application, you can direct different people to different prompt flows.
You can start allocating fewer people to a testing prompt flow.
You can do a lot of stuff.
But since this is going to take some time, I have already created another flow and another endpoint that has a deployment ready, and I'm going to show you what I mean.
This is the one; if I click on it, you will see that the deployment is ready.
It has allocated 100 percent of the traffic coming to this endpoint directly to this deployment, and the provisioning has succeeded.
This is the environment; again, it's exactly the same as you saw earlier.
We have this REST endpoint, and now let's go to the fun stuff: test.
I can test it.
I can test it again: what are the licensing options for Windows 10?
Test, and we have our response back.
So before deploying our code, and that means consuming our deployment, which is this part: you can see here that you have the code ready in four different types of consumption, as it is named, but in fact these are different language frameworks.
So you can have it in JSON, C#, Python, and JavaScript; I don't know why the JSON one is empty, I don't remember why.
Anyway, you have C#, Python, and JavaScript.
So all you have to do is take the code snippet and integrate it into your own app.
Then let me show you what is happening here.
I have my VS Code, my workstation, and I have created a very simple web application; you will see how that looks.
What I did is integrate the code taken from the portal into my application.
So I have created a Flask web app, okay?
This is the code that I was given on the prompt flow's Consume screen; this is the one.
So just take it and copy, that's all.
Copy and paste into your VS Code or your IDE.
The only detail that you need is the API key, which is not included here, but you can take it from the Consume menu.
You have two keys, and of course you can utilize either one of them.
Microsoft uses this logic of always having two keys, a primary and a secondary, for endpoints that need access through keys.
So you can give one key to the primary developer team and the secondary key to yourself, or to any service principal for automation and that stuff; they do exactly the same job.
Once you have the key and the API endpoint, you can interact with and consume the service.
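Before we look at my app, here is what that snippet boils down to: a plain HTTPS POST with a JSON body and the key in a bearer header. This is a hedged stdlib sketch, not the exact generated code; the input field name `question` matches our flow's input, and the endpoint URL and key are placeholders you take from your own Consume tab.

```python
import json
import urllib.request

def build_score_request(endpoint: str, api_key: str, question: str) -> urllib.request.Request:
    """Build the POST request a prompt flow endpoint expects.

    Adjust the "question" field name if your flow names its input
    differently.
    """
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url=endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Key-based auth: the endpoint key travels as a bearer token.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending it (kept commented so the sketch stays offline):
# req = build_score_request(ENDPOINT_URL, API_KEY,
#                           "What are the licensing options for Windows 10?")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```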
Let's go back to our code.
So we took our code from the Python tab and integrated it into our app.
All I had to do was create these environment variables.
To add a little bit more of a business flavor here: you don't put sensitive details like an API key or the endpoint into the code.
So I created these variables, and by variables I mean that we have a web app in Azure; let me show you.
We have here a web app.
Okay.
So, to avoid storing sensitive data in my code, I created environment variables for the API key and the Azure endpoint, which is what the code needs.
If you look at the code that the Consume page provides, I'm given the URL and I have to insert the API key; I have purposely left the code like this so you can see that I commented out that line and instead created these two environment variables, which are declared in the Azure App Service that is hosting this application.
And that's enough.
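The shape of the Flask app I'm describing looks roughly like this. It's a sketch under assumptions: the route name /ask and the setting names PROMPTFLOW_API_KEY and PROMPTFLOW_ENDPOINT are placeholders I made up; in App Service you would declare them as application settings.

```python
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

# Sensitive details come from App Service application settings,
# never from hardcoded strings in the repository.
# (These setting names are illustrative.)
API_KEY = os.environ.get("PROMPTFLOW_API_KEY", "")
ENDPOINT = os.environ.get("PROMPTFLOW_ENDPOINT", "")

@app.route("/ask", methods=["POST"])
def ask():
    question = request.get_json(force=True).get("question", "")
    # Here you would POST `question` to the prompt flow endpoint with
    # the bearer key, exactly like the consume snippet, and return the
    # model's answer; this sketch just echoes the input back.
    return jsonify({"question": question,
                    "configured": bool(API_KEY and ENDPOINT)})

# Run locally with: flask --app app run
```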
Let's go and check it out.
So all we have to see is this URL, and of course, as this is a web app, you can have your own custom domain, right?
You can add authentication, and that is the easiest thing you can do.
You can take my code from my blog and utilize it as you like.
I think anyone that is already into very basic web deployment can easily understand how to create a web application with Flask in Python on your favorite hosting provider; in our case, it is Azure Web Apps, as it is called.
And let me also show you something while this is loading.
Come on.
Yes, let's go from here; maybe it's a little bit faster, because the other one is a workstation in the cloud, and this is my PC directly.
So let's see how that went.
The endpoint is here.
There is a good chance that it just takes some time to finalize the deployment.
Okay, this one takes some time also; I don't like it, anyway.
Let me see.
Okay, this one is open.
Okay, the app is stopped; I have to start it.
Now, for cost management, you can stop a web application and you are not charged that much; you are just charged for the storage.
In my case, I had it stopped, and now I have started it.
The way to deploy your app from VS Code to Azure is very straightforward.
You just need to create your virtual environment, add your data, run the code locally, and test that everything is all right.
And then it's just a simple command to deploy your app to Azure.
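Assuming the Azure CLI, that workflow looks roughly like the following; the app name, resource group, and runtime are placeholders, and `az webapp up` is one convenient option, not the only way to deploy.

```shell
# Local loop: create a virtual environment, install, and smoke-test.
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
flask --app app run          # test at http://127.0.0.1:5000

# Deploy the current folder to an Azure Web App (created if missing).
az webapp up --name my-qna-webapp --resource-group my-rg \
    --runtime "PYTHON:3.11" --sku B1
```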
It needs some time to load; in the meantime, let me show you something very important that you need to know.
Remember that at the start of this deployment we talked about resource groups, right?
In Azure, the first level is the subscription and the second level is the resource group.
A resource group is a logical separation of resources: you can have different resources inside a group.
It doesn't play any operational role, other than that you can apply policies that are inherited by the resources contained in the resource group, and so on.
What I wanted to show you is that when I created my prompt flow, I got these two new things here, and you are going to see them also for our own deployment: a machine learning online endpoint and a machine learning online deployment.
So this is the deployment, and here you can apply scaling rules.
I can have custom autoscaling and say: okay, when traffic is at 85 percent for five minutes, deploy another instance.
This is very helpful when you go to production and you want to serve hundreds or thousands of users through a single endpoint.
an identity with specific roles.
Everything has been done for us.
This is amazing.
You can see here that we have one, two, three, four, five roles.
And the roles are, I need to be able to write in the machinery workspace,
I need to be able to read my secrets, read the storage account, pull the
image from the container registry that it is stored for the deployment
purpose, and also be able to read and write into storage for logging.
Let's see how this one goes; it will start eventually.
And in the meantime, let's return, let me see if this is faster, to the deployment here.
It's not ready yet; we have to wait.
Let's see how that goes.
It seems that it needs a little bit of time to fire up, as we say, but it will be up eventually.
Let's see this App Service; let me stop this.
Yeah, it timed out; I hope we don't run into a problem here.
The deployment is in progress.
Yeah, it needed a restart.
This is our very simple web app; you can integrate it, of course, into a larger website of yours.
It is the prompt flow.
The same exact thing will happen when we take the code that we created here, okay?
From the Consume page, when it is open, because it's still creating the deployment.
But if we take our code and make some nice additions, as you saw here in my VS Code, I just added some visualization and a little bit of coloring, and there we are.
We have this nice Q&A web app, where we can ask our question: what are the licensing options for Windows 10?
Now, the formatting of your response is up to you; you can work with anything you prefer.
It could be any type of response: it could be JSON, it could be whatever you want it to be.
Let me see why we're not getting a response here.
Now, this is the favicon; I think it should be there, or it's taking some time to answer.
Okay, the deployment is ready, so we can also use this one.
Yeah, we got the response, no worries.
It needs some time, especially if the web app was stopped; it needs some time to recover, and at the same time I was deploying another resource-intensive deployment, so it took some time.
So here is the response.
I did my best to format it in the best way I could, considering that I'm not a developer, but I'm trying my best to be one.
Anyway, this is the response coming back from our prompt flow, which is now an actual web app deployment.
A little bit of a recap: I hope you liked this presentation and this end-to-end tutorial on how to create our own prompt flow and deploy it as a web app endpoint.
We integrated it into a very simple Python Flask web app, and this is how it works.
Thank you everyone.
Have a great one.
Bye bye.