Conf42 Prompt Engineering 2024 - Online

- premiere 5PM GMT

Optimizing AI Interactions: Techniques in Prompt Engineering for Enhanced Model Performance

Abstract

I would like to share my experience and knowledge regarding prompt engineering with different GenAI tools.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Optimizing AI interactions: techniques in prompt engineering for enhanced model performance. Let us start with an introduction to prompt engineering. It is a new area concerned with how we ask questions of, and give instructions to, an AI model; this is called prompting. Prompting is an art: to get the best response, we should prompt effectively, and the more focused the prompt, the better the output. Practicing prompt engineering improves an AI model's usefulness and the quality of the responses we get. There are several AI models on the market right now. OpenAI started with GPT-3 and is now at GPT-4, and it also has an image-generation model called DALL·E. We use multiple generative AI tools such as Meta AI, ChatGPT, and Microsoft Copilot, and nearly everyone has started using them. So how do we use these models effectively? We need to understand prompting, which is how we draw responses out of them. Every AI model generates output based on patterns in the data it has been trained on, so prompt quality directly determines the quality of the results a user gets. Next, the role of prompt engineering when working with AI models. A prompt is simply the input or query a user gives an AI model: for example, "What is the temperature today?", or, more specifically, "What is the temperature in California today?". Varying the prompt will give us varying results even when the underlying question is the same. A prompt should be clear and specific, and it should carry context; then we get the best response. Structured prompts produce the best results. So how do we structure a prompt to get the best result from an AI model? For that, we should know the different types of prompts. First are instructional prompts, where we ask the AI to produce output in a specific format or style.
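The pieces of a structured prompt described above can be sketched as a small helper. This is a minimal illustration, not any library's API; the function and field names are my own.

```python
# Compose a structured prompt from the three ingredients discussed in
# the talk: a clear instruction, optional context, and an optional
# output-format constraint. Purely illustrative helper.

def build_prompt(instruction: str, context: str = "", output_format: str = "") -> str:
    """Assemble a structured prompt string from its parts."""
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Suggest strategies for business growth.",
    context="Assume the current economic trend of high interest rates.",
    output_format="A bulleted list of at most 5 points.",
)
print(prompt)
```

Keeping instruction, context, and format as separate fields makes it easy to experiment with each ingredient independently, which is exactly the iteration the talk recommends.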
For example, we can ask it to write a poem of ten lines, or a poem about flowers. Given such an instruction, the AI will respond as we ask: it can write stories, essays, poems, cover letters, or resumes, depending on the instructions we give it. In some cases we also need to supply context. For example, "some strategies for business growth based on the current economic trend" sets the current economic trend as the context. Contextual prompts provide background information so the AI can shape its response accordingly. Next comes the interactive prompt, used to generate various perspectives interactively: we can talk to the AI like a human, in discussion mode. For example, about the current election: what do you think of it? Who will win, and what is the probability? We can keep asking questions and it will keep answering, giving multiple responses. So those are the three types of prompts: instructional, contextual, and interactive. The next question is: what are the techniques for optimizing a prompt? The first is to be specific: more detail in your prompt yields more accurate results. For example, asking a GPT generative AI model "What is the best diet plan for a day?" is a general question. If instead we ask for an adult of 36 years, or for a kid, or for South Indian or American dishes, then the more specific we are, the more specific the answer we get. Beyond that, we can also give examples; I will come to the examples section and explain them in detail. The next step toward an effective prompt is setting context.
Take, for example, a question about the election in the context of the current election. If we ask ChatGPT or any AI model "What do you think about the election?", it will give only a general answer. But if we ask "What do you think about the current American election?", we get exactly the response we are looking for. The same applies to tone: we can set the level of formality. Let us validate this with some case studies of how prompt engineering works. I ask the query "Give me a balanced diet for a day." What does the response look like? It explains what a balanced diet is and lists multiple options for breakfast, mid-morning snacks, and so on. It is a long, generic response. Now I give a clearer, more specific prompt: "Give me a balanced diet for a day for a six-year-old girl." The answer is now different, because the first question was general and common to everyone, while this one is specific to a six-year-old child: the explanation and the breakfast options have changed to suit her. Let us also add context. For example: "Give me a balanced diet for a 10-year-old boy with an issue in eyesight." An eyesight issue can be related to nutritional deficiencies, so the AI model takes the eyesight issue as context, considers the 10-year-old boy, and, drawing on its training data, gives a response specific to the context we need. As we discussed on the previous slides, there are different types of prompts: instructional prompts, interactive prompts, setting examples in prompts, and so on. Let us see how an instructional prompt works.
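The diet-plan case study above boils down to layering optional specifics onto a generic base question. A minimal sketch of that idea, with invented parameter names:

```python
# Layer optional specifics (age, health condition, cuisine) onto the
# generic base prompt from the talk's diet-plan example. The keyword
# arguments are illustrative, not from any real API.

def diet_prompt(age=None, condition=None, cuisine=None):
    """Build a diet query; each extra detail narrows the model's answer."""
    prompt = "Give me a balanced diet for a day"
    if age is not None:
        prompt += f" for a {age}-year-old"
    if condition:
        prompt += f" with {condition}"
    if cuisine:
        prompt += f", using {cuisine} dishes"
    return prompt + "."

print(diet_prompt())                                        # the generic version
print(diet_prompt(age=10, condition="an issue in eyesight"))  # the specific version
```

The generic call reproduces the broad question that drew a generic answer; each added argument is one of the specifics that, per the talk, steers the model toward a targeted response.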
Now I ask the AI model to give me the response in bullet points, with a maximum of 10 lines. I already have a response from the previous slide, with explanations, tips, and so on, but I want it as a list, so I set a limit of 10 lines and ask for everything in bullet points. What happens? The model trims the descriptions but gives all the points as bullets, and if you count them, it does not go beyond 10 lines, because I limited it to 10. In the same way, let us try the interactive prompt option. I now have a diet plan for a 10-year-old boy with an eyesight issue, and I am wondering whether the eyesight problem is due only to nutrition or whether other causes are involved. I don't know, so I want to discuss it with someone, and I interact with the AI model: "What do you think causes eyesight issues in a 10-year-old boy?" I also give an instruction: "Give me the response in a paragraph of five sentences." The AI lists all the possible causes: excessive screen time, genetic factors, lack of proper nutrition, bright light and UV exposure, smoking, and so on, while keeping to a paragraph of five sentences. Now I can keep asking on top of this: what about the genetic factor? What counts as excessive screen time? It will keep giving answers. In advanced prompt engineering, this interactive prompting can also be done iteratively. For example, if I ask which genetic factors can cause eyesight problems, the model can respond with all the genetic factors, and the user can compare them against their own situation and ask further questions accordingly.
Continuing on top of this interactive prompt, if we ask about lack of proper nutrition, it will explain which nutrients, such as vitamin A, are required for proper nutrition. Then we can think about the diet plan: if this kid already eats vitamin-A-rich carrots every day, we do not need more vitamin A in the plan, or we can ask how much vitamin A is required. That is how the iterative process goes on; it is like interacting with a normal human being. For the follow-up questions we do not need to repeat the whole original question; we just ask "What are all the vitamins required?" and the AI model understands the context from the previous prompt: that this user is asking about a 10-year-old boy, that the issue is eyesight, and that we are thinking about improving his nutrition. The vitamins question is automatically tied to that context; that is the beauty of these generative AI models. We can also set examples in the prompt, so that the GPT model understands the example and gives its result in the same way. I already have a response, but its layout does not work for me, so I ask: "Give me the same response in a different way. For example: 'Genetic factors' should be a heading, followed by an explanation; 'Excessive screen time' should be a heading, followed by an explanation." Now, see the surprise: the breakdown comes back exactly that way, with 'Genetic factors' as a heading over its list, 'Excessive screen time' over its descriptions, and so on. This is how we set examples in the prompt.
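The context carryover described above works because chat models receive the whole conversation history on every turn. A minimal sketch, using the common role/content message format (as used by e.g. the OpenAI chat API); the assistant text is a placeholder, not real model output.

```python
# Conversation history as a list of role/content dicts. Because the
# full history is resent each turn, the model "remembers" we are
# discussing a 10-year-old boy's eyesight, so a follow-up can simply
# ask about vitamins without restating the context.

history = [
    {"role": "user",
     "content": "Give me a balanced diet for a 10-year-old boy with an eyesight issue."},
    {"role": "assistant",
     "content": "(placeholder for the model's diet plan)"},
]

def ask(history, question):
    """Append a follow-up turn; in practice the whole list is sent to the model."""
    history.append({"role": "user", "content": question})
    return history

ask(history, "What are all the vitamins required?")
print(len(history))  # the three turns together carry the shared context
```

No real API call is made here; the point is only that follow-up questions ride on the accumulated message list rather than on a restated prompt.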
Apart from that, there are several advanced prompting techniques: few-shot learning, chain-of-thought prompting, tree-of-thought prompting, and so on. In few-shot learning we give a few examples, as we already saw with setting examples in the prompt on the previous slide. There I gave only one example; if we can give more and more examples, the AI understands the context better and gives the more specific response we actually need. Providing a few examples of the desired outcome helps the model generalize, and it also shows ChatGPT how to arrange the output as a list, in bullet points, and so on. What about chain-of-thought prompting? Chain-of-thought prompting encourages the model to explain its reasoning step by step. For example, under genetic factors the model gives a reason: if there is a family history of vision problems such as myopia or astigmatism, the child is more likely to inherit these conditions, because genetics plays a large role in determining a child's eye health. Now we may wonder: if a family member has myopia, will the kid get it too? We can question the model's reasoning: why do you think a family history of myopia matters, and what is the probability of the kid developing it? We can ask for more reasoning, and if we do, we get more specific answers: the model is encouraged to explain its reasoning step by step and will give more evidence, for example that in some place this many people had myopia and their children were also identified with myopia.
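The few-shot idea above can be sketched as a template that prepends worked examples before the real question. The example texts here are invented for illustration, not taken from the model output in the talk.

```python
# Few-shot prompting: show the model one or two examples of the desired
# "heading + explanation" layout, then ask the real question in the
# same style. Example content is made up.

EXAMPLES = [
    ("Genetic factors",
     "A family history of myopia raises the child's risk of inheriting it."),
    ("Excessive screen time",
     "Long screen sessions strain a child's developing eyes."),
]

def few_shot_prompt(examples, query):
    """Prepend formatted examples so the model generalizes the layout."""
    shots = "\n\n".join(f"{heading}\n{explanation}" for heading, explanation in examples)
    return f"{shots}\n\nNow answer in the same heading-plus-explanation style:\n{query}"

print(few_shot_prompt(EXAMPLES, "What about UV exposure?"))
```

As the talk notes, one example already shifts the output format; adding more examples to the list tightens the pattern the model imitates.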
So that is the reason it says family history will impact eyesight. In this way the model can explain itself like a human being, like a doctor or an advisor. These AI models can act as our supervisors or as our juniors: we can ask them to write code, write a program, develop an application. It is amazing: we can keep asking questions about a response the model has already given, and this interaction can go on for hours. The model never gets tired, and we get more and more specific answers. To use an AI model to its maximum effect, the key is effective prompting, such as chain-of-thought prompting. The next advanced technique is tree of thoughts: instructing the model to share different perspectives in its response. In a single response you get several ways of approaching the problem; the user can read those perspectives, identify which one they are interested in, and then ask about that perspective again. These AI models can also help us with writing code, reviewing it, and troubleshooting issues. How much they can cover depends on how large a data set they have been trained on; to that extent, instead of searching in Google, we can get the same information from the model, and whatever we ask effectively in the prompt, we get back in the most effective way. Now, we got a response; can we believe it is 100 percent right? Of course not. Why? Because every AI model is trained on a particular set of data, and the question is how vast that data is.
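A tree-of-thoughts-style request, as described above, can be approximated in a single prompt that asks for several labelled perspectives plus a final judgement. This is a simplified single-prompt sketch of the idea, not the full branching search procedure from the research literature; the wording is my own.

```python
# Simplified tree-of-thoughts flavour: ask the model to branch into
# several labelled perspectives, then pick the strongest one, so the
# user can drill into whichever branch interests them.

def tree_of_thought_prompt(question, n=3):
    """Request n distinct perspectives plus a concluding comparison."""
    return (
        f"{question}\n"
        f"Explore {n} distinct perspectives, labelled Perspective 1 to "
        f"Perspective {n}. Then state which perspective you find strongest and why."
    )

print(tree_of_thought_prompt("Why might a 10-year-old develop myopia?"))
```

The labels matter: they give the user stable handles ("tell me more about Perspective 2") for the follow-up questions the talk describes.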
Whether the AI model covered the whole of Wikipedia, or the whole of Google, or the entire internet, depends on the coverage of its training data, and its responses come from that training data. It is our responsibility not to rely on the response 100 percent, but instead to validate it. Is the response relevant to what I am asking, and to the problem itself? Is it logically structured, understandable for the user, and easy to follow? A double confirmation is also required on accuracy. If we ask for the financial figures of a company's growth, the model will give a result, and if we then search the internet for the latest figures, we might get different numbers. Why? Because if the model is trained on data only up to 2023, it will never fetch 2024 details. In that case, for the latest results, we have to rely on other sources. So accuracy needs to be validated: we need to know how the model was trained and on what data. Still, with all these pros and cons, we can use the output as a guideline. We need to validate accuracy, coherence, and relevance, and we need to test the same prompt in different versions, then validate and compare the outputs, because the AI model should be able to give the same relevant answer when we ask the same prompt with different wordings. We can keep adjusting the phrasing, context, and examples based on that feedback, and finally we get refined model performance. Now, coming to the conclusion and best practices.
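The advice above about testing rephrasings of the same prompt can be sketched as a tiny comparison harness. The `model` argument here is a stand-in callable, not a real API client, and the consistency check (exact string equality) is deliberately crude; in practice you would compare meaning, not bytes.

```python
# A/B-test prompt variants: send several rephrasings of the same
# question and check whether the answers agree. `model` is any
# callable from prompt string to response string.

def compare_variants(model, variants):
    """Return each variant's response and a naive consistency flag."""
    responses = {v: model(v) for v in variants}
    consistent = len(set(responses.values())) == 1
    return responses, consistent

# Stand-in "model" that always gives the same advice, for demonstration.
fake_model = lambda prompt: "rest, nutrition, limited screen time"

_, ok = compare_variants(fake_model, [
    "How can a 10-year-old protect his eyesight?",
    "What helps preserve a ten-year-old boy's vision?",
])
print("consistent:", ok)
```

Swapping in a real model call and a softer similarity measure turns this into the iterate-and-refine loop the talk recommends.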
Prompt engineering is essential for maximizing AI model performance; it is one of those future skills that technologists and everyday users alike will be looking into. Specificity, context, and clear instructions drive better results from an AI model. Experimentation and iteration are key to optimizing the outputs: we should try multiple prompts instead of relying on a single response, and provide more and more examples and context for what we have in mind, because the magic here is that AI models understand whatever context and examples we set. Continuous evaluation and refinement of prompts is required for more and more accurate results. That brings us to the conclusion, and to the best practices needed for advanced prompting and the best AI performance. So, happy prompting; get your responses effectively, and kindly reach out to me for any further clarifications. Thank you for today's session.

Reetha Vadakkekkara

Azure Solutions Architect @ Walgreens Boots Alliance

Reetha Vadakkekkara's LinkedIn account


