Conf42 Prompt Engineering 2024 - Online

- premiere 5PM GMT

A Large Language Model Took Me on Vacation

Abstract

Discover how AI vacation planning unlocks advanced prompting techniques for coding and problem-solving. Learn to master context, explore “Indirect Conversation,” and elevate your AI interactions for smarter web applications and seamless user experiences.

Summary

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello. Welcome to this talk about how a large language model took me on vacation. My name is Jorrik Klijnsma and I'm a senior front-end engineer at Sopra Steria. Today I'm going to take you through a story. It's a story of my own, along with some core insights I gained while working with AI. We're going to see what I've learned from this interaction and how it improved my AI usage.

It all started over a year ago, when I began using ChatGPT to plan a holiday trip. I didn't have much travel experience at that moment, so I started to collaborate with ChatGPT to explore some options, and in the end we went to Porto in Portugal. I always see this trip as the start of my personal life with AI; it had quite an impact there. I had one of the best holidays I've had so far. And like I said, that was remarkable because we didn't have much travel experience. This collaboration with ChatGPT really took a lot of the figuring-out off my hands and gave us a carefree trip. I started to realize that AI can make a big impact on the non-digital world as well, and not only for me but for every person's life, whether they want it or not. During this trip planning I went through some steps and realizations, which I'm going to share with you.

The first one is about understanding the technique. While planning this trip, you start to notice that on the trip itself, things work differently. First of all, we were going to a country with a different language, but also how you go about getting your dinner, planning your activities or your daily schedule works differently than it does at home. These things can be so fundamentally different that you have to adapt to them. If you do things the way you do them at home, it's not going to work. This was part of the understanding for me: you should also use AI the way it was meant to be used. How does a large language model actually work? Understanding how a large language model works really improved the quality of my output.

We have said it so many times already, but it's a language model. I don't think we can emphasize it enough; it's even in the name, large language model. Yet we are so easily deceived by the power AI has that our conscious filters are easily bypassed, especially in the beginning. We probably all know and remember this situation: we started to be very polite in our interactions with AI and with large language models, mainly because somewhere in the back of our minds we thought they might take over the world one day. I'd rather be nice to them, so hopefully they'll be nice to me as well, instead of the other way around, where we'd be direct and not as nice but still get good results. But AI is still a language model, and context changes things, yet we are easily deceived by that. We often get the feeling that large language models have more understanding, more knowledge, but they have no absolute knowledge.

As an example: when we ask ChatGPT who the son of Mary Lee Pfeiffer is, it's going to give us a pretty convincing answer. Only it's not true. Knowledge-wise this isn't correct; it's not a fact. Daniel Baldwin, the answer it gives, is not the son of Mary Lee Pfeiffer. But by changing the context, because Mary Lee Pfeiffer is not that famous but her son is, and asking instead who the mother of Tom Cruise is, who is the actual son of Mary Lee Pfeiffer, it tells us the right answer.
So this change in context already determined the knowledge the AI appears to have, and how easily we are deceived. We had the same with images, but with images we have a natural filter: when we see something, we more easily notice that things are off. With images, the models started out with very weird and illogical ways of showing fingers and hands, and that's not nice. So we started training on those things to make them look way better and get a nicer representation of hands, and the AI got better at it, got to understand it. But in the meanwhile, roller coasters, for example, don't get nearly as much attention as hands and fingers do. So you see these images that look very natural, very realistic, only to realize they are not real, not possible, not based in reality. With images we see and feel more naturally that this is not reality. With text it's harder to get that context, to see how true to reality it is. This is a particular problem for text models, and it's different from image, video or sound models, because text is not natural. Images and the other modalities we perceive naturally, but text is something we had to learn, and this is why the difference is so big.

So in my preparation I asked AI to help me prepare a packing list. The reply I got had some formatting in it, and this was a learning for me: using formatting to give context or to emphasize certain parts of a text is really helpful. It gives the model some structure to work with and to understand what is being asked. We can use these building blocks to give context through formatting. Let me show you a couple of them. To distinguish certain sections of your prompt, we can use different formatting tools: backticks to highlight snippets of code; triple quotes to distinguish certain parts from other parts, like an attachment or the text you're talking about; a line separator to distinguish sections within your prompt; a heading with a colon to show that what follows is related to that heading; and lists, whether unordered or ordered, which also give context to a section.

This also goes for inline context and syntax context, where we use a couple of the same items in a more inline version. Backticks can mark a path or a certain variable; quotes can highlight certain parts of a sentence; we can use placeholders, like a name we later fill in with the real person's name; capitalization to really emphasize a word; or even special characters to ask for clarification on certain parts, like expressing a key-value relationship. And then there are XML tags, which are really nice, especially with Claude by Anthropic: XML tags can really emphasize what different sections are about and label them accordingly.
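As a rough, illustrative sketch of how these building blocks can come together in a single prompt, it could look something like the Python snippet below. The packing-list wording is made up for illustration, not the exact prompt from my trip, and the call to an actual model is left out; only the prompt string is built and printed.

# Hypothetical sketch: one prompt combining several of the formatting building
# blocks mentioned above: a {placeholder}, a heading with a colon, a list,
# capitalization for emphasis, XML-style tags and a line separator.

destination = "Porto, Portugal"  # placeholder value, filled in later

prompt = f"""
You are helping me prepare a trip to {destination}.

Requirements:
- five-day city trip in summer
- hand luggage ONLY
- include clothing, documents and electronics

<trip_notes>
We booked a hotel near the city centre and plan to walk a lot.
</trip_notes>

---

Task: produce a packing list, grouped per category, as an ordered list.
"""

print(prompt)  # sending this to a model of your choice is omitted here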
Next to formatting and the way of writing, you can also bring in certain approaches, prompt engineering techniques, to communicate with AI. Here are some well-proven prompting techniques that, during the phase when I was preparing this trip, I got to learn and understand: what they mean and how they work. Let's go through a couple of them.

You're probably familiar with the role prompt; it's maybe one of the most famous prompting techniques. This is where you tell the AI to act in a certain way by imitating a certain role. It looks like this: you say "Act as a ...", fill in the blank, and then ask for a task that this role would fulfill. I think it's a really nice prompt, but I also think it can be improved on, and I will show that later on.

Another prompting technique is the unfinished chat prompt. This one is really interesting, fun, and can be surprisingly useful; I even stumbled onto it by accident sometimes. This prompt focuses on building towards a question by simulating a chat with another person and then stopping right at the moment where the answer from the other person would come, and having the AI continue the chat with that answer. In the next example you can see me wanting to know how to change dark and light mode on my phone, and I'm submitting a chat with an Apple support employee, stopping at the moment where the employee is going to give me an answer. It provides a clear answer: it has the context of a chat, and in that context it will give me the right answer.

Another known one is zero-shot prompting. This technique is just handing over the question without any examples, for example "translate this into that", and then it just goes along. And where there is zero-shot prompting, there is of course also few-shot prompting. With this technique you have a little more control over the output by showing some examples, so it continues the feel and aesthetic of those examples. But it can also become too fixed on, too stuck to, the examples that were given.

On one of the mornings of our trip we wanted to get some pastries, and we asked the employee of the hotel to guide us to a local bakery where we could get the local pastries. He told us to go to the boulevard, and we went there on a sunny summer day. There was some music playing outside, and we entered the shop, which was full of all kinds of pies, cakes and other pastries. Behind the counter was a baker, and we started to pick which pie we wanted; they all looked very tasty. The baker kept looking at me: what do you want, which pie do you like? And then I ask him: give me the recipe of your best apple pie. That feels weird, right? Because if I ask a baker to act like something, it's a different context, and for us that context is very understandable: such a request is not suitable in this bakery, it doesn't fit. But in language alone, that context doesn't show.

This is what I like to call indirect conversation. Take this sentence: "Give me lyrics for a road trip song in the summer." It is not clear what the context is. I can ask this of an eleven-year-old child and not get really good results, or I can ask this of a big music artist who can give me really good lyrics. The sentence doesn't convey that context. Now take this sentence: "The music you make is amazing to listen to while being on a road trip. After five number-one hits, what will the lyrics of your new road trip song be?" This will get a different result, and probably a higher-quality one, because there is way more context inside this sentence. So this is what I like to do: have the context be part of the conversation itself, instead of always stating the context explicitly, or stating a context you may not actually want, like an actor, because in the end the result is not as good as what the actual professional being imitated would produce.
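To put the techniques above side by side, here is a small illustrative sketch in Python. The wording of each prompt is made up for illustration, and the actual call to a model is left out; the point is only the shape of each prompt.

# Role prompt: "Act as a <role>" followed by a task that role would fulfil.
role_prompt = (
    "Act as an experienced travel agent. "
    "Plan a three-day itinerary for a first visit to Porto."
)

# Unfinished chat prompt: simulate a conversation and stop right where the other
# person's answer would begin, so the model continues with that answer.
unfinished_chat_prompt = (
    "Customer: Hi, how do I switch my phone between dark mode and light mode?\n"
    "Apple Support: "
)

# Zero-shot: hand over the task without any examples.
zero_shot_prompt = "Translate this into Portuguese: Where can I find a good bakery?"

# Few-shot: show a couple of examples so the model continues their style,
# at the risk of sticking too closely to them.
few_shot_prompt = (
    "English: Good morning! -> Portuguese: Bom dia!\n"
    "English: Thank you very much. -> Portuguese: Muito obrigado.\n"
    "English: Where can I find a good bakery? -> Portuguese:"
)

for p in (role_prompt, unfinished_chat_prompt, zero_shot_prompt, few_shot_prompt):
    print(p, end="\n\n")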
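And to show how the road trip request changes as more context is woven into the sentence itself, here is the same request sketched three ways. The role-prompt line is made up for comparison; the other two sentences are the ones from above, and again no model call is made.

# Three ways to ask for the same lyrics, from least to most context.

# Direct request: who is answering, and at what level, stays implicit.
direct_request = "Give me lyrics for a road trip song in the summer."

# Role prompt: the role is stated explicitly, but it is still an instruction to act.
role_request = (
    "Act as a famous songwriter. "
    "Give me lyrics for a road trip song in the summer."
)

# Indirect conversation: the context travels inside the sentence itself.
indirect_request = (
    "The music you make is amazing to listen to while being on a road trip. "
    "After five number-one hits, what will the lyrics of your new road trip song be?"
)

for p in (direct_request, role_request, indirect_request):
    print(p, end="\n\n")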
And I hope this talk made you aware of how important that context is and some guidelines to use that context to, engineer your prompt. I'd like to thank you for your attention and hope to see you next time.
...

Jorrik Klijnsma

Senior Front-end Engineer @ Sopra Steria

Jorrik Klijnsma's LinkedIn account Jorrik Klijnsma's twitter account


