Conf42 Prompt Engineering 2024 - Online

- premiere 5PM GMT

AI Orchestration: Getting Super Agents to Complete a Mission

Abstract

I will discuss how AI agents can be orchestrated to manage a business, particularly for individuals with side hustles. The focus will be on leveraging AI to “multiply efforts.” I will also address safety concerns, challenges, and potential solutions to overcome these obstacles.

Summary

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Securing multiple income streams is something that has always been on the rise in South Africa. We even have a very scientific name for it: side hustles. Students, corporate employees, and even those who have retired have side hustles. According to the 2021 BrandMapp Consumer Insights Survey, 30 percent of middle-class adults have side hustles such as running small businesses, home industries, and jobs that are completely different from their main employment. While side hustles offer a path to financial freedom and creativity, they can lead to burnout due to the demands on time and energy. This can leave some people discouraged from having side hustles, or even performing poorly at their main jobs due to the growing demands of their side businesses. Now, the traditional approach to solving this has always been labor leverage: hiring people to do some of the work so you can focus on the more important stuff. The issue with this approach is that not everyone can afford it, and it can lead to financial stress when the business is not doing so well in revenue. We as the current generation of young people are rather blessed, because just recently the side hustle community, or gig economy, has come to realize the potential of AI in solving this problem for us. With AI, we can multiply our efforts as if we have hired people. We can scale our businesses to 150 people and have only one that is human. Today we are going to talk about a topic that has a lot of people rethinking how they do side hustles: AI. Now, as many of you might know, my name is Batandwa, and I'm a junior software engineer at BBD. Throughout my life, I've always been fascinated with the integration between business, people, and technology. So much so that last year I graduated with a BSc in Computer Science and Business Computing from the University of Cape Town.
Most importantly, I believe that technology is about people, and that LLMs are the closest we have ever come as a civilization to getting the machine to speak the human language. Now, an LLM is essentially a black box: we feed it a prompt in the form of text and have it cough out an answer for us, and sometimes media like images and audio. This is your ChatGPT, your Gemini, your Claude, your Llama, et cetera. We can take the output of one LLM and feed it as input into another. This creates what we call chaining, or layering. It also turns out that we can take a chain of layers and encapsulate them under an identity, which can include a name and contact details like email and phone number, to create what we call an agent. This is what will be your employee in your business. Now, because agents have names and contact details, they can communicate with the external world and with each other. When this happens, you have an orchestration: AI agents working together to achieve a mission. And this mission could be anything from marketing and customer service to product recommendations and financial management. The possibilities are endless. What's important to note here is that we have delegated work to something we don't even pay a salary, so you can focus on the more strategic decisions for your business and your personal commitments. Now, before we get to the demos, there are a few ways we can think about orchestration. On an agent level, since you have multiple layers, you really want to be clear as to what each layer does. An agent is an AI being. It needs rules to make decisions by. It needs to know what is allowed and what is not allowed. We also need to recognize that overloading it with instructions and rules in a single prompt could lead to confusion and hallucinations, hence the need for layers. One way we can implement layering is to go from abstract to concrete.
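The chaining idea described here can be sketched in a few lines of Python. `call_llm` is a stand-in stub for a real model client (an OpenAI or Anthropic call in practice), and the agent's name, email, and layer prompts are made up for illustration:

```python
# Minimal sketch of chaining: the output of one LLM call becomes the
# input of the next. `call_llm` is stubbed so the sketch is runnable.

def call_llm(system_prompt: str, user_input: str) -> str:
    """Stub LLM: tags the input with its layer so the chain is visible."""
    return f"[{system_prompt}] processed: {user_input}"

def chain(layers: list[str], user_input: str) -> str:
    """Feed the user input through each layer, top to bottom."""
    text = user_input
    for system_prompt in layers:
        text = call_llm(system_prompt, text)
    return text

# An "agent" is just a chain of layers wrapped in an identity.
agent = {
    "name": "Andy",
    "email": "andy@example.com",  # hypothetical contact details
    "layers": ["Rewrite as a sales reply", "Translate to a friendly tone"],
}

reply = chain(agent["layers"], "Do you sell green apples?")
```

With a real client, each layer would be its own prompt plus model call; the shape of the loop stays the same.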
That is, the layer at the top does not necessarily need to know how to pull data from the database. It just needs to know which database to pull from, and whether or not that database has the information it needs. The actual SQL query generation can be delegated to the more concrete layers below. Layering also allows us to add security middleware. For example, we can add a middle layer between the top layer and the layer below that looks at a request in isolation and decides whether or not to allow it to proceed to the layer below, which in our case will be the one connected to our database. This is powerful because we can implement morality and ethics, and ensure the agent does not deviate from the mission, which in our case is, again, to make the business owner more money. To demonstrate the idea of layering, I will show two examples. One will be a simple agent with just a single layer, and the second will be an agent with three layers, where the second layer is the security layer. But before we get to that, just to give you an idea of what you need to create agents: the first thing you need is an orchestration platform. I'll be using something I built for my own ideas, which is OAI, but you can also use LangGraph, you can use Streamlit, or you can just go with plain Python and VS Code and write your agents from scratch. It's all up to you. The second thing you need is an idea of what agents you need and what each agent is supposed to do. And lastly, you need patience, that is, patience to test. Going to the demo, I'm just going to open my first example here, DevFest 2024, the single-layer example. What you're going to see here is just a simple agent. If you look at the prompt, its name is Andy, and it's a nice salesperson for a small apple farm based in South Africa, and so on.
And it's supposed to reply to all customer queries with a bit of humor, so it tries to be funny. The LLM I'm using for that is just GPT-3.5. And I also added the voice service to get the agent to talk to me. Services, at least in the context of OAI, are tools that the agent has access to, that it can use to go about completing whatever task you give it. In our case, I gave it access to the voice service, so it can speak to me. Now, obviously this is a simple prompt, there's nothing special about this, it's like ChatGPT prompting and all of that. So if I open that, I can say: Hello, what is your name? There we go: "Hello. My name is Andy, the friendliest apple aficionado at our farm. How can I help you today?" I just realized I'm not sharing my audio, so you can't really hear it, but the text that is written here actually reads out to me. But that's fine. That's just a single agent with a single layer. The second example that I want to look at is the important one. So let's close this and open the customer queries security layer example. This is the agent with the security layer. If I go to layers, like I mentioned earlier, you go from abstract to concrete: you have the layer at the top, then the middle layer that implements security, and then the data layer. The way this agent is supposed to work is that the data layer has access to a SQL database, as you can see, via OAI SQL Server. That database has information about customers and sales. So we can ask this layer
information about, let's say, how many customers we had for the year 2024, or how many of our customers bought from us last December, something like that. It's all up to you. And for this one, I'm using GPT-4o. So that's the data layer. But now the data layer needs to be protected, because we don't want just anyone to have access to this information. So what I did here is I added a second layer called security, like I mentioned earlier, that will take in a request from the top layer and look at the email address that request is coming from. Currently it's connected to Gmail. How that works is that you have a request coming from the top layer, and this layer will look at the email and compare it to what we have here in the allow list. If the email coming in is in the allow list, then we want that request to proceed to the layer below. If it's not in the allow list, then obviously we want to block that person from having access to this kind of information, which only makes sense. And here I'm also using GPT-4o, just to show you that you can use any model you want, actually. I haven't loaded my Anthropic API keys, so I don't have Claude or those models. And then here at the top, you have the top layer, which receives the emails and forwards them to the layers below, which in our case would be the security layer. Then the security layer applies its checks, and the request can proceed only if the sender is in the allow list. Really nothing special here, just another prompt, and again GPT-4o. So how this is supposed to work: this agent is connected to an email account on Gmail, using OAI Mail. I can actually go to my Gmail and send it an email, but the issue with that is that it's going to take at least 15 minutes for the service I'm using to get that email to my agent. Since we don't have that much time, we're going to fake emails coming into the agent. So I'm going to chat to the agent and say: new email from me, and, just to make this bigger, let's say: Who are our customers? And then I run that. What's supposed to happen is that I get a list of all our customers, because this email is in the allow list. "Here is the information on our customers. One, Green Valley Grocers. Two, Sunny Acres Market. Three, Hilltop Organics." So there we go, those are our customers, and they're coming directly from the database. It's not just made-up names, which is really cool. But now, let's say the same email came from, what's the most fraudulent email you can think of? I can't think of a name. Let's say it came from mandy@gmail.com, and Mandy wants the same information that I asked for. Let's see if it's going to allow Mandy to get that information. "I'm sorry, but I cannot provide access to customer information because your email is not on the list of authorized users. If you have any other questions or need further assistance, please let me know." Okay, so as you can see, it did not allow that request to go through, because Mandy is not on the authorized list of users, which is super, super cool.
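Stripped of the LLM plumbing, the security layer's decision logic amounts to an allow-list check. This is a hypothetical sketch, not OAI's actual implementation; the addresses and the stand-in data layer are invented for illustration:

```python
# Sketch of the security middleware layer: only senders on the allow
# list may have their requests forwarded to the data layer below.

ALLOW_LIST = {"owner@smallapplefarm.example"}  # hypothetical addresses

def data_layer(request: str) -> str:
    """Stand-in for the concrete layer that queries the SQL database."""
    return ("Here is the information on our customers: "
            "Green Valley Grocers, Sunny Acres Market, Hilltop Organics.")

def security_layer(sender: str, request: str) -> str:
    """Forward the request only if the sender is on the allow list."""
    if sender.lower() in ALLOW_LIST:
        return data_layer(request)
    return ("I cannot provide access to customer information because "
            "your email is not on the list of authorized users.")

allowed = security_layer("owner@smallapplefarm.example", "Who are our customers?")
blocked = security_layer("mandy@gmail.com", "Who are our customers?")
```

In the real orchestration the comparison and the refusal message come from an LLM prompt rather than an `if`, but the middleware boundary sits in the same place.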
So basically, that's how you would go about implementing security using layers. Back to the presentation. Of course, like I mentioned earlier, agents communicate with each other. And by breaking down a complex problem or task into multiple agents, we apply the divide-and-conquer strategy often used in computer science. This means that no matter how complicated the task, we can decompose it into smaller, manageable tasks, continuing this process until we reach a level where each part is straightforward. Each smaller task can then be assigned to an individual agent. And once the agents have completed their tasks, they can bring their solutions together, gradually building up to solve the larger problem. An example of this is what I call the proxy-to-expert approach, where you have a main agent that receives requests and forwards them to agents that are better suited to provide responses. Now, the beautiful thing about proxy-to-experts is that we can provide one interface to the user, the main agent, and have a hundred or even a thousand experts that it can refer to for advice. The next demo that I want to show you is an example of this. Going back to the demo and opening the proxy demo: this is an orchestration with three agents. One of the agents is a customer expert, so this agent knows everything about customers. Again, we could pull this information from the database, but here I just hard-coded it into the prompt for simplicity. So this is the customer expert, it's a customer expert for a small apple farm, and it answers questions relating to customers, and these are all our customers. Then we also have a sales expert, also for the small apple farm; actually, they're all for the small apple farm.
And the sales expert stores information about the orders that customers made, with the date and the number of units that they bought. And by units, we mean apples. Again, we could have taken this from the database using the SQL service, but I'm hard-coding it here for simplicity. The last agent, the important agent we have here, is the proxy agent. The proxy agent knows about the customer expert and the sales expert. When it receives requests about sales, it forwards them to the sales expert, and when it receives requests about customers, it forwards them to the customer expert. I can show this to you. Who am I chatting to? I'm chatting to the main agent. Let's close this one. I'm going to say: Who are our customers? What's going to happen here is that it's going to reach out to the customer agent, ask it for the list of customers, and then relay that list back to me. This is cool. Another thing we can try: the reply lists "Fresh Apples Limited", customer ID three, and says "If you need more details about their purchases or interactions, just let me know." Okay, so what about JJ Stores: how many orders did JJ Stores have for the year 2024? So what's going to happen here? "I'm sorry, but I don't have any information on orders or customer activity for the year 2024, as my training only includes data up to October 2023. Please contact the sales department or check your current Customer Relationship Management (CRM) system for the most updated order information." Okay, I think it hallucinated there. What it's supposed to do is reach out to the sales agent and ask it about sales for JJ Stores for the year 2024.
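The proxy-to-expert routing just described can be sketched like this. In the real demo an LLM decides which expert a request belongs to; here a naive keyword match stands in for that decision so the sketch stays self-contained, and the experts' answers are hard-coded the same way the talk hard-codes its prompts:

```python
# Sketch of the proxy-to-expert pattern: one main agent forwards each
# request to whichever expert is better suited to answer it.

def customer_expert(question: str) -> str:
    return "Our customers are Green Valley Grocers, Sunny Acres Market, ..."

def sales_expert(question: str) -> str:
    return "JJ Stores placed 4 sales orders in 2024."

# Topic keywords -> expert. A real proxy would use an LLM classifier here.
EXPERTS = {
    "customer": customer_expert,
    "sales": sales_expert,
    "order": sales_expert,
}

def proxy(question: str) -> str:
    """Forward the question to the first expert whose topic it mentions."""
    q = question.lower()
    for topic, expert in EXPERTS.items():
        if topic in q:
            return expert(question)
    return "I don't have an expert for that yet."

answer = proxy("How many sales orders do we have for JJ Stores?")
```

The user only ever talks to `proxy`; adding a hundredth expert is just another entry in the routing table.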
And I think it's because of the year 2024 that it gave that answer. Let's try: how many sales orders do we have for JJ Stores? "We have a total of four sales orders for JJ Stores." Okay, there we go, it gave us the correct answer for all sales orders for JJ Stores. I think what threw it off was the year 2024, since it was only trained up to October 2023, so I could maybe have modified my prompt a little to be clear about how to handle years, for example. But yeah, that's an example of hallucinations. A cool example. We have four sales orders for JJ Stores, and if we actually go to the sales expert and count the orders for 2024 for JJ Stores: that's one, two, three, and four. So it's correct, we have four orders for JJ Stores for the year 2024. That's how you would go about implementing proxy-to-experts, which allows you to have a main agent that can refer to as many experts as it needs to give you the information you want. You just need to be clear in your prompting, because you really want to reduce hallucinations, which is the next part of my talk, actually. A conversation about AI running a business does, of course, come with a few concerns, one of which is hallucinations, as we saw earlier. How can we trust that our agents are doing the right thing, when we can see, for example with ChatGPT, that it can sometimes give the wrong answer? Now, LLMs hallucinate due to the nature of their training. The data they were trained on can contain biases, inaccuracies, and misinformation, which can be amplified during prompting, especially when there is not enough context to drive the LLM to a more accurate response, like we saw earlier. There are a few things we can try to reduce hallucinations, one of which is to choose the right model.
That is, if you have a problem that involves coding, for example, you really want to choose a model that is good at understanding code. If you have a problem where the agent needs to analyze pictures, you want to choose a model that is better suited to deal with pictures and media, et cetera. Choosing the right model is important because you don't want to use a model for something it's not really supposed to be good at; you'd just be wasting resources and money. So that's the first thing you can try to reduce hallucinations: choose the right model. Secondly, we need to learn good prompting techniques, and there are two things we can try here. The first is the chain-of-thought method. That is, when you write your prompts, you also ask the agent to break down the steps it's going to take to get to the solution. This works really well, even with humans, actually: the minute you have to think about how you're going to get to a solution is the minute you tend to think deeper about the solution. In the case of LLMs, it can drastically reduce hallucinations. Secondly, we can look at few-shot prompting. That is, you prompt the LLM with examples. So basically, you ask it for green apples, but you also tell it what green apples look like. What's going to happen is that it's going to take the framework that you give it and combine it with the facts it has for that particular problem, to give you a response that is closer to the true response. The second-to-last thing we can try is RAG models. That is, every time we prompt a model, we also give it access to documents that it can refer to for more information about the specific task that we're asking it to do.
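One way to picture these two prompting techniques together is as prompt construction: prepend a chain-of-thought instruction, then a handful of worked examples, then the actual question. A minimal sketch, with made-up apple examples:

```python
# Sketch of combining chain-of-thought and few-shot prompting when
# building the text sent to the model. The examples are invented for
# illustration; the point is the shape of the prompt, not its content.

def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    # Chain of thought: ask the model to reason through its steps first.
    parts = ["Think step by step and explain your reasoning before answering."]
    # Few-shot: show worked question/answer pairs as a framework.
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}")
    # Finally, the real question, left open for the model to complete.
    parts.append(f"Q: {task}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Is this apple green?",
    [("Is a Granny Smith green?", "Granny Smiths have green skin, so yes."),
     ("Is a Red Delicious green?", "Red Delicious apples are red, so no.")],
)
```

The assembled string is what you would pass as the user or system message to whichever model you chose in step one.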
For example, if we prompt the model around refund policies, we can give it access to our website, or the page on our website that speaks about refund policies. Now, when it tries to provide a response for us, it can look things up on our website to make sure it's actually giving information that is relevant to our problem. So that's RAG, retrieval-augmented generation: you prompt with documents. Beautiful thing. The last thing you can try is to fine-tune the model. As with every machine learning model out there, there are parameters you can change to make the model better suited for the kind of task you want to use it for. One of the parameters LLMs have is temperature, at least in LangChain. Temperature allows you to control the creativity of the model. I think in LangChain it's a value between 0 and 1: the closer it is to one, the more creative the agent is, and the closer it is to zero, the less creative. That's one parameter, but there are more you can change to tune your models and reduce hallucinations. Now, as you all might know, hallucinations are a risk, and like any other risk, we can reduce the likelihood of it happening, but we can never be a hundred percent sure it will never happen. So another approach we can take is to reduce the consequences, or the impact, for when the risk does happen. This answers the question: if my agent does hallucinate, how much could we lose? There are a few ways we can go about this; I'll only mention two. The first is to adopt hybrid models. That is, you have the human and the agent working alongside each other: the agents will handle the routine tasks, and the human will handle the more high-stakes decisions.
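This routine-versus-high-stakes split can be sketched as a simple approval gate. The amount threshold is an assumption for illustration (in the talk, every actual payout waits for a human); the names are made up:

```python
# Sketch of a hybrid model: agents handle routine refunds, but anything
# high-stakes is queued for a human to review before money moves.

APPROVAL_THRESHOLD = 100.0   # assumed policy: larger refunds need a human
pending_human_review = []    # the human's work queue

def process_refund(customer: str, amount: float) -> str:
    """Let the agent act on small refunds; escalate large ones."""
    if amount > APPROVAL_THRESHOLD:
        pending_human_review.append((customer, amount))
        return "queued for human approval"
    return "refunded automatically"

small = process_refund("Green Valley Grocers", 25.0)
large = process_refund("JJ Stores", 500.0)
```

The gate is what catches a hallucinating agent before it pays out a refund that should never have been approved.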
An example of this: let's say in an online store, a customer wants a refund, and the agents are the ones handling the refund process, right up until we send the money to the customer. When we have to actually send the money, the human comes in and is the one to log into the bank account and do the clicking to send money to the customer. So that's hybrid models. If our agents do hallucinate and want to refund a customer that is not supposed to be refunded, then the human is going to catch that and say, no, we can't do that. Secondly, you want to add monitoring and observability systems, like LangSmith, to catch hallucinations before they spread through our orchestration, or even worse, to customers. The beautiful thing about such tools is that they also allow for continuous learning and improvement of our agents to meet the evolving needs of our business. So that's hallucinations. Obviously, there are a few more concerns around LLMs, especially in the context of putting an LLM to work to handle part of running your business, and because of time, we can't really talk about all of them. But through my interactions with people in the field of AI and machine learning, I've come to learn that hallucinations are the biggest one, and that's why I really spoke about them. If there are more issues, we can always chat; there should be something below on the YouTube page that allows us to chat. So, just to close off, I'd like to say: as we stand at the threshold of a new era in business management, the opportunity to leverage AI has never been more accessible or more impactful.
Imagine a world where you're able to scale your side hustle without sacrificing precious time, where you can focus on creativity and strategy while your AI-driven systems handle the routine. This isn't a vision for large corporations; it's a reality you can start building today. AI is here to amplify your efforts, not replace them. By thoughtfully embracing these tools, you're reclaiming the hours and energy that were once lost to the daily grind. You're not only growing a business, you're creating a more balanced life, one that allows you to focus on what matters most. Let's embrace this future together. Start exploring, take the small steps, and watch how AI can help you multiply your impact, magnify your success, and give you the freedom to thrive. Thank you.

Batandwa Mgutsi

Junior Software Engineer @ BBD



