Transcript
This transcript was autogenerated. To make changes, submit a PR.
Today, I would like to talk to you about prompt automation.
We all know about creating prompts, but using command line tools and automating
them can bring us more efficiency.
The rise of AI in automation.
AI is obviously everywhere.
We are here at this conference.
We want to hear about it.
That's a no brainer.
However, there are certain challenges in integrating AI tools.
There's a proliferation of AI tools.
There's quite a few of them, and I always found command line based
workflows to be the most efficient.
They lend themselves very well to be automated, and I feel that integrating
AI in this way into your workflow is just the best bang for the buck.
This talk will be about simplifying AI integration. We'll explore how we can make this command line integration more accessible, easier, and more efficient.
But before we start a little bit about me, my name is Piotr.
I work at Loft Labs as a head of engineering enablement.
You can reach me via my web page or an email.
I typically work with cloud native technologies, so if you have questions around Docker, Kubernetes, vCluster, Linux, and so on, I'd be happy to chat.
Otherwise, in my spare time, I do a lot of open source work.
I tweak my dot files as a hobby.
This is called perpetual ricing, I believe. If you know, you know. If you don't, it just means I'm a geek who likes to tweak his computer.
All right, so let's talk about how to create prompts more easily. There are a few ways I found that prompts can be created in an easier way.
First of all, if you're an Anthropic user and you have access to Claude, they recently introduced a prompt generator.
So let me show you how it works.
You have this generate prompt command here, and we can simply say something like: summarize my daily activities. That's just a simple sentence, and we can click generate.
What it will do is try to generate a pretty solid prompt using best practices in prompt generation, such as using tags, separating the prompt into sections, and so on.
So this is a good option if you have access to Claude and you want to generate prompts via the web UI.
That's one way of doing it.
However, we are focusing today on CLI.
But there's also other ways.
We can use Fabric. We'll talk about Fabric in a second, but we can use it to essentially help us improve or create our prompt.
You can see this in action here. I input the command echo summarize my daily activities and piped it into the Fabric command line. It helped us generate a slightly different prompt, not as elaborate as Claude's, but pretty good as well, providing some kind of springboard we can work with more easily.
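As a rough sketch, the command I just ran looks like this (assuming Fabric is installed and configured with an API key, and that improve_prompt is the pattern name in your install; the check makes it degrade to a message on machines without Fabric):

```shell
# Pipe a rough one-sentence request through Fabric's improve_prompt pattern.
if command -v fabric >/dev/null 2>&1; then
  echo "summarize my daily activities" | fabric --pattern improve_prompt
else
  echo "fabric not installed; skipping the example"
fi
```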
And the third way I found very useful is to just maintain your own prompt snippets.
I'm using a program called AutoKey on Linux.
I believe the equivalent on Windows is AutoHotkey.
This is a very simple program that enables you to define key bindings or abbreviations, and then you can insert your prompt snippet into any text box.
So I use this very often when I want to summarize my discussion with ChatGPT.
I have a snippet which starts with double exclamation marks and then summary, and it inserts the prompt right there in the chat box.
So those three ways are a very nice place to start creating and working with prompts.
So, I briefly mentioned Fabric. Fabric is a really cool project, created by Daniel Miessler, fully open source, with a lot of activity on GitHub.
If you don't know about it, definitely check it out.
It really democratizes and simplifies prompts. In Fabric speak, prompts are called patterns; those are simple prompts that you can use. It essentially manages your prompts for you.
There's quite a few other enhanced capabilities that we will see in action.
And in and of itself, Fabric is already quite powerful; I will show you how it works in a second.
So a few features.
Fabric uses patterns, so those are reusable prompts
that you can define yourself.
And they are going to drive your agent behavior or your LLM behavior.
It's very modular, in the sense that you can pipe prompts into each other: the output of one prompt, one LLM output, is passed as input to another prompt.
It supports multiple interfaces and is highly extensible; not necessarily extensible as a project, but extensible because it is a CLI. Since it is a CLI working with a text interface, we can extend it with all kinds of other tools.
So this is just a graphical representation of how it works.
A user inputs a request. Typically that would be an echo from the command line, reading a file, or piping in the output of an actual command. Then you select a Fabric pattern, and the model outputs the result.
So what are the benefits of using Fabric as a command line tool? Obviously it simplifies the integration, because there are a lot of pre-built patterns.
We'll see some of them in action in a moment.
You can add your own patterns, and this is already great.
On top of that, it's very customizable and modular.
You can create your own stuff and, as I mentioned earlier,
integrate with other things.
For me, it improves efficiency, because I created some automation on top of it, which I will show you.
And it just gives me an ability to quickly select the pattern, pass something
through it, and then quickly have an output without leaving my terminal.
And it also enhances accessibility.
I believe the idea of having prompts in a form you can share with your colleagues, or just store for yourself in a GitHub repository, is really cool, and that's definitely a very useful trait in and of itself.
So, I mentioned custom prompts. Those are simply prompts like the other prompts in Fabric, except you create them yourself.
Here you can see the Fabric folder, where I keep my custom prompts. Every prompt has its own folder, and inside that folder you usually have a system.md, which is a markdown file.
If we go to the scripter prompt, you can see it's quite heavy. This is a prompt I created to help me write bash scripts. If I need something super quick, I don't usually bother writing it myself; I just generate it real quick, and then maybe I'll fix it or improve it further. So that's just a prompt, as you might expect to have for yourself as well.
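To sketch what that layout looks like (the prompt text is a made-up miniature, not my actual scripter pattern, and the commands write into a temporary directory so they're safe to run anywhere; Fabric's real pattern directory on Linux is typically ~/.config/fabric/patterns):

```shell
# Every pattern is a folder with a system.md file inside it.
# A throwaway directory stands in for ~/.config/fabric/patterns here,
# so nothing real is touched when you run this.
patterns_dir=$(mktemp -d)
mkdir -p "$patterns_dir/scripter"
cat > "$patterns_dir/scripter/system.md" <<'EOF'
# IDENTITY
You are an expert bash script writer.

# INSTRUCTIONS
Given a short description, output a complete bash script.
If the request is ambiguous, ask one clarifying question first.
EOF
ls "$patterns_dir/scripter"
```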
All right.
So those are custom prompts.
However, since this talk is about automation, we want to take the tools that are available, one of them being Fabric, and make them better, more usable.
My goal when I created the automation around Fabric was to inject more context. You saw this earlier with the command where I piped the echo output and instructed Fabric to improve the prompt; I used the improve prompt pattern.
It was one command.
You could imagine me piping more and more Fabric prompts together, but I found that a bit cumbersome.
So I added a tool called vipe. If you don't know vipe, it's part of the moreutils package, and it essentially enables you to edit text in a pipe before passing it to another command. Think of it this way: a command produces output, and instead of passing that output unchanged to the next command, you can slot vipe in between the pipes and edit the output. I found it very useful in this specific case.
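A sketch of this idea (guarded, since both tools may be absent; vipe is the moreutils command, and the pattern name is from my setup):

```shell
# Without vipe:  echo "..." | fabric --pattern improve_prompt
# With vipe, the text pauses in your $EDITOR for edits before fabric sees it.
if command -v vipe >/dev/null 2>&1 && command -v fabric >/dev/null 2>&1; then
  echo "summarize my daily activities" | vipe | fabric --pattern improve_prompt
else
  echo "vipe or fabric not installed; skipping the example"
fi
```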
I also enabled prompt chaining.
Fabric already enables prompt chaining, but you have to chain the prompts upfront, so you have to know in advance what you want to do. And I might not know, because an LLM can give me an output that I want to slightly modify, or I might change my mind about where I want to prompt next or where I want to pass it.
Also, on prompt selection: Fabric doesn't really allow you to dynamically select a prompt, so I used fzf, which, if you don't know it, is a super awesome project for command line users. It's essentially a fuzzy finder that you can use in various ways.
And at the end of this process, we end up with the output of the last Fabric prompt in a Neovim buffer.
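Put together, the pieces above can be sketched as a loop like this; it's a simplified, hypothetical reconstruction of my orchestrator script, with the patterns path, the --session flag, and the menu being assumptions based on my setup:

```shell
#!/usr/bin/env bash
# Hypothetical orchestrator sketch: fzf picks a pattern (previewing its
# system.md), vipe collects multi-line input via the editor, and fabric
# runs the pattern inside one shared session so chained prompts keep
# the chat context.
orchestrate() {
  patterns_dir="$HOME/.config/fabric/patterns"   # default Linux location
  session="orchestrator-$$"
  while true; do
    # Fuzzy-select a pattern name; bail out if the selection is cancelled.
    pattern=$(ls "$patterns_dir" \
      | fzf --preview "cat $patterns_dir/{}/system.md") || break
    # Edit the input in the editor, then pipe it through the pattern.
    printf '' | vipe | fabric --pattern "$pattern" --session "$session"
    printf '1) same pattern  2) new pattern  3) finish: '
    read -r choice
    [ "$choice" = "3" ] && break
  done
}

# Only start the interactive loop when the tools are actually installed.
if command -v fzf >/dev/null 2>&1 && command -v fabric >/dev/null 2>&1; then
  orchestrate
fi
```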
So let's see how it works in action.
So let's close this file. I have an orchestrator script, and I also have a shortcut, so I'm just going to call it like that. You can see all the Fabric patterns here: the main part of the fzf output shows the Fabric pattern content, which is the content of the system.md file, and on the side are all the Fabric pattern names.
So for this demo, I want to use the pattern called scripter that you've seen earlier, so we're just going to go to scripter. When I hit enter, it opens a Neovim buffer; this is the vipe command I mentioned earlier.
And now it essentially wants my input. So why am I opening a Neovim buffer to collect user input, rather than simply collecting it from the command line? First of all, Neovim gives me much more control over what input I give and how I format it.
I can also use Neovim's external commands to pipe anything from those commands directly into the buffer. I could, for example, pull in the output of a command, like listing files, or anything I want. This gives me much more flexibility, and multi-line editing is simply nicer.
So let's say I want a simple system info script, a script that will just give me system information, and you can see Fabric is executed. This is the command we are running: fabric -p, which selects the pattern, calling scripter.
And in the background, Fabric is creating a session. The session is very important: it's essentially a chat session, for when I want to use multiple prompts.
You can see that this specific LLM came back to me, as it was instructed, pulling some more information from me: it asks what specific system information I would like, and some more details.
And this automation gives me a choice: I can run the same pattern again and clarify things for the LLM, or I can select a completely different pattern and pipe into it, or I can finish.
In our case, we actually want to select the same pattern, so I'm going to select one. Now you can see the same vipe buffer opens, but this time it contains what the LLM wants me to provide.
So here I'm just going to say: simple uname output. Typing live, that always goes wrong. Simple uname output, and then maybe the hostname as well. Why not? So I'm providing this additional input to the LLM, and then the LLM generates the script.
So we generated the script, but now I want to pass it to another specialized prompt. It's the same LLM behind the scenes; I'm calling GPT-4 in this case. The script is of course super simple, but just for demo purposes, I want to select a new pattern.
So I'm selecting two, and I want to select the pattern tester. This is a specialized prompt that essentially analyzes shellcheck output and tests the script.
It says: thank you for providing more details. I essentially want the tester to test the script, so I change the prompt and say: please test.
So this is the prompt for the tester, and you can see that I was able to modify it and add my own context; every step of the way, this stays very interactive.
Now the tester outputs something: the script passed the shellcheck checks, the script executed successfully, and the tester suggests some improvements.
So I'm going to call the scripter pattern again; I'm going to select a new pattern.
So again two, and then scripter. You can see that this time scripter receives the feedback, so I can say: please incorporate the feedback. You can also see the GitHub Copilot suggestions; that's one benefit of using vipe and Neovim for collecting feedback.
So now we are calling the scripter again, with the additional feedback.
And now the script looks a little bit bigger; it incorporates the additional feedback from the tester, and it looks a bit nicer.
So we want to finish.
So when I hit three, we are finishing the whole process
and we are left with a script.
We don't want this stray 'e' here, for example; we can, of course, review everything. So far, that looks all right.
So I want to copy it. I'm copying the whole script, I can quit, and let's paste it.
We're just going to paste this into a file called system-info. At this stage I've pasted the file, so we can make it executable, and then let's see if it's working. And indeed, we have our system information here at the bottom: running on Pop!_OS, and my hostname is also pop-os.
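For reference, a minimal hand-written sketch of the kind of script the scripter pattern produced here (my own approximation, not the exact generated output):

```shell
#!/usr/bin/env bash
# Minimal system info script: kernel details plus the hostname,
# the two things requested from the scripter pattern in the demo.
set -eu

echo "System information"
echo "  Kernel:   $(uname -srm)"   # kernel name, release, architecture
echo "  Hostname: $(uname -n)"
```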
That was Fabric automation in action. We were able to take Fabric, which is already an amazing tool, add some open source tools on top of it, and make it very usable and very nice for a command line based workflow. So, just a reminder: we used Neovim, vipe in between, Fabric, and fzf to string all this together.
Another tool that I want to show you is called GPTScript. It's developed by the folks from a company called Acorn; they were previously developing a Dockerfile alternative, and they're the same people that developed Rancher and other cloud native products. They have simply pivoted to doing GPTScript.
GPTScript is really a framework, more like a system of tools, that enables you to have more structure in how you write your prompts. Instead of writing pure prompts, you're essentially writing a scripting language for LLMs, with very lightweight syntax, and bringing the prompts in this way.
The purpose of this is really to simplify the creation of assistants. It's really an agentic type of workload, where you have various tools and various GPTs that are capable of executing different tasks.
So the key feature is that you can obviously script in natural language, incorporating prompts. But, unlike Fabric, it also has a very extensible tooling system, so you can use all kinds of external tools, scripts, and so on.
We will see this in action in a moment, but I just wanted to mention.
So what does the workflow look like? First of all, we create a file with the .gpt extension and provide some header data. That header data and our instructions are then translated into executable actions, and we execute commands on specific tools and systems.
Those tools and systems are really interesting. You can use local command line tools, call an API, or call a completely custom tool. You can authenticate to an API; maybe you have an API key or something.
So it adds this additional layer of programming that we all know, which brings a little bit more determinism.
The benefit of this tool is also accessibility: it makes the interaction with LLMs very flexible and very efficient. It's slightly different from Fabric. Fabric was all about prompts, and that's a great initiative in and of itself, but GPTScript is more about really scripting agentic workflows.
So the structure of a file.
It's very simple.
We're going to see this in a moment, but it's just a header
and the body is a prompt.
And then, as I mentioned earlier, you can also integrate various tools.
So for the first demo, we're going to do a little demo about the weather in my city. You can see it here.
We have the gptscript command line, and I am calling a GPT script called weather. I will show you the script in a second, but when I hit enter, it runs the tool, and there's a lot of output; let me scroll up a little bit here.
You can see it used a tool for the weather. This is a curl command to a really cool service, and it shows me the weather where I am located now: in Hanoi. Okay, great.
So this is what it did, but on top of that it also gives me some additional information. It described the current weather in more detail, showed me the forecast, and, since I asked it to, gave recommendations: what kind of clothing I should wear and what kind of activities are suitable. It says it's a great day for outdoor activities, which is definitely true; it's really nice weather.
So how did it all work?
Let's see the weather script.
So this is the header data I mentioned: we have the name, description, tools, and whether we want to be in chat mode or not. The tools entry is actually the key. The weather assistant has its own prompt; we simply prompt it to interpret the weather based on the tool, and we tell the system to give us some recommendations for activities and clothing. And here, in the same file, I have the weather tool.
The weather tool is actually a bash script; you can see I'm inlining it here, using curl to call wttr.in, which is a weather service, and appending my location. It's as simple as that. GPTScript can then use this tool, decide when to call it, and, in our case, provide the weather output.
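From memory, the weather.gpt file is shaped roughly like this; the field names and prompt wording are approximations rather than the verbatim file from the demo:

```
Name: weather-assistant
Description: Interprets the weather and suggests clothing and activities
Tools: get-weather
Chat: true

Use the get-weather tool to fetch the current weather and forecast.
Describe the weather, then recommend suitable clothing and activities.

---
Name: get-weather
Description: Returns the current weather for my location

#!/bin/bash
curl -s "wttr.in"
```

The part after the --- separator is the inline tool: a plain bash script that GPTScript can decide to call on its own.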
You can, of course, have much more elaborate tools: Python scripts, and I believe Go binaries are also supported, so you can create really elaborate and complicated GPT scripts. This was just a fun example of how, in a few lines of pseudo code, including some actual code, you can have your own personal weather assistant.
So you might wonder, since we have those two tools, Fabric with its own strengths and GPTScript, which you've just seen me use: is there a way to integrate the two?
And the answer is yes.
You can actually treat Fabric as a repository of prompts that GPTScript can use as a tool, or a set of tools. You can instruct GPTScript, telling it: these Fabric commands are something you can use to help the user with various requests.
And this brings us to a kind of do-it-yourself agentic workflow. When you integrate GPTScript and Fabric, as you will see in the demo in a second, I did it in a pass-through manner, but you can do all kinds of crazy stuff with it. Without using any actual agentic workflow framework, which are mostly in Python, you can have your own quasi-agentic workflow built from your own patterns or prompts, which is really powerful.
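As a hypothetical sketch of such a pass-through integration; the Args syntax and variable interpolation here are from memory and may differ across GPTScript releases, so treat it as a shape rather than a guaranteed working file:

```
Name: fabric-assistant
Description: Suggests Fabric patterns and runs the chosen one
Tools: run-fabric
Chat: true

List the locally available Fabric patterns that match the user's request,
let the user pick one, and then process the user's input with that
pattern via the run-fabric tool.

---
Name: run-fabric
Description: Pipes text through a named Fabric pattern
Args: pattern: The Fabric pattern to apply
Args: input: The text to process

#!/bin/bash
echo "${input}" | fabric --pattern "${pattern}"
```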
In the demo we'll see in a second, GPTScript suggests prompts and decides which prompts to select based on the user's query and input.
As before, it can also use additional tools. We are not doing that in this demo, but as I mentioned earlier, you have a lot of possibilities for using external APIs, tools, or CLIs.
So let's try the Fabric assistant. Again, the gptscript command line, and then the name of our Fabric agent. Let's see it in action first, and then we'll check the file itself. It asks: how can I assist you today? If you have a specific task, just query.
All right.
It tells me it can provide the best possible prompts for me to choose from. Okay, let's say I want to summarize something in a short form. So I instruct it, and GPTScript selects possible Fabric patterns, or prompts, that I can use: summarize, create five sentence summary, or summarize micro.
And I can tell you from using Fabric that those are exactly the prompts I would use to summarize something in a short way.
So let's say: summarize the current state of JavaScript frameworks. This will definitely confuse it, because without current data this is of course a futile task. But GPTScript soldiers on, and it tells me: I'm going to execute this command.
You'll notice that it echoes my request and pipes it into Fabric, and it selected summarize. Since I didn't specify that I want it to be super short, it selected by itself which pattern should be applied.
So I'm hitting yes for: please execute the summarization call. It called out, and it summarizes it for me. Here is the output, a one-sentence summary: JavaScript frameworks continue to evolve rapidly, with React dominating but Vue and Svelte gaining popularity.
That's probably correct; I'm not a JavaScript expert, but it's definitely a quick summary. You can imagine inputting here the content of a file, a script, or a program you care about. You can also use additional features of Fabric I didn't mention, such as pulling transcripts from YouTube videos, and pipe all of this into GPTScript; it will read the file's content and pass it on to Fabric.
So those are two ways you can interact with command line based tools and make your terminal centric workflow way more interactive and more responsive.
The way I use this is mostly by piping in the output of various commands that I run. If I see an error somewhere, I pipe it in. Or when I work with prose, editing a file or a blog post, this is also something I often do to improve the writing. And I can do this directly with a file, right inside my terminal, without needing to go to a web UI.
So, best practices after doing this for some time. It's very critical to start with clear and concise prompts; this is super important. If your prompts are clear, you get better results.
One thing that makes prompts clear is to clearly separate the parts, or sections, of the prompt, whether with markdown headers or XML tags, which also work very well, so that the LLM understands what needs to happen.
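For instance, a clearly sectioned prompt (an illustrative snippet, not one of my actual patterns) could look like:

```
# IDENTITY
You are a concise technical summarizer.

# INSTRUCTIONS
- Summarize the input in at most five sentences.
- Keep code identifiers and commands verbatim.

# INPUT
<input>
...text to summarize...
</input>
```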
Modular design: it's better to have more but smaller prompts, rather than very big and elaborate ones, and just string them together with the tools I showed you. Those things are definitely easier to manage.
Also, the prompts in and of themselves are self-documenting, but the process you come up with should really be documented, because it's very easy to get lost: why did I create this prompt? What was I thinking at the time? So, more from the process perspective, I started creating simple documents describing what I meant and what the idea was behind a certain prompt or set of prompts.
Common pitfalls in designing both GPT scripts and prompts. Overcomplicating scripts: that's a big one. With AI, we now have a proliferation of code, and if the code comes from non-programmers, the quality bar is low, because those people simply do not know what they do not know, and LLMs are not always helping. So it's definitely better to keep the scripts simple, make sure they do what you want, and then move on to the next part.
Neglecting error handling: that's a big one as well, for the same reason I mentioned a moment ago. A lot of things can go wrong, because you are no longer in charge of orchestrating what you're doing; there's non-deterministic LLM output in play here. You should always be able to narrow down to the part you're interested in, and have proper error handling to help you.
And do not ignore updates: those tools are being rapidly developed. They are command line tools, so depending on how you install them, make sure you keep them up to date; they get security patches and improvements really quickly. It bit me a few times when I, well, not ignored, but forgot to update, and then I had some issues. So that's definitely something you should do.
A few more tips for effective automation. First, start small: create a very simple prompt, a very simple script, and just see how it works. Resist the temptation to let AI generate everything for you; try to write it yourself, because it's really hard to later decipher and untangle all the things that AI can pile on top.
Leverage community resources for support. There are quite a few great blogs and forums, and it's definitely worthwhile to share, as I am sharing with you. You can check GitHub and see what other people are doing; my GitHub repository is public, with all the scripts you've seen today, so you can simply go and grab them. Most of them are in my dotfiles.
Iterative development: definitely keep refining, and again, try to limit the AI here. Just the other day I was improving a script and I was lazy; I threw it at the AI and said, hey, change X, and it went and did it, but it became a rabbit hole, because what I really wanted wasn't what I asked the AI for. In the end, I changed three lines of code myself. If I had just thought about it a little bit more, I would have seen that what I asked the AI for wasn't actually what I wanted. So it's definitely a temptation we all have to resist; try to develop things yourself. And finally, collaborate with others, in the same spirit as leveraging community resources.
So here are just a few resources. You can screenshot this, or it's pretty easy to search: the Fabric repository; GPTScript; vipe, which I mentioned, and the moreutils package the command comes from; Neovim, which I used for displaying the output, and which vipe also uses; and fzf, which a lot of my scripts are based on.
So that's it for today.
if you have some questions, I would love to connect with you.
You can connect with me on LinkedIn.
Just simply snap a photo of this QR code.
It will take you to my LinkedIn profile.
Otherwise, thank you again for your attention.
Thank you for your time.
Enjoy the rest of your conference.
And let me know what you think about automating AI with command line.