Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, everyone. My name is Tommy. I'm a technical writer at Semaphore,
and today I'm going to talk to you about technical writing in the era
of AI. If you have any questions, here's my contact information.
With that said, let's jump to the talk. I'm going to try
to answer the question: is technical writing a worthy career
path to pursue? I think the answer is yes. That's the short answer,
and the long answer is: it depends. It depends on
what type of writing we do and how much we are willing to
learn and adapt. The world has had many technological
leaps over the centuries. At every stage there have been people who cried
they would destroy jobs. And to some extent it's been true.
We don't have the typesetter job anymore. But the bottom line
is that new technologies have created more jobs
and given voice to more people. My question
is, is AI just another leap? I don't think
so. I think AIs and language models are a new
class of thing, a new type of thing. What makes
AI different is that they can produce text at scale and
they can, kind of, think. So I will use the
terms AI and language model interchangeably, even though
they are not technically the same. So technical writing is
not immune. There are a ton of new products dedicated
to automating that skill.
These are a few examples. They can take code and output documentation.
We even have Synthesia that can produce talks with a virtual
avatar, so even that can be automated.
And the future is kind of grim, at least
in the forecast numbers. This is Forrester's 2023
forecast, and in the top right corner there's the technical
writer. So we are very much exposed to AI's
influence. So should we give in to fear?
I don't think so. Let's pause and think it through. I believe
there are many reasons why technical writers will be
around for a while. I have come up with five of
these reasons, and we'll talk about them now. So,
reason one is the hype cycle. This is
how things appear to be according to the news.
There's an explosion first, then there's a
drop in the hype as things begin
to fail and the technology does not fulfill
its initial promise. And finally, there's a plateau where
things are more balanced and we know the limitations.
I think in crypto we already passed the hype cycle.
AI is just starting. At least language models
are just starting. So we have to be mindful of this,
because this tends to make the news very
negative. They will print
the most extreme cases or the
most negative outcomes, because the
truth is that fear sells. So reason number two
is that AI has already failed to replace writers.
One of the most high-stakes businesses,
I think, is Hollywood. And this year there was one of
the longest strikes of the Writers Guild of America.
These are the people that write screenplays for films.
I don't doubt for a second that Hollywood tried to use AI to
replace them, at least during the strike, because they already used
AI for other things, like recreating
dead actors and things like that. But the fact that they
settled with the guild, I think,
says that AI is not as good a
writer as a human writer.
Going back to tech, a few days ago, Google released
the State of DevOps 2023 report. To me,
the most surprising fact was on page eight.
It says that AI was the lowest contributor to team
performance. So either companies are not
using AI, or it is not as powerful
as promised. So reason number three is that AI
has, like any technology, its limitations.
Language models are trained on Internet data, so there's
a lot of misinformation there. There's a lot of bad
text from a writer's perspective:
text that's full of clutter, or very verbose, or not
to the point, and has a lot of faults.
There's another big problem with these engines:
they do not offer the same level of privacy that some companies
need. So maybe you cannot use AI at all for
compliance or protecting sensitive data. And there's
also a few technical limitations, like context
size, and how companies make changes to
the models without much transparency.
But there's one problem that outshines all of these. In
my opinion, the biggest problem with AI
is the uncanny valley. In a few
words, this is how we respond to things that are human-like,
things that try to appear human but
do not quite achieve it. They create a sense of aversion
and negative emotion.
So here we have the same robot at a bigger size.
This is Hiroshi Ishiguro, who is a robot inventor,
and you can see the uncanny valley quite clearly here.
So why do I bring this up? Because I think text
produced by language models suffers from this.
It's not as obvious as a picture or as a
robot. Maybe if you are casually reading,
you might not notice. But if you read enough content,
you start to spot things. You have a feeling there's
no one at the wheel, that there's no voice, no "I-ness"
in the text. And as a reader, it's very difficult for
me to make a connection with AI-generated
content that's unedited
or wrong. And I feel this is a bigger
problem, the biggest problem, because you need to connect with your
readers, even in technical writing. So reason number
four is that we can adapt. Knowing the limitations of
language models, we can work alongside them.
Here's what I call the adoption spectrum: there's full
adoption on the left and zero adoption on the right.
So to the left we have the writers I call
the cyborgs, because they use
the raw output of the language model.
They're kind of a mixture of robot
and human. And these
are people that usually need to write a lot of content in a short time.
And I think it shows, because the output will suffer from
the problems language models have.
So you can publish inaccurate information, or
text that's too long, does not get to the point,
takes too long to reach a conclusion, or makes no
sense. On the other side, maybe we have a
person that's not able to use AI, or doesn't want to use AI
for some reason, maybe privacy reasons,
or there's some compliance rule that prevents using
these tools. I think the optimal place
for us is in the middle whenever possible. So we
have two more roles. One is the director.
The director is a person that guides
the AI very carefully, constantly prompting
the language model to generate one paragraph,
then another paragraph, going very slowly. You
rewrite as needed, adjust, go back
and forth. It's like an actual director on
a film, telling the actors how to show emotion, how to
deliver their lines, and keeping the
bigger picture making sense. The other
category here is the Swiss Army writer.
I feel I'm more in this camp.
We use the tool for the things
we feel the tool is better at than we are. For example,
maybe we use it for outlining,
for moving past writer's block, for research,
for summarizing, for translation, for editing
or brainstorming. When we need ideas,
or when we hit some blocks, some difficulties,
we rely on the AI to help us.
We can think of the AI like another writer sitting next to us,
someone we can talk to and exchange
ideas with. But we rely a little
less on the AI, because we still write maybe 50%
or more of the text by hand. I think we need to
be in this middle zone because we don't want
to be over dependent on these technologies
and we don't want to lose our skills. The final reason,
and with this I finish the talk, is that writing is
only 20% of the job.
I think there's so much more to the job than just writing.
You have to interview people, you have to talk to SMEs,
to developers, to product managers. You have to
be in the middle of all this storm. You have to put yourself in
the shoes of the users, prioritize information,
make all kinds of decisions that maybe we're not even aware of.
And these are things that language models cannot do,
or struggle a lot to do. They're not good at this.
So until we have a language model that can have a
conversation with a developer and ask the right questions,
I think we're safe. So, I started this talk
questioning whether technical writing is a worthy path. My answer was:
it depends. It depends on the type of writing you do.
Do you need the human touch? I think
you do in all writing, even in technical writing.
Do you need to connect with the audience? I think you always need to connect.
So these are things that language models cannot
do on their own. I think language models are good
for a certain number of things, and we should be
good at the other things, the things that language models cannot
do on their own. So it's up to us to figure out
how to fit into this new scheme. And if we
do, I think technical writing is a very fulfilling
and worthy career. Thank you so much for watching.
Since we don't have a chance for a Q&A, I will
leave my contact information. You can DM me or drop
me an email. On my homepage you will
find my personal blog and links to my work at
Semaphore and my YouTube videos. So if you want to
talk, contact me. I will be happy to discuss
and talk to you and hear from you. Thanks again and have
a great rest of the conference.