Transcript
By the end of today's session, hopefully you'll have a working understanding
of how AI is changing the hiring landscape, particularly in technology.
And then we'll learn some techniques to battle-harden people-centric processes such as talent acquisition using AI.
Now just understand, I chose talent acquisition because of this
personal mission that I was on, but it could also be useful for other
human centered processes as well.
Alright, so today's agenda is going to look a little bit like this.
First, I'm going to start with some anchoring bias.
I'm going to talk about the circumstances that kind of led
me to take on this experiment.
And then we're going to talk about my personal mission a little bit here.
Because if you're going to risk it all, you might as well make it personal.
We'll talk about the process that was followed.
Now, because of the magnitude of research that was required for this exercise,
we're only going to cover a couple data points associated with this experiment.
And then finally, we'll talk about the outcomes.
Alright, so first of all, I need to anchor you guys in some bias.
You guys need some context, because this wasn't just
initiated out of nowhere, right?
There was a reason why I started this process.
So we're gonna start with a little bit about me, and I know it's a
little cliche, but it's important to understand my context and the
things that I want to solve for and how it relates to this exercise.
So my name is Daniel Preece.
I'm an engineering manager of digital experience teams
for a super regional bank.
Now, I'm an engineer first with 20 plus years of experience
across various industries, ranging from bare metal computer chips
to cloud native architecture.
And everything in between.
I'm currently on a personal mission to improve the developer
experience for the teams that I oversee, and I love growing others.
I like to watch the light bulb turn on as people learn new skills, and I help people with their soft skills as they navigate their career journey.
So when I started this experiment, there was a hiring climate that precipitated all of it.
I was hired at my current organization in July 2020.
Now, prior to that, there was this thing going on in 2019 into 2020 called the Great Resignation.
And this is where people decided that they wanted to do something
about their quality of life.
A lot more people wanted to work from home.
They wanted to be with their families.
They were tired of commuting in every day.
Many people quit their careers.
They were experiencing burnout and decided, you know what?
It's time to move on.
To something else. And we can see that in the data: when we compare February 2019 to February 2020, there was a 47 percent reduction in applications across all industries.
Now, some of you may think, we also had this thing called the COVID-19 pandemic. Not sure if you heard of it. But when the United States shut down, that was March of 2020. So this data starts a year prior to any of that even occurring.
We're starting to see a significant reduction in the number of applicants
and people in the job market because they're looking for something
more in that quality of life.
So the talent pool itself is electively decreasing.
Now, while that's going on, while the talent pool is electively
decreasing, we also have big tech going on a hiring spree.
And we're also seeing that in the data.
Now, first, I want to acknowledge here that this data covers a period from 2019 through 2022. That 2019 to 2020 period is the period of the Great Resignation.
And then beyond that is when we get into the COVID-19 years, right? We're all separating, social distancing together, and big tech is actively hiring like crazy. The other piece of data I want to acknowledge here, just because it biases things a little bit, is Amazon.
They were part of the supply chain during the COVID-19 pandemic, right? They were part of the improvement of the supply chain, and so they did ramp up significantly in that process.
But the rest of big tech like Microsoft, Apple, Alphabet, they
grew by thousands, if not tens of thousands from 2019 through 2022.
This is very frustrating because, again, we have this great resignation going on,
people are electively not wanting to work anymore, and also Big Tech is drinking
our milkshake, because at the end of the day, the talent pool is only so big.
They're pulling in talent from the same talent pool around the globe. And prior to that, there was a big push for digitization in recruitment as a solution for talent acquisition. So as part of the way talent acquisition works, they want to move from very manual, toil-based processes into automation so that they can focus on a better set of behaviors and better outcomes.
So what does that mean?
What does the digitization of talent acquisition mean?
We really want to automate those data-heavy tasks that are highly repetitive.
In the SRE world, this would be analogous to things that we would call toil, right?
So we want to automate those processes away.
They're also a little error prone.
We also want to automate things like resume screening and any of the scheduling. So if you've ever done any scheduling where you select your own time slot with talent acquisition, or maybe an HR representative or a conference speaker or whatnot, there are scheduling tools out there for that. Technology screening, right, to understand whether a person understands a particular technology set, or whether a resume actually satisfies particular technology needs. Chatbots, which are used for relationships.
And this was prior to generative AI, when we were working more with natural language processing, just a small segment of what we now use as part of generative AI. And RPA, which is robotic process automation, to manage those leads.
Now, all this automation was really there with this promise that it would free up the recruiter's time for more strategic tasks, right? Building out your talent network, as well as building those relationships with the candidates.
All right, so let's talk a little bit about a hiring process, right?
Because we now understand, we've got this great resignation
going on, talent pool shrinking.
We've got big tech drinking our milkshake.
We've got a lot of automation in the process.
But what is the process? For most organizations, it looks like this.
We have talent acquisition and HR deciding that they're going to have a strategy. They're going to either open up roles as needed, or there's going to be some sort of recruitment effort to build a talent pool.
They might reach out to career fairs or job fairs, right?
They really want to build up a network where people can source candidates. That may also include external sourcers as well, whose job is just to build these networks for these organizations and set HR up for success in terms of their hiring.
Then we may take a candidate through a technical interview, right? Which is like a technical screen. If you're in technology, you've gone through this: it's where they're going to ask you simple questions, and they make them harder and harder over time, because they're really trying to understand what your technical acumen looks like. It's really hard to do in a short period of time, so it tends to be pretty quick, like 30 minutes to an hour at most, to understand your technical literacy.
And then to offset that a little bit, in practice more organizations are starting to use automated tests.
Now this is a thing that I had a personal gripe with.
The automated testing thing is part of my personal mission.
Because while they're useful in most contexts, in some
contexts they can be used poorly.
And so I had some concerns, so I really wanted to address that.
Then there's also a manager meet and greet. And this might not be somebody who's directly invested in the process. Maybe a more senior person, or a person outside the process. They're really just looking for red flags, right? Because you can have a great candidate and you're like, hey, this person did great on the interview, they did great in the technology screen, but you may have missed the red flags in all of your excitement. So this person is really there to help sell the organization, maybe sell the geography where the candidate is going to need to move if they have to relocate, sell the culture of the organization, and really just look for red flags in the candidate.
But if everything's great, then we move on to the next piece, which is the competency-based interview. And we'll cover that shortly, but here we really want to understand not just the technology itself that you know. There are a lot of great questions in competency-based interviews that really help you understand a person and whether they're the right team member for the job: not just do they have the right skill set, but do they have the right behaviors within that skill set to be able to do their job and execute and bring the right kind of values and culture to the organization.
And then lastly, we compare the notes, right? We take all of these things and we say, okay, is this person skilled enough, right?
Do they seem to solve the problems well enough?
Do they have no red flags?
Do they really want to be part of this organization?
Do they match our values?
Do they solve problems in a way that is meaningful to the organization?
If everything is great, we give that candidate a thumbs up, we make an offer.
And that's typically how most organizations work, at least the
organizations that I've worked with, as part of the hiring process.
So anyway, that's a background about the hiring process we want to work through.
So at this point, I need to tell you a little bit of a story, right?
So I'm getting a little frustrated with different pieces of the hiring process.
And at one point I was doing a lot of interviews. I think over the course of three or four years, I was doing hundreds of interviews a year, and I wasn't just doing them for my teams. I was doing them for other teams in the organization. I work for a large organization, and so I kind of got a reputation for being good at them.
I like doing them.
I think it's fun.
I like picking apart people's brains.
And so I was doing an interview.
First of all, there's this person on your screen, right? Kind of looks like the Lorax, big bushy mustache. Now, it's frowned upon to talk about somebody's physical features, especially as it relates to an interview. But this is important.
I was talking to a candidate and I was doing a technical screen, and I think I was asking a simple set of questions, something like, what is SOLID? And SOLID is an acronym: it means Single Responsibility, Open-Closed Principle, Liskov Substitution Principle, Interface Segregation, and Dependency Inversion. Now, most candidates will say something like dependency injection instead of dependency inversion, and we'll navigate that a little bit.
So I'm a few minutes into this conversation, and as the conversation is evolving, the facial features aren't really lining up with the mouth at all. Like the movements aren't quite there. Now, I wasn't sure if it was just audio buffering issues or latency over the network or whatnot, but it didn't seem right.
Something was off.
And so I get to this question, right? What's the difference between dependency injection and dependency inversion? And really, the answer there is that dependency injection is a way to perform dependency inversion. It's not the only way, but it is the most common way, typically the way we do things today.
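To make that concrete, here's a minimal Java sketch, with hypothetical names, of the kind of answer we'd want: the high-level class depends on an abstraction (dependency inversion), and the concrete implementation is handed in through the constructor (dependency injection).

```java
// Dependency inversion (illustrative sketch, hypothetical names): the
// high-level ReportService depends on an abstraction, not a concrete class.
interface MessageSender {
    void send(String message);
}

// A concrete, low-level implementation of the abstraction.
class EmailSender implements MessageSender {
    @Override
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

// Dependency injection: the concrete MessageSender is supplied from the
// outside through the constructor, which is one way to achieve the inversion.
class ReportService {
    private final MessageSender sender;

    ReportService(MessageSender sender) {
        this.sender = sender;
    }

    void publish(String report) {
        sender.send(report);
    }
}

public class Demo {
    public static void main(String[] args) {
        // The caller, not ReportService, decides which implementation to use.
        ReportService service = new ReportService(new EmailSender());
        service.publish("Q3 numbers");
    }
}
```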
So I get to this question, and the audio cuts. I'm like, okay. This is not abnormal, right? Sometimes the audio cuts, screens cut, whatever.
So the candidate starts typing on the screen: I can't hear you. I'm having audio issues. Okay, that's fine. But then something registered in my head. You see, as the candidate was typing, I could hear their keyboard clicks in the microphone.
The person that I was talking to in audio was not the person
on the other side of the screen.
The person on the other side of the screen was mouthing the words because
their microphone was working just fine.
Oh my goodness.
This is crazy, right?
So they said, hey, look, I'm going to call back in a few minutes, I'm going to rejoin the line. Okay, that's fine. And while that's happening, I start texting our recruiter in the HR department and say, hey, what's going on here? I think our candidate might be fraudulent. And they say, stay on the line, keep asking questions. And so I went ahead and continued the rest of the interview.
Five minutes later, the person shows up again. They're on the visual, their audio comes through, and they ask where we were on the questions. And so we regroup, and they give me a, not a great answer, but we continue through the interview. But the voice was completely different. The accent was completely different. The tone was completely different. And the person on the other end was still mouthing the words, and things weren't lining up, but they were using their bushy mustache as cover so that you couldn't really tell what the mouth was doing. But I'm a musician, and I could pick up on these audio cues from the way the lips were moving and the way everything was supposed to line up. I knew something was off here.
So after everything was said and done, we talked to our recruiters. We came back and said, hey, we think there might be an issue, particularly with the sourcing agency, and we might need to do something about it. But it provided some evidence that, hey, there was some fraudulent activity going on in the system.
Now, if you're in a hiring role, there are some questions. Have you ever experienced fraud in hiring? Whether it's that the person's audio is different, or they won't go on video. I've had situations where people are typing in the questions. I can see them on video typing the questions, and then they're reading an answer. It's great that they're resourceful. I think that's important in software engineering, but it's also important that you have good foundations and you're able to express the answers pretty readily. Or the person who shows up on day one is not even the person you interviewed with. That's another thing that we commonly see. So it's very interesting.
So I'm getting frustrated, because we've got talent pools decreasing because of the Great Resignation, right? We've got big tech trying to get everybody they possibly can. We've also got to deal with fraud, and this is a really challenging climate to work through, where the reality of the situation is: if you're not cheating, you're not trying. And for a lot of people who were looking for jobs at this time, they were cheating like crazy.
All right, so now we got to get into our personal mission a little bit.
So I had some concerns about the automated testing that we were using.
Some of the questions seemed simple, some of them were ridiculously hard.
And they weren't testing whether you knew the language or not; they were testing whether you could solve the puzzle.
Which on some level was not great, because these were like Mensa-level puzzles that you could solve in any language using for loops.
Yes, you could screen out certain things, but I'm looking for things like code
quality, maintainability, cyclomatic complexity, cognitive complexity.
All sorts of things that are important in the maintenance of software over
time, not just did you solve a problem.
And so I had some concerns and I started elevating them to the organization.
So in spring of 2022, I started voicing the concern about the
automated code assessments.
I didn't like the way that they were being used as a blanket utility.
I wanted to make sure that we were using them in the right context, maybe with more junior engineers who didn't have a professional background, because it really didn't make sense for somebody who wrote clean code to write it this way. When I took this test, I wrote clean code, and I honestly just needed a for loop, but that's just how I write. It didn't seem like the right set of problems that we were solving for, because it really wasn't solving for engineering, it was solving for the puzzle.
So in summer 2022, I was asked to be part of a panel that reviewed questions for fit and application. We wanted to improve our questions, because we thought that might be a problem. Again, we didn't want super hard questions, but we wanted them to be enough that you had to understand the sequencing.
And so after that, I started to ask: has anybody tried to run these through Copilot, because it was in beta, or ChatGPT, which I think was in an early beta? It was crickets. Everybody said that the process was fine. Like, why would we ever be concerned about AI? It was new, and I don't think a lot of people really cared, and it wasn't part of our strategy.
So in fall of 2022, ChatGPT 3.5 is released to the general public. And this is the point when I started taking these questions and bringing them through ChatGPT to see which ones could be easily figured out.
And that kind of signaled the ones that we needed to start retiring.
We need to not use them anymore.
But the challenge is, when you retire something, you create new work, because you now have to create new questions, and those also have to go through the system. And so it's a little bit challenging to deal with.
Now, I started to present some of these findings, just about the questions themselves. And this is just the technical questions, like solving the code puzzle; that's just one lens of this. And I was getting frustrated that I wasn't getting enough attention around it.
So in winter of 2022, into January 2023, the plot starts to thicken as I'm spreading this knowledge around.
All right.
So let's recap the context before we get into the thickening of the plot, right? We got the Great Resignation. We got big tech drinking our milkshake.
We got these fraudulent candidates.
We got some automated testing that's really not so great. It's good, but it really could be better. And then we got this challenge of also being on prem, right? The organization I work for really wants people in the locality that we're in, and that's important to us to build up the community that we're operating in. And so these things are just filtering out really good candidates, and it's really becoming problematic to find the right people for the organization.
So now it's time for some creative problem solving. I'm expressing the frustration, and I happen to be speaking at a conference in the middle of Ohio in the winter, middle of nowhere, right? Snow all around, and we're at this convention, and it's a big technology conference. A four-day conference, and on one end is the conference hall, and the other end is like the resort side with hotel rooms, and in between is a bunch of restaurants. And so I'm walking from the conference center to my room. It's late at night, 10 something at night, and I'm just done. We have an early day the next day.
And so one of my buddies, who actually is part of this, he was a speaker, and he said, hey Dan, I want to talk to you. And I'm like, no buddy, I'm tired. I'm ready to go to bed. And he's like, hey, I'll buy you a drink. I'm like, okay, fine. So let's go ahead and have a talk.
And so we were talking about my frustrations, and he said, look, Dan, I've got a proposition. I think you've done an amazing thing with this test, and I think we need to build some content on this and start talking about this more, and help bring better awareness about fraud as a vector, how generative AI can be a vector in fraudulent candidate hiring, and how we can improve the overall process.
He's a man of science and I'm a man of science. He said, look, let's create a process, and let's talk about this candidate. And so we started figuring out names for this candidate. And we called him Chet Gupta, which is a play on ChatGPT. And that was the name, Chet. It's like a Texas "Chat," and so we were like, we like that name. And then we couldn't figure out a whole name from GPT, so we got Gupta.
And we started plotting this out on a napkin at a conference, and we figured we'd chop this around and figure out whether this was worth doing. And as it turns out, this turned out to be a really good set of materials over the long term. A lot of people have had interest in this. I've given this particular talk at multiple conferences nationally, and it's been a great talk to bring awareness. But with that, we wanted to find a process that's repeatable, so you can bring it to your organization and try it for different human-centered processes, not just talent acquisition. But first, we wanted to create a persona.
Now, when we initially started this, we looked at using LinkedIn and creating a fake profile. It turns out that's frowned upon, so we didn't do that. We did generate some AI bio pictures that we wanted to be part of that persona, as we were pregaming before we made that decision not to use LinkedIn.
So we're going to go through that in a little bit.
We wanted to run the test problems. I hadn't done a lot of this research yet; I wanted to run some more problems through ChatGPT, as well as Copilot, which was the new hotness at the time, and see if it could solve the problems for us. And then I wanted to prompt some of the technical screen questions. These are things like your data structures, your algorithms, how do you apply a particular object-oriented pattern versus a functional pattern, whatnot, right?
At this point, it's getting late, and we're talking through the process, and we said, hey, look, maybe we want to start bringing talent acquisition along for the ride. Now, as it turns out, somebody from our talent acquisition department was there. And so we were able to pull them in and collude with them a little bit on how we were going to go about this process. We wanted to have the shock and awe with a larger group, but we really wanted to understand the process a little bit more and bring them to our side of the table. Because we didn't want to attack talent acquisition. We just wanted to make sure that we were addressing a conflict in the process and how to manage that.
And then we decided that after that, we would perform the competency-based interview questions, which are a little different from our technology questions. They're more situational, less technical. And we wanted to see how ChatGPT would respond to that.
And then ultimately, what we wanted to do was bundle this all up as a nice little package and present it to our talent acquisition department, to see if we could make meaningful change in the organization.
All right.
So these are the Chets. All of these were generated, I believe, using DALL-E, just like the Lorax picture before. And the prompt here was: using dramatic lighting, create a LinkedIn profile bio picture for a software engineer named Chet Gupta. That was it. Now, when that happened, we got the three gentlemen who are represented on the screen.
And at some point during these conferences, I started to realize that Gupta introduced some bias into the system. So I removed the word Gupta, and then I got these two Caucasian gentlemen here as well. Now, if you notice, again, there's still bias in the system, because all of the pictures that are generated here assume that software engineers have shallow stubble and glasses, and it picks white people and Indian people based on the names. But, as a Hispanic man, right? I could be a Chet too, right? So why not have a Hispanic Chet?
It's there.
It's something we need to acknowledge at some point.
But this is a system that we're operating in.
So we started to select from the pictures here. And if we go to the one at the top, in the middle there, we said, hey, this Chet. This Chet Gupta, he looks like he's a musician, like he plays jazz music, only drinks IPAs, maybe a little pretentious; maybe he's not the person you want to deal with. So what about the guy on the far right? He looks like he runs things, like he's the CEO of a company. So we said, no, maybe we don't want to use him, because I'm inspired by him. He doesn't look like an employee. So what about the guy in the bottom left? As it turns out, he looks like a lot of the other engineers in our group, and it'd be a little too stereotypical there, but he does look like just a guy we would hang out with on an everyday basis.
Now, these were the three candidates we had when we initially started this process. The two Caucasian gentlemen were just an afterthought, as we were given those later. So we were really focused on these three people. And ultimately we selected the guy in the bottom left. He's going to be our Chet.
And again, these pictures are all generated. When I first gave this talk, we also had synthesized audio, and now there's synthesized video. So the way that you can present this is interesting, because you don't need to have video that's rendered ahead of time. It can be rendered in real time, and that's very interesting. You can have profile pictures that are generated and used in LinkedIn profiles or even sent in on resumes, right? They aren't the real candidates themselves. Also very interesting. But the guy in the bottom left is the one we chose.
Now, the competency-based interview. I want to talk about this one for a second, because we talked about the technology screen and technical assessments, but a competency-based interview is one where we're following the STAR pattern. So we're looking for things like your situation, your task, your action, and your result. We're not necessarily looking for what technology you used, although that is important, but how you used it. Or maybe how you dealt with a difficult situation, or challenging individuals on teams, or maybe a project that was running late: those kinds of situations and how you overcame them. We want to make sure you clearly define the problem set in the situation, the task, the action, and the result. So the framing is really important, because communication is very important when we're hiring people.
All right.
So we asked Chet, or ChatGPT: when were you on a team that faced challenges?
And so the response that we got back was: one of the main challenges we faced was communication. We had a feature that involved multiple components and the interactions between them, and it required close collaboration between the team members to ensure that everyone was on the same page. However, due to different work schedules, time zones, and communication styles, some team members found it challenging to keep up with updates and changes, leading to confusion and misunderstandings.
Another challenge we faced was technical. The feature required integrating with a third-party API that was poorly documented and difficult to use. And some of the team members lacked experience with the API and struggled to understand its nuances, leading to delays and errors.
To overcome these challenges, I, Chet Gupta, decided to take several steps. First, I organized daily stand-up meetings to ensure that everyone was aligned on the process and any blockers. Second, I introduced task management tools like Jira or Trello to track progress and maintain transparency. And third, I paired up team members with varying levels of expertise with the API to ensure that everyone had a solid understanding of how it worked. And by taking these steps, we successfully overcame the challenges and helped the team achieve its goal of delivering the new feature by the deadline.
Now, some of you are going to have two different reactions to this, right? Some of you are going to say, this is a great answer. Wow. This is well thought through, and it's amazing. And actually, it really is. This is a really great Situation, Task, Action, Result for the question it's prompted with. Others may say, hey, look, this person invented Agile. It's already been invented, which is also true.
But believe it or not, we get this a lot. A lot of candidates come in and they'll say they're not familiar with Agile; they're working with organizations that have only used parts of it, or parts of maybe a larger framework, and so they don't have maybe a total experience or a total lens on the things that we're necessarily looking for. And that's okay. It's okay to have that background, because Agile is supposed to be appropriate for the context that you're in. And so it's okay to not have maybe a larger understanding of it, or maybe a different set of tools. Not everybody uses Jira or Trello. So it's totally okay.
So let's ask maybe a technical question. Now, in technical interviews, what we want to understand is the breadth of experience across relevant technology. So this is how T-shaped you are. For a full-stack engineer who needs to understand front end, back end, maybe networking and gateways and whatnot, databases, this is important; they're very T-shaped. They're wide in their acumen, but they might not be super deep, and we do want to understand that as well. So we're also looking for the depth of experience in any one technology, because we want to really understand how deep or wide they are in a particular area relevant to the role that we're trying to hire for.
And then we want to make sure that they're exposed to various patterns and practices.
We're going to cover concepts, like object oriented patterns or design patterns.
And we really want to figure out, what is their total acumen, which is very
hard to do in 45 minutes to an hour.
And then the way that we do that is through adaptive testing. So we start in an area, maybe a language like Java or C#, where the questions get harder and harder, or a design pattern, or something like SOLID, single responsibility and whatnot. We'll go through those into harder concepts, like domain-driven design, general responsibility assignment principles, and whatnot. But the goal of that technical interview is to understand, against the rubric, where this candidate sits, how deep and wide they are, because it's very challenging to do, and we have to move quickly. So the adaptive testing model works well for us.
So one of the questions we have here is an algorithm. This is actually a retired question, which is: in Java, write a function that determines if a given string is balanced. Basically, you want to determine, if a string has open parens and closed parens, whether they match up: if you send eight open parens, do you have eight closed parens?
And the ChatGPT response, or the Chet Gupta response, was the one that's posted here, which uses a stack. It's an algorithm, sorry, it's a data structure, where objects come in and, as you add more things, the topmost thing on the stack comes out first, as opposed to the first item coming out. A queue is the opposite of a stack: queues are first in, first out; stacks are last in, first out. And so this is tracking the open parens. Every time an open paren comes in, it adds something to the stack. And every time a closed paren comes in, it removes something. And if, at the end, the stack is empty, the string is balanced.
Now, is it a perfect answer?
Probably not.
Is it a good answer?
Yeah, it's not bad. It solved the problem.
And I think this is pretty fantastic in terms of the response that we're getting.
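The code itself isn't reproduced in this transcript, but a stack-based solution along the lines Chet gave would look roughly like this minimal Java sketch (simplified to handle only parens):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BalancedCheck {
    // Returns true if every '(' has a matching ')' in the right order.
    public static boolean isBalanced(String input) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : input.toCharArray()) {
            if (c == '(') {
                stack.push(c);            // open paren: push onto the stack
            } else if (c == ')') {
                if (stack.isEmpty()) {
                    return false;         // close paren with nothing to match
                }
                stack.pop();              // matched: pop the open paren
            }
        }
        return stack.isEmpty();           // balanced only if nothing is left
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("(a(b)c)"));  // true
        System.out.println(isBalanced("(()"));      // false
    }
}
```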
When we look at our rubric answer, the answer is pretty similar, except we're using an integer-based accumulator to do that calculation. So instead of a more memory-intensive object like a stack, we're actually just using an accumulator to say, every time an open paren comes in, we add one, and every time a closed paren comes in, we subtract one. And if we're at zero by the time we're done, instead of negative or positive, then everything's fine.
Effectively chasing the same results, right? And so these are good answers, or good-enough answers, especially if you're dealing with a more junior engineer.
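For comparison, here's a rough sketch of that counter-based rubric approach, same result without allocating a stack:

```java
public class BalancedCounter {
    // Same check using an integer accumulator instead of a stack.
    public static boolean isBalanced(String input) {
        int depth = 0;
        for (char c : input.toCharArray()) {
            if (c == '(') {
                depth++;                 // open paren: add one
            } else if (c == ')') {
                depth--;                 // close paren: subtract one
                if (depth < 0) {
                    return false;        // more closes than opens so far
                }
            }
        }
        return depth == 0;               // zero at the end means balanced
    }
}
```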
So this is starting to look like ChatGPT is not a bad candidate, right? We've got our competency questions being answered very well. We've got our algorithm question being answered very well. All right, maybe let's ask some slightly harder questions, or maybe some data structure questions that are a little different.
So one of the things that we wanted to ask was, could you explain what a ConcurrentHashMap is? And this is a Java-centric data structure, for the most part. And as he responds: a ConcurrentHashMap is a data structure that allows multiple threads to read and write key-value pairs simultaneously without inconsistency or race conditions.
In a typical hash map, when two threads try to access and modify the same
key value pair concurrently, a race condition occurs, leading to data
inconsistency or even program crashes.
However, a concurrent hash map uses a mechanism called locking
or synchronization to ensure that multiple threads can access and modify
different parts of the hash map at the same time without conflicts.
And in a concurrent hash map, the data is divided into buckets, and each bucket has its own lock. When a thread wants to read or write a key-value pair within a bucket, it first locks the corresponding bucket so that no other thread can access it at the same time.
And then once one thread is done, it unlocks the bucket so that
another thread can access it.
And this mechanism allows multiple threads to access and modify different
parts of the HashMap concurrently.
So this is a great explanation so far. Very wordy, and I'm sorry for the wall of text, but that's how it is. What's interesting is what happens next. Now it starts negotiating trade-offs. It also responds:
However, it is important to note that locking and synchronization mechanisms
can introduce some overhead and decrease the performance of the data structure,
especially when the number of threads and the size of the hash map is large.
Now, when you're dealing with a software engineer, this is important, because we want to understand: can they look at the design patterns, the way that they're going through the creational process, and say, is this the right tool for the job? And so the ability to evaluate a data structure and say it's the right tool for the job is super important.
And it's growing more and more important as we look at the kinds of algorithms and structures that we would use for AI, maybe a B-tree for one kind of problem or a KNN algorithm for another, and how we want to manage our grouping and our correlation. Just because you know an algorithm doesn't mean that it's the right algorithm for the job. There are a lot of different ways to apply them, so it's important to understand how they sit in context.
ChatGPT is providing a context-based evaluation of a particular data structure. This is valuable as an engineer. It's valuable for a candidate to understand.
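As a small aside, here's a minimal, hypothetical Java sketch of the kind of usage being described: several threads updating shared counts through a ConcurrentHashMap, where a plain HashMap would risk exactly the race conditions called out above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        String[] words = {"foo", "bar", "foo", "baz", "foo", "bar"};

        // Several threads incrementing counts concurrently. On a
        // ConcurrentHashMap, merge() performs the read-modify-write
        // atomically, so no updates are lost between threads.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (String w : words) {
                    counts.merge(w, 1, Integer::sum);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }

        System.out.println(counts); // e.g. {bar=8, foo=12, baz=4}
    }
}
```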
Now, as I got into this, there were some things that started to get pretty stinkin' interesting. So if we look at the results that we're getting: the one on the right is the organization's rubric response, where we're looking for something along these lines, and the one on the left is the one that ChatGPT gave us. And as I'm starting to evaluate this, I'm looking for common themes. So I've color-coded them, to say, these are the common themes where we have something going on. So in our answer, we say it segments the internal data structure for locking, where they use the word buckets.
but what gets interesting is how, what happens next.
So as I started to dive into this, We can see the last piece that we were looking
for is that their answer says it locks a corresponding bucket and in ours it
says it does it without locking at all.
So I decided to go ahead and do some research.
Which answer is right?
I need to know, right?
I'm not necessarily a Java guy.
I really focus on different languages.
So I needed to go do a little research to figure this out.
And as it turns out, chat GPT, check Gupta was more right than we were.
This was starting to turn out to be a very interesting thing. Our candidate was better at development, and a better gatekeeper for engineers, than we were. So, some concerns, and we started to make some changes as a result of that.
So cool.
We've got this candidate now who's answering questions very well. He's answering situational questions about teams very well. There was a lot of research behind this. For a lot of the questions, it took longer to write the question than it did for ChatGPT to respond to it. And that's crazy, because now it creates these opportunities where a person can read over it, gloss over it, provide some human interjections, maybe some ums, or a "let me think for a second," and just natural inflection, so it doesn't sound like a robot's reading it. And that's interesting.
Now, when we think about prompting: at the time of this writing, a lot of it was based on just text input, but you can also take audio input and do live transcription, which drops it into an API and does the same thing as well. So you don't need to have somebody actually hand-writing these prompts to get the feedback that you're looking for.
So let's talk through some of these outcomes.
All right.
So first of all, we presented and bundled all this information. We decided that we really need to have better awareness about the cheating that's occurring during our technical screens and our competency-based interviews. And we wanted to provide some education and uplift to the organization, to make sure that they were equipped with the tools necessary to do constructive interviews and root out toxic candidates who wouldn't be valuable for the organization.
We also started looking at ongoing scrutiny of the vendors and how things were used. We had a sourcing vendor as part of these conversations, particularly the one tied to the Lorax picture, where we had suspicions that the sourcing agency was providing candidates who weren't the right candidates for the job. Ultimately, we ended up changing our relationship with them, because we realized that they were sending us candidates that were fraudulent. And so that closed the gap there. That was a significant vector.
And then we also looked at our candidate testing tool, to ask, is this the right tool? And so one of the interesting parts about the testing tool was that we started asking questions about the test. I said, hey, look, I don't think these answers are the right answers. And they said, everything's fine. The people who write the test say everything's fine. It's not going to be an issue. So I said, okay, that's great, but I'm not really sure that's a good enough answer. Who writes the test? They said the vendor does. So I was screaming in font ligatures at this point, right? And I said, okay, so the people we pay to do their job are telling us there's nothing to see here, there's nothing to worry about. Oh snap, we need to do something about this.
So we met with the vendor, we started talking through the tool, and we said, hey, look, what do you guys think about generative AI? How are you guys navigating it? And they were letting us know that they love it, and they think it's great, and it's a powerful tool, and it's hard to test for, because it's non-deterministic, which basically means that it's not going to give you the same answer every time, and facts play into facts, so it can give you new answers based on previous facts. And that's all very interesting.
Except we pay for a tool that can now be broken with generative AI.
And this is a problem.
So are we going to continue to pay for this tool?
So I said, what do you guys think we should do with your tool, if it's not satisfying the problems for these cases that we have today? And they said, look, don't use it as a complete stopgap. Why don't you go ahead and proctor your tests? Now, I told you we're interviewing a lot of people, right? So I can't take my best engineers all day, take them away from valuable work they're contributing to the organization, to sit down and make sure somebody is actually taking a test correctly. So it becomes a big problem in terms of how we use our best people to solve problems for the organization.
So really apply extra scrutiny to ask, is this the right tool for the job, and how are we going to approach this tool? And so we had to look at changes to our technical testing. We're still evaluating things like public-facing repositories, and we're leaving certain things up to the hiring managers to say, hey, if this test isn't appropriate for you, we might not introduce this test result as a way to bias against a particular candidate. Maybe you want to use a different mechanic or vehicle to make that happen.
And the reality is, our tools are just not good enough, right? Tools in isolation, such as our automated code testing tool, are an ineffective stopgap. But that aside, there are also other challenges, right? Our adaptive testing is no longer an effective litmus test, because you're just one prompt away from being able to get a well-reasoned answer. And this is going to start to require more social interaction to improve the feedback loop, so we can start getting better cues about whether a person is a bot, or whether there's actually a different person in the audio, or there's somebody who's manipulating things in some way, right? To get hired, for whatever reason.
So, some things: after giving this talk a few times, I've heard different talent acquisition folks say, I've seen this. This is crazy. And so what I've started doing is using instructions on a Post-it note and having the candidate read the instruction on the Post-it note and provide a response to the note, or provide a visual cue on the Post-it note or maybe some sort of notepad, to say, I want you to do this, and then maybe look to the left or to the right, just to see if they were the person who was actually seeing the content on the screen. And so that was very valuable feedback.
And I think we've started to consider what it means to have better social interaction, cameras on, visual cues and whatnot, in our feedback loop. But the tools themselves in isolation are not good enough, and so we're going to have to get more creative about the way we validate the legitimacy of a candidate. We have to consider the competency-based interviews at this point as well. They can be framed with some light prompting. It doesn't take much for a candidate to create some context behind the scenes.
And as we were creating these competency-based questions, I think we went through probably 7 to 10 questions for the CBI. And each one was pretty stinking good, to be honest with you, with very long-winded responses. A CBI is typically an hour-long conversation with only three to four questions. You want them to be long, essay-type questions, so you're getting a lot of rich, deep context with light prompting. And that's really challenging to navigate. And quite frankly, we have seen that the automated code tests can be easily solved with plenty of lead time to introduce those human elements.
Making mistakes, or doing things that look close enough. These code tests typically run with a bunch of unit tests, and you'll find that some of these submissions hit maybe 99 percent test coverage, because they introduce a human element by design. Because they have the lead time that allows them to look human. They're not perfect solutions, just close-enough solutions, and that's very interesting. And the tech screens, again, are just a prompt away from providing well-reasoned examples.
So what would it look like if you wanted to try this?
I think the first thing you need to do is assume positive intent.
Hiring talent is not a frictionless process. There are plenty of opportunities to reduce the friction, but some of that friction is important.
But just because it can be automated doesn't mean it should be automated.
And that's something we need to figure out as we navigate a gen AI world.
I once heard that AI in general should be the best thing that
happens for humans, not the worst thing that happens to humans.
And so I think it's important to just be pragmatic and appropriate when we apply automation and artificial intelligence to our process.
The next thing to do is to lead with empathy. It can take months or years to bring in tools that will help reduce this friction or things like toil. And in our case, it took a very long time to bring in this automated testing tool, which at the crest of its mature implementation became highly exploitable using generative AI.
And thirdly, you want to be able to bring talent acquisition with you.
It's better to work together to address concerns than to attack a process, or the people group who created that process, such as a department.
You really want to bring them along for the ride and say, Hey, look, I
think we might have an exploit vector.
I want to suss this out with you.
Can you help me understand where your context is?
And I'm going to show you where my research is and we're going to work
together to create a better process.
And lastly, seek win-win outcomes.
Ultimately, you're looking for an effective process. The person that you're going to hire, especially if you're dealing with the talent acquisition process, is going to spend about 1,800 hours a year with you. That's the average amount of time a U.S. candidate will spend in office or working with somebody, hybrid, remote, whatever, with your peers.
And so if you're building a team, you want to make sure you protect your team.
That's a lot of time that they're going to spend with them.
You want to make sure that they're the right candidate for the job.
So if you do have any questions, or you want to look at some of the research that we've done, feel free to reach out to me. You can reach out to me by email at DanielMPrese@gmail.com, or you can reach out to me on LinkedIn at in/DanielMPrese.
Thank you for taking the time today to sit through this conversation and learn some
more and hopefully you had a great time.
Thank you again and have a great day.