Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello. Hi, I'm Alex Harris and I'm the founder of Adadot.com. We are a developer tool: we empower high performing developer teams to reach their full potential with analytics, insights and recommendations.
I'm going to take you through a bit of a presentation today on how you can use data to help developers, and also become a better developer yourself. And I'm going to start with an example from a completely different field. Developers actually have a lot in common with athletes. Now, if you are like me and you grew up in the nineties and you've seen enough movies from the US, you'll be thinking that the jocks and the nerds are fundamentally different. They sit at different tables and they don't really talk to each other. But in reality, they have much more in common than you think. And we're going to start with the athletes, in fact.
Now, if we talk about athletic performance, most people think that it starts poor when somebody is a beginner and has just started a sport, that over time they become intermediate, and that after that they become advanced in what they do. So essentially it's a curve, and a pretty straightforward curve at that.
However, that's a very simplistic view, and in reality performance looks a little more like this: there is actually a huge amount of ups and downs. And this isn't just for the average person doing sports; even professionals like Serena Williams, a top athlete in her field, experience something very similar. So you can see here there have been a lot of ups and downs. You can see where she hit her peak, but you can see that after that there have been some other wins as well, and some big drops too.
A similar thing can happen if you look at performance for developers, and specifically if you look at the way they are committing code, but also how much of that code actually makes it into production error free. And this is really interesting, because if you ask most developers, they will assume they have a very steady flow, but in reality they have massive ups and downs. So you could ask: is it even the same person, practically, when we're looking at, I think that's September here, versus the bottom down there? Between those drops and those peaks you might be talking about ten times the output, a ten x developer, when you're hitting those tops.
So why does this matter? Well, if you've seen the movie Moneyball, you know that data is really king in understanding performance, but also in driving said performance. And this is something that has revolutionized sports. So why not the development field, right? Let's go back to how data is being used in sports. Just as a quick example: over 50% of coaches use performance analysis tools to provide video clips for other coaches and their support staff. About 68% of them collate quantitative game data, and just under half use some form of live coding analysis during games. Roughly 40% also receive a written match report including game statistics. These are some quite large numbers. So if data, as I said, has revolutionized the athletic field, why not the development field? So why is this a problem, and an opportunity as well? Now, it's an important problem to solve, because there's about $60 billion of engineering productivity wasted in the US alone yearly. And whilst pretty much every other profit driving department, marketing, sales, has data that lives in tools like Google Analytics and Salesforce, engineering teams are flying blind. Now, in the same way that you wouldn't expect to go into a sales meeting and not know why you're hitting or not hitting your numbers, so it is for engineering teams when they're running a retro: what did we do well? What did we not do well? What made us faster? What made us slower? A lot of that data at the moment is based on perception and impressions, whilst we need it to be based on fact.
So why hasn't this problem been solved in the past? There are three reasons why this is very, very difficult to do. The first has to do with the breadth of factors that go into what an engineer does. Most people outside the field would think that engineering is all about coding, but actually it's way more multifaceted. A huge amount of importance goes into preparing, planning, scoping properly, and understanding the requirements, and a lot of that work is actually invisible most of the time. Especially when it comes to data capture, it's extremely difficult to try and understand: how did we collaborate, how did we scope, how well did that process work for us? Secondarily, you have the complexity of the engineering frameworks. 21 is roughly how many broad engineering frameworks, like agile, scrum, Kanban, et cetera, are out there. But in reality, every single team that has engineers will have a different flavor. Nobody implements those frameworks in their purest form; everybody will be working with something slightly different. The third thing is adoption. If you're making anything for developers, it's really important that developers love what you're making. It can't just be a monitoring tool or something like that. It's super important that it's embraced and that it helps the developers, the people in the trenches, the individual contributors, not just management.
So how do we go about solving for these challenges? First and foremost, for the challenge of breadth, we take signals by integrating with a broader set of tools. So we look at code versioning (GitLab, GitHub), we look at task management (Jira), and we also look at how people are managing their calendars, in Google Workspace for example; Outlook is the other tool. And we also look at how people collaborate over Slack. Now, why do we take such a broad view? Primarily because a lot of the things that cause bad code, or that cause trouble for engineers, don't necessarily have to do with coding. It might be, as I mentioned earlier, scoping; it might be the fact that they don't have enough focus time to write clean code. And these things are very, very important. Now, what do we do with this? First and foremost, we create insights. These can be anything from understanding where the blockages are between reviewing code and committing code, to understanding how well the scoping work that I mentioned a couple of times is going.
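As a rough illustration of that integration step, here is a minimal Python sketch of how signals from those tools could be normalized into a single per-developer timeline. The `Signal` schema and the stubbed `collect_signals` function are hypothetical; a real pipeline would page through the GitHub, Jira, Google Workspace, and Slack APIs.

```python
# Hypothetical sketch: merge signals from several tools into one timeline.
# The schema and the stubbed fetch are illustrative, not Adadot's actual code.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Signal:
    source: str      # "github", "jira", "calendar", "slack"
    kind: str        # "commit", "ticket_moved", "focus_block", "message"
    developer: str
    at: datetime

def collect_signals() -> list[Signal]:
    # A real integration would call each vendor's API; stubbed here to show
    # the shape of the merged stream.
    return [
        Signal("github", "commit", "alex", datetime(2024, 5, 1, 9, 30)),
        Signal("jira", "ticket_moved", "alex", datetime(2024, 5, 1, 10, 0)),
        Signal("calendar", "focus_block", "alex", datetime(2024, 5, 1, 11, 0)),
        Signal("slack", "message", "alex", datetime(2024, 5, 1, 12, 15)),
    ]

# One ordered timeline per developer lets later stages reason about scoping,
# focus time, and review flow together, not just commits.
timeline = sorted(collect_signals(), key=lambda s: s.at)
for s in timeline:
    print(f"{s.at:%H:%M} {s.source:>8} {s.kind}")
```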
Once we have that data, we apply a couple of different processes to it. First and foremost, we run seven different models per engineer, per developer. The purpose of running these models is to understand what drives positive performance, what drives positive momentum. So think about this almost like a coach looking at the stats off the back of a game and understanding what we did right. After that, we take the three best performing models and average them out, and this process happens daily. What that means is that for each developer you have a personalized picture, and also a personalized recommendation off the back of it, as to what drives positive performance and how to improve said performance.
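A minimal sketch of that "seven models, keep the best three, average daily" step might look like the following. The model families, the signals, and the target are invented for illustration; the talk does not specify which models are actually run.

```python
# Hypothetical sketch of the "seven models, keep the best three, average"
# step described above. Model families, signals, and target are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Stand-in for one developer's last 90 days of signals:
# [focus_hours, review_pickup_hours, meeting_hours] -> e.g. merged PRs.
X = rng.normal(size=(90, 3))
y = 2.0 * X[:, 0] - 0.7 * X[:, 2] + rng.normal(scale=0.2, size=90)

candidates = [
    Ridge(), Lasso(alpha=0.01), ElasticNet(alpha=0.01),
    RandomForestRegressor(n_estimators=50, random_state=0),
    GradientBoostingRegressor(random_state=0),
    KNeighborsRegressor(),
    DecisionTreeRegressor(max_depth=4, random_state=0),
]

# Score all seven, keep the three best, then average their predictions
# into one personalized daily estimate.
scores = [cross_val_score(m, X, y, cv=5).mean() for m in candidates]
top3 = [candidates[i] for i in np.argsort(scores)[-3:]]

today = rng.normal(size=(1, 3))
estimate = np.mean([m.fit(X, y).predict(today)[0] for m in top3])
print(f"Blended estimate for today: {estimate:.2f}")
```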
Then finally, once we have those recommendations, we allow individuals and teams to create their own missions and actually hit the goals they've set for themselves and for their teams. Off the back of that, the teams and engineers can win rewards, which just makes the whole process a little bit more fun. So what does that mean for those developers?
The individual contributors benefit from being able, for example, to justify their promotion path, or to quantify some of the data they might need in order to have those conversations. This is not to say that this can be used as a monitoring tool; it's actually the exact opposite. We give power to the developers themselves to go into those meetings with that data, but their boss can never look downstream and say, you know, "Alex hasn't been doing any work." That's not how it works. But it does give the power to the developer to have these conversations and drive them. Other things: if I'm a junior developer, am I building the right habits, or am I committing massive amounts of code? How quickly am I ramping up, for example, after I've been newly hired? And then, how well am I managing my time? Am I setting out focus time and keeping that focus time, or am I struggling with it and getting distracted? And then finally, as far as managers go: are we investing the right amount of effort in the right things? It's a very common thread of questioning.
And then secondarily, what is impacting our velocity? Thirdly, what are the cost and resource allocation across the roadmap and technical debt? These are some of the questions you might be asking. All right, so after all this information, let's see a demo.
Hello, I'm Alex Harris, and this is an overview of the Adadot platform. We're going to start with this dashboard view. What you might notice here is that we have three pillars of data: we have work, we have collaboration, and we have wellbeing. Now, these numbers might not look familiar to anybody seeing this for the first time, but there's a good reason why we have them. First and foremost, as I'm going to show you in a bit, there's a lot of detail that hides behind each of these data sets. And oftentimes what you want to be able to see is, directionally, how much better we are one day after the other, but also the interplay between those three factors. Because, for example, if you're working 24/7, obviously wellbeing is not going to be as good, and you're not going to have enough time to collaborate.
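As a toy example of that directional view, the dashboard idea boils down to something like the following; the pillar scores and their scale are invented, since the real pillars are computed from the integrated signals.

```python
# Toy illustration of the three-pillar, day-over-day view described above.
yesterday = {"work": 62, "collaboration": 71, "wellbeing": 58}
today = {"work": 66, "collaboration": 69, "wellbeing": 61}

for pillar, score in today.items():
    delta = score - yesterday[pillar]
    trend = "up" if delta > 0 else "down" if delta < 0 else "flat"
    print(f"{pillar:>13}: {score} ({trend} {abs(delta)} vs yesterday)")
```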
So let's take work as an example, and let me take you through some of the detail that exists behind that number. The data here comes from GitLab and GitHub, as well as Jira and, in the near future, Notion. As far as the sprint breakdown here, you can see information around, for example: have we re-scoped in the middle of the sprint? How are we spending our time and effort? Are we investing in building new features? Are we fixing bugs? How are we actually investing in our engineering? As far as tickets assigned, you can also see whether they're equally distributed within the team, or whether some people carry an uneven burden of tickets, and which tickets are stuck. You can also see all of these things here: there's a lot of data on commits, and you have data on deployments, which you can actually configure, so there's a level of customization that the platform can afford you. The same goes for pipelines, making sure you can see things like execution time and overall duration. And these charts are created automatically to fit exactly your process. Now you can see where bottlenecks exist: in progress, under review, approval to merge. And you can also look at the health of your review process, essentially.
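A minimal sketch of how such per-stage bottlenecks can be measured from timestamps follows; the stage names mirror the ones above, and the sample data is made up.

```python
# Hypothetical sketch: measure where work waits, per stage, from timestamps.
from datetime import datetime
from statistics import mean

STAGES = ["in_progress", "under_review", "approval_to_merge"]

items = [
    {"in_progress": datetime(2024, 5, 1, 9),
     "under_review": datetime(2024, 5, 2, 14),
     "approval_to_merge": datetime(2024, 5, 3, 10),
     "merged": datetime(2024, 5, 3, 16)},
    {"in_progress": datetime(2024, 5, 1, 11),
     "under_review": datetime(2024, 5, 1, 17),
     "approval_to_merge": datetime(2024, 5, 4, 9),
     "merged": datetime(2024, 5, 4, 12)},
]

# Average hours spent in each stage; the largest is the bottleneck.
for stage, nxt in zip(STAGES, STAGES[1:] + ["merged"]):
    hours = mean((i[nxt] - i[stage]).total_seconds() / 3600 for i in items)
    print(f"{stage:>18}: {hours:5.1f} h on average")
```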
The important thing to note is that this version is for the individual; there's also one for the team, so you can actually click here and look at that data at the team level. The individual cannot be monitored and micromanaged by the manager. What that means from an information architecture point of view is that the individual has access to their own data, while the manager has access to the aggregate team data. Now, the other thing we do here, because obviously you can see there are a lot of numbers, is that there's a lot of little detail that sits under collaboration. For example: are we communicating publicly? Are we sharing our knowledge in public channels? In wellbeing, you can look at things like focus time. Are we setting that focus time in the calendar? Are we keeping it? Is our attention really fragmented? Do we have a lot of context switching? So there's a lot of data that sits here.
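For a concrete flavor of one of those wellbeing signals, a fragmentation measure over calendar focus blocks could be as simple as this sketch; the block lengths are invented, and real data would come from the calendar integration.

```python
# Toy sketch of focus-time fragmentation from calendar blocks (minutes of
# uninterrupted focus).
focus_blocks = [25, 90, 15, 45, 20]

total = sum(focus_blocks)
deep = sum(b for b in focus_blocks if b >= 60)
print(f"Focus time: {total} min, {deep} min of it in deep (>=60 min) blocks")
print(f"Fragmented share: {1 - deep / total:.0%}")
```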
But going back to the original insights view: how do we know what's making the most impact? And this is where the recommendations come into play.
So here, the system will create recommendations per developer and per team: what will help them get faster and get better, with a bit of a justification as to why this is important. It also helps them set their missions, which are a type of goal you can set on the platform. There are three different levels of difficulty, and once you set one, they're super easy to track. You also have another, broader collection of those goals, those missions. So for example, if as a team you always want to make sure that you're merging with approval, you can put that here and you'll be able to set it. The same goes for developers at the individual level: they can set their own goals and basically drive their own career and their own development path. When these missions are completed, you can win badges and play that little game in that way.
Now, finally, there's a really interesting new feature that we are releasing. It's called lead time, and it helps teams understand the best ways to become faster, to essentially decrease their lead time. And this might not have to do with purely coding aspects; it might have to do with things like, for example: are we spending too much time in meetings? Is our review time, or review pickup time, helping us or holding us back? And if you click generate model, this creates a completely personalized model for each team. This is unique in the market right now, and it's run by a causal AI model which was first published in 2022, so a very, very recent scientific development in this field.
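The talk doesn't name that model, so the following is only a toy stand-in for the kind of question lead-time modelling asks: holding review pickup time fixed, what does an extra meeting hour cost? It uses plain regression adjustment on synthetic data, not the causal model referenced above.

```python
# Toy stand-in for lead-time modelling. Synthetic data; regression
# adjustment for a confounder, not the actual causal AI model.
import numpy as np

rng = np.random.default_rng(1)
n = 200

meeting_hours = rng.uniform(0, 4, n)
review_pickup = rng.uniform(0, 24, n) + 2 * meeting_hours  # confounded
lead_time = 10 + 3 * meeting_hours + 0.5 * review_pickup + rng.normal(0, 1, n)

# Including the confounder lets the meeting coefficient approximate the
# causal effect under these synthetic assumptions.
X = np.column_stack([np.ones(n), meeting_hours, review_pickup])
coef, *_ = np.linalg.lstsq(X, lead_time, rcond=None)
print(f"Extra lead time per additional meeting hour: ~{coef[1]:.2f} h")
```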
And with this, I'm going to stop. If you have any questions,
feel free to message us.