Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, my fellow tech enthusiasts.
I am Triffin Reddy, and by profession, I am a software development manager.
Today, I'm going to talk about green code, a possible process to reduce your
carbon footprint in the dev and ops arena.
So what is green code and how can it be integrated into operations to
reduce the overall carbon intensity?
Just as a code red signals a security issue, should a code green signify
the need to prioritize carbon emissions and sustainable software practices?
This picture shows the carbon intensity of electricity consumption around the world.
So what is carbon intensity?
Carbon intensity is a metric that measures the amount of carbon dioxide and other
greenhouse gases emitted per unit of activity.
The map is a carbon intensity map of the power grids across the world.
Obviously, the greener a region is, the cleaner the electricity generated
there, and the redder it is, the more of that energy is derived from fossil fuels.
No data was available for the regions shown in gray.
Real-time carbon intensity data can be obtained from Electricity Maps,
and also from other providers such as WattTime.
If you are the curious type, I would strongly recommend working on a personal
pet project to check and validate the carbon emissions across regions and times
and to verify the carbon intensity, especially if you have access to multi-cloud
data and run processes across the globe.
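As a sketch of such a pet project, here is how one might query Electricity Maps from Python. The v3 endpoint path, the auth-token header, and the zone code are assumptions drawn from the provider's public docs, so double-check them against the current API reference.

```python
# Sketch: fetching live grid carbon intensity from the Electricity Maps API.
# Endpoint, header name, and zone code are assumptions; verify against the docs.
import requests

API = "https://api.electricitymap.org/v3/carbon-intensity/latest"

def carbon_intensity(zone: str, token: str) -> float:
    # Returns the latest carbon intensity in gCO2eq/kWh for the given zone.
    resp = requests.get(
        API,
        params={"zone": zone},          # e.g. "US-CAL-CISO" (assumed zone code)
        headers={"auth-token": token},  # assumed auth header; see provider docs
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["carbonIntensity"]

if __name__ == "__main__":
    print(carbon_intensity("US-CAL-CISO", "<your-token>"))
```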
So what are we trying to accomplish in this presentation?
As IT personnel, we are often oblivious to our contribution to greenhouse
gases via the programs we write and processes and methodologies we follow.
So it is my intent to try and highlight some practices to reduce and minimize our
carbon footprint via sustainable software development and operational practices,
treating sustainability like any other commonly known architectural principle,
such as performance, scalability, security, cost, and availability.
We've seen other industries, such as the airline, that disclose their carbon
emissions, and many of them offer carbon offset options, where you can
contribute to environmental projects to compensate for your flight's emissions.
However, we have not seen a similar, widely adopted initiative in the software
development industry, especially from IT management and others in the IT workspace.
In this presentation, we will delve into some important concepts and ways
to control and reduce carbon emissions by bringing changes to our day-to-day
software development habits.
All of what we've talked about so far makes sense, but the real big question is,
how can you and I, as software developers, DevOps personnel, and solution
architects, be responsible and curb carbon emissions?
What tools, techniques, and methodologies do we have at our disposal to reduce
our carbon footprint and measure the carbon intensity in the software?
Thankfully, several organizations have built and are continually enhancing
tools and frameworks to measure software carbon emissions, a list of which is
provided at the bottom of this slide.
In this presentation, we will talk briefly about Green Software Foundation or GSF.
GSF is an organization that came up with a way to measure the carbon intensity
of your software via Software Carbon Intensity, or SCI, a specification which,
by the way, is also an ISO standard.
You can find more information at sci.greensoftware.foundation.
Software carbon intensity is the amount of carbon emitted per functional unit.
A functional unit is any unit of work; for example, a batch job, or it
could also be invoking an API.
So how is SCI calculated?
It is calculated as the carbon per functional unit: SCI = ((E × L) + M) per R,
where E represents the energy consumed by the software system, L is the
location-based marginal carbon intensity, M denotes the embodied emissions
of the hardware needed to operate the software system, and R stands for the
functional unit, such as calling an API.
This comprehensive approach underscores the importance of considering both
the direct energy usage and the indirect emissions from hardware when
assessing the overall carbon intensity of software programs.
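To make the formula concrete, here is a minimal sketch of the SCI arithmetic in Python; all the numbers are invented for a hypothetical batch workload, not real measurements.

```python
# Sketch: SCI = ((E * L) + M) per R, per the GSF specification.
def sci(e_kwh: float, l_g_per_kwh: float, m_g: float, r_units: int) -> float:
    """Grams of CO2-equivalent per functional unit."""
    return (e_kwh * l_g_per_kwh + m_g) / r_units

# Hypothetical run: 1.2 kWh of energy (E) on a 400 gCO2eq/kWh grid (L),
# 50 g of amortized embodied hardware emissions (M), 10,000 API calls (R).
print(f"{sci(1.2, 400.0, 50.0, 10_000):.4f} gCO2eq per API call")
```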
Some of the steering members of GSF include Microsoft,
Accenture, Intel, and UBS.
The slide on the right shows the carbon footprint based on
regions and cloud providers.
This information could be used to calculate your SCI
score, which is important.
By the way, on a side note, at the GSF USIN chapter, I am a member
of the leadership group and the organizer for that meetup group.
Is the fastest programming language the best and the most energy efficient,
or is that a misconception? Continuing from some of the architectural principles
I mentioned in my previous slide, what I have here is a research paper
that compares energy, time, and memory consumption between the various
programming languages we use, running the exact same algorithm on the
same hardware configuration, by the way.
To measure the energy consumption, they used Intel's RAPL tool, which
is the Running Average Power Limit.
RAPL is capable of providing accurate energy estimates at
a very fine-grained level.
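As a rough illustration of the kind of measurement RAPL enables, here is a sketch that reads the Linux powercap counters around a workload. It assumes an Intel machine exposing /sys/class/powercap/intel-rapl:0 with readable counters, and it is not the harness the researchers used.

```python
# Sketch: measuring energy for a code block via Linux's powercap
# interface to Intel RAPL. Counters are in microjoules and wrap
# around at max_energy_range_uj.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # CPU package 0 (assumed present)

def read_uj() -> int:
    # Cumulative energy counter in microjoules.
    return int((RAPL / "energy_uj").read_text())

def workload() -> None:
    sum(i * i for i in range(10_000_000))  # stand-in for a benchmark

before = read_uj()
workload()
after = read_uj()
wrap = int((RAPL / "max_energy_range_uj").read_text())
joules = ((after - before) % wrap) / 1e6  # modulo handles a single counter wrap
print(f"Package energy: {joules:.3f} J")
```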
While there were several other algorithmic benchmarks that were performed across
various programming languages, for the sake of this presentation, I narrowed
it down just to two to prove the point.
The benchmarks I chose here are the regex-redux and the binary-trees
algorithms. The figures below denote the energy and memory consumed by each
algorithm in the different languages, some of which are
compiled, some interpreted, and some VM-based.
While C consumed the least energy in the binary trees algorithm execution,
TypeScript consumed the least energy when the regex redux algorithm was executed.
Where are we going with this?
In the previous slide, we discussed the performance of programming languages.
But does improving performance alone solve our issue?
What if servers and their resources complete tasks rapidly?
Does that solve our CO2 emissions problem?
Unfortunately, no: it would leave CPUs and
resources underutilized, sitting in idle states.
The research article PowerNap: Eliminating Server Idle Power highlights that in
typical deployments, server utilization is below 30 percent, yet idle servers
consume 60 percent of their peak power draw.
Should we then consider containerizing applications
to maximize their performance and resource efficiency?
That seems like a viable option.
However, according to a CAST AI study, only a small fraction of provisioned
CPUs and about 20 percent of provisioned memory are actually utilized
in cloud computing.
So it's back to the drawing board then, huh?
We might consider implementing some techniques derived from the
research in the previous slide.
Let's take a look.
Table 1 lists the various benchmarks performed.
The table beside it presents the data and their rankings based on energy,
time, and memory consumption, while the table on the right offers a visual
representation of different combinations.
Based on the optimal sets for different combinations of objectives shown on
the right, we can select our languages.
This approach may help utilize resources to their fullest potential
at certain times, but not consistently.
Thus, the question that still remains: is the fastest the most energy efficient?
And the answer is no.
The fastest language is not always the most energy efficient.
Server utilization varies significantly based on workload and the level of
virtualization, typically ranging from 5 to 30 percent in enterprise
environments, while cloud service providers often achieve higher utilization.
You might wonder why we're delving into these details in a DevOps presentation.
Remember, our focus is on green code within DevOps, aiming to identify
coding practices that will lead to more sustainable and
efficient software solutions.
Understanding these utilization metrics helps us see the impact of our code on
resource usage and efficiency, guiding us toward achieving green code outcomes.
So what can you do to minimize carbon footprint?
To improve server utilization, one strategy is resource density,
which refers to the amount of computing power and storage capacity packed into a
given physical space within a data center.
High resource density means more efficient use of space, power, and cooling,
leading to better performance and, for the C-suite folks, lower operational costs.
One possible way to alleviate these problems is by going serverless.
Serverless functions can contribute to high resource density and efficient
resource utilization through dynamic scaling: they scale
in response to demand, ensuring that resources are
utilized efficiently without needing dedicated servers running at all times.
Serverless functions execute in response to specific events
and terminate once the task is completed, minimizing idle resource
usage, thereby reducing idle time.
Serverless functions often run in micro VMs or containers or custom runtime
environments, which are lightweight and can be densely packed on physical
servers, optimizing space and power usage.
Cloud providers manage the underlying infrastructure, ensuring optimal resource
distribution and reducing wasted capacity.
By maximizing resource density and minimizing idle time, serverless functions
help reduce the overall energy consumption and carbon footprint of data centers,
thereby making your infrastructure more efficient and sustainable.
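To make the event-driven model concrete, here is a minimal sketch of a handler in the AWS Lambda Python style; the event shape and the work it performs are hypothetical.

```python
# Sketch: an event-driven serverless function. The platform invokes
# handler() only when an event arrives and can scale instances up or
# down with demand; nothing sits idle between invocations.
import json

def handler(event, context):
    records = event.get("records", [])  # hypothetical event field
    processed = [r.upper() for r in records]
    # The environment is frozen or reclaimed after this returns.
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}
```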
While serverless functions scale elastically based on demand, thereby reducing
idle time to an extent, they still suffer from the concept of cold starts,
where the environment needs to be initialized.
Also, according to research, many Azure Functions are infrequently invoked,
yet they stay running constantly, leading to high costs and inefficiency.
Additionally, serverless solutions often suffer from vendor lock-in
and inadequate support for local development and CI/CD practices.
Next, WASM and WASI.
But before that, Solomon Hykes, the founder of Docker, once said, and I quote,
if WASM and WASI existed in 2008, we wouldn't have needed to create Docker.
That's how important it is.
WebAssembly on the server is the future of computing.
A standardized system interface was the missing link.
Let's hope WASI is up to the task.
End quote.
WebAssembly, or WASM, is a portable binary code format that allows developers to
run code written in multiple languages on the web at near native speed.
It's designed to be efficient and fast, making it ideal for
performance intensive applications.
Look at the startup speed of a Wasm application compared to
an AWS serverless function:
52 milliseconds versus 250 milliseconds.
Now we've seen some great advantages of serverless, but we also saw
that serverless functions require a cold start from time to time.
This is where Wasm comes to the rescue.
Wasm does not have a concept of a cold start.
The WebAssembly System Interface, or WASI, is a modular system
interface for WebAssembly.
It provides a standardized set of APIs that WebAssembly modules can
use to access system resources like the file system, network, and time.
This means you can run WebAssembly code outside of a browser on
any platform that supports WASI.
In a sense, WASM is about running high performance code on the web, and
WASI is about making the code portable and able to interact with system
resources across different environments.
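As a small illustration of running Wasm outside the browser, here is a sketch using the wasmtime Python bindings (assuming pip install wasmtime); the tiny module is defined inline in WAT text.

```python
# Sketch: compiling and running a WebAssembly module outside the
# browser with the wasmtime Python bindings.
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the inline WAT module
instance = Instance(store, module, [])  # no imports needed for this module
add = instance.exports(store)["add"]
print(add(store, 20, 22))               # -> 42, executed in the Wasm sandbox
```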
Wasm also provides a stringent security environment by running
code in a sandboxed environment.
Wasm implementations can thrive on edge devices such as smart thermostats,
security cameras, wearable health monitors, and smartphones and tablets that
run apps and process data locally.
So what's next?
Coming up now is GraalVM, which has several similarities to Wasm,
except that GraalVM uses the JVM, whereas Wasm uses a Wasm runtime.
GraalVM is a high-performance runtime supporting multiple languages and
execution modes, designed to enhance application performance through
advanced optimization techniques. By compiling Java applications into native
executables using an ahead-of-time compilation strategy,
GraalVM ensures instant startup and reduced compute resource usage.
It minimizes CPU and memory consumption leading to lower power consumption
and reduced carbon emissions.
Additionally, GraalVM's native images eliminate the need for JIT
(just-in-time) compilation, allowing applications to run efficiently from
the start without the warm-up overhead.
Moving on, we have looked at some possible application development
and programming strategies.
Here in this slide, we will discuss some of the practices
from an operations standpoint.
DevOps can play a significant role in reducing carbon emissions by promoting
efficient and sustainable practices in software development and operations.
Here are some ways DevOps can help.
While there are many helpful strategies, constant monitoring of resources and
tweaking the CI/CD pipelines so that builds and deployments are scheduled at
times when the cloud providers consume energy from greener sources would
definitely help; a small sketch of such a carbon-aware gate follows at the
end of this list.
Using tools to track and report carbon emissions through automated solutions.
Efficient resource utilization via automated scaling and load
balancing, thereby preventing overprovisioning and waste.
Also, cloud selection plays an important role.
Maybe consider shifting to a cloud provider that uses renewable energy
and has energy-efficient data centers.
We can also incorporate green software practices, following principles that
minimize carbon emissions and energy use; for example, checking the SCI score
during build and deploy and suggesting changes to reduce that score.
It could also be collaboration and innovation: collaborating
with suppliers and customers on joint initiatives and continuously
seeking innovative ways to improve energy efficiency and
reduce environmental impact.
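Here is the carbon-aware gate sketched as promised: a check a CI/CD pipeline could run before deploying. The threshold, polling interval, and the stubbed get_intensity() provider call are illustrative assumptions, not a real integration.

```python
# Sketch: defer a deploy until grid carbon intensity drops below a threshold.
import time

THRESHOLD_G_PER_KWH = 200.0   # illustrative "green enough" cutoff
POLL_SECONDS = 15 * 60        # illustrative polling interval

def get_intensity() -> float:
    # Placeholder: swap in a real provider query, e.g. the Electricity
    # Maps request sketched earlier in this talk.
    return 180.0

def wait_for_green_window(deadline_s: float) -> bool:
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if get_intensity() < THRESHOLD_G_PER_KWH:
            return True       # grid is green enough: proceed
        time.sleep(POLL_SECONDS)
    return False              # deadline hit: run anyway or reschedule

if wait_for_green_window(deadline_s=6 * 3600):
    print("proceed with build and deploy")
```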
By integrating these practices, DevOps can help organizations reduce
their carbon footprint and contribute to a more sustainable future.
In conclusion, I would like to say that while no single language, approach, or
operational style will render a completely carbon-free footprint, a
combination of what we've discussed in this presentation will surely reduce
our carbon emissions and our footprint.
With that, I conclude and thanks for your time.