Toward Carbon-Driven Development
It’s no secret that the Internet uses a huge amount of electricity. By 2026, data centers alone are expected to consume over 1,000 TWh (1,000 billion kWh) each year. [1]
In Ireland, new data center projects have been put on hold to protect the national power grid — a clear sign of how serious the issue has become. [2]
To respond, hosting providers are working hard to improve efficiency and switch to low-carbon electricity sources. But despite these efforts, the industry as a whole is still far from sustainable — and things are only getting worse.
No one is bothering developers (yet)
It’s easy to blame servers and data centers. But, as developers, we know those machines aren’t running for nothing.
The pressure we put on infrastructure is a direct result of the software we build. Sooner or later, the world will expect us to design more energy-efficient applications.
Carbon footprint reports aren’t enough
Most major cloud providers now offer monthly carbon emissions reports to help users understand their environmental impact. These reports are based on solid methodologies, but they’re not detailed or frequent enough to help developers actually optimize their service.
More insidiously, these reports lead their readers to a single conclusion: applications should be migrated to data centers with lower emissions. But this doesn’t solve the problem of power shortages and may even amplify it. The advice also overlooks critical factors like data sovereignty and network latency, which often require services to be deployed close to users.
In reality, we often have no choice but to reduce how much energy our applications use in the first place. That’s why watts — the unit of power — should become the key metric for identifying inefficient areas and improving overall system performance. But how do we access this kind of data?
The primitive watt
As far as we know, measuring raw energy is not a common practice among cloud teams. It’s far more common to use cost as an efficiency indicator, so we thought that would be a good starting point.
Our first idea was to convert the cost into an estimate of energy consumption. This approach seems elegant but has several drawbacks. First of all, estimating watts based on bills would mean knowing which margins are applied to which managed service; this data is obviously confidential.
On top of that, various discount mechanisms can further complicate the relationship between cost and actual energy consumption. Committed Use Discounts or Spot Instances may reduce your bill, but the underlying servers still consume the same amount of electricity.
Lastly, a server is billed from the moment it is powered on, whether it is being used or not. Yet its actual electricity consumption can vary by a factor of 3 to 4 depending on the workload.
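To make the gap concrete, here is a minimal sketch with assumed figures (50 W idle, 200 W at full load, a flat hourly price) of the kind of linear power model used by estimation tools such as Cloud Carbon Footprint: the bill stays the same at any load, while the estimated draw does not.

```python
# Illustrative figures only: a flat hourly price versus a linear power model
# (idle draw plus a utilization-proportional share of the extra draw at full load).
IDLE_WATTS = 50.0    # assumed draw of an idle server
MAX_WATTS = 200.0    # assumed draw at full load (4x idle, matching the range above)
HOURLY_PRICE = 0.10  # assumed on-demand price in USD, identical at any load

def estimated_watts(cpu_utilization: float) -> float:
    """Interpolate power draw between idle and full load (0.0 <= utilization <= 1.0)."""
    return IDLE_WATTS + (MAX_WATTS - IDLE_WATTS) * cpu_utilization

for utilization in (0.0, 0.5, 1.0):
    print(f"load {utilization:>4.0%}: ${HOURLY_PRICE:.2f}/h on the bill, "
          f"~{estimated_watts(utilization):.0f} W drawn")
```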
A refreshing approach
One of the first benefits of this energy modeling approach is that it gives us a unified view of different types of cloud resources. Whether it’s a load balancer or an S3 bucket, all signals are condensed into a single metric — helping us understand the system as a whole.
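As an illustration only (this is not the exporter mentioned later, and the metric name, labels, and values below are invented), here is how a single watts gauge covering heterogeneous resources could be exposed with the prometheus_client library:

```python
# Hypothetical sketch: one gauge, many resource types, all in watts.
import time
from prometheus_client import Gauge, start_http_server

# Invented metric name and labels, chosen for illustration.
POWER = Gauge(
    "cloud_resource_power_watts",
    "Estimated instantaneous power draw of a cloud resource",
    ["resource_type", "resource_id"],
)

def collect_estimates():
    """Placeholder values; in practice they would be derived from monitoring data."""
    POWER.labels("load_balancer", "lb-1").set(18.0)
    POWER.labels("s3_bucket", "assets").set(2.5)
    POWER.labels("vm", "api-server-1").set(112.0)

if __name__ == "__main__":
    start_http_server(9400)  # expose /metrics for Prometheus to scrape
    while True:
        collect_estimates()
        time.sleep(30)
```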
Tracking power usage is like monitoring a human body’s temperature: even small changes can signal a problem. This makes it easier for operations teams to detect and respond to issues quickly.
Power fluctuations also reveal how well our infrastructure scales. By correlating energy use with business metrics (like revenue or transactions), teams across the company can instantly see how efficiently their services are running.
“We’ve tripled our energy use per active user since this morning, is that normal?” — Bob from Accounting
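Answering Bob is just a matter of dividing one curve by another. A small sketch with made-up sample values, integrating power samples into watt-hours and relating them to a business metric:

```python
# Made-up sample data: three power readings taken 20 minutes apart.
def watt_hours(power_samples_w: list[float], interval_s: float) -> float:
    """Integrate evenly spaced power samples (watts) into watt-hours."""
    return sum(power_samples_w) * interval_s / 3600.0

power_samples = [120.0, 180.0, 240.0]  # watts, one sample every 20 minutes
active_users = 1500                    # business metric over the same hour

energy_wh = watt_hours(power_samples, interval_s=1200)
print(f"{energy_wh / active_users:.3f} Wh per active user over the last hour")
```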
When developers pay attention to raw energy usage, they uncover new areas for optimization that cost-based models often miss. This is why energy monitoring complements existing practices like FinOps and aligns perfectly with the core principles of DevOps.
Hardware manufacturing
A lot of carbon is emitted during the manufacturing of hardware: these are called embodied emissions. On average, about 80% of a data center’s carbon footprint comes from the electricity it uses [3], but this share varies greatly depending on the cloud service. Due to their large scale, some managed products like S3 can be very energy efficient and may therefore carry a much larger share of embodied emissions.
When it comes to servers, underused units have a higher proportion of embodied emissions. If a machine just sits idle for five years, the embodied emissions could make up most of its impact.
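A back-of-the-envelope calculation makes the point. With assumed figures (1,500 kgCO2e of embodied emissions, a 50 W idle draw, and a grid intensity of 0.3 kgCO2e per kWh), manufacturing dominates the footprint of a server that does nothing:

```python
# Assumed figures for illustration only.
EMBODIED_KGCO2E = 1500.0       # assumed manufacturing footprint of one server
LIFETIME_HOURS = 5 * 365 * 24  # five-year lifespan
IDLE_WATTS = 50.0              # assumed idle power draw
GRID_INTENSITY = 0.3           # assumed kgCO2e per kWh of electricity

# Spread manufacturing emissions over the server's lifetime, then compare them
# with the emissions from the electricity an idle machine draws each hour.
embodied_per_hour = EMBODIED_KGCO2E / LIFETIME_HOURS
operational_per_hour = (IDLE_WATTS / 1000.0) * GRID_INTENSITY

share = embodied_per_hour / (embodied_per_hour + operational_per_hour)
print(f"embodied share for an idle server: {share:.0%}")  # roughly 70% here
```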
Taking embodied emissions into account gives us a better sense of how our apps run under the hood and where their carbon footprint really comes from.
Toward Carbon-Driven Development (CDD)
We believe that every production application should monitor its impact. But there’s still a long way to go. Current estimation models only cover a small share of managed cloud services. Most tools aren’t built for large-scale production environments. And a standardized method hasn’t yet emerged.
Inspired by the work of organizations like Boavizta and Cloud Carbon Footprint, we’ve started building an OpenMetrics exporter that estimates the real-time power consumption of a cloud project the hard way: using technical monitoring data. The project is available on GitHub; we welcome feedback, reviews, and contributions from the community.

Hey, I’m Yann!
After many years working in tech as a cloud architect, I now focus on helping fellow developers build software that’s less energy-intensive. If you’re interested in these topics too, I’d love to connect. Reach out at yann@dangofish.com or on LinkedIn.