Player FM - Internet Radio Done Right
Content provided by Asim Hussain and the Green Software Foundation. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Asim Hussain and the Green Software Foundation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://he.player.fm/legal.
Remembering Abhishek Gupta: How does AI and ML Impact Climate Change?
In this episode, we honor the memory of Abhishek Gupta, who was an instrumental figure in the Green Software Foundation and a Co-Chair of the Standards Working Group. Abhishek's work was pivotal in the development of the Software Carbon Intensity (SCI) Specification, now adopted globally. His tireless efforts shaped the future of green software, leaving an indelible mark on the industry. As we remember Abhishek, we reflect on his legacy of sustainability, leadership, and friendship, celebrating the remarkable impact he had on both his colleagues and the world. We are airing an old episode that featured Abhishek Gupta: Episode 5 of Environment Variables, in which host Chris Adams is joined by Will Buchanan of Azure ML (Microsoft); Abhishek Gupta, the chair of the Standards Working Group for the Green Software Foundation; and Lynn Kaack, assistant professor at the Hertie School in Berlin, to discuss how artificial intelligence and machine learning impact climate change. They discuss boundaries, the Jevons paradox, the EU AI Act, and inferencing, and supply us with a plethora of materials regarding ML, AI, and the climate!
In Memoriam: Abhishek Gupta:
Learn more about our guests:
- Chris Adams: LinkedIn / GitHub / Website
- Will Buchanan: LinkedIn
- Abhishek Gupta: LinkedIn
- Lynn Kaack: LinkedIn / Latest Paper
Find out more about the GSF:
Episode resources:
- ClimateAction.tech [3:44]
- Green Web Foundation [3:49]
- Green Software Foundation’s Standards and Innovation Working Group [4:14]
- Montreal AI Ethics Institute [4:43]
- Hertie School Berlin [5:50]
- Aligning Artificial Intelligence with Climate Change Mitigation [6:32]
- The IPCC [7:11]
- Paper: Green AI | Roy Schwartz, Emma Strubell, Jesse Dodge [8:37]
- Project: Pachama [9:33]
- Montreal Institute for Learning Algorithms [10:34]
- Project: This Climate Does Not Exist [10:48]
- Austrian Institute of Technology | Infrared [11:32]
- Jevons Paradox [20:19]
- The GHG Protocol [23:27]
- Legislation: The EU AI Act [25:08]
- Paper: Measuring the Carbon Intensity of AI in Cloud Instances | Will Buchanan et al. [30:08]
- ONNX Runtime [37:02]
- TinyML [37:09]
- GitHub: Dynamic Batch Inferencing - Taylor Prewitt & Ji Hoon Kang of UW
- GitHub: NVIDIA Triton server on AzureML & Model Analyzer
- Green Software Foundation Summit
If you enjoyed this episode then please either:
- Follow, rate, and review on Apple Podcasts
- Follow and rate on Spotify
- Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!
TRANSCRIPT BELOW:
Chris Skipper: This week on Environment Variables, we have some sad news.
We have to report the untimely passing of Abhishek Gupta. Abhishek was a key part of the Green Software Foundation. He was the co-chair of the Standards Working Group from the GSF's formation to his passing on September 30th of this year. I would like to read out an in memoriam that was posted by Asim Hussain, honoring the great man, his legacy of leadership, his collaboration, and the heart that he put into the Green Software Foundation.
When the Green Software Foundation was formed over three years ago, Abhishek offered to help. The Standards Working Group was looking for a co-chair, and he jumped in to help without hesitation. He led the Standards Working Group for over three years, with the full support of everyone involved. Leading a working group isn't about imposing your will on others; it's about finding common ground, nurturing discussions, and searching for the truth.
Abhishek excelled in all of those areas. He leaves a powerful legacy with the Software Carbon Intensity (SCI) specification. His tireless efforts over the years led to a consensus on the specification, which was later published to ISO in late 2023, and is now being adopted globally.
The impact of the SCI is truly global, with academics and enterprises worldwide adopting it. This widespread adoption is a testament to Abhishek's vision and dedication and his influence will be felt in every software product that bears an SCI score. His legacy is not just a memory, but a living testament to his work.
He has left an unforgettable mark on the community, and will be remembered for his contributions for years to come. Aho brother, I'll see you on the other side. Asim Hussain. To honor Abhishek, we're going to revisit an older episode of Environment Variables from the very start of the podcast, episode five, How Does AI and ML Impact Climate Change?
In this episode, host Chris Adams is joined by Abhishek Gupta, Lynn Kaack, and Will Buchanan to discuss these topics. So, without further ado, here's the episode.
Abhishek Gupta: We're not just doing all of this accounting to produce reports and to, you know, spill ink, but it's to concretely drive change in behavior. And this was coming from folks who are a part of the standards working group, including Will and myself, who are practitioners who are itching to get something that helps us change our behavior, change our team's behaviors when it comes to building greener software.
Asim Hussain: Hello and welcome to Environment Variables brought to you by the Green Software Foundation. In each episode we discuss the latest news and events surrounding green software. On our show you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.
I'm your host Asim Hussain.
Chris Adams: Hello there and welcome to the Environment Variables podcast, the podcast about green software. I'm Chris Adams, filling in for Asim Hussain, the regular host, while he's on paternity leave with a brand new baby. I met Asim on climateaction.tech, an online community for climate-aware techies. And I work for the Green Web Foundation, where we work towards a fossil-free internet by 2030, as well as working as the co-chair for the Green Software Foundation's policy group. Today, we're talking about climate change, AI, and green software. And I'm joined by Lynn, Will, and Abhishek.
Will Buchanan: Thanks for having me. My name is Will. I'm a product manager on the Azure Machine Learning team. I'm also a member of the Green Software Foundation's Standards and Innovation Working Group. Within Microsoft, I foster the Green AI community, which now has a few hundred members, and I'm also a climate activist that's focused on pragmatic solutions to complex environmental issues.
Recently, I shipped energy consumption metrics within Azure Machine Learning, and we are about to publish a paper titled Measuring Carbon Intensity of AI in Cloud Instances, which I think we'll touch on today.
Abhishek Gupta: Well, thanks for having me. I'm Abhishek Gupta. I'm the founder and principal researcher at the Montreal AI Ethics Institute. I also work as a senior responsible AI leader and expert at the Boston Consulting Group, BCG, and I serve as the chair for the Standards Working Group at the Green Software Foundation.
So I've got a few hats on there. Most of my work, as it relates to what we're going to talk about today, runs at the intersection of responsible AI and green software. In particular, what's of interest to me is looking at how the intersections of social responsibility and the environmental impacts of software systems, in particular AI systems, can be thought about when we're looking to make a positive impact on the world while using technology in a responsible fashion. I also, as a part of the Green Software Foundation, help, through the Standards Working Group, to come up with the Software Carbon Intensity specification, where we're trying to create an actionable way for developers and consumers of software systems to better assess and mitigate the environmental impacts of their work.
Chris Adams: Okay. And Lynn, last but not least, joining us from Berlin. Thank you very much for joining us.
Lynn Kaack: Yeah, thank you so much. I am an assistant professor at a policy school, a public policy school, called the Hertie School in Berlin. And I am also a co-founder and chair of an organization called Climate Change AI. With Climate Change AI, we facilitate work at the intersection of machine learning and different kinds of climate domains, focusing on climate change mitigation and adaptation. And in my work, in my research, I am looking at how we can use machine learning as a tool to address different problems related to energy and climate policy. I'm also interested in the policy of AI and climate change. And today, actually, since we're talking about papers, I have a paper coming up that is called Aligning Artificial Intelligence with Climate Change Mitigation, where we look at the different impacts from machine learning and how they affect greenhouse gas emissions.
Chris Adams: So we actually have some decent deep domain expertise, and I'll try to keep this quite accessible, but we might drop into little bits of data science nerdery on here. The podcast has done that previously, and it turns out to be something that we've got some decent feedback from, because there aren't that many podcasts covering this. Okay, so let's get into this topic of green AI and climate change. As we know, IT is a significant driver of emissions in its own right. When we think about the climate crisis this year, the IPCC, which is the Intergovernmental Panel on Climate Change, in their big reports, which synthesized literally thousands of papers, explicitly called out digital as a thing we should be talking about and thinking about.
And if you're a responsible technologist, it seems like a thing that we should be taking into account here. Now, I found it helpful to think about IT a little bit like how we think about the shipping industry, partly because they're similar in terms of emissions, which are around 1 to 3%, depending on how you look at it, but also in that both act as a kind of connective tissue for society.
We also think of IT as a kind of force multiplier for the existing forms of activity. So if you use it to do something in line with the recommendations of the science, that's a good thing. But if you use it to do something which rejects some of the science, it might not be such a good thing. And within technology, AI and machine learning in particular is one of the fastest growing sectors, and often seen as one of the biggest levers of all. So we're going to highlight some interesting projects to start off with. And out of that, we'll probably dive into some specifics, or some other things you might want to take into account if you're a technologist wanting to incorporate an awareness of climate into how you work and build greener software. Then finally, we'll hopefully leave you with some actionable tips and techniques, or projects that you may contribute to or use in your daily practice.
There's another term that we might be touching on here when you're making AI greener, and that's specifically Green AI. Is that the case, Will?
Will Buchanan: Correct. And that term actually was coined by researchers a few years ago: Roy Schwartz, Emma Strubell, Jesse Dodge. And it's really focused on making the development of the AI system itself more sustainable. And it's to be disambiguated from the term "using AI for sustainability."
Chris Adams: Okay, so we'll touch on both of those today. We'll talk about some of the external impacts and some of the internal impacts. We're going to start with something quite easy first because, well, why not? I'm just going to ask each of the people here to point to maybe one project that they've seen that's using ML in quite an interesting fashion, to ideally come up with some kind of measurable win. Will, if there was one project you'd look to that you think is embodying these ideas of green AI, something which is really helping us face some of the challenges, maybe you could tell us about what's catching your eye at the moment, or what you'd look at.
Will Buchanan: I've been thinking a lot about natural capital recently, and I learned about a startup called Pachama, which combines remote sensing data with machine learning to help measure and monitor the carbon stored in a forest. I think it's really, really valuable because they're providing verification and insurance of carbon credits at scale. They've protected about a million hectares of forest. That's really when you have IoT and remote sensing and machine learning combining to help nature restore itself.
Chris Adams: Okay, cool. So if I understand that, they're using satellites to basically track forests and track deforestation. Is that the idea that they're doing there?
Will Buchanan: Yes, and also to verify the amount of carbon that a forest can sequester.
Chris Adams: Okay, cool. All right. I know there are a few other projects related to this. If I just hand over to Abhishek: can you let us know what's caught your eye recently, and then we'll see what other projects come out of this?
Abhishek Gupta: Yeah, absolutely. I think one of the projects, and I don't know what the measurable impact has been so far, is something that's come out of MILA, the Montreal Institute for Learning Algorithms, which is Dr. Bengio's lab in Montreal. In fact, one of the people who led that project, Sasha, is a part of Climate Change AI as well, who I'm sure Lynn can talk more about too. And she's done this project called This Climate Does Not Exist, which I think was a fascinating use of machine learning to visualize the impact climate change will have on, you know, places around you in a very arresting and visually capturing fashion. When we think about what impact climate change is going to have around us, sometimes it feels quite distant, because it's a slow-rolling thing that's coming our way. And this puts it in a way that's quite immediate, quite visually arresting, and I think stirs people to action. As I said, I'm not sure what the measurable impact of that has been yet, but I certainly feel that those are the kinds of creative uses of AI we need when we want to galvanize people into action around climate change.
Lynn Kaack: I'm happy to also talk about an application, which is also kind of difficult in terms of measuring impact, but I think it's another interesting component of what AI can do. And this is something that researchers at the Austrian Institute of Technology do on a project called Infrared. And they use machine learning to help design new districts and cities.
And especially at the moment, in many countries, a lot of new urban districts are being built, and how we build these has a huge impact on energy consumption in cities, both in terms of transportation, but also how buildings are heated or cooled. And by the use of machine learning, they can drastically improve design choices, because now they can approximate their very computationally heavy models and run them much faster, which means that they can also have more runs and can try out more design configurations. So this is a rather indirect application, but it has huge implications also on emissions for many decades to come.
Chris Adams: So essentially it's using housing policy as climate policy there, because there's just a huge amount of emissions built into how people live, and whether they need to drive everywhere in a car, and stuff like that. Is that some of the stuff that it's doing, making that part easier?
Lynn Kaack: So it's not really looking at housing policy, but at how districts are designed. They take a group of houses, as if the new district were to be built, and then they simulate the wind flow going through the city, which requires very expensive simulation models. They take the outputs of that model and approximate it with a machine learning model, which makes it much, much faster.
So from hours or days, you go to milliseconds, or below seconds, for one run. And then you can try out different design configurations and understand better how the built infrastructure affects natural cooling, for example, in cities, or walkability, or the energy impacts generally of the microclimate on the built environment.
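As a rough illustration of the surrogate-modeling pattern Lynn describes, here is a minimal sketch in Python. The "simulation", the polynomial surrogate, and all of the numbers are illustrative assumptions, not the Infrared project's actual models:

```python
import numpy as np

# Stand-in for an expensive physics simulation (in practice, hours per run):
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x  # hypothetical comfort/energy score

# 1. Run the slow model a limited number of times to get training data.
x_train = np.linspace(0.0, 2.0, 50)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate (a polynomial here; real projects often use
#    neural networks trained on many simulation runs).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# 3. Query the surrogate almost instantly to screen thousands of candidate
#    design configurations, instead of re-running the simulation each time.
candidates = np.linspace(0.0, 2.0, 10_000)
best_design = candidates[np.argmax(surrogate(candidates))]
```

The surrogate trades a small approximation error for a speedup of many orders of magnitude, which is exactly what makes trying out many more design configurations feasible.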
Chris Adams: Wow, I had no idea that was actually possible. That's really, really cool.
Will Buchanan: That's very cool. That's similar to generative design.
Chris Adams: This is a phrase I haven't heard, actually, Will. Maybe you could elucidate or share something there.
Will Buchanan: It's similar to some software that Autodesk has built, where you can try out many different iterations of a design and come up with optimum solutions. I think it's really cool that you're just consolidating it and running these models more efficiently.
Chris Adams: Cool. And that's a bit like following, say, a fitness function, saying, you know, I want to have something that works like a chair; it needs four legs and a seating pattern. Then it essentially comes up with some of the designs, or iterates through some of the possibilities, something like that?
Will Buchanan: Exactly.
Chris Adams: Oh, wow. Okay. That's cool. All right. So we've spoken about AI, and there are a few exciting, interesting projects that we can add to the show notes for people to look into and see how they might relate to what they do. I want to ask a little bit about measuring impact from these projects, because there are quite a few different ways that you can actually measure impact here.
And many times, it can be quite a difficult thing to pin down, and this is continually a thing that's come up. People have tried to come up with specs like the Software Carbon Intensity, and I'm sure, Abhishek, you've had some experiences here. Will, you've mentioned a little bit about actually measuring impact internally, and it sounds like you've had to do a bunch of this work on the ML team right now, and expose some of these numbers to people consuming these services in the first place. Could you talk about some of that part a bit, perhaps?
Will Buchanan: Certainly. As I mentioned, we have shipped energy consumption metrics for both training and inference within Azure Machine Learning. And that's really complex when you think of the infrastructure required just to report that. But that doesn't necessarily account for the additional power that's consumed in the data center, such as the idle power for devices, or the utilization of your servers.
There are so many different factors there. So you could really encounter scope creep when you come to your measurement methodology. So it's really necessary to put boundaries around that.
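One common, simplified way to fold that data-center overhead into a measurement is a Power Usage Effectiveness (PUE) multiplier. This is a generic sketch of the idea, not Azure Machine Learning's actual methodology, and the numbers are made up:

```python
def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Scale measured IT energy by the facility's PUE to approximate
    cooling, power distribution, and other data-center overhead."""
    return it_energy_kwh * pue

# A training job that drew 10 kWh at the servers, in a facility with a
# PUE of 1.2, consumed roughly 12 kWh once facility overhead is included.
total = facility_energy_kwh(10.0, pue=1.2)
```

Even with a PUE factor, idle-power allocation and embodied emissions are still outside the number, which is exactly why the measurement boundary has to be stated explicitly.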
Chris Adams: And when you use the term boundaries here, you're saying: I'm going to measure the environmental impact of the servers, but not the environmental impact of building the building to put the servers in. Is that the idea when you're referring to a boundary here?
Will Buchanan: Yes, that's a great example.
Chris Adams: Okay. All right. I think this is actually something we've come across quite a few times in other places as well, so maybe it's worth asking about this kind of boundary issue we've had here, because it automatically sounds complicated.
And I know that, Abhishek, you've had some issues at your end as well with defining this, deciding what's in or out, because I think this is one thing that we've had to do explicitly for the Software Carbon Intensity spec, right?
Abhishek Gupta: Exactly. And I think when we talk about boundaries, it's trying to get a sense for what are the actual pieces that are consumed, right? From an operational standpoint, from an embodied-emissions standpoint, and how you make those allocations across what your system is consuming.
And I use the word system because, again, when we talk about software, we're not just talking about a specific piece, but about really everything that it touches, be that network, be that bandwidth consumption, be that, as Will was saying, idle power. When we're looking at cloud computing, it becomes even more complicated when you have pieces of software that are sharing tenancy across pieces of hardware, and different consumers are perhaps sharing that piece of hardware with you, and you're thinking about whether you've booked the resource ahead of time or not, whether it's hot or cold in terms of its availability, and what implications that has.
I mean, there are so many different facets to it. And what I want to highlight here is that each of those decisions comes with a trade-off, right? We also don't have any standards in terms of how we should go about measuring that, and what should or shouldn't be included. And so the way people report out these numbers today also doesn't really make it actionable for folks who consume, or want to consume, these reports and metrics in taking decisions as to whether something is green or not.
And I think that's one of the places where the Software Carbon Intensity Specification is trying to help folks: to standardize it first and foremost, but also to make it actionable, so that if you're someone who's environmentally conscious, you can make the right choice by being informed about what the actual impacts are.
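For reference, the SCI specification expresses this as a rate: SCI = ((E × I) + M) per R, where E is operational energy, I is the carbon intensity of the electricity used, M is embodied emissions, and R is a functional unit. A minimal sketch in Python, with made-up numbers:

```python
def sci_score(energy_kwh, grid_intensity_g_per_kwh, embodied_g, functional_units):
    """Software Carbon Intensity: SCI = ((E * I) + M) per R.

    E: operational energy consumed by the software (kWh)
    I: carbon intensity of the electricity used (gCO2e/kWh)
    M: embodied hardware emissions allocated to the software (gCO2e)
    R: functional unit, e.g. API calls, users, or minutes of usage
    """
    return (energy_kwh * grid_intensity_g_per_kwh + embodied_g) / functional_units

# Hypothetical service: 0.5 kWh at 400 gCO2e/kWh, 50 g of embodied emissions,
# amortized over 1,000 API calls -> 0.25 gCO2e per call.
per_call = sci_score(0.5, 400.0, 50.0, 1_000)
```

Because every term depends on where the boundary is drawn, two teams can report different SCI scores for the same system unless they state what E, M, and R include.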
Chris Adams: Okay, here's a question I'm curious about, because so far we've only been speaking internally about, okay, what is the environmental impact of IT itself, like its direct emissions. But the assumption I have here is that there are ways we might talk about the impact that it has on the outside world, in terms of activity we're speeding up, or accelerating, or supporting.
Is that the only issue that we need to think about, or are there any other things to take into account about this system boundary part that we've just been talking about?
Lynn Kaack: Yeah. So the system effects are really important to look at and to consider. Maybe just to give an example: if you use machine learning in, let's say, the oil and gas sector to make a small part of the operation more energy efficient, at first sight that looks like something that could be considered sustainable and green. But you also have to realize that often you are reducing costs as well, and that might change the way that oil and gas, in this particular example, is competitive, or the particular company is competitive, and that might actually shift how much oil and gas we are able to use in the short run, and how the price changes.
So these indirect effects can actually have much larger impacts than the immediate effects of such an application. So drawing boundaries is really important, and so is opening this up to a broader system-level view, really trying to understand how the technology then changes larger consumption and production patterns. That's important.
Chris Adams: So if I understand that correctly, that's talking almost about the consequences of an intervention that we might make here. So even though we might have reduced the emissions of, say, the drilling part by putting a wind turbine on an oil rig, for example, that might change the economics and make people more likely to use the oil, which in many cases they might burn, for example, or stuff like that.
Is that basically what you're saying there?
Lynn Kaack: Essentially what I'm saying is that efficiency improvements in particular, and often they can be done with data science or with machine learning or AI systems, often come with cost reductions, and then those cost reductions do something and change something. Often this is also considered under rebound effects, but it's not only rebound effects.
So there are systemic, system-level impacts that come from these smaller-scale applications that need to be considered.
Will Buchanan: That's such a good point, and I think I've also heard it called the Jevons paradox, too.
Chris Adams: Yes. Jevons paradox. This is stuff from like the 1800s with steam engines, right? Like my understanding of the Jevons paradox was when people had steam engines and they made steam engines more efficient, this led to people basically burning more coal because it suddenly became more accessible to more people and you end up using them in a greater number of factories.
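The steam-engine story reduces to simple arithmetic: a 2x efficiency gain is wiped out if demand grows by more than 2x. The figures below are illustrative assumptions, not historical data:

```python
# Before: 1.0 unit of coal per unit of work, 100 units of work demanded.
coal_before = 1.0 * 100        # 100 units of coal

# After: engines are twice as efficient, but cheaper work raises demand
# to 250 units of work (a made-up number for illustration).
coal_after = 0.5 * 250         # 125 units of coal

rebound = coal_after > coal_before  # True: total coal use went UP
```

This is why efficiency improvements alone, without looking at the induced demand, can understate or even invert the climate impact of a change.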
So there's a kind of rebound, I think, that we need to take into account. This is something I think has been quite difficult to actually capture with existing ways of tracking the environmental impact of particular projects. We have, like, an idea of, say, an attribution-based approach and a consequence-based approach. And maybe it's worth actually talking here about some of the complexities we might need to wrestle with when you're designing a system. I mean, Abhishek, I think this was one of the early decisions with the software carbon intensity part: to take a marginal approach rather than an attribution approach. And without diving too deeply into jargon here, maybe you might be able to share a bit more information on that part, because it sounds like it's worth expanding on, or explaining to the audience a bit better here.
Abhishek Gupta: Indeed. You know, the reason for making that choice was, again, our emphasis on being action-oriented, right? As we started to develop the software carbon intensity specification, one of the early debates that we had to wrestle with, and Will was of course a crucial part of that, as were the folks who were part of the standards working group, was figuring out how, for example, the GHG way of going about that accounting doesn't really translate all that well to software systems, and how perhaps adopting a slightly different approach would lead to more actionable outcomes for the folks who want to use this, ultimately, to change behavior. Without getting into the specifics of what marginal and consequential approaches are, and I'm sure Will would be happy to dive into all of those details, as would I,
the thing that we were seeing was that we're doing all of this great work around, you know, talking about scope one, two, three emissions, et cetera, but it's not really helping to drive behavior change. And that's really the crux of all of this, right? We're not just doing all of this accounting to produce reports and to, you know, spill ink; it's to concretely drive change in behavior.
And that's where we found that adopting a consequential, a marginal, approach actually helped make it more actionable. And this was coming from folks who are a part of the standards working group, including Will and myself, who are practitioners itching to get something that helps us change our behavior, change our teams' behaviors, when it comes to building greener software, broadly speaking.
Chris Adams: Okay. So that helps with explaining the difference between a consequential approach and a marginal approach, as in: the consequences of me building this thing will mean that this is more likely to happen. And if I understand it, the GHG Protocol that you mentioned, which is the Greenhouse Gas Protocol, and this scoped-emissions approach, is the kind of standard way that an organization might report its climate responsibility, as it were. And when you say scoped emissions, that's like scope one, which is emissions from fossil fuels burned on site or in your car,
for example; scope two is electricity; and scope three is your supply chain. If I understand what you're saying, there's a kind of gap there that doesn't account for the impacts of this, perhaps. Some people have referred to this as scope zero or scope four, which might be: what are the impacts an organization is having? Essentially, as we mentioned before, doing something around this systemic change, or, as Lynn mentioned, changing the price of a particular commodity to make it more likely or less likely to be used. And this is what I understand the SCI is actually trying to do. It's trying to address some of this consequential approach, because the current approach doesn't capture all of the impacts an organization might actually have at the moment. Right?
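The attributional/consequential split can be made concrete with grid carbon intensity: an attributional estimate multiplies energy by the grid's average intensity, while a consequential (marginal) estimate uses the intensity of the plant that serves the next unit of demand. Both intensity figures below are made up for illustration:

```python
job_energy_kwh = 2.0

average_intensity_g = 380.0   # hypothetical annual grid average, gCO2e/kWh
marginal_intensity_g = 650.0  # hypothetical gas peaker on the margin, gCO2e/kWh

attributional_g = job_energy_kwh * average_intensity_g   # 760 gCO2e
consequential_g = job_energy_kwh * marginal_intensity_g  # 1300 gCO2e
```

The marginal figure is larger here because the marginal plant is assumed dirtier than the average mix; it is the consequential number that describes what the decision to run the job actually changed.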
Will Buchanan: Yeah, that's a good summary. One challenge that I have noticed is that until it's required in reporting structures like the Greenhouse Gas Protocol, organizations don't have an incentive to really take the action they need to avoid climate disaster. That's something I encounter on a daily basis, and I think broadly we need to bring this into the public discourse.
Chris Adams: I think you're right. And it's worth asking you, Lynn: when I've seen some of the work that you've done previously, this is something that's come into some of the briefings you've shared with the Climate Change AI work, and some of the policy briefings for governments as well. Is there something you might be able to add here?
Lynn Kaack: Yeah, so something that comes to mind, for example, as concrete legislation that's currently being developed, is the EU AI Act. And that's a place where, for the first time, AI systems are being regulated at that scale. And climate change almost didn't play a role in that regulation in the first draft.
So here it's also really evident that if we don't write in climate change now as a criterion for evaluating AI systems, it will probably be ignored for years to come. The way that legislation works is by classifying certain AI systems as high risk, and also just outright banning some other systems. But as the original legislation stood, systems weren't really explicitly classified as high risk even if they had a huge environmental or climate change impact. And that's something I talked about a lot with policymakers, trying to encourage them to more explicitly make environmental factors and climate change a factor for evaluating systems. So that'd be a very concrete case where making climate change more explicit in the AI context is important, also in terms of legislation.
Abhishek Gupta: So there's a lot to be said about the EU AI Act, right? And a ton of ink has been spilled everywhere. It's called the Brussels effect for a reason: whatever happens in the EU is taken as gospel and spreads across the world, which, as Lynn has already pointed out, is not perfect, right?
I think one of the things I've seen as particularly problematic is the rigid categorization of what high-risk use cases are, and whether the EU AI Act, hopefully with some revisions that are coming down the pipe, will have the ability to add new categories, and not just update subcategories within the existing identified high-risk categories.
And I think that's where things like considerations for environmental impacts, and really tying that to the societal impacts of AI, where we're talking about bias, privacy, and all the other areas, is going to be particularly important, because we need multiple levers to push on getting people to consider the environmental impacts.
And given that there is already such great momentum in terms of privacy and bias considerations, I think now is the time to really push hard to make environmental considerations an equally first-class citizen when thinking about the societal impacts of AI.
Will Buchanan: This is something I'm incredibly passionate about. I think it needs to encompass the full scope of harms that are caused by an AI system. That could be the hidden environmental impacts of either the development or the application. The application could vastly outweigh the good that you're doing, even just by expanding oil and gas production by a certain percentage. I think it must account for all of the harms to both ecosystems and people.
Chris Adams: And there's this idea of a risk categorization. Does that categorization actually include this stuff right now? What counts as a high-risk use case, for example, as mentioned here?
Lynn Kaack: So I haven't seen the latest iteration; there has been a lot of feedback on the version that was published in April last year, and I think a lot of things changed. But in the first version, high-risk systems were those that affect personal safety, human rights in the sense of personal wellbeing, but it completely overlooked the environmental protection aspects of human rights.
Chris Adams: Wow, that's quite a large one, especially when you take into account the human rights. Okay. We've spoken about the external impact, but I'm led to believe there is also an internal impact from this as well; the AI has some direct impact that we might want to talk about too. As I understand it, we spoke about 2 to 3 percent of emissions earlier, but if we know there's an external impact, why might we also want to care about the internal impacts of AI, for example, the direct emissions?
Will Buchanan: So by direct emissions, you're talking about, let's say, scope two, the operational cost of the model.
Chris Adams: Yeah. There's the external impact, where we used the phrase scope 4, for example, to talk about all the other things software induces in the world. But there's also stuff that happens inside the system boundary we've spoken about, and presumably that's something we should be caring about as well, right? So there will be steps we can take to make the use of AI, particularly the models, more efficient and more effective. This is something we should be looking at as well, presumably?
Will Buchanan: Totally. And so in our paper, which is going to be published, I think, on Monday, we've calculated the emissions of several different models. One of them was a 6 billion parameter transformer model, and the operational carbon footprint was equivalent to about a rail car of coal. And that's just for training. So it's really imperative that we address this and provide transparency into it.
Lynn Kaack: Is that for developing a model, or for training it once? I mean, is that with grid search, architecture search?
Will Buchanan: For a single training run. So it does not account for sweeps or deployments.
Chris Adams: All right, so there's some language there that we haven't heard before, but maybe it's worth asking. Will, you said a rail car full of coal, and I don't actually know what that is. In metric terms, what does that look like?
Will Buchanan: A hundred million grams. I don't have the conversion handy, but we took the U.S. EPA greenhouse gas equivalencies. And I should add, the methodology we applied was the Green Software Foundation's SCI. So we calculated the energy consumed by the model and multiplied it by the carbon intensity of the grid that powers that data center.
Chris Adams: Okay, cool. And that was per training run? So that wasn't for the entire model, is that correct?
Will Buchanan: Correct.
Abhishek Gupta: That's the other interesting part as well, right? When you're thinking about the life cycle, or the life cycle of the model, I should say, because life cycle has multiple meanings here: once that model is out there, what are the inference costs? And if this is something that's going to be used hundreds, thousands, tens of thousands of times, or if it's a large model that's now being used as a pre-trained model and is going to be fine-tuned by other folks downstream, are we able to talk about amortization of that cost across all of those use cases?
And again, I think what becomes interesting is how we account for that stuff as well, because we don't have complete visibility into it. And I know Lynn's nodding here, because her paper, which I think is coming out, the embargo actually gets lifted in an hour and a half, talks about some of those system-level impacts.
So maybe, Lynn, you want to chime in and talk a little bit about that as well?
Lynn Kaack: Yeah, thank you so much. Exactly. So I think a crucial number we're currently still missing is not what is emitted from a single model in a well-known setting, but what is emitted overall from applying machine learning. What are the usage patterns in practice? How often do people develop models from scratch?
How often do they train or retrain them? "People" meaning, of course, organizations, and typically large organizations and companies. And how do they perform inference, on how much data, how frequently? There are some numbers out there from Facebook and Google, and in their large-scale applications inference actually outweighs their training and development costs in terms of greenhouse gas emissions.
So inference might become a bigger share depending on the application. So we really need to understand better how machine learning is being used in practice also to understand the direct emissions that come from it.
Chris Adams: An inference is a use of a model once it's in the wild, is that what an inference is in this case? So you could think of the making part, and then there's the usage part, the inference, right? Is that how that works?
Lynn Kaack: Exactly. So if you use a model on a data point, we call that inference. So you feed in the data and it gives you a result. Then training means you sort of train a single configuration of the model once on your training data set, and then development is what I refer to as if you search over different configurations of the model.
So there are lots of hyperparameters that you can adjust to achieve better performance. And if new models are being developed, then there's an extensive search over those hyperparameters and architecture configurations that then of course gets really energy intensive because we are training the model thousands of times essentially.
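Lynn's point that development can mean "training the model thousands of times" can be made concrete with a toy grid search count: one training run per hyperparameter combination. The grid and the per-run energy figure below are invented purely for illustration.

```python
# Hyperparameter grid search trains one model per combination, so the
# energy cost of development multiplies quickly. The grid and per-run
# energy figure here are hypothetical.
grid = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "batch_size": [32, 64, 128, 256],
    "num_layers": [4, 8, 12],
}

n_runs = 1
for values in grid.values():
    n_runs *= len(values)  # 3 * 4 * 3 combinations

energy_per_run_kwh = 500  # hypothetical cost of a single training run
total_kwh = n_runs * energy_per_run_kwh
print(f"{n_runs} training runs, {total_kwh} kWh in total")
# 36 training runs, 18000 kWh in total
```

Even this tiny three-parameter grid multiplies the cost of a single run 36 times; real neural architecture searches explore far larger spaces.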
Will Buchanan: Yeah, one of the figures that really resonated with me: I think Nvidia posted on their blog that inferencing accounts for about 80 to 90 percent of the carbon cost of a model. And I think, Lynn, in one of your papers, Amazon had also claimed around 90 percent. So these are really non-trivial costs, and I'm not aware of any framework to measure this.
Lynn Kaack: Yeah. So that Amazon number, just to be clear, is costs, monetary costs, and it came from a talk. But there are numbers now published by Google and Facebook, where they look at some applications of theirs in which inference outweighs training in terms of energy consumption. They're not exact numbers, and it's not entirely clear which applications those are, but there is some data at least that shows that.
And I think it just highly depends on the application you're looking at. Sometimes you build a model and then you do inference once on the data set you have, and in other cases you build a model and then you apply it a billion times a day. So of course that can add up to a lot more energy consumption.
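A rough back-of-envelope sketch shows why heavy usage lets inference dominate a model's lifetime energy; the per-request energy, traffic, and training cost below are all made up for illustration.

```python
# Back-of-envelope comparison of one-off training energy vs lifetime
# inference energy for a heavily used model. All numbers hypothetical.
training_kwh = 250_000              # one-off training cost
inference_wh_per_request = 0.3      # energy per single inference request
requests_per_day = 1_000_000_000    # "a billion times a day"
days_in_service = 365

inference_kwh = (inference_wh_per_request * requests_per_day
                 * days_in_service / 1000)
inference_share = inference_kwh / (inference_kwh + training_kwh)
print(f"inference share of lifetime energy: {inference_share:.1%}")
# inference share of lifetime energy: 99.8%
```

Flip the usage pattern to a model applied only once over its data set and the ratio inverts, which is exactly Lynn's point: it depends entirely on the application.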
Chris Adams: Wow, I didn't realize that was actually an issue, because most of the numbers I've seen have been focusing on the training part. So, Will, I think this is something we spoke about before: there's already a trend in the energy use from training. I've seen figures from OpenAI, but my assumption was that computers are generally getting more efficient, about twice as efficient every two years or so, with Moore's Law or Koomey's Law or things like that. But if you're seeing an uptick in usage here, does that mean they're staying about the same? Or is there a trend we should be taking into account?
Will Buchanan: So I think the computational costs of training have been doubling every 3.4 months or so, and the trend is only accelerating. The models are just getting larger and larger; I think GPT-3 is one of the largest ones around at this point. We might challenge Moore's Law.
Chris Adams: Okay. So if Moore's Law is doubling once every two years, what is the impact of doubling every 3.4 months? Over a few years, what does that work out to be? I don't think I could do the exponential math in my head, but it sounds like a pretty big number if something is doubling every three or four months, right?
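The exponential math Chris and Will leave open is quick to check: over the same two-year window, a 3.4-month doubling period compounds to roughly a 130x increase rather than Moore's 2x.

```python
# Growth over 24 months for two different doubling periods:
# Moore's law (doubling every ~24 months) vs the 3.4-month doubling
# trend in training compute.
months = 24
moores_law_growth = 2 ** (months / 24)
compute_trend_growth = 2 ** (months / 3.4)
print(f"Moore's law: x{moores_law_growth:.0f}, "
      f"3.4-month doubling: x{compute_trend_growth:.0f}")
# Moore's law: x2, 3.4-month doubling: x133
```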
Will Buchanan: I also don't have the math handy, but I think it's important to note here, and Abhishek was talking about this earlier: models are very flexible, so you can train them once, then apply some fine-tuning or transfer learning on top, and repurpose these models for a number of different applications. And you can even compress them, say using ONNX Runtime. You can be very efficient; you can really amortize the cost of that model.
Abhishek Gupta: So yeah, just building on Will's point, there's a lot of work on quantizing the weights of a trained network, applying distillation approaches, using teacher-student model approaches, that actually helps to shrink down the model quite a bit. Especially with the whole push for TinyML, shrinking models so they can be deployed on edge devices has helped to manage the computational impacts to a great extent.
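The weight quantization Abhishek mentions can be illustrated in miniature with affine 8-bit quantization, a simplified sketch, not how ONNX Runtime or any particular library implements it: weights are stored as integers plus a scale and zero-point, cutting storage roughly 4x versus float32.

```python
# Minimal sketch of affine 8-bit weight quantization: map float weights
# to integers in [0, 255] with a scale and zero-point, so each weight
# needs one byte instead of four.
def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

w_orig = [-0.51, -0.02, 0.0, 0.13, 0.49]
q, s, z = quantize(w_orig)
approx = dequantize(q, s, z)
# each recovered weight is within one quantization step of the original
```

Real toolchains add calibration, per-channel scales, and quantization-aware training, but the storage-for-precision trade-off is the same idea.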
One of the other things I wanted to highlight, as Will was saying about models getting larger: there's almost a fetish in the world today to continuously scale and keep pushing for ever larger models, chasing SOTA, as they would say, chasing state of the art, which is great for academic publications where you get to show, "Hey, I improved state-of-the-art performance on this benchmark data set by 0.5 percent," or whatever. And in chasing performance, I think what's being ignored is that it has a tremendous computational cost. In fact, one of the hidden costs that I think doesn't get talked about enough is the statistic that 90 percent of models don't make it into production.
And that relates to things like neural architecture search and hyperparameter tuning, where you're constantly trying to refine a model to achieve better performance. A lot of that actually goes to waste because those models don't make it into production, so they're not even used.
So there's a whole bunch of computational expenditure that never sees the light of day, never becomes useful. That obviously has environmental impacts, because of the operational and embodied carbon, but none of it gets talked about, reported, or documented anywhere, because, well, who wants to admit, "Hey, I trained 73 different combinations to get to where I'm at"?
You just talk about the final results.
Chris Adams: Okay, let's say you don't want to go down one of those rabbit holes. What should you be using, or where would you start, if you wanted to start applying some of these ideas about greener AI in your daily work? Does anyone have anything they would lead with?
Will Buchanan: Bigger is not always better; sometimes you really should choose the right tool for the job. We've had some really great graduate student projects from the University of Washington's Information School, and they built some case studies and samples around green AI. As an example, a project led by Daniel Chin compared a sparse model to a dense model in an anomaly detection setting.
And they found that using a sparse, shallow random forest, meaning fewer trees and a smaller depth per tree, would save a massive amount of carbon and provide equivalent accuracy. I think it saved about 98 percent in terms of monetary cost and energy.
Chris Adams: Okay, wow, that's bigger than I was expecting. What would you say to people who are in production and trying to do something now?
Lynn Kaack: I think a big goal should be not only to develop more energy-efficient machine learning models, but then also to ensure that those are actually being used. Surprisingly, sometimes even within the same company, certain model developments are not passed on to other parts of the company. So really trying to develop standard models that are then also used in practice is important.
So interoperability of energy efficient machine learning models.
Chris Adams: If people, someone does want to look at this stuff, and they do want to apply some of these ideas, you spoke a little bit about using some other models. would you suggest people look if they wanted to operationalize some of the kind of wins or some of the better ways to make green software greener, for example?
I realize you've got a paper coming out and you work on this day to day. So yeah, what would you point us to?
Lynn Kaack: So, I mean, as I understand, there's a lot of ongoing research in the machine learning community for more energy efficient machine learning. So I don't have any names on top of my head in terms of workshops or community resources where one can see what are the most energy efficient model types. For a specific application.
I know that there are some very comprehensive papers also that summarize all the different research approaches that are being taken, but I would encourage if you are looking for using like a deep learning model of some kind, just inform yourself quickly if there's also a leaner version of it. So much of the like widely used models like BERT, for example, smaller versions that can almost do the same thing.
And maybe your performance doesn't suffer much. If you're using a much lighter model.
Chris Adams: Okay, so lighter models, and looking around at what we have there. And Will, is there a paper or a source you might point to?
Will Buchanan: I was actually going to talk about the Carbon Aware paper that we're about to publish. I think that's a slightly different track.
Chris Adams: That's up next week, right? So that'll be the 13th or 14th of June. That's when that'll be visible, correct?
Will Buchanan: Exactly.
Chris Adams: Okay, cool. All right, then. There's a load more that we could dive into. We've got copious, copious, copious show notes here. So what I'm gonna do is I'm gonna say thank you everyone for coming in and sharing your wisdom and your experiences with us, and hopefully we'll have more conversations about green software in future. Thank you folks.
Asim Hussain: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show and of course we want more listeners.
To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again and see you in the next episode.
In Memoriam: Abhishek Gupta:
Learn more about our guests:
- Chris Adams: LinkedIn / GitHub / Website
- Will Buchanan: LinkedIn
- Abhishek Gupta: LinkedIn
- Lynn Kaack: LinkedIn / Latest Paper
Find out more about the GSF:
Episode resources:
- ClimateAction.tech [3:44]
- Green Web Foundation [3:49]
- Green Software Foundation’s Standards and Innovation Working Group [4:14]
- Montreal AI Ethics Institute [4:43]
- Hertie School Berlin [5:50]
- Aligning Artificial Intelligence with Climate Change Mitigation [6:32]
- The IPCC [7:11]
- Paper: Green AI | Roy Schwartz, Emma Strubell, Jesse Dodge [8:37]
- Project: Pachama [9:33]
- Montreal Institute for Learning Algorithms [10:34]
- Project: This Climate Does Not Exist [10:48]
- Austrian Institute of Technology | Infrared [11:32]
- Jevons Paradox [20:19]
- The GHG Protocol [23:27]
- Legislation: The EU AI Act [25:08]
- Paper; Measuring the Carbon Intensity of AI in Cloud Instances | Will Buchanan et al. [30:08]
- ONNX Runtime [37:02]
- TinyML [37:09]
- GitHub: Dynamic Batch Inferencing - Taylor Prewitt & Ji Hoon Kang of UW
- GitHub: NVIDIA Triton server on AzureML & Model Analyzer
- Green Software Foundation Summit
If you enjoyed this episode then please either:
- Follow, rate, and review on Apple Podcasts
- Follow and rate on Spotify
- Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!
TRANSCRIPT BELOW:
Chris Skipper: This week on Environment Variables, we have some sad news.
We have to report the untimely passing of Abhishek Gupta. Abhishek was a key part of the Green Software Foundation. He was the co chair of the Standards Working Group from the GSF's formation to his passing on September 30th of this year. I would like to read out an in memoriam that was posted by Asim Hussain, honoring the great man, his legacy of leadership, his collaboration, and the heart that he put into the Green Software Foundation.
When the Green Software Foundation was formed over three years ago, Abhishek offered to help. The Standards Working Group was looking for a co-chair, and he jumped in without hesitation. He led the Standards Working Group for over three years, with the full support of everyone involved. Leading a working group isn't about imposing your will on others; it's about finding common ground, nurturing discussions, and searching for the truth.
Abhishek excelled in all of those areas. He leaves a powerful legacy with the Software Carbon Intensity SCI specification. His tireless efforts over the years led to a consensus on the specification, which was later published to ISO in late 2023, and is now being adopted globally.
The impact of the SCI is truly global, with academics and enterprises worldwide adopting it. This widespread adoption is a testament to Abhishek's vision and dedication and his influence will be felt in every software product that bears an SCI score. His legacy is not just a memory, but a living testament to his work.
He has left an unforgettable mark on the community, and will be remembered for his contributions for years to come. Aho brother, I'll see you on the other side. Asim Hussain. To honor Abhishek, we're going to revisit an older episode of Environment Variables from the very start of the podcast, episode five: How Does AI and ML Impact Climate Change?
In this episode, host Chris Adams is joined by Abhishek Gupta, Lynne Kaack, and Will Buchanan to discuss these topics. So, without further ado, here's the episode.
Abhishek Gupta: We're not just doing all of this accounting to produce reports and to, you know, spill ink, but it's to concretely drive change in behavior. And this was coming from folks who are a part of the standards working group, including Will and myself, who are practitioners who are itching to get something that helps us change our behavior, change our team's behaviors when it comes to building greener software.
Asim Hussain: Hello and welcome to Environment Variables brought to you by the Green Software Foundation. In each episode we discuss the latest news and events surrounding green software. On our show you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.
I'm your host Asim Hussain.
Chris Adams: Hello there and welcome to the Environment Variables podcast, the podcast about green software. I'm Chris Adams filling in for Asim Hussain, the regular host while he's on paternity leave with a brand new baby. I met Asim on climateaction.tech, an online community for climate aware techies. And I work for the Green Web Foundation where we work towards a fossil free internet by 2030, as well as working as the co chair for the Green Software Foundation's policy group. Today, we're talking about climate change, AI, and green software. And I'm joined by Lynn, Will, and Abhishek.
Will Buchanan: Thanks for having me. My name is Will. I'm a product manager on the Azure Machine Learning team, and also a member of the Green Software Foundation's Standards and Innovation Working Group. Within Microsoft, I foster the Green AI community, which now has a few hundred members, and I'm also a climate activist focused on pragmatic solutions to complex environmental issues.
Recently, I shipped energy consumption metrics within Azure Machine Learning, and we are about to publish a paper titled Measuring Carbon Intensity of AI in Cloud Instances, which I think we'll touch on today.
Abhishek Gupta: Well, thanks for having me. I'm Abhishek Gupta. I'm the founder and principal researcher at the Montreal AI Ethics Institute. I also work as a senior responsible AI leader and expert at the Boston Consulting Group, BCG, and I serve as the chair for the Standards Working Group at the Green Software Foundation.
So I've got a few hats on there. Most of my work, as it relates to what we're going to talk about today, runs at the intersection of responsible AI and green software. In particular, what's of interest to me is how the intersections of social responsibility and the environmental impacts of software systems, in particular AI systems, can be thought about when we're looking to make a positive impact on the world while using technology in a responsible fashion. As part of the Green Software Foundation, I also help, through the Standards Working Group, to come up with the Software Carbon Intensity specification, where we're trying to create an actionable way for developers and consumers of software systems to better assess and mitigate the environmental impacts of their work.
Chris Adams: Okay. And Lynn, last but not least, joining us from Berlin. Thank you very much for joining us.
Lynn Kaack: Yeah, thank you so much. I am an assistant professor at a public policy school called the Hertie School in Berlin. I am also a co-founder and chair of an organization called Climate Change AI. With Climate Change AI, we facilitate work at the intersection of machine learning and different kinds of climate domains, focusing on climate change mitigation and adaptation. In my research, I look at how we can use machine learning as a tool to address different problems related to energy and climate policy. I'm also interested in the policy of AI and climate change. And today, actually, since we're talking about papers, I have a paper coming out called Aligning Artificial Intelligence with Climate Change Mitigation, where we look at the different impacts of machine learning and how they affect greenhouse gas emissions.
Chris Adams: So we actually have some decent deep domain expertise here, and I'll try to keep this quite accessible, but we might drop into little bits of data science nerdery. The podcast has done that previously, and it turns out to be something we've had decent feedback on, because there aren't that many podcasts covering this. Okay, so let's get into this topic of green AI and climate change. As we know, IT is a significant driver of emissions in its own right. When we think about the climate crisis this year, the IPCC, the Intergovernmental Panel on Climate Change, in their big reports, which synthesized literally thousands of papers, explicitly called out digital as a thing we should be talking and thinking about.
And if you're a responsible technologist, it seems like something we should be taking into account. Now, I've found it helpful to think about IT a little bit like the shipping industry, partly because they're similar in terms of emissions, around 1 to 3 percent depending on what you look at, but also in that both act as a kind of connective tissue for society.
We also think of IT as a kind of force multiplier for existing forms of activity. So if you use it in a way that's in line with the recommendations of the science, that's a good thing; but if you use it to do something that rejects the science, it might not be. And within technology, AI and machine learning in particular is one of the fastest growing sectors, and often seen as one of the biggest levers of all. So we're going to highlight some interesting projects to start with, and out of that we'll probably dive into some specifics, or some other things you might want to take into account if you're a technologist wanting to incorporate an awareness of climate into how you work and build greener software. Then finally, we'll hopefully leave you with some actionable tips and techniques, or projects that you might contribute to or use in your daily practice.
There's another term that we might be touching on here when you're making AI greener, and that's specifically Green AI. Is that the case, Will?
Will Buchanan: Correct. And that actually was coined by researchers a few years ago: Roy Schwartz, Emma Strubell, and Jesse Dodge. It's really focused on making the development of the AI system itself more sustainable, and it's to be disambiguated from the term using AI for sustainability.
Chris Adams: Okay, so that's something we'll touch on today; we'll talk about some of the external impacts and some of the internal impacts. We're going to start with something quite easy first, because, well, why not? I'm going to ask each of the people here to point to one project they've seen that's using ML in an interesting fashion, ideally with some kind of measurable win. Will, if there was one project you'd look to that you think embodies these ideas of green AI, something which is really helping us face some of these challenges, maybe you could tell us what's catching your eye at the moment?
Will Buchanan: I've been thinking a lot about natural capital recently, and I learned about a startup called Pachama, which combines remote sensing data with machine learning to help measure and monitor the carbon stored in a forest. I think it's really, really valuable because they're providing verification and insurance of carbon credits at scale. They've protected about a million hectares of forest. That's really when you have IoT and remote sensing and machine learning combining to help nature restore itself.
Chris Adams: Okay, cool. So if I understand that, they're using satellites to basically track forests and track deforestation. Is that the idea that they're doing there?
Will Buchanan: Yes, and also to verify the amount of carbon that a forest can sequester.
Chris Adams: Okay, cool. All right. I know there's a few other projects related to this. If I just hand over to Abhishek, can you let us know what's caught your eyes recently, and then we'll see what other projects come out of this.
Abhishek Gupta: Yeah, absolutely. One of the projects, and I don't know what the measurable impact has been so far, is something that's come out of Mila, the Montreal Institute for Learning Algorithms, Dr. Bengio's lab in Montreal. In fact, one of the people who led that project, Sasha, is part of Climate Change AI as well, who I'm sure Lynn can talk more about too. She's done this project called This Climate Does Not Exist, which I think was a fascinating use of machine learning to visualize the impact climate change will have on places around you, in a very arresting and visually capturing fashion.
When we think about what impact climate change is going to have around us, sometimes it feels quite distant, because it's a slow-rolling thing that's coming our way. And this puts it in a way that's quite immediate, quite visually arresting, and I think stirs people to action. As I said, I'm not sure what the measurable impact of that has been yet, but I certainly feel that those are the kinds of creative uses of AI we need when we want to galvanize people into action around climate change.
Lynn Kaack: I'm happy to also talk about an application, which is also kind of difficult in terms of measuring impact, but I think it's another interesting example of what AI can do. This is something that researchers at the Austrian Institute of Technology do on a project called Infrared. They use machine learning to help design new districts and cities.
At the moment, in many countries, a lot of new urban districts are being built, and how we build them has a huge impact on energy consumption in cities, both in terms of transportation and in terms of how buildings are heated or cooled. By using machine learning, they can drastically improve design choices, because they can now approximate their very computationally heavy models and run them much faster, which means they can have more runs and try out more design configurations. So this is a rather indirect application, but it has huge implications for emissions for many decades to come.
Chris Adams: So essentially it's using housing policy as climate policy, because there's a huge amount of emissions built into how people live, and whether they need to drive everywhere in a car, and stuff like that. Is that some of what it's doing, and making that part easier?
Lynn Kaack: It's not really looking at housing policy, but at how districts are designed. They take a group of houses, say, when a new district is to be built, and then they simulate the wind flow going through the city, which requires very expensive simulation models. They take the outputs of their model and approximate it with a machine learning model, which makes it much, much faster.
So from hours or days, you go to milliseconds or seconds for one run. Then you can try out different design configurations and better understand how the built infrastructure affects natural cooling in cities, for example, or walkability, or the energy impacts of the microclimate on the built environment generally.
Chris Adams: Wow, I had no idea that was actually possible. That's really, really cool.
Will Buchanan: That's very cool. That's similar to generative design.
Chris Adams: "Generative design." This is a phrase I haven't heard, actually, Will. Maybe you could elucidate or share something there.
Will Buchanan: It's similar to some software that Autodesk has built, where you can try out many different iterations of a design and come up with optimal solutions. I think it's really cool that you're consolidating it and running these models more efficiently.
Chris Adams: Cool. And that's a bit like following, say, a fitness function: I say I want something that works like a chair, it needs four legs and a seat, and then it essentially comes up with designs, or iterates through some of the possibilities. Something like that?
Will Buchanan: Exactly.
Chris Adams: Oh, wow. Okay. That's cool. All right. So we've spoken about AI, and there are a few exciting, interesting projects we can add to the show notes, for people to look into and see how they might relate to what they do, I suppose. I want to ask a little bit about measuring impact from these projects, because there are quite a few different ways you can measure impact here.
Many times it can be quite a difficult thing to pin down, and this is continually a thing that's come up. I know that people have tried to come up with specs like the Software Carbon Intensity, and I'm sure, Abhishek, you've had some experiences here. Will, you've mentioned a little bit about actually measuring impact internally, and it sounds like you've just had to do a bunch of this work on the ML team right now, exposing some of these numbers to the people consuming these services in the first place. Could you talk about some of that, perhaps?
Will Buchanan: Certainly. As I mentioned, we have shipped energy consumption metrics for both training and inference within Azure Machine Learning, and that's really complex when you think of the infrastructure required just to report that. But that doesn't necessarily account for the additional power consumed in the data center, such as the idle power for devices, or the utilization of your servers.
There are so many different factors there, so you really could encounter scope creep when you come to your measurement methodology. It's really necessary to put boundaries around that.
Chris Adams: And when you use the term boundaries here, you're saying: I'm going to measure the environmental impact of the servers, but not the environmental impact of constructing the building the servers sit in. Is that the idea when you're referring to a boundary here?
Will Buchanan: Yes, that's a great example.
Chris Adams: Okay. All right. I think this is something we've come across quite a few times in other places as well, actually, so maybe it's worth asking about this boundary issue, because it automatically sounds complicated.
And I know that, Abhishek, you've had to wrestle with this at your end as well, deciding what's in or out, because I think this is one thing we've had to do explicitly for the Software Carbon Intensity spec, right?
Abhishek Gupta: Exactly. When we talk about boundaries, it's trying to get a sense of what the actual pieces being consumed are, right? From an operational standpoint, from an embodied emissions standpoint, and how you make those allocations across what your system is consuming.
And I use the word system because, when we talk about software, we're not just talking about a specific piece, but about everything it touches: network, bandwidth consumption, and, as Will was saying, idle power. When we're looking at cloud computing, it becomes even more complicated, where your pieces of software are sharing tenancy across pieces of hardware, different consumers are perhaps sharing that hardware with you, and you have to think about whether you've booked the resource ahead of time or not, whether it's hot or cold in terms of its availability, and what implications that has.
There are so many different facets to it, and what I want to highlight here is that each of those decisions comes with a trade-off, right? We also don't have any standards for how we should go about measuring this, and for what should or shouldn't be included. So the way people report these numbers today also doesn't really make them actionable for folks who consume, or want to consume, these reports and metrics when deciding whether something is green or not.
And I think that's one of the ways the Software Carbon Intensity Specification is trying to help: to standardize this first and foremost, but also to make it actionable, so that if you're someone who's environmentally conscious, you can make the right choice by being informed about what the actual impacts are.
Chris Adams: Okay, here's a question I'm curious about, because so far we've only been speaking about the environmental impact of IT itself, its direct emissions. But my assumption is that there are also ways we might talk about the impact it has on the outside world, in terms of activity we're speeding up, accelerating, or supporting.
Is that the only issue we need to think about, or are there other things to take into account with this system boundary we've just been talking about?
Lynn Kaack: Yeah. So the system effects are really important to look at and to consider. Just to give an example: if you use machine learning in, let's say, the oil and gas sector, to make a small part of the operation more energy efficient, at first sight that looks like something that could be considered sustainable and green. But you also have to realize that often you are reducing costs as well, and that might change how competitive oil and gas, in this particular example, or the particular company, is, which might actually shift how much oil and gas we end up using in the short run, and how the price changes.
So these indirect effects can actually have much larger impacts than the immediate effects of such an application. Drawing boundaries is really important, and so is opening this up to a broader system-level view, really trying to understand how the technology also changes larger consumption and production patterns.
Chris Adams: So if I understand that correctly, that's talking about the consequences of an intervention we might make. Even though we might have reduced the emissions of, say, the drilling part by putting a wind turbine on an oil rig, for example, that might change the economics and make people more likely to use the oil, which in many cases they might burn, for example.
Is that basically what you're saying there?
Lynn Kaack: Essentially what I'm saying is that efficiency improvements in particular, which can often be achieved with data science or machine learning or AI systems, usually come with cost reductions, and those cost reductions then change something. Often this is discussed under rebound effects, but it's not only rebound effects.
There are system-level impacts that come from these smaller-scale applications that need to be considered.
Will Buchanan: That's such a good point, and I think I've also heard it called the Jevons paradox.
Chris Adams: Yes, the Jevons paradox. This is stuff from the 1800s with steam engines, right? My understanding of the Jevons paradox was that when people made steam engines more efficient, this led to people burning more coal, because it suddenly became more accessible to more people, and you end up using engines in a greater number of factories.
So there's a kind of rebound, I think, that we need to take into account, and this is something that has been quite difficult to capture with existing ways of tracking the environmental impact of particular projects. We have the idea of, say, an attribution-based approach and a consequence-based approach, and maybe it's worth talking about some of the complexities we might need to wrestle with when designing a system here. Abhishek, I think one of the early decisions with the Software Carbon Intensity spec was to favor a marginal approach over an attributional one. Without diving too deeply into jargon, maybe you could share a bit more on that part, because it sounds worth explaining to the audience a bit better.
Abhishek Gupta: Indeed. The reason for making that choice was, again, our emphasis on being action-oriented, right? As we started to develop the Software Carbon Intensity Specification, one of the early debates we had to wrestle with, and Will was of course a crucial part of this, as were the other folks in the standards working group, was figuring out how, for example, the GHG way of doing that accounting doesn't really translate all that well to software systems, and how adopting a slightly different approach would lead to better, more actionable outcomes for the folks who ultimately want to use this to change behavior. Without getting into the specifics of what marginal and consequential approaches are, and I'm sure Will would be happy to dive into all of those details, as would I, the thing we were seeing was that we're doing all this great work around scope one, two, three emissions, et cetera, but it's not really helping to drive behavior change. And that's really the crux of all of this, right? We're not doing all of this accounting just to produce reports and spill ink, but to concretely drive change in behavior.
And that's where we found that adopting a consequential, marginal approach actually helped make it more actionable. This was coming from folks in the standards working group, including Will and myself, who are practitioners, who were itching to get something that helps us change our behavior, and our teams' behaviors, when it comes to building greener software, broadly speaking.
Chris Adams: Okay. So that helps explain the difference: a consequential or marginal approach asks what my building this thing makes more likely to happen. And if I understand it, the GHG Protocol you mentioned, the Greenhouse Gas Protocol, with its scoped emissions approach, is the standard way an organization might report its climate responsibility, as it were. When you say scoped emissions, that's like scope one, which is emissions from fossil fuels burned on site or in your car, for example; scope two, which is electricity; and scope three, which is your supply chain.
If I understand what you're saying, there's a kind of gap there that doesn't account for the wider impacts. Some people have referred to this as scope zero or scope four: the impacts an organization is having on the world. As we mentioned before, that could be systemic change, or, as Lynn mentioned, changing the price of a particular commodity to make it more or less likely to be used. And this is what I understand the SCI is trying to do: it's trying to take this consequential approach, because the current approach doesn't capture all of the impacts an organization might actually have. Right?
Will Buchanan: Yeah, that's a good summary. One challenge I have noticed is that until it's required in reporting structures like the Greenhouse Gas Protocol, organizations don't have an incentive to really take the action they need to avoid climate disaster. It's something I encounter on a daily basis, and I think broadly we need to bring this into the public discourse.
Chris Adams: I think you're right. Lynn, when I've seen some of the work you've done previously, this is something that's come into some of the briefings you've shared through the Climate Change AI work, and some of the policy briefings for governments as well. Is there something you might be able to add here?
Lynn Kaack: Yeah, something that comes to mind is a concrete piece of legislation that's currently being developed, the EU AI Act. That's a place where, for the first time, AI systems are being regulated at that scale, and climate change almost didn't play a role in the first draft of the regulation.
So here it's really evident that if we don't write in climate change now as a criterion for evaluating AI systems, it will probably be ignored for years to come. The way the legislation works is by classifying certain AI systems as high risk, and also outright banning some other systems. But as the original legislation stood, systems weren't explicitly classified as high risk even if they had a huge environmental or climate change impact. That's something I talked about a lot with policymakers, trying to encourage them to more explicitly make environmental factors and climate change criteria for evaluating systems. So that would be a very concrete case where making climate change more explicit in the AI context is important, also in terms of legislation.
Abhishek Gupta: So there's a lot to be said about the EU AI Act, right? A ton of ink has been spilled everywhere. It's called the Brussels effect for a reason: whatever happens in the EU is taken as gospel and spreads across the world, which, as Lynn has pointed out, is not perfect.
One of the things I've seen being particularly problematic is the rigid categorization of what the high-risk use cases are, and whether the EU AI Act, hopefully with some revisions that are coming down the pipe, will have the ability to add new categories, and not just update subcategories within the existing identified high-risk categories.
And I think that's where things like considerations for environmental impacts, and really tying them to the societal impacts of AI, where we're talking about bias, privacy, and all the other areas, are going to be particularly important, because we need multiple levers to push on to get people to consider the environmental impacts.
Given that there is already such great momentum on privacy considerations and bias considerations, I think now is the time to really push hard to make environmental considerations an equally first-class citizen when thinking about the societal impacts of AI.
Will Buchanan: This is something I'm incredibly passionate about. I think it needs to encompass the full scope of harms caused by an AI system. That could be the hidden environmental impacts of either the development or the application; the application could vastly outweigh the good you're doing, even just by expanding oil and gas production by a certain percentage. I think it must account for all of the harms, for both ecosystems and people.
Chris Adams: And there's this idea of a risk categorization. Does that categorization actually include this stuff right now? What counts as a high-risk use case, for example, Lynn?
Lynn Kaack: So I haven't seen the latest iteration; there's been a lot of feedback on the version that was published in April last year, and I think a lot of things have changed. In the first version, high-risk systems were those that affect personal safety, human rights in the sense of personal wellbeing, but it completely overlooked the environmental protection aspects of human rights.
Chris Adams: Wow, that's quite a large one, especially when you take human rights into account. Okay. We've spoken about the external impact, but I'm led to believe there's also an internal impact, that AI has some direct impact we might want to talk about as well. As I understand it, we spoke about IT being around 2 to 3 percent of emissions. So beyond the external impact, why might we want to care about the internal impacts of AI as well, for example the direct emissions?
Will Buchanan: So by direct emissions, you're talking about, let's say, the scope two, the operational cost of the model.
Chris Adams: Yeah. There's the external impact, the stuff we used the phrase scope four for, everything it induces in the world. But there's also the stuff that happens inside the system boundary we've spoken about, and presumably that's something we should be caring about as well, right? There will be steps we can take to make the use of AI, particularly the models, more efficient and more effective. That's something we should be looking at too, presumably?
Will Buchanan: Totally. In our paper, which is going to be published, I think, on Monday, we've calculated the emissions of several different models, and one of them was a 6-billion-parameter transformer model. Its operational carbon footprint was equivalent to about a rail car of coal, and that's just for training. So it's really imperative that we address this and provide transparency into it.
Lynn Kaack: Is that for developing a model, or for training it once? I mean, is that with grid search, architecture search?
Will Buchanan: For a single training run. So it does not account for sweeps or deployments.
Chris Adams: All right, so there's some language we haven't heard here before. But maybe it's worth asking: Will, you said a rail car full of coal, and I don't actually know what that is. In metric terms, what does that look like?
Will Buchanan: A hundred million grams, so about a hundred metric tons. I don't have the exact conversion handy, but we took the U.S. EPA greenhouse gas equivalencies. And I should add that the methodology we applied was the Green Software Foundation's SCI: we calculated the energy consumed by the model and multiplied it by the carbon intensity of the grid that powers that data center.
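For listeners who want to see the shape of the methodology Will describes, here is a minimal sketch of an SCI-style calculation: energy consumed times the grid's carbon intensity, plus an embodied-emissions share, per functional unit. All numbers below are illustrative assumptions, not figures from the paper.

```python
# Sketch of a Software Carbon Intensity (SCI) style calculation:
#   SCI = ((E * I) + M) per R
# E: energy consumed (kWh), I: grid carbon intensity (gCO2e/kWh),
# M: embodied emissions share (gCO2e), R: number of functional units.

def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: int) -> float:
    """Carbon emitted per functional unit, in gCO2e."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Illustrative only: a job consuming 50,000 kWh on a grid emitting
# 400 gCO2e/kWh, ignoring embodied carbon, counted as one unit.
print(sci(50_000, 400, 0, 1))  # 20000000.0 gCO2e, i.e. 20 metric tons
```

The functional unit R is what makes the metric actionable: the same total emissions can be reported per training run, per API call, or per user, depending on what behavior you want to steer.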
Chris Adams: Okay, cool, and that was per training run? So that wasn't the, in the, the equation of the entire model, is that correct?
Will Buchanan: Correct.
Abhishek Gupta: That's the other interesting part, right? When you're thinking about the life cycle of the model, and life cycle has multiple meanings here, once that model is out there, what are the inference costs? If it's something that's going to be used hundreds, thousands, tens of thousands of times, or if it's a large model that's now being used as a pre-trained model and is going to be fine-tuned by other folks downstream, are we able to talk about amortizing that cost across all of those use cases?
Again, what becomes interesting is how we account for that stuff as well, right? Because we don't have complete visibility into it either. And I know Lynn's nodding here, because her paper, whose embargo actually lifts in an hour and a half, talks about some of those system-level impacts.
So maybe, Lynn, you want to chime in and talk a little bit about that as well?
Lynn Kaack: Yeah, thank you so much. Exactly. I think a crucial number we're currently still missing is not what is emitted by a single model in a well-understood setting, but what is emitted overall from applying machine learning. What are the usage patterns in practice? How often do people develop models from scratch?
How often do they train or retrain them? By people I mean, of course, organizations, and typically large organizations and companies. And how do they perform inference, on how much data, how frequently? There are some numbers out there from Facebook and Google, and in their large-scale applications, inference actually outweighs their training and development costs in terms of greenhouse gas emissions.
So inference might become a bigger share depending on the application. We really need to understand better how machine learning is being used in practice, also to understand the direct emissions that come from it.
Chris Adams: And an inference is a use of a model once it's out in the wild? So you could think of the making part, and then the usage part, the inference. Is that how that works?
Lynn Kaack: Exactly. If you use a model on a data point, we call that inference: you feed in the data and it gives you a result. Training means you train a single configuration of the model once on your training data set. And development is what I refer to when you search over different configurations of the model.
There are lots of hyperparameters you can adjust to achieve better performance, and when new models are being developed, there's an extensive search over those hyperparameters and architecture configurations, which of course gets really energy intensive, because you're essentially training the model thousands of times.
Will Buchanan: Yeah, one of the figures that really resonated with me: I think Nvidia posted on their blog that inferencing accounts for about 80 to 90 percent of the carbon cost of a model. And I think, Lynn, in one of your papers, Amazon had also claimed around 90 percent. So these are really non-trivial costs, and I'm not aware of any framework to measure this.
Lynn Kaack: Yeah. Just to be clear, that Amazon number is monetary cost, and it came from a talk. But there are numbers now published by Google and Facebook, where they look at some of their applications for which inference outweighs training in terms of energy consumption. They're not exact numbers, and it's not entirely clear which applications those are, but there is at least some data showing this.
And I think it just highly depends on the application you're looking at. Sometimes you build a model and then you do inference once on your data set, and in other cases you build a model and then apply it a billion times a day. That, of course, can add up to a lot more energy consumption.
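Lynn's point, that a model applied a billion times a day can dwarf its one-off training cost, can be illustrated with a toy back-of-the-envelope calculation. The energy figures below are made-up assumptions, purely to show the shape of the trade-off:

```python
# Toy lifecycle comparison: one-off training energy vs. serving at scale.
TRAIN_KWH = 100_000            # assumed one-off training cost
PER_INFERENCE_KWH = 0.0002     # assumed energy per single inference

def energy_shares(n_inferences: int) -> tuple[float, float]:
    """Return (training share, inference share) of total lifetime energy."""
    inference_kwh = n_inferences * PER_INFERENCE_KWH
    total = TRAIN_KWH + inference_kwh
    return TRAIN_KWH / total, inference_kwh / total

# A model used once on a static data set: training dominates completely.
# A model called a billion times a day for a year: inference dominates.
train_share, infer_share = energy_shares(1_000_000_000 * 365)
print(f"inference share of lifetime energy: {infer_share:.1%}")
```

Under these assumptions, inference ends up as well over 99 percent of lifetime energy, which is why the deployment pattern matters at least as much as the training run.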
Chris Adams: Wow, I didn't realize that was actually an issue, because most of the numbers I've seen have been focusing on the training part. So, Will, I think this is something we spoke about before: there's a trend in the energy use from training already. I've seen figures from OpenAI, but my assumption was that computers are generally getting more efficient, about twice as efficient every two years or so, with Moore's law or Koomey's law and things like that. But if you're seeing an uptick in usage here, does that mean things are staying about the same, or is there a trend we should be taking into account?
Will Buchanan: So I think the computational cost of training has been doubling every 3.4 months or so, and the trend is only accelerating. The models are just getting larger and larger; I think GPT-3 is one of the largest ones around at this point. We might challenge Moore's law.
Chris Adams: Okay. So if Moore's law is doubling once every two years, what is the impact of doubling every 3.4 months? Over a few years, what does that work out to be? I don't think I can do the exponential math in my head, but it sounds like a pretty big number if something is doubling every three or four months, right?
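The exponential math Chris asks about is simple enough to work through: doubling every 3.4 months compounds to roughly a 130-fold increase over the two years in which a Moore's-law-style trend would give a single doubling.

```python
# Growth over 24 months for two different doubling times.
months = 24
compute_growth = 2 ** (months / 3.4)  # training compute, doubling every 3.4 months
moore_growth = 2 ** (months / 24.0)   # efficiency, doubling every 24 months

print(round(compute_growth))  # roughly 133
print(moore_growth)           # 2.0
```

So even if hardware efficiency keeps pace with its historical trend, it is outrun by nearly two orders of magnitude over just two years of this compute trend.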
Will Buchanan: I also don't have the math handy, but I think it's important to note here, and Abhishek was talking about this earlier: models are very flexible, so you can train them once, apply some fine-tuning or a transfer learning approach on top, and then repurpose these models for a number of different applications. You can even compress them, say using ONNX Runtime. You can be very efficient; you can really amortize the cost of that model.
Abhishek Gupta: Building on Will's point, there's a lot of work on quantizing the weights of a trained network, applying distillation approaches, and using teacher-student approaches, all of which help shrink the model down quite a bit. The whole push for TinyML, shrinking models so they can be deployed on edge devices, has helped to manage the computational impacts to a great extent.
One of the other things I wanted to highlight, as Will was talking about models getting larger, is that there's almost a fetish in the world today for continuously scaling, pushing for ever larger models, chasing SOTA, as they'd say, chasing the state of the art. That's great for academic publications, where you get to show, "Hey, I improved state-of-the-art performance on this benchmark data set by 0.5 percent," but what's being ignored is that it has a tremendous computational cost. In fact, one of the hidden costs I think doesn't get talked about enough is this statistic that 90 percent of models don't make it into production.
And that relates to things like neural architecture search and hyperparameter tuning, where you're constantly trying to refine a model to achieve better performance. A lot of that actually goes to waste, because that stuff doesn't make it into production; it's not even used.
So there's a whole bunch of computational expenditure that never sees the light of day, never becomes useful. That obviously has environmental impacts, right? Because of the operational and embodied carbon. But none of it gets talked about, reported, or documented anywhere, because, well, who wants to admit, "Hey, I trained 73 different combinations to get to where I'm at"?
You just talk about the final results.
Chris Adams: Okay, say you don't want to go down one of those rabbit holes: what should you be using, or where would you start if you wanted to apply some of these ideas about greener AI in your daily work? Does anyone have anything they would lead with?
Will Buchanan: Bigger is not always better; sometimes you really should choose the right tool for the job. We've had some really great graduate student projects from the University of Washington's Information School, and they built some case studies and samples around green AI. As an example, a project led by Daniel Chin compared a sparse green-AI model to a dense model in an anomaly detection setting.
And they found that using a sparse random forest (fewer trees and a smaller depth per tree) would save a massive amount of carbon and provide equivalent accuracy. I think it saved about 98 percent in terms of monetary cost and energy.
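As a rough illustration of why the sparse configuration Will mentions is so much cheaper: the work a random forest does scales with the number of trees and the nodes per tree, and a binary tree of depth d has at most 2**(d + 1) - 1 nodes. The configurations below are hypothetical, not the actual settings from the student project.

```python
# Upper bound on total node count for a random forest:
# n_trees binary trees, each with at most 2**(depth + 1) - 1 nodes.
def max_nodes(n_trees: int, depth: int) -> int:
    return n_trees * (2 ** (depth + 1) - 1)

dense = max_nodes(n_trees=100, depth=10)  # hypothetical dense config
sparse = max_nodes(n_trees=20, depth=7)   # hypothetical sparse config

print(f"sparse forest is {1 - sparse / dense:.1%} smaller")
```

Because tree size grows exponentially with depth, even modest cuts to depth and tree count compound into order-of-magnitude reductions in the work done per prediction.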
Chris Adams: Okay, wow, that's bigger than I was expecting. What would you say to people if they're in production and trying to do something now?
Lynn Kaack: I think a big goal should be not only to develop more energy-efficient machine learning models, but also to ensure that those are actually being used. Surprisingly, even within the same company, certain model developments are sometimes not passed on to other parts of the company. So really trying to develop standard models that are then also used in practice is important.
So, interoperability of energy-efficient machine learning models.
Chris Adams: If someone does want to look at this stuff, and they do want to apply some of these ideas, you spoke a little bit about using some other models. Where would you suggest people look if they wanted to operationalize some of these wins, or some of the better ways to make software greener, for example?
I realize you've got a paper coming out and you work on this day to day. So yeah, what would you point us to?
Lynn Kaack: So, I mean, as I understand it, there's a lot of ongoing research in the machine learning community on more energy efficient machine learning. So I don't have any names off the top of my head in terms of workshops or community resources where one can see what the most energy efficient model types are for a specific application.
I know that there are some very comprehensive papers that summarize all the different research approaches that are being taken, but I would encourage you, if you are looking to use a deep learning model of some kind, to just inform yourself quickly on whether there's also a leaner version of it. Many of the widely used models, like BERT, for example, have smaller versions that can do almost the same thing.
And maybe your performance doesn't suffer much if you're using a much lighter model.
Chris Adams: Okay, so lighter models, and looking around at what's already out there. And Will, is there a paper or a source you might point to?
Will Buchanan: I was actually going to talk about the Carbon Aware paper that we're about to publish. I think that's a slightly different track.
Chris Adams: That's up next week, right? So that'll be the 13th or 14th of June. That's when that'll be visible, correct?
Will Buchanan: Exactly.
Chris Adams: Okay, cool. All right, then. There's a load more that we could dive into. We've got copious, copious, copious show notes here. So what I'm gonna do is I'm gonna say thank you everyone for coming in and sharing your wisdom and your experiences with us, and hopefully we'll have more conversations about green software in future. Thank you folks.
Asim Hussain: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show and of course we want more listeners.
To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again and see you in the next episode.
OCP, Wooden Datacentres and Cleaning up Datacentre Diesel (1:01:18)
Host Chris Adams is joined by special guest Karl Rabe, founder of WoodenDataCenter and co-lead of the Open Compute Project's Data Center Facilities group, to discuss sustainable data center design and operation. They explore how colocating data centers with renewable energy sources like wind farms can reduce carbon emissions, and how using novel materials like cross-laminated timber can significantly cut the embodied carbon of data center infrastructure. Karl discusses replacing traditional diesel backup generators with cleaner alternatives like HVO, as well as designing modular, open-source hardware for increased sustainability and transparency. The conversation also covers the growing need for energy-integrated, community-friendly data centers to support the evolving demands of AI and the energy transition in a sustainable fashion.

Learn more about our people:
Chris Adams: LinkedIn | GitHub | Website
Karl Rabe: LinkedIn | Website

Find out more about the GSF:
The Green Software Foundation Website
Sign up to the Green Software Foundation Newsletter

Resources:
Windcloud [02:31]
Open Compute Project [03:36]
Software Carbon Intensity (SCI) Specification [35:47]
Sustainability » Open Compute Project [38:48]
Swiss Data Center Association [39:07]
Solar Microgrids for Data Centers [47:24]
How to green the world's deserts and reverse climate change | Allan Savory [53:39]
Wooden DataCenter - YouTube [55:33]

If you enjoyed this episode then please either:
Follow, rate, and review on Apple Podcasts
Follow and rate on Spotify
Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Karl Rabe: That's a perfect analogy, having like a good neighbor approach saying, "look, we are here now, we look ugly, we always box, you know, but we help, you know, powering your homes, we reduce the cost of the energy transition, and we also heat your homes."
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.

Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. How do you green the bits of a computing system that you can't normally control with software? We've discussed before that one option might be to shift where you run computing jobs from one part of the world to another part of the world where the energy is greener. And we've spoken about how this is essentially a way to run the same code, doing the same thing, but with a lower carbon footprint. But even if you have two data centers with the same efficiency on the same grid, one can still be greener than the other, simply because of the energy that went into making the data center in the first place and the materials used. So does this make a meaningful difference, though, and can it make a meaningful difference? I didn't know, so I asked Karl Rabe, the founder of Wooden Data Center and Windcloud, and now increasingly involved in the Open Compute Project, to come on and help me navigate these questions, as he is the first person who turned me on to the idea that there are all these options available to green the shell, the stuff around the servers, which also has an impact on the software we run. Karl, thank you so much for joining me. Can I just give you the floor to introduce yourself before we start?

Karl Rabe: Thanks, Chris. This is an absolute honor, and I'll have to admit, you know, you're a big part of my carbon aware journey, so I'm very glad that we finally get to speak.
I'm Karl, based out of North Germany. I always say I had one proper job. I'm a technical engineer by training, and then I fell into the data center business, which we can touch on a little later, with Windcloud, which remains a data center thought from the energy perspective, which is a very important idea in 2025. But we pivoted about four years ago to Wooden Data Center, which we can probably touch upon a little later, also in realizing there is this supply chain component to the data center, and there are also tools to take action against those emissions. And I'm learning and supporting and providing, you know, as a co-lead in the data center facilities group of the OCP, where we work, you know, with the biggest organizations directly in order to shape and define the latest trends in the data center, and especially navigating the AI buildout in somewhat of a, yeah, sustainable way.

Chris Adams: Okay, cool. And when you say OCP, you're referring to the Open Compute Project, the kind of project with Microsoft, Meta, various other companies, designing essentially open source server designs, right?

Karl Rabe: Correct. It was initially started by then-Facebook, now Meta, in order, yeah, to create or to cut out waste in server design. It meanwhile grew into cooling environments, data center design, chiplet design. It's a whole range of initiatives. Very interesting to look into. And happy to talk about some of those projects. Yeah.

Chris Adams: All right, thanks Karl. So if you are new to this podcast, my name is Chris Adams. I am the director of technology and policy at the Green Web Foundation, a small Dutch non-profit focused on a fossil free internet by 2030. And I also work with the Green Software Foundation, the larger industry body, in their policy working group. And we are gonna talk about various projects, and we'll add all the links we can think of to the show notes as we discuss.
So if there's any particular thing that caught your eye, like the OCP or Wooden Data Centers, if you follow the link to this podcast's website, you'll see all the links there. Alright then, Karl, are you sitting comfortably?

Karl Rabe: I am sitting very well. Yeah.

Chris Adams: Good stuff. Alright, then I guess we can start. So maybe I should ask you, where are you calling me from today, actually?

Karl Rabe: I'm calling you today from the west coast of the North Sea shore in northern Germany. We are not a typical data center region for Germany, per se; that would be Frankfurt, you know, 'cause of the big internet hub there. But we are actually located right within a wind farm, you know, in my home, which initially was, you know, my home growing up, and turned into my home office, and eventually into what was somewhat considered the international headquarters of Wooden Data Center. Yeah, and we're very close to the North Sea and we have a lot of renewable power around.

Chris Adams: Oh, I see. Okay. So near the north of Germany, near Denmark, where Denmark has loads of wind, you've got the similar thing where, okay.

Karl Rabe: Yeah, absolutely. Yeah.

Chris Adams: Oh, I see. I get you. So, ah, alright. For people who are not familiar with the geography of Europe, or Northern Europe in particular, the north part of Germany has loads of wind turbines and loads of wind energy, but lots of the power gets used in other parts of it. So, Karl is in the windiest part of Germany, basically.

Karl Rabe: That's correct, yeah. We basically have offshore conditions onshore. And it's a community owned wind farm, which is also a special setup, which makes it very easy to get, you know, the people's acceptance. We have about a megawatt per inhabitant of this small community. And so this is becoming, you know, the biggest, yeah, economic factor of the small community.

Chris Adams: Wow.
A megawatt per inhabitant. Okay, so just for context, for people who are not familiar with megawatts and kilowatts: a typical house might use maybe about half a kilowatt of constant draw on average over the year. So that's a lot of power per person for that place. You're in a place of power abundance, compared to the scenario where people are wondering where the power is gonna come from. Wow, I did not know that.

Karl Rabe: Yeah, so that's a bit of the background, so to speak. We are now trying to go from 300 megawatts to 400 megawatts. Germany's been pushing for more renewable energy, and we still have some spots that we can, under new regulations now, build out. And the goal, or the big dream, of the organization, the company running this wind farm for us, is to produce a billion kilowatt hours per year. We're now slightly below that, and we're trying to, yeah, add another, we probably need to reach another 25 percent more production. And, so to speak, you are absolutely right, we are in an energy abundance, and that was one of the prerequisites for Windcloud. 'Cause you know, the easiest innovation is one and one is two. We had energy, and I was aware that we also had fiber infrastructure in the north to run those wind parks. So we said, why don't we bring a load to those? That was the initial start of Windcloud.

Chris Adams: Okay, so maybe we should talk a little bit about that. I hadn't realized the connection between the geography and the fact that you're literally in the middle of a wind farm, which is why this came together. Okay. So, as I understand it, and now this makes sense why you are so involved in Windcloud.
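Karl's "megawatt per inhabitant" and the wind farm's billion-kilowatt-hour target can be put into perspective with some quick arithmetic. This is just a back-of-envelope sketch using the numbers mentioned in the conversation, plus Chris's assumed half-kilowatt average household draw:

```python
# Back-of-envelope figures from the conversation; 0.5 kW is Chris's
# rough average household draw, not a measured value.
inhabitant_share_kw = 1000.0   # ~1 MW of installed capacity per inhabitant
avg_household_draw_kw = 0.5    # average constant draw of a typical home

homes_per_inhabitant_share = inhabitant_share_kw / avg_household_draw_kw
print(f"one inhabitant's 1 MW share ~= {homes_per_inhabitant_share:.0f} average homes")

# The stated goal: one billion kWh generated per year.
target_kwh_per_year = 1e9
hours_per_year = 365 * 24
avg_output_mw = target_kwh_per_year / hours_per_year / 1000
print(f"1 billion kWh/year averages out to ~{avg_output_mw:.0f} MW of continuous output")
```

Against the 300 to 400 MW of installed capacity mentioned, an average output of around 114 MW implies a capacity factor of very roughly 30 to 40 percent, which is plausible for a windy coastal site.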
So for context, my understanding of Windcloud is that it's essentially a company where, rather than connecting data centers via big power lines to somewhere else, with the actual generation miles away from where the data centers are, the idea instead was to put the data centers literally inside the towers of the wind turbines themselves. So you don't need to have any cables, and, well, you've obviously got green energy, because it's right there, you're literally using the wind turbine. So, apart from this sounding kind of cool, can you tell me why you do this from a sustainability perspective in the first place?

Karl Rabe: Yeah. So, the way we discovered that, and this is probably the biggest reference that I can give on the software developer front, is that I came out of a study program in the UK. We had a really nice cohort; we were constantly bouncing ideas off of each other. I wanted to actually build small aircraft, because we have a wind farm and we have wealth with that. We actually have people building small planes in our location. They told me I needed about 5 million euros to do it, which I didn't have. So I started pivoting to a software idea, and to hosting that software. I just quickly discovered, you know, the amount of energy going into data centers, the amount of, you know, associated issues. And back then, 2015, 16, we were literally just discovering the energy aspect of it. We didn't discuss, you know, water and land use and all of that. We really focused on the energy, and then we said, "look, wait a second. You know, we have all this excess of energy. We literally cannot deliver it at this point. So we have a very high share of shutting down our wind turbines when there's just too much energy to move around.
Why not bring the data center as a flexible load close to the production, and enable, you know, sustainable compute, to then send packets rather than energy, which is way easier, you know, over the global fiber grids." And that's how I got started and fell into the data industry. A big benefit, and a big learning, from that start was that I knew nothing about data centers. And as an engineer, a lot of things were not adding up. We looked at the servers back then, and even then it said, okay, this is good, you know, to run from 15 to 32 degrees. I said, "32 degrees? Why? What is data center cooling, and why is data center cooling? We don't have 32 degrees in the north." Most likely now we probably will do within eight years.

But the important thing was really challenging this, and we started with very little money, and we couldn't afford the proper fancy stuff that all these data centers have, like chillers, you know, spending electric energy to cool something which really does not need cooling, in my opinion, up to now. That was the start of this, you know, and so the company Windcloud is still ongoing. What we had as a huge problem, and my gut feeling for this was always that we needed to find a way to be able to compete with the Nordics. So we have renewable energy, but we need to have it cost effective. And that was something that we tried two or two and a half times, I would say, always with a legal way to access the energy in a proper setting. It was always extremely difficult and extremely frustrating, also because the German energy system is very complicated. It is, you know, geared or developed from a centralized view, and is benefiting, you know, large scale industry and large scale energy companies. To put it in other terms, you're probably familiar with the Asterix comics.
You know, that far off north in Germany, probably people, you know, were a bit suspect, you know, of what we're doing there: now we start producing energy, and now we also want to use the energy, so that is not adding up. It's very hard, and close to impossible, to access your own produced energy at scale, you know, even in an abundance. And that was, yeah, that was something that we always faced, which led to other innovations. So we built the first data center, or one of the few data centers, to reuse the heat in Germany, putting it into an algae farm. And we were trying to create really efficient PUEs already back then, you know, whereas the industry standard is still quite high. I claim I never had enough money to build a data center with a PUE above 1.2, or even 1.1. The first servers were cooled with, you know, a temperature-regulated fan. We built it with the same guy who built, you know, a pigsty for my father, you know. We nearly didn't call it Windcloud. We nearly called it Swines and Servers.

Chris Adams: Okay. Pigcloud. Yeah.

Karl Rabe: Yeah, Pigcloud, but it could have been, you know, misleading. And so the good thing coming out of that, you know, going back to those struggles in getting started, is that we were forced to uncover a lot of the cooling chain and the energy distribution chain, which were not, you know, not really adding up for us. And that is, you know, still one of the biggest supports for us to build efficient data centers and to create, you know, sustainable solutions.

Chris Adams: Okay. Cool. Alright then. So. Okay. I didn't know anything about the Schweins und Servers aspect at all, actually. I'm not sure what the German for server would actually be in this context. Was it literally gonna be Schweins und Servers, or?

Karl Rabe: Yes. Something like that.

Chris Adams: Okay. Wow.
That's, I was not expecting that. I think Windcloud sounds a bit better, to be honest.

Karl Rabe: Yeah, thanks. No, the name is great. I'm very simple like that, you know: we had Windcloud, so we take wind, we make cloud. Now we are Wooden Data Center; we build data centers out of wood. But to be fully honest, right now, so to speak, we're called Wooden Data Center, but what we do is try to decarbonize the data center. So wood is obviously a massive component of that, but we do see real good effort in the supply chains. Happy to go into that a little later, but there are some examples from fluids. We just found, you know, bio-based polycarbonate for hot and cold containments. So the number of components throughout the data center that have a bio-based, ergo a low carbon, alternative is ever increasing.

Chris Adams: Can I come back to that a little bit later? 'Cause I just wanna touch on the

Karl Rabe: Yeah, no.

Chris Adams: So the wind thing. So basically with Windcloud, the big idea was putting data centers in the actual wind turbines themselves. So that gives you access to green energy straight away, because you're literally using power that otherwise couldn't be transmitted, because the pipes weren't big enough, essentially, in some cases. And, I guess, a plus point to that is that if you are using a building that's already there, you don't have to build a whole new building to put the data centers inside. So there's presumably some kind of embodied energy advantage there, because there's a load of energy that goes into making concrete and stuff, which you don't have to spend because you are already using an existing building, right?

Karl Rabe: Yeah.
So to clarify on that, it is good that you touch on that, because this is literally a bit of a crossover: the company you're referring to is Wind Cause, which is a good friend of ours, and they are using the turbine towers.

Chris Adams: Ah.

Karl Rabe: They can do so because they use a little bit different type of turbine. And they're also based in the south of Germany. We had the same idea, because it's also very difficult to build next to a wind farm. The big difference is that the towers used at Wind Cause are concrete, and they have quite a lot of space; they're about 27 meters wide. Because, as discussed initially, we have offshore conditions onshore, we have steel towers, which are shorter and hence don't have these big diameters. You know, we build tall. And so we always had the challenge of still needing a data center, and that's where our learnings and inspirations for Wooden Data Center came from. But we still tried to reuse existing infrastructure. So at one point within the Windcloud journey, I was the co-owner of a former military bunker area. And we wanted to place data centers within those long concrete tubes, in order to, yeah, have a security aspect and not need, you know, a lot of additional housing or even bunkering. And obviously, for dodging bullets, a lot of concrete and reinforced steel was already spent on those facilities.

Chris Adams: I see. So you're reusing some of the existing infrastructure. So rather than building totally new things, you're reusing stuff that's already had a bunch of energy and emissions spent to create it in the first place. I see. Okay. All right.
So,

Karl Rabe: And back then, you know, because it's such a short time ago, I really need to emphasize that we really, you know, only had a hunch and a feeling that, oh yeah, this sort of thing has CO2 associated with it, and probably also the building of a data center. It was really hard to quantify, and I think carbon accounting is still somewhat of, not wizardry, but it's really hard to pull the right numbers. You know, only two years ago at the OCP Summit, in a Google presentation, the range that they mentioned, you know, for steel and concrete carbon was, you know, 7 to 11 for both equally. So the range of the total uncertainty, I feel, is quite high. You know, and this is one of the biggest and best funded organizations in the world. You know, we're still not able to get it more concrete, you know, and that's something we really need to work on with the industry and supply chains, in order to even be able to specify the problem.

Chris Adams: So, can I unpack that for a second before we talk a little bit about this? You're basically saying that even the largest companies in the world don't necessarily have good enough access to know what the carbon intensity of the concrete they've used in one data center is compared to another; it can vary quite a lot. Is that what you're saying there?

Karl Rabe: So this was basically specifying the global numbers for steel and concrete. So, I do believe that we now have relatively good visibility for our own builds and projects, and also for what we do moving forward. But really trying to grasp the global problem of it, that, still, you know, two years ago, had this high uncertainty, you know, 'cause we were working with numbers that were maybe five years old. We don't know the complete, you know, build-out of every city, every building globally. You know, there's just a lot of guesswork in that, globally.
And so I especially believe that, although we are Wooden Data Center, the amount of innovation that is being put into concrete, you know, has the potential to drastically reduce that for buildings. It's definitely still a huge problem for the data quality, and the emissions, yeah, guesswork that's in there, you know, and a lot of those things are based on scenarios, you know, and those are getting ever more real. But the best example for Wooden Data Center is a comparison of a steel-and-concrete building to a CLT one,

Chris Adams: Yeah.

Karl Rabe: and it assumes that the building only lives for 20 years, which, you know, can easily be 200 years, and that afterwards the timber is turned into, you know, chairs or tools or toys. But if you take the CLT and burn it, then obviously you have a zero-sum game: all the carbon that was stored is released. CLT is Cross-Laminated Timber, you know.

Chris Adams: Yeah. So this is a special kind of essentially machined timber that provides some of the strength properties of maybe steel or stuff like that, but is made from wood, basically, right?

Karl Rabe: Correct. So we need to stress that this is actually a material innovation. It's a relatively young material, based on, I think, a PhD thesis from Austria. And so we've only had CLT, or cross-laminated timber, for about 25 years,

Chris Adams: Oh, I see.

Karl Rabe: or maybe now 26 years. You have probably seen those huge wooden beams in, you know, storage buildings.

Chris Adams: Yeah.

Karl Rabe: Those are called GLT, glue-laminated timber. And the difference is those boards are basically glued in one direction, and they're really good for those beams or for posts. But to have ceilings, walls, and roofs, those massive panels, you now have the material of cross-laminated timber.
Chris Adams: Oh, okay. In both directions, right? Yeah.

Karl Rabe: Correct. And those now enable full massive wooden build-outs. And so the biggest challenge is that if we say wood, then the association, which we will probably touch on now or later, is fire.

Chris Adams: Yeah.

Karl Rabe: But in reality, those massive panels don't, you know, just flame up. They're fully tested and certified: they glim down, which means, you know, they turn black and then, at a thousand degrees, they slowly, you know, shrink.

Chris Adams: Like smolder, right? Yeah.

Karl Rabe: Yeah. And so the way we design data centers basically factors in this component, and we are able to create really fire-secure data centers built out of those new wood materials, basically.

Chris Adams: Okay. All right. So a lot of us typically think of data centers as things made entirely with steel, concrete and plastic all over the place. And essentially you can introduce wood into this, and it's not gonna burn down, because you have this material which is treated in such a way that it is actually very fire resistant. And that means you could probably replace, I mean, maybe you could talk a little bit about which bits you can replace. Like, would you replace a rack or a wall or a roof? Maybe we can talk about that, to make it a bit easier to picture what this stuff looks like.

Karl Rabe: No, absolutely. I'm still always very liberal in sending out samples to my clients, you know, but I don't have one here in my hand. So that is a very good question. If we were talking through a slide deck or something, I would try to show, in terms of scope one, two, and three, what we can do and what we have now. The biggest component is obviously the housing.
You know, what is your building or your room of a data center? When you are touching on existing buildings, CLT is also ideal for the building concept of putting data centers into existing large storage or logistics buildings. We can build rooms in those very quickly. And the other huge advantage of CLT is that we get those panels pre-manufactured, and they just fit,

Chris Adams: Oh, like stick them together like Lego, rather than have to pour concrete?

Karl Rabe: Yeah, a little bit. You need, like, a little bit of a leveling foundation. If you have an existing floor, still, some data centers, you know, prefer to have a new floor in the greenfield as well. But with those panels we can create the IT room relatively quickly, and then have build-out times averaging up to 40% quicker than a traditional steel-sandwich-and-concrete, you know, data center. So it is enormously easy to work with. It's very precise to pre-design and pre-manufacture, and then very easy to work with. If there's a problem on site, you know, you just crank out the chainsaw and adapt and adjust.

Chris Adams: Okay. Just to carve it down a bit.

Karl Rabe: So to speak. But once you have those panels assembled and secured, there's a lot of mass and a lot of volume to them, which creates very good fire-protective, physical-resistance and availability properties. And that is now really being seen as one of the core benefits, you know: the speed with which we can build this out.

Chris Adams: Oh, okay.

Karl Rabe: We have introduced wooden racks, and we also see more and more attention for those.

Chris Adams: Wait, sorry. Can I just, you said wooden rack, as in the big steel rack that holds the servers themselves, that could be made of wood as well now? So you'd have a rack thing holding a bunch of servers, right?
Karl Rabe: Correct. So we built this also. One of our clients has sent us a server casing and asked us to also think about doing the casing, but we're probably not a hundred percent there yet. In order to do that, we have an idea in the spirit of OCP, which is, you know, reduce and cut out stuff. One vision of that would be just a wooden, you know, board where you have dedicated spaces. You slide in your main board, connect power, connect liquid cooling, have fans on the back, and then cycle through only the boards. Remove, you know, not even fancy, but just the base frames of a server. But right now it's a combination: for the 19 inch standard and also the OCP standard, we reduce up to 98% of the steel in those constructions, and then only have the functional parts needed to stick in the servers made from steel railings, with wooden frames. And we do that for the OCP format, which is very popular. We get a lot of the special requirements because we are the only ones producing a small version of the rack. OCP has a lot of advantages, but the base rack format is two meters 30 high, which is a really hyperscale, you know, mass density approach. Which doesn't fit even through the doors of most data centers I know; you know, they still have relatively, you know, standard two meter high doors, or are able to fit in a 42U rack. But you need very special facilities, because those racks come pre-integrated and then you roll them into place. So you need a facility that has high doors, and ramps with small inclines, you know, or no ramps at all, in order to be able to place a fully integrated rack. We started building OCP racks because back at the time only hyperscalers were really getting those, and we wanted to do more of this open compute format and were able to offer that.
And the version three rack, you know, was a good candidate to convert to a wooden based structure.

Chris Adams: All right, so we'll come to that a little bit later, because I actually came across some of your work when you were designing some of these on YouTube, so people can see what all this stuff looks like. But if I just come back to the original question: essentially, it sounds like you can replace quite a lot in a data center. So you can replace the shell of the building, literally green the shell, by replacing the concrete, and creating concrete and cement is one of the largest sources of emissions globally. So you can switch from a source of emissions to, is it a sink? Because CO2 and carbon gets sucked out of the sky to be turned into trees. So you've gone from something which is a source to a sink, and you can replace not just the walls, the outer building, but also quite a lot of the actual structure itself. Just not the servers yet. So maybe I could ask you: if I'm switching from regular concrete and regular steel, do you folks have any idea what the change in quantitative terms might actually be, if I had an entirely concrete, entirely steel data center and then replaced all of that with, say, wooden alternatives? Like, is it a 5% reduction, or what kind of changes are we looking at for the embodied figures, for example?

Karl Rabe: So the conservative industry figures are somewhere between a minimum of 20%, only having the production change, up to 40%. So Microsoft, and the good thing to mention is that we are an industry now: Microsoft announced those productions, and I know the other hyperscalers are looking at that. In Germany alone, two other companies have started getting into this construction.
That's why it's really important for us to be on the decarbonization path.

Chris Adams: Ah, I see.

Karl Rabe: So we do come with our own data center concepts and philosophies, which I can talk about a little later. But coming back to the point, it is still very hard to quantify. But the really positive thing about carbon accounting or calculations, as I mentioned, is that as a data center we now have this negative component, which made me laugh, because an engineer immediately said, can we then just use more wood? You know, can we make the wall thicker? Obviously yes, you could do that, but there's a cost to that, and it also, you know, betrays the idea. But the really exciting thing is that I now go from show to show, and two weeks ago I was in London, and just on the flight somebody showed me a picture of an air handling unit inside of a wooden enclosure. And I was chasing an hour through the London show, 'cause I assumed it was there, but it was at a different show. But that is the kind of thing that we can really think about: enclosures. So also, for the OCP rack, or for this AI build out, we have also created a rear door, which is, so to speak, a wooden rear door. So the fans are traditional, the heat exchanger obviously needs to be traditional, but it is also an aluminum micro channel heat exchanger, which is derived from other industries, which is, you know, helping mass production, reducing cost, reducing emissions. And that is the other thing that is happening in the industry: we're trying to find not data center specific solutions, but rather mass produced industry solutions, and adapt them to the data center, and hence reduce cost and time.

Chris Adams: Alright.
So in the same way that basically cross laminated timber and the use of wood is something that has been in use not just in the data center industry; people make, what are they called, plyscrapers? You know, skyscrapers with wood. So the idea is that things which are being made in volume here can be made more efficiently, and this is one way that you are adapting them to a new domain. And it may be that if people are getting much, much better at making, say, very efficient heat pumps, 'cause they can cool things down as well as heat them up, that might be another thing you're looking at, saying, "well, actually that might be able to be used in this context as well." Okay. Alright. And if I go back to the original thing about saying, okay, we're looking at possible savings of maybe 20% up to possibly 40%, like, that's the kind of

Karl Rabe: Yeah. That's the range that we have, you know. I think the open question is, did Microsoft evaluate with IT or without IT? For the facility, I think we can potentially come to a net zero approach, but by first principle, I think we can at least achieve realistic reductions of, let's say, 70 to 85% with the tools that exist: basically the easy steel replacements, the rack, the enclosures, the housing. Fluids is something we have too. There's a very interesting, you know, no-brainer replacement for fossil diesel in backup generators. It's a liquid called HVO.

Chris Adams: Yeah. Let's come to that in a second, actually, 'cause I did wanna ask a little bit about some of the things you can do for the fuel here. So basically there are some savings available there, and this should be something that could show up in some kind of numerical description.
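To make the 20 to 40% range discussed above concrete, here is a minimal back-of-envelope sketch. The 10,000 tCO2e baseline and the function name are illustrative assumptions for this sketch, not data from the episode:

```python
# Illustrative comparison of embodied carbon for a data center build.
# The 10,000 tCO2e baseline is a hypothetical figure chosen only to
# show the arithmetic; the 20% / 40% reductions are the industry range
# mentioned in the conversation.

def embodied_after_timber(baseline_tco2e: float, reduction: float) -> float:
    """Embodied carbon left after replacing steel/concrete with timber."""
    return baseline_tco2e * (1 - reduction)

baseline = 10_000.0  # tCO2e, assumed conventional steel/concrete build

conservative = embodied_after_timber(baseline, 0.20)
optimistic = embodied_after_timber(baseline, 0.40)

print(f"conservative (20% cut): {conservative:,.0f} tCO2e")
print(f"optimistic (40% cut): {optimistic:,.0f} tCO2e")
```

The point of the sketch is only that the same baseline yields very different embodied totals at the two ends of the quoted range.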
So if you had maybe two data centers and one was using wood in strategic places, then the embodied carbon should be lower. So, I mean, if I was looking for this, is there a label to look for, or a standard I can look for? Because in the Green Software Foundation we have this idea called Software Carbon Intensity, which includes the carbon intensity of the energy you use and stuff like that, but also looks at the building itself. So theoretically, if you had a wooden data center and a bog standard concrete data center, and you ran your code in the greener data center, you would probably have a better score, if you had access to the data. Do any places share this data, or have a label for anything like this?

Karl Rabe: They definitely share the data. For example, there's EcoDataCenter in Sweden, and when we approached them, our whole world was shook. It's like, oh, so we come from this energy perspective, but they just built it sustainably. They built it sustainably. So we needed to change, you know. That was a huge eyeopener. And they are also one of the first to, I'm not sure if they used the LCA method, but they were quantifying the embodied carbon and are certifying it to you annually as a client, which I think is the way to go. And we need to figure out how to standardize that. I assume there's potentially a standard that we can use. I know that other data center providers are building sustainably and putting this effort forward. But we don't have a unified label yet, I'm afraid.

Chris Adams: Okay. Well this

Karl Rabe: I know that some also challenge this. Like, there's a data center climate neutral pact, and some of them specifically exclude scope three, which, you know, I know where they're coming from.
Also in Germany, you know, Germans are all about energy efficiency. They love to talk about just the energy and the scope two, basically. But then, you know,

Chris Adams: Most of the

Karl Rabe: missing out this dimension, you know. Missing out this dimension is like being faithful to your girlfriend or wife, you know, three days out of a week. You know? It is, it's not

Chris Adams: You are not showing the full picture, right?

Karl Rabe: Yeah. You're not doing it at all, basically. Right. I would probably, you know, just need to Google it, and there are building labels that could be used in construction. Quantifications, I'm sure, but there's not yet a data center specific label. There is good work also in OCP to do metrics and key performance indicators, and they're all looking at that, and I think they're trying to build towards something like real, true net zero.

Chris Adams: Oh yeah. Okay.

Karl Rabe: But...

Chris Adams: So there are some initiatives going on to kind of make this something that you could plausibly see, but it's quite early at the moment right now. So, let's say, we spoke before about, okay, I can run my computing jobs in one data center or I can choose to run it somewhere else. These numbers don't show up just yet, but there is work going on. Actually, I've just realized there is actually an embedded carbon working group inside the OCP who have been looking at some of this stuff. So maybe we'll share a link to that, because that's actually one of the logical places you'd look for that. Okay,

Karl Rabe: And they do really good work. There are a lot of good initiatives happening there. There's also the Swiss Data Center Association. They also have a label that is looking at some of this, and they want to include scope three as well.
So this is coming up, but it's not as easy as, you know, having an API, pushing it to the software developer and saying, look, we have this offset because this was constructed with concrete or steel, and this is, you know,

Chris Adams: Okay. So we're not there yet, but that's the direction we might be heading towards. Okay. Alright. We'll add some links to that. And now I'd like to pick up the other thing you mentioned, about HVO and stuff like that. Because you spoke before about, you know, Windcloud or WindNode, and data centers relying on wind. Now, we know it's a really common refrain that the wind doesn't blow all the time, and it's news to some people that it's not always sunny, for example. So there'll be times where you need to get the power from somewhere, in the form of backup power. And loads of data centers, you said before, rely on fossil diesel generators, right? And that's bad from a climate point of view, it's also quite expensive, but it's also really bad from an air quality point of view as well, because, you know, you can see elevated cases of asthma and all kinds of respiratory problems around data centers and things like that. But you mentioned there are options there to kind of reduce the impact, or have more responsible options. Maybe we could talk a little bit about what's available to me there if I wanted to reduce that part, for example.

Karl Rabe: No, happy to go into that. That is something that we are thinking about quite heavily this year, and we've already presented on it on two occasions. So, the easy option in order to reduce your carbon on the scope one part for a data center, which is basically just the direct burning of fossil resources, and for a data center that is the testing of your backup generators.
The easy option for that is this second gen diesel, HVO 100. And when I looked into it, the key feature of this fuel, which is about 15% more expensive, is that it doesn't age. Fossil diesel, and especially biodiesel, the first generation, and fossil diesel with a biogenic share mixed in (in Europe you always have a certain degree of this), ages through bacteria, biologically. So it's degrading. Which is, you know, really bad, because this diesel is sitting there in a tank, you run it half an hour every two weeks, and you maybe change the fuel filter once or twice a year. But if you really have an issue, all of a sudden you use this diesel for four hours, and then your fuel filter clogs, and you still have a problem, right? If

Chris Adams: So your backup isn't a very good backup. So a backup needs to be a good backup. Yeah. Yeah.

Karl Rabe: Yeah. So your backup can run

Chris Adams: you had one job, right? Yeah.

Karl Rabe: Yeah. Yeah. And so, how it's mitigated is people try to use 'pure' diesel or, you know, heating oil, which is not so prone to it, but still ages. They are recycling, you know, really pumping out the fuel and pumping it in again every three years, or they continuously filter it. All of this is either adding energy or cost. And so this new form of biodiesel, which is, you know, your old frying fat, cracked with hydrogen, looks very clear, and it's chemically treated so that it's not really aging. People don't really know yet how long it stays good. It's certified for 10 years, potentially it stays good longer, and it also burns cleaner.

Chris Adams: Ah, so it isn't going to be bad, like bad air and stuff, as well then?

Karl Rabe: Yeah. So for the majority of your enterprise IT, your standard data center that's around you, cutting out the whole AI discussion, probably that's the easiest way to do something about that. This is like a drop in replacement.
You just, you know, empty your tank and put it in, or you burn your old fuel and put in new. That is something that easily increases the availability of your facility, and you can change with that.

Chris Adams: Can I just try to summarize that? Because I don't work with data centers on a daily basis. So there's basically fossil diesel, the kind of stuff that you might associate with dieselgate and all kinds of bad air quality issues. And then there's the other option, which is maybe a little bit more expensive, you said around 15%, called HVO, which is essentially biodiesel that's been treated in a particular way to get rid of lots of the gunk, so it burns more cleanly and works better as a reliable form of backup. So the backup is actually a decent backup, rather than a thing which might not be a very good backup. Oh, okay. So that's one of the things, and that's the direction we might be moving towards, and that's kind of what we would like to see more of for the case where you need to rely on some kind of liquid fuel power. Right.

Karl Rabe: Yeah.

Chris Adams: Okay.

Karl Rabe: That is, I think, for most people, just a very easy low-hanging fruit, to just replace it. Most engines are certified for it; nowadays, all engines run on it. It has the same criteria, properties as traditional diesel. The only thing that's different is it's 4% lighter, you know?

Chris Adams: Oh, I see.

Karl Rabe: So that's the only real difference on the spec sheet.

Chris Adams: Oh, okay. Alright. So if I may, that's one of the options. So you can replace fossil diesel with essentially non-fossil, cleaner, slightly less toxic diesel. So that's one thing that you might have for your backup. Now, I've heard various people talking about, say, hydrogen, for example. Now, hydrogen can come from fossil sources.
Most of the time, actually, most hydrogen does come from basically cracking natural gas or methane gas, but it can come from green places. And that's another option that you might have to generate power locally. Is that something that people tend to use?

Karl Rabe: So I think the best reference for hydrogen is that it's like the champagne of our energy transition. You know, we need to put in a lot of energy to produce it, it's not easy to store, and we need a lot of facilities to actually create green hydrogen. The majority of hydrogen is not green hydrogen, but gray or blue, which is basically

Chris Adams: like a carbon capture hydrogen, which is still a bit questionable. Yeah.

Karl Rabe: all based on fossil cracking, you know. And everything that we do for our clients is under this extremely short window of time. You know, we have to solve everything within five years' time, not even five years. Right. And so that's also something that always, you know, sparks a good discussion when we talk about SMRs. We have the big push for nuclear over in the US, and also in Europe we have voices for that. And the short answer is, there are three reasons I don't believe in it. They're not quick, you know, and they're not cheap. Just a year ago, two potential, very hopeful projects for SMRs were canceled in the US, and half a year later it was a big thing, the big solution. Like, what changed, you know? And then the third point, and that is the very German perspective, is all the fears or the challenges around the fuel, like getting it mostly 70% from Russia, and then the waste, you know, dumping it somewhere, is still not solved. And so this is not a 2030 technology, basically.
That's my point. As for what we can do, and what I'm happy to link, there's a good article from some of the hyperscalers looking into solar combined with batteries, combined with gas based backup. The gas based part has the flexibility that it can start fossil, can move to bio, and potentially can also run on hydrogen. So this is, in terms of the speed with which we are now deploying hundreds of megawatts, you know, every data center for AI is now 100, 200, 300 megawatts. We're discussing one to five gigawatts for the largest players. And every other data center is all of a sudden a hundred megawatts, which used to be like a mega facility just two years back. So that build out can only really be achieved not with grids or interconnects, those are too slow. This can only be done, basically, with micro grids.

Chris Adams: I see. Okay.

Karl Rabe: Micro grids that are battery backed and gas backed. And the big advantage of this is that if we think about the data center, traditionally a data center is a data fortress, right? You don't get in, data doesn't get out. It is like a bank, you know, in terms of the security measures. And all of the infrastructure was handled that way too. But think about the UPS and the genset not sitting only at the data center, but technically belonging to the utility and being able to provide flexible power, because we have this, as mentioned, underlying flexible build out of renewable energy, and we need reliable switch-on power, which data centers all have.
And so if we can put those together, there's a little bit of working together, finding the right location where it would make most sense, and then allowing for SLAs with clients to bidirectionally use batteries, gas turbines,

Chris Adams: Oh, I see.

Karl Rabe: engine power. This would help us to transition, especially if we go into renewable shares of 60% and above; at the latest, from 80%, we need those backup technologies. And that is coming back to the question of hydrogen. Hydrogen is a technology that is so expensive that it would need to run all the time, basically. With renewable energy, we have high loads of abundant energy and only need short periods of flexible energy generation, for which gas and batteries are virtually ideal. And so we promote this idea of an energy-integrated data center, which has the electrical part supporting the grid, and is also taking advantage of heat reuse, especially for liquid-cooled facilities, in order to give heat out. And the benefit of that is not only economic; we also see more and more discussions about 'not in my backyard.' If a data center is energy integrated, it's not a question, it's a must have. And there's also a reason why it needs to be there, in order to be able to stabilize your town grid or your local area. And so that's what we are trying to promote. We got a lot of good feedback, and hopefully we'll have the first data center realized with a medium voltage UPS this year, which is a first step in moving the availability components of a data center, the batteries and the gensets, to a higher voltage level. A lot of the cost in a data center is from the low voltage distribution. The power that you put in the batteries is first transformed down, then moved through the data center until it sits in the battery, and then needs to go out again.
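The chain of conversion steps described here can be sketched as compounding efficiencies: each transformation or rectification multiplies into the total, so even small per-step losses add up. The step efficiencies below are hypothetical round numbers for illustration, not measured values:

```python
from math import prod

# Hypothetical per-step efficiencies for a traditional low-voltage UPS
# path: grid -> transformer -> rectifier (AC to DC) -> battery round
# trip -> inverter (DC to AC) -> low-voltage distribution.
steps = {
    "transformer": 0.98,
    "rectifier": 0.96,
    "battery_round_trip": 0.92,
    "inverter": 0.96,
    "distribution": 0.99,
}

# Chained efficiency is the product of every step's efficiency.
round_trip = prod(steps.values())

print(f"overall efficiency: {round_trip:.1%}")
print(f"lost per kWh through the battery path: {(1 - round_trip) * 1000:.0f} Wh")
```

Even with each step above 90% efficient, the chained result lands around 82%, which is why cutting out conversion stages (for example, moving the UPS to medium voltage) matters.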
And all of those are rectification steps. And all of this makes, yeah,

Chris Adams: So do you lose power every single time you switch between them? Oh, okay. So it sounds like there's a shift from data center as a fortress, where, you know, you could do that before, to something where you have to be a bit more symbiotic with your local environment. Because for a start, if you don't, you won't be allowed to build it, but also it's gonna change the economics in your favor if you are prepared to play nicely and integrate, to essentially be a good neighbor. That seems to be what you're suggesting.

Karl Rabe: That's a perfect analogy. Having a good neighbor approach. Saying, "look, we are here now, we look ugly, we're a big box, you know, but we help power your homes, we reduce the cost of the energy transition, and we also heat your homes." And that is then a relatively easy sale.

Chris Adams: Okay. So that points to quite a different strategy required for people whose job it is to get data centers built. They need to figure out how to honestly relate to communities and say, well, which bits are we supposed to be useful for? Rather than the approach that you sometimes see, where people basically say, "well, we're not even gonna tell you who it is or who the company is, but we're gonna use all your power and use all your water." That approach's days are probably numbered, and it's not a very good strategy to use. It does make more sense to actually have a much more neighborly approach, and these are maybe new skills that need to be developed inside the industry then.

Karl Rabe: Absolutely correct. And so you need an open collaboration approach to that, and that is, you know, mirrored in us; we're trying to be a bit of an example there.
And if we talk about it, you had a good point in there, which we usually don't have a lot of time to expand on, but I think a podcast is a good format for that. You asked, where do you get the ideas, or what's the guiding star on that? So, I was fortunate to be an exchange worker on a farm in Canada. And they introduced me to the idea of holistic management, which is basically a decision making framework based on being financially viable, socially viable, and environmentally viable. And those three bases are necessary in order to create sustainable decisions, or holistic decisions. Those need to be short and long term viable. And that has been my guiding star as an entrepreneur, really being able to cut out those things. You know, there's a lot of startups, especially in Germany; we had those Berlin startups who all came from a business school, and all of their ideas worked on an Excel sheet, always cutting out the social perspective, you know? And that's the opposite, basically, of what we are trying to do. And this framework was founded by a farmer who first applied it to grass management and cattle farming, technically. And it is wildly interesting what he's able to do. He's basically stopping desertification and reversing its effects in subtropical, semi-arid areas. Yeah. So we'll definitely put that in the notes. It's a TED Talk from Allan Savory, who I think is still alive; he must be 90 now. And it's fascinating. But that was a guiding star. And in order to promote our ideas, a lot of our designs, you know, we put on YouTube, but we also put the files up. The racks, you know, you can download the CAD files. They're created with open source tools. Especially in engineering, we only recently really have powerful open source tools for CAD, for single line diagrams. So we can give out the source files with that.
And that is something we believe: that open collaboration and openness helps to build, you know, the trust,

Chris Adams: Ah,

Karl Rabe: to build with speed and to really work together, you know? And that's what we get mirrored in the Open Compute Project. For challenges that we face as humanity, I believe that only this open approach, and especially an open source, open hardware, open data framework, can help us.

Chris Adams: All right. Okay, so we're coming up to time, and you did allude to it a few times, so I just wanted to provide a bit of space to let you talk a little bit about that before we finish up. You spoke a few times about the fact that these models, the designs for the racks and things, are online and available. Did you say that they're on YouTube, like people can see the videos of this, or can download something in Blender to mess around with themselves or work with it? Maybe you could just expand on that a little bit, because I haven't come across that before.

Karl Rabe: Okay, sure, sure. So, yeah, when we started, we designed everything and we put it out; we still, shamefully, need to do the push to GitLab and GitHub. Right now we put those models on a construction platform called GrabCAD.

Chris Adams: Mm-hmm.

Karl Rabe: And for us, it's not only our own thought to open source this and to build trust, but it's also our biggest, easiest marketing tool. You know, create a model, publish it, put up a video. We are a bit behind; we have a lot of new and great ideas and things to share.
But that's how we approach it, you know. We come up with an idea, put it out there, and also make ourselves criticizable. We are the only ones comfortably saying, look, we have the best data centers in the world, 'cause you can go, you can download, you can fact check our ideas, and if you have something against it, just give us feedback. And we are open to change. And this way forward helps us also to approach the biggest companies in the world. They really like this open approach, and they're happy to take the files and the models and to work on that.

Chris Adams: So you basically have models of, like, this is a model of

Karl Rabe: Our rack, you know, this is our modular data center. These are the ideas behind that. And so that's how we are moving this forward. So people can approach this, they can download it, they can see if it fits. They can make suggestions.

Chris Adams: And, like, see if it's tall enough for the door and all the practical things.

Karl Rabe: Yeah. All those things, you know. And see, okay, we have a smaller data center, oh, the base design doesn't fit in this setup, or we need to change something, where we place the dry coolers or something like that. And so that is really good feedback and sparks discussions.

Chris Adams: Yeah, I haven't heard about that before. All right. Well, Karl, thank you so much. This has been a lot of fun. Now, we've come up to time, and I really enjoyed this tour through all the stuff that happens below the software stack, for engineers like us. If someone does wanna look at this or learn about this, or maybe check out any of the models themselves, if they wanted to build any of this stuff, where should they look? Where do people find you online, or any other projects that you're working on?

Karl Rabe: So the best thing, technically, is LinkedIn.
This is, you know, our strong platform, to be honest. We are very active there; we publish most there. The webpage is still under construction, but people already understand what we do from going to it. LinkedIn is great. Go and try to reach us, and seeing what we do at the Open Compute Foundation is also often very great. But yeah, via Google it is very easy to find us on LinkedIn and to reach us.

Chris Adams: So Karl Rabe on LinkedIn, Wooden Data Center; there aren't that many other companies called Wooden Data Center. And then for any of the Open Compute Project stuff, that's the other place to look at where you're working, 'cause you're doing the open compute modular data center stuff. Those are the ones, yeah?

Karl Rabe: Yeah. Correct.

Chris Adams: Brilliant. Karl, thank you so much for this. This has been loads of fun, and I hope our listeners have followed along as well to see all the options and things available to them. Alright,

Karl Rabe: It was a pleasure. Thanks so much. And,

Chris Adams: Likewise, Karl. And I hope the wind turbines treat you well where you're staying. All right, take care, mate.

Karl Rabe: Yeah. Thank you. Bye bye. Cheers.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Environment Variables

Host Chris Adams sits down with James Hall, Head of GreenOps at Greenpixie, to explore the evolving discipline of GreenOps: applying operational practices to reduce the environmental impact of cloud computing. They discuss how Greenpixie helps organizations make informed sustainability decisions using certified carbon data, the challenges of scaling cloud carbon measurement, and why transparency and relevance are just as crucial as accuracy. They also discuss using financial cost as a proxy for carbon, the need for standardization through initiatives like FOCUS, and growing interest in water usage metrics.

Learn more about our people:
Chris Adams: LinkedIn | GitHub | Website
James Hall: LinkedIn
Greenpixie: Website

Find out more about the GSF:
The Green Software Foundation Website
Sign up to the Green Software Foundation Newsletter

News:
The intersection of FinOps and cloud sustainability [16:01]
What is FOCUS? Understand the FinOps Open Cost and Usage Specification [22:15]
April 2024 Summit: Google Cloud Next Recap, Multi-cloud Billing with FOCUS, FinOps X Updates [31:31]

Resources:
Cloud Carbon Footprint [00:46]
Greenops - Wikipedia [02:18]
Software Carbon Intensity (SCI) Specification [05:12]
GHG Protocol [05:20]
Energy Scores for AI Models | Hugging Face [44:30]
What is GreenOps - Newsletter | Greenpixie [44:42]
Making Cloud Sustainability Actionable with FinOps
Fueling Sustainability Goals at Mastercard in Every Stage of FinOps

If you enjoyed this episode then please either:
Follow, rate, and review on Apple Podcasts
Follow and rate on Spotify
Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

James Hall: We want to get the carbon data in front of the right people so they can put climate impact as part of the decision making process. Because ultimately, data in and of itself is a catalyst for change.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to Environment Variables, where we explore the developing world of sustainable software development. We kicked off this podcast more than two years ago with a discussion about cloud carbon calculators: the open source tool Cloud Carbon Footprint and Amazon's cloud carbon calculator. And since then, the term GreenOps has become a term of art in cloud computing circles when we talk about reducing the environmental impact of cloud computing. But what is GreenOps in the first place? With me today is James Hall, the head of GreenOps at Greenpixie, the cloud carbon data startup, to help me shed some light on what this term actually means and what it's like to use GreenOps in the trenches. James, we have spoken about this episode as a bit of an intro, and I'm wondering if I can ask you a little bit about where this term came from in the first place, and how you ended up as the de facto head of GreenOps in your current gig. Because I've never spoken to a head of GreenOps before, so yeah, maybe I should ask you that.

James Hall: Yeah, well, I've been with Greenpixie right from the start, and we weren't really using the term GreenOps when we originally started. It was cloud sustainability. It was about, you know, changing regions to optimize cloud, and rightsizing. We didn't know about the FinOps industry either. When we first started, we just knew there was a cloud waste problem and we wanted to do something about it. You know, luckily, when it comes to cloud, there is a big overlap between what saves costs and what saves carbon.
But I think the term GreenOps existed before we started in the industry. Actually, if you go to Wikipedia, GreenOps is to do with arthropods and trilobites from a couple hundred million years ago, funnily enough. I'm not sure when it started meaning green operations. But, yeah, it originally had a connotation of data centers and IT and devices, and I think cloud GreenOps, where Greenpixie specializes, is more of a recent thing. Because, you know, it is about how you get the right data in front of the right people so they can start making better decisions, ultimately. And that's what GreenOps means to me. So Greenpixie are a GreenOps data company. We're not here to make decisions for you. We are not a consultancy. We want to get the carbon data in front of the right people so they can put climate impact as part of the decision making process. Because ultimately, data in and of itself is a catalyst for change. You know, whether you use this data to reduce carbon or you choose to ignore it, that's up to the organization. But it's all about being more informed, ignoring or, you know, changing your strategy around the carbon data. Chris Adams: Cool. Thank you for that, James. You mentioning Wikipedia and GreenOps being all about trilobites and arthropods makes me realize we should definitely add that to the show notes, and that's a thing I'll quickly do, because I forgot the usual intro, folks. My name's Chris Adams. I am one of the directors, the technology and policy director, at the Green Web Foundation, and I'm also the chair of the policy working group inside the Green Software Foundation. For all the things that James and I talk about, we'll do our best to judiciously add show notes, so you too can look up the etymology of GreenOps and find out all about arthropods and trilobites and more.
And probably a lot more cloud computing as well, actually. Okay. Thank you for that, James. You did a really nice job of introducing what Greenpixie does, 'cause that was something I should have asked you earlier as well. So, I have some experience using tools like Cloud Carbon Footprint to estimate the environmental impact of digital services, and a lot of the time these things use billing data. So there are tools out there that already do this stuff. But one thing that I saw that sets Greenpixie apart from some other tools was the certification process, the fact that you folks have, I think, an ISO 14064 certification. Now, not all of us read ISO standards for fun, so can you maybe explain why that matters, and what that certification actually means? 'Cause it sounds kind of impressive and exciting, but I'm not quite sure, and I know there are other standards floating around, like the Software Carbon Intensity standard, for example. So maybe you could just provide an intro, then we can see how it might be different, for example. James Hall: Yeah, so ISO 14064 is a set of standards and instructions on how to calculate a carbon number, essentially based on the Greenhouse Gas Protocol. So the process of getting that verification is, you know, you have official auditors who are certified to give out these certifications, and ultimately they go through all your processes, all your sources, all the inputs of your data, and verify that the outputs and the inputs make sense. You know, do they align with what the Greenhouse Gas Protocol tells you to do? And it's a year-long process, as they get to know absolutely everything about your business and processes. You really gotta show them under the hood.
But from a customer perspective, it proves that the methodology you're using is very rigorous, and it gives them confidence that they can use yours. I think if a company that produces carbon data has an ISO badge, then you can probably be sure that when you put this data in your ESG reports or use it to make decisions, the auditors will also agree with it. 'Cause the auditors on the other side, your assurers from EY and PwC, they'll be using the same set of guidance, basically. So it's like getting ahead of the auditing process, in the same way a security ISO would help the chief security officer who needs to check a new vendor they're about to procure from. If you've got the ISO already, you know they meet our standards for security; it saves me a job having to go and look through every single data processing agreement that they have. Chris Adams: Gotcha. Okay. So there are a few different ways that you can establish trust. One of the options is to have everything entirely open, like, say, Cloud Carbon Footprint or OpenCost, which has a bunch of stuff in the open. There are also various other approaches, like we maintain a library called CO2.js, where we try to share our methodologies. And then one of the other options is certification. That's another source of trust. I've gotta ask, is this common? Are there other tools that have this? 'Cause when I think about some of the big cloud calculators, let's say I'm using one of the big three cloud providers, do you know if they actually have the same certification today? Or is that a thing I should be asking about if I'm relying on the numbers that I'm seeing from providers like this? James Hall: Yeah, they actually don't. Well, technically, Azure.
Azure's tool did get one in 2020, but you need to get them renewed and re-audited as part of the process, so that one's becoming invalid. And I'm not sure AWS or Google Cloud have actually tried, to be honest. But it's quite a funny thought that, arguably, because of this ISO, the data we give you on GCP and AWS is more accurate, or at least more reliable, than the data that comes directly out of the cloud providers. Chris Adams: Okay. Alright. Let's make sure we don't get sued, so I'm just gonna stop there before we go any further. But that's one of the things it provides: essentially, an external auditor has looked through this stuff. So rather than being entirely open, that's one of the other mechanisms you have. Okay, cool. So maybe we can talk a little bit more about open source. 'Cause I actually first found out about Greenpixie a few years ago when the Green Software Foundation sent me to Egypt, for COP27, to try and talk to people about green software. And I won't lie, I mostly got blank looks from most people. People tend to talk about sustainability of tech or sustainability via tech, and most of the time I see people conflating the two rather than realizing, no, we're talking about the sustainability of the technology, not just what it's good for. And I think it was one of your colleagues, Rory, who told me that when you first started out, you looked at some tools like Cloud Carbon Footprint as maybe a starting point, but you've ended up having to make various changes to overcome various technical challenges when you scale the use up to larger clients and things like that. Could you maybe talk a little bit about some of the challenges you end up facing when you're trying to implement GreenOps like this?
Because it's not something I have direct experience of myself. And a lot of people do reach for some open source tools, and they're not quite sure why you might use one over the other, or what kind of problems they have to deal with when you start processing those levels of billing and usage data and stuff like that. James Hall: I think with cloud sustainability methodologies, the two main issues are performance and data volume, and then also the maintenance. 'Cause the very nature of cloud is, you know, huge data sets that change rapidly. They get updated on the hour, and then you've also got the cloud providers always releasing new services, new instance types, things like that. So, I mean, for your average enterprise with, like, a hundred million spend? Those line items of usage data, if you go down to the hour, will be billions of rows and terabytes of data. And that is not trivial to process. A lot of the tooling at the moment, including Cloud Carbon Footprint, will try to use a bunch of SQL queries to truncate it, you know, roll it up to monthly. So you cut the number of rows by a factor of 24 times 30, which is about 720. And they'll remove things like, you know, certain fields in the usage data that are so unique that when you start removing those and truncating it, you're really reducing the size of the files, but you're losing a lot of that granularity. 'Cause ultimately this billing data is to be used by engineers and FinOps people. They use all these fields. So when you start removing fields because you can't handle the data, you're losing a lot of the familiarity of the data and a lot of the usability for the people who need to use it to make decisions.
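The hourly-to-monthly truncation James describes can be sketched in a few lines. Everything below is illustrative: the field names and cost values are made up for the sketch, not any provider's real billing schema.

```python
from collections import defaultdict

# Hypothetical hourly billing line items: (hour, resource_id, usage_type, cost).
hourly_rows = [
    ("2024-03-01T00", "i-abc123", "BoxUsage:m5.large", 0.096),
    ("2024-03-01T01", "i-abc123", "BoxUsage:m5.large", 0.096),
    ("2024-03-01T00", "i-def456", "BoxUsage:m5.2xlarge", 0.384),
]

def truncate_to_monthly(rows):
    """Roll hourly line items up to one row per (month, usage_type).

    This cuts row counts by roughly 24 * 30 = 720, but the resource_id
    field is dropped entirely. That is the loss of granularity being
    described: per-resource decisions become impossible afterwards.
    """
    monthly = defaultdict(float)
    for hour, _resource_id, usage_type, cost in rows:
        month = hour[:7]  # "2024-03-01T00" -> "2024-03"
        monthly[(month, usage_type)] += cost
    return dict(monthly)

# Three hourly rows collapse into two monthly rows; i-abc123 and
# i-def456 can no longer be told apart within a usage type.
summary = truncate_to_monthly(hourly_rows)
```

The real engineering problem is doing this over billions of rows without anything falling over, while keeping the fields that engineers and FinOps people actually filter on.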
So one of the big challenges is how do you make a processor that can easily handle billions of line items without, you know, falling over. And with CCF, one of the issues was really the performance when you start trying to apply it to big data sets. And then on the other side is the maintenance. Arguably it's not that difficult to make a methodology at a point in time, but over the six months it takes you to create it, it's way out of date. You know, they've released a hundred new instance types across the three providers, there's a new type of storage, there are brand new services, there are new AI models out there. So now Greenpixie's main job is: how do we make sure we have more coverage of all the SKUs that come out, how do we deliver the data faster, and how do customers get more choices of how to ingest it? Because the lack of those three things is really what's stopping people from doing GreenOps, I think. Chris Adams: Ah, okay. So one of the things you mentioned was just the volume: the fact that you've got hourly data multiplied by thousands of computers. That's a lot of data. Then there's the metrics issue: if you wanna provide a simple metric, you end up losing a lot of data. And the other one was the models themselves; there's a natural cost associated with having to maintain these models. And as far as I'm aware, are there any open sources of models, so that you can say, well, this is what the figures probably would be for an Amazon EC2, you know, 6XL instance, for example?
That's the stuff you're talking about when you say the models are hard to keep up to date, and you have to do that internally inside the organization. Is that it? James Hall: Yes, we've got a team dedicated to doing that. But ultimately, there will always be assumptions in there, 'cause some of these chip sets you actually can't even get your hands on. So, you know, if Amazon release a new instance type that uses an Intel Xeon 7850C that is not commercially available, how do you get your hands on an Intel Xeon 7850B that is commercially available, and say, okay, these chips are similar in terms of performance and hardware, so we're using this as the proxy for the M5 large or whatever it is. And then once you've got the power consumption of those instance types, you can start saying, okay, this is how we're mapping instances to real life hardware. And that's when you've gotta start being really transparent about the assumptions, because ultimately there's no right answer. All you can do is tell people, this is how we do it; do you like it or not? And you know, over the four years we've been doing this, there's been a lot of trial and error. Actually, right at the start, one of the questions was, what are my credentials? How did I end up as head of GreenOps? I wouldn't have said four years ago that I had any credentials to be a head of GreenOps. For a while I was the only head of GreenOps in the world, according to Sales Navigator. Why me? But I think, you know, they say if you do 10,000 hours of anything, you become good at it. And I wouldn't say I'm a master by any means, but I've made more mistakes and probably tried more things than anybody else over the four years. So, from the war stories, I've seen what works, I've seen what doesn't work.
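The proxy-hardware approach James describes can be sketched as a lookup table plus a multiplication. All the names and wattages here are invented placeholders, just as the chip model numbers in the conversation are; real coefficients would come from measured or published hardware data.

```python
# Map each instance type to an assumed proxy chip and an estimated
# per-vCPU power draw. Both values are illustrative assumptions,
# not measured figures.
INSTANCE_PROXIES = {
    "m5.large":   {"proxy_chip": "commercially-available Xeon", "watts_per_vcpu": 3.5, "vcpus": 2},
    "m5.2xlarge": {"proxy_chip": "commercially-available Xeon", "watts_per_vcpu": 3.5, "vcpus": 8},
}

def estimate_energy_kwh(instance_type, hours):
    """Estimate energy use for an instance via its proxy hardware."""
    spec = INSTANCE_PROXIES[instance_type]
    watts = spec["watts_per_vcpu"] * spec["vcpus"]
    return watts * hours / 1000.0  # watt-hours -> kilowatt-hours
```

The transparency point follows directly: the output is only as good as the proxy mapping and the per-vCPU assumption, so both have to be published alongside the number.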
And I think that's the kind of experience people wanna trust, and why Greenpixie made me the head of GreenOps. Chris Adams: Okay. All right. Thanks for that, James. So maybe this is a nice segue to talk about a common starting point that lots of people do have. Over the last few years, we've seen people not just talking about DevOps, but talking about FinOps: this idea that you might apply some financial thinking to how you purchase and consume, say, cloud services. And this tends to, as far as I understand, nudge people towards things like serverless, or certain ways of buying, in a way which is, I guess, very much influenced by the financial sector. And you said before that there's some overlap, but it's not total; you can't just take a bunch of FinOps practices and think they're gonna help here. Can we explore that a bit, and maybe talk a little bit about what folks get wrong when they try to map this straight across as if it's the same thing, please? James Hall: Yeah, so one of the big issues is cost proxies, actually. A lot of FinOps is about how you optimize what already exists from a cost perspective. You know, you've already emitted it; how do you now make it cheaper? The first low hanging fruit for a finance guy trying to reduce their cloud spend would be things like buying the instances up front. So you've paid for the full year, and now you've been given a million hours of compute. That might cut your bill in half, but if anything it would drive your usage up. You know, you've got a million hours, you are gonna use them. Chris Adams: So you have to commit to the spend, and you're like, "oh, great. I have the cost, but now I definitely need to use these." Right?
James Hall: Yeah, exactly. And, yeah, you say commitments: like, I promise AWS I'm gonna spend $2 million, so I'm gonna do whatever it takes to spend that $2 million. If I don't spend $2 million, I'll actually have to pay the difference. So if I only do a million in compute, I'm gonna have to pay a million and get nothing for it. So I'm gonna do as much compute as humanly possible to get the most bang for my buck. And I think that's where a lot of the issues are with using cost. If you tell someone something's cheap, they're not gonna use less; they're gonna be like, "this looks like a great deal." I'm guilty of it myself. I'll buy clothes I don't need 'cause they're on a clearance sale, you know? And that's kind of how cloud operates. But when you get a good methodology that really looks at the usage and the nuances between chip sets and storage tiers, there is a big overlap: cutting down from a 2xlarge to a large may halve your bill, and it will halve your carbon. And that's the kind of thing you need to be looking out for. You need a really nuanced methodology that looks at the usage, more than just trying to use cost. Chris Adams: Okay, so that's one place where it's not so helpful. And you said there are some places where it does help; literally just reducing the size of the machine is one of the things you might do. Now I've gotta ask, you spoke before about region shifting. Is there any incentive to do anything like that when you are buying stuff in this way? Or, what's the word I'm after, is there any opinion that FinOps or GreenOps has around things like that? Because as far as I can tell, there is very rarely a financial incentive to do anything like that.
If anything, it usually costs more to run something in, say, Switzerland compared to running in AWS US East, for example. I mean, have you seen any signs of people nudging people towards the greener choice, rather than just showing a green logo on a dashboard, for example? James Hall: Well, I mean, this is where GreenOps comes into its own, really, because I could tell everyone to move to France or Switzerland, but each individual cloud environment will have policies and approved regions and data sovereignty constraints, and this is why all you can do is give them the data and then let the enterprise make the decision. But ultimately, we are working with a retailer who had a failover for storage and compute, and they had it all failing over to one of the really dirty regions. I think they were based in the UK and they failed over to Germany, but they did have Sweden as one of the options for failover, and they just weren't using it. There's no particular reason they weren't using it; they had just chosen Germany at one point. So why not just make that failover option Sweden, if it's within the limits of your policies and what you're allowed to do? But region switching is not completely trivial, unfortunately, in the cloud. You wouldn't lift and shift your entire environment to another place, because there are performance and cost implications. But again, it's like, how do you add sustainability impact to the trade-off decision? If increasing your cost 10% is worth a 90% carbon reduction for you, great, please do it, if the hours of work are worth it for you. But if cost is the priority, where is the middle ground, where you can say, okay, these two regions are the same, they have the same latency, but this one's 20% less carbon. That is the reason I'm gonna move over there.
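That trade-off can be made concrete with a small decision helper. The figures and the 10% cost tolerance below are purely illustrative, echoing the hypothetical numbers in the conversation rather than real regional data.

```python
def worth_moving(current, candidate, max_cost_increase_pct=10.0):
    """Return True when a candidate region cuts carbon and keeps the
    cost increase within an agreed tolerance (default 10%)."""
    cost_delta_pct = 100.0 * (candidate["cost"] - current["cost"]) / current["cost"]
    carbon_cut_pct = 100.0 * (current["co2e_kg"] - candidate["co2e_kg"]) / current["co2e_kg"]
    return carbon_cut_pct > 0 and cost_delta_pct <= max_cost_increase_pct

# Hypothetical monthly figures for a failover workload.
germany = {"cost": 1000.0, "co2e_kg": 400.0}
sweden = {"cost": 1050.0, "co2e_kg": 40.0}  # +5% cost, -90% carbon
```

The point is less the helper itself than that the carbon column has to exist in the data at all before anyone can write this kind of rule.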
So it's all about: you can do the cost benefit analysis quite easily, and many people do, but how do you enable them to do a carbon benefit analysis as well? And then, once they've got all the data in front of them, they can start making more informed decisions. And that's why I think the data is more important than necessarily telling them what the processes are, giving them the "here's the Ultimate Guide to GreenOps." Data's just a catalyst for decisions, and you just need to give them trustworthy data. And then, how many use cases does trustworthy data have? How long is a piece of string? I've seen many, but every time there's a new customer, there are new use cases. Chris Adams: Okay, cool. Thank you for that. So, one thing we spoke about in the pre-call was the fact that sustainability is becoming somewhat more mainstream, and within the FinOps Foundation, the people who are doing FinOps are starting to wake up to this and trying to figure out how to incorporate some of it into the way they might operate a team or a cloud or anything like that. And I believe you told me about a thing called FOCUS, which is something like a standardization project across all the FinOps tooling, and now there's a sustainability working group inside this FOCUS group. For people who are not familiar with this, could you tell me what FOCUS is, and what this sustainability working group is working on? You know, 'cause working groups are supposed to work on stuff, right? James Hall: Yeah, so exactly as you said, FOCUS is a standardization of billing data. You know, when you get your AWS bill and your Azure bill, they have similar data in them, but with completely different column names, completely different granularities, different column sizes.
And so if you're trying to make a master report where you can look at all of your cloud and all of your SaaS bills, you need to do all sorts of data transformations to try and make the columns look the same. You know, maybe AWS has a column that goes one step more granular than Azure, or you're trying to do a bill on all your compute, but Azure calls it virtual machines and AWS calls it EC2. So you either need to go and categorize them all yourself, to make a master category that lets you group by all these different things, or, thankfully, FOCUS have gone and done that themselves. It started off as a Python script you could run on your own data set to do the transformation for you, but slowly more cloud providers are adopting the FOCUS framework, which means, when you're exporting your billing data, you can ask AWS: give me the original, or give me a FOCUS one. So they start giving you the data in a way where you can easily combine all your data sets. And the reason this is super interesting for carbon is because carbon is a currency in many ways, in the fact that, Chris Adams: there's a price on it in Europe. There's a price on it in the UK. Yeah. James Hall: There's a price on it, but also the way Azure will present their carbon data could be, you know, the equivalent of yen, and AWS could be the equivalent of dollars. They're all saying CO2e, so you might think they're equivalent, but actually they're almost completely different currencies. So this effort of standardization is: how do we bring it back? Maybe don't give us the CO2e, but how do we go a few steps before that point and start getting similar numbers? So when we wanna make a master report for all the cloud providers, it's apples to apples, not apples to oranges. How do we standardize the data sets to make the cross cloud reporting more meaningful for FinOps people?
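A toy version of that normalization step might look like the following. The provider column names are simplified stand-ins for the real export schemas, and the mapping is a sketch of the idea rather than the actual FOCUS implementation.

```python
# Per-provider renames onto one shared, FOCUS-style schema.
# Column names on both sides are illustrative stand-ins.
FIELD_MAP = {
    "aws":   {"lineItem/UnblendedCost": "BilledCost", "product/ProductName": "ServiceName"},
    "azure": {"costInBillingCurrency": "BilledCost", "meterCategory": "ServiceName"},
}

# Collapse provider-specific service names into one category, so
# "Virtual Machines" and EC2 can be grouped together as Compute.
SERVICE_CATEGORY = {
    "Amazon Elastic Compute Cloud": "Compute",
    "Virtual Machines": "Compute",
}

def normalize(provider, row):
    """Rename a provider billing row into the shared schema and tag it
    with a cross-provider service category."""
    out = {FIELD_MAP[provider][key]: value
           for key, value in row.items() if key in FIELD_MAP[provider]}
    out["ServiceCategory"] = SERVICE_CATEGORY.get(out.get("ServiceName"), "Other")
    return out

aws_row = normalize("aws", {"lineItem/UnblendedCost": 1.2,
                            "product/ProductName": "Amazon Elastic Compute Cloud"})
azure_row = normalize("azure", {"costInBillingCurrency": 0.9,
                                "meterCategory": "Virtual Machines"})
```

With both providers mapped onto the same columns, a single compute report can group by one category instead of juggling two vocabularies, and a standardized carbon column would ride on exactly the same trick.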
Chris Adams: Ah, I see. Okay. So I didn't realize that FOCUS is actually listing, let's call them primitives, like compute and storage. They all have different names for that stuff, but FOCUS has a shared idea of what the concept of a virtual machine might be, and likewise for storage. So that's the thing you're trying to attach a carbon value to in these cases, so you can make some meaningful judgment, or so you can present that information to people. James Hall: Yeah, it's about making the reports the same, but also how do you make the source of the numbers more similar? 'Cause currently, Azure may say a hundred tons in their dashboard and AWS may say one ton in their dashboard. The spend and the real carbon could be identical, but the formula behind it is so vastly different that you're coming out with two different numbers. Chris Adams: I see. I think you're referring at this point to how some places might share a number which is what we refer to as a location based figure. So that's what was considered on the ground, based on the carbon intensity of the grid in a particular part of the world. And then a market based figure might be quite a bit lower, 'cause they've said, well, we've purchased all this green energy, so therefore we're gonna deduct that from what the figure should be. And that's how we'd have a figure of one versus a hundred. But if you compare those two together, they're gonna look totally different. Like you said, it's not apples with apples; it's something totally different. Okay. That is helpful. James Hall: It gets a lot more confusing than that, 'cause it's not just market and location based.
You could have two location based numbers, but Azure are using the annual average grid carbon intensity from 2020, because that's what they've got approved, and AWS may be using, you know, an Our World in Data 2023 number. Those are just two different sources for grid intensity. And then, what categories are they including? Are they including Scope 3 categories? How many of the Scope 2 categories are they including? When you've got like a hundred different inputs that go into a CO2 number, unless all 100 are the same, you do not have a meaningful comparison between the two. Even location versus market based is just one aspect of what goes into the CO2 number. And then, where do they get the kilowatt hour numbers from? Is it a literal telemetry device, or are they using a spend based proxy on their side? Because it's not completely alien for cloud providers to ultimately rely on spend at the end of the day. So does Azure use spend, or does AWS use spend? What type of spend are they using? And that's where you need the transparency as well, because if you don't understand where the numbers come from, it could be the most accurate number in the world, but if they don't tell you everything that went into it, how are you meant to know? Chris Adams: I see. Okay. That's really interesting. 'Cause at the Green Web Foundation, the organization I'm part of, we've been following a UK government group called the Government Digital Sustainability Alliance. They've been doing these really fascinating lunch and learns, and one thing that showed up was the UK government basically saying, look, this is the carbon footprint on a per department level. Like, this is what the Ministry of Justice is, or this is what, say, the Ministry of Defense might be, for example. And that helps explain why you had figures where you had a bunch of people saying the carbon footprint of all these data centers is really high.
And then there were people saying, well, compared to this, cloud looks great, 'cause the figures for cloud are way lower. But people had to caveat that. They basically said, well, we know this makes cloud look way more efficient here, and it looks like it's much lower carbon, but because we've only got this final market based figure, we know it's not a like for like comparison; until we have that information, though, this is the best we actually have. And this is an organization which has legally binding targets. They have to reduce emissions by a certain figure, by a certain date. So I can see why you would need this transparency, because it seems very difficult to see how you could meaningfully track your progress towards a target if you don't have access to that, right? James Hall: Yeah. Well, I always like to use the currency conversion analogy. Say you had a dashboard where AWS is all in dollars and Azure, or your on premise, is in yen. There are 149 yen in 1 dollar. But if you didn't know this one's yen and this one's dollars, you'd be like, "this one's 149 times cheaper. Why aren't we going all in on this one?" But actually it's just different currencies, and under the hood they're the same at the end of the day. It's just the way they've turned it into an accounting exercise that has muddied the water, which is why I love electricity metrics more. They're almost like the non fungible token of data centers and cloud, 'cause you can use electricity to calculate location-based, you can use it to calculate market-based, and you can use it to calculate water and cooling metrics and things like that. So if you can get the electricity, then you're well on your way to meaningful comparisons.
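James's point that electricity is the common root can be shown with one small function: given kilowatt-hours, the other metrics are multiplications. Every coefficient below is a placeholder; real grid intensities and water factors come from grid data providers and operators, and the market-based treatment here is a deliberate simplification of GHG Protocol Scope 2 accounting.

```python
# Illustrative coefficients only.
GRID_KG_CO2E_PER_KWH = {"germany": 0.38, "sweden": 0.02}  # location-based intensity
WATER_LITRES_PER_KWH = 1.8  # stand-in for a water-usage-effectiveness style factor

def metrics_from_electricity(kwh, region, renewable_matched_kwh=0.0):
    """Derive location-based, market-based, and water metrics from kWh."""
    intensity = GRID_KG_CO2E_PER_KWH[region]
    location_kg = kwh * intensity
    # Simplified market-based figure: zero-rate the renewable-matched portion.
    market_kg = max(kwh - renewable_matched_kwh, 0.0) * intensity
    water_l = kwh * WATER_LITRES_PER_KWH
    return {"location_kg": location_kg, "market_kg": market_kg, "water_l": water_l}

m = metrics_from_electricity(1000.0, "germany", renewable_matched_kwh=500.0)
```

This is also why the comparability problem exists: swap the 2020 intensity table for a 2023 one and every downstream number changes, even though the electricity itself did not.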
Chris Adams: And that's the one that everyone guards very jealously a lot of the time, right? James Hall: Exactly. Yeah. Well, that's directly related to your cost of running the business, and that is the proprietary information. Chris Adams: I see. Okay. Alright, so we've done a bit of a deep dive into the GHG Protocol, Scope 3, supply chain emissions and things like that. If I may, you referenced this idea of war stories before, right? It's surprisingly hard to find people with real world stories about making meaningful changes to cloud emissions. Do you have any stories from the last four years that you think are particularly worth sharing, or that might catch people's attention, for example? There's gotta be something that you found that you are allowed to talk about, right? James Hall: Yeah, I mean, Mastercard, one of our lighthouse customers, they've spoken about the work we're doing with them a lot at various FinOps conferences and things like that. But they're very advanced in their GreenOps goals. They have quite ambitious net zero goals and they take their IT sustainability very seriously. When we first spoke to them, ultimately the name of the game was to get the cloud measurement up to the standard of their on-premise, 'cause their on-premise was very advanced: daily electricity metrics with pre-approved carbon coefficients that you multiply the electricity with. But they were having no luck with cloud, essentially, and they spend a lot in the cloud. And rather than going for just the double wins, which is what most people wanna do, where it's like, I'm gonna use this as a mechanism to save more money, they honestly wanted to do no more harm, and actually start making decisions purely for the sustainability benefits.
And we went in there with the FinOps team, worked on their FinOps reporting, and combined it with their FinOps recommendations and their accountability tool of choice. But then they started having more use cases around how they use our carbon data, because we have a big list of hourly carbon coefficients. They wanna use that data to start choosing where they put their on-premise data centers as well, really making the sustainability impact a huge factor in where they place their regions, which I think is a very interesting one, 'cause we had only really focused on how we help people in their public cloud. But they wanted to align their on-premise reporting with their cloud reporting, and ultimately start making decisions like: okay, I know I need to put a data center in this country. Do I go AWS, Azure, or on-prem for this one? What is the sustainability impact of all three, and how do I weigh that against the cost as well? And it's kind of the gold standard of making sustainability a big part of the trade-off decision, 'cause they would not go somewhere, even if it saved them 50% of their cost, if it doubled their carbon. They're way beyond that point. So they're a super interesting one. And even in the public sector, the departments we are working with are relatively new to FinOps, and they didn't really have a proper accountability structure for their cloud bill. But when you start adding carbon data, you are getting a lot more eyes onto your bills and your usage, and ultimately we helped them create more of a FinOps function just with the carbon data. 'Cause people find carbon data typically more interesting than spend data. But if you put them on the same dashboard, now it's all about how you market efficient usage. And I think that's one of the main use cases of GreenOps: to get more eyes on the usage.
So the more ideas you've got piling in, the more use cases you find. Chris Adams: Okay. Alright. So you spoke about carbon as one of the main things that people are caring about, right? And we're starting to develop more of an awareness that maybe some data centers might themselves be exposed to climate risks, because, say, they were built on a floodplain, for example. And you don't want a data center on a floodplain in the middle of a flood, right? But there's also the flip side. You know, that's too much water, but there are cases where people worry about not enough water, for example. Is that something that you've seen people talk about more of? Because there does seem to be a growing awareness about the water footprint of digital infrastructure as well now. Is that something you're seeing people track, or even try to manage right now? James Hall: Well, we find that water metrics are very popular in the US, more so than the CO2 metrics, and I think it's because the people there feel the pain of lack of water. You know, you've got the Flint water crisis. In the UK, we've got an energy crisis stopping people from building homes. So what you really wanna do is enable the person who's trying to use this data to drive efficiency, to tell as many different stories as possible. You know, the more metrics and the more choice they have of what to present to the engineers and what to present to leadership, the better outcomes they're gonna get. Water is a key one because data centers and electricity production use tons of water. And the last thing you wanna do is go to a water-scarce area and put a load of servers in there that are gonna guzzle up loads of water. One, because if that water runs out, your whole data center's gonna collapse, so you're exposing yourself to ESG risk. And also, you know, it doesn't seem like the right thing to do.
There are people trying to live there who need to use that water to live, but you've got data centers sucking that water out. So can't you use this data to, again, drive different decisions? It could invoke an emotional response that helps people drive different decisions or build more efficiently. And if you're saving cost at the end of that as well, then everyone's happy. Chris Adams: So maybe this is one thing we can drill into before we move on to the next question and wrap up. People have had incentives to track cost and cash for obvious reasons. For carbon, as more and more laws actually have opinions about carbon footprint and being able to report it, people are getting a bit more aware of it. Like, we've spoken about things like location-based figures and market-based figures, and we have previous episodes where we've explored and helped people define those terms. And I feel comfortable using relatively technical terminology now, because I think there is a growing sophistication, at least in certain pockets. Water still seems to be a really new one, and it seems to be very difficult to find access to meaningful numbers. Even just the idea of water in the first place: when you hear figures about water being "used", that might not mean the water is going away so it can't be used again. It might be returned in a way that is more difficult to use, or sometimes it's cleaner, sometimes it's dirtier, for example. It seems to be poorly understood, despite being quite an emotional topic. What's your experience been like when people try to engage with this, or when you try to even find some of the numbers to present to people in dashboards and things? James Hall: Yeah. So, surprisingly, all the cloud providers are able to produce factors.
I think it's actually a requirement that when you have a data center, you know what the power usage effectiveness is, so what the overhead electricity is, and you know what the water usage effectiveness is: what is your cooling system, how much water does it use, how much does it withdraw, and how much does it actually consume? The difference between withdrawal and consumption is: withdrawal is you take clean water out and you're able to put clean water back relatively quickly. Consumption is you have either poisoned the water with some kind of coolant that's not fit for human consumption, or you've now evaporated it. And there is some confusion sometimes around "it's evaporated, but it'll rain back down." But, you know, a lake's evaporation and redeposition process is a delicate balance. If it evaporates 10,000 liters a day and rains 10,000 liters a day, after, like, a week of it going into the clouds and coming back down on the mountain nearby, and you then have a data center next to it that accelerates the evaporation by 30,000 liters a day, you really upset the delicate balance that's in there. And, you know, you talk about whether these things are sustainable. Financial sustainability is: do you have enough money and income to last a long time, or will your burn rate run you out next month? And it's the same with sustainability. I think fresh water is a limiting resource in the same way a company's bank balance is their limiting resource. There's a limited amount of electricity, there's a limited amount of water out there. I think it was the CEO of NVIDIA, I saw a video of him on LinkedIn, that said: right now the limit to your cloud environment is how much money you can spend on it, but soon it will be how much electricity is there.
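The coefficients James mentions can be combined in a back-of-envelope way. A minimal sketch, assuming illustrative PUE, WUE, and consumed-fraction values (not any provider's published figures; WUE is taken in its conventional form of litres per kWh of IT energy):

```python
def data_center_water(it_kwh: float, pue: float = 1.2,
                      wue: float = 1.8, consumed_fraction: float = 0.7) -> dict:
    """
    Rough water picture for a data hall.
    - pue: total facility energy divided by IT energy (overhead electricity).
    - wue: litres of on-site water per kWh of IT energy.
    - consumed_fraction: share of withdrawal that is consumed (evaporated
      or unfit to return) rather than returned to the source.
    All coefficient values here are illustrative assumptions.
    """
    withdrawal = it_kwh * wue                 # litres taken from the source
    consumption = withdrawal * consumed_fraction
    returned = withdrawal - consumption       # litres put back
    facility_kwh = it_kwh * pue               # total energy incl. overhead
    return {"withdrawal_l": withdrawal, "consumed_l": consumption,
            "returned_l": returned, "facility_kwh": facility_kwh}


result = data_center_water(it_kwh=10_000)
print(f"{result['withdrawal_l']:,.0f} L withdrawn, "
      f"{result['consumed_l']:,.0f} L consumed, "
      f"{result['facility_kwh']:,.0f} kWh total facility energy")
```

The withdrawal/consumption split is the point of the sketch: two facilities with identical withdrawal figures can have very different impacts on a water-scarce region.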
You know, you could spend a trillion dollars, but if there's no more electricity to be produced, then you can't build any more data centers or solar farms. And then water's the other side of that. I think water's even worse, because we need water to even live. And what happens when there's no more water because the data centers have it? I think it invokes a much more emotional response. When you have good data that's backed by good sources, you can tell an excellent story of why you need to start reducing. Chris Adams: Okay, well, hopefully we can see more of those numbers, because it seems like something that is quite difficult to get access to at the moment. Water in particular. Alright, so we're coming to time now, and one thing we spoke about in the prep call was the GHG Protocol. We did a bit of nerding into this, and you spoke a little bit about: yes, accuracy is good, but you can't only focus on accuracy if you want someone to actually use any of the tools, or you want people to adopt stuff. And you said that in the GHG Protocol, which is like the gold standard for people working out, you know, the carbon footprint of things, there were these different pillars that matter. And if you just look at accuracy, that's not gonna be enough. So can you maybe expand on that for people who maybe aren't as familiar with the GHG Protocol as you? Because I think there's something there that's worth exploring. James Hall: Yeah. So, just as a reminder for those out there, the pillars are accuracy, completeness, consistency, transparency, and relevance.
A lot of people worry a lot about the accuracy, but, you know, just to give an example: if you had the most amazing, accurate number for your entire cloud environment, you know, 1,352 tons and 0.16 grams, but you are one engineer under one application, running a few resources, the total carbon number is completely useless to you, to be honest. Like, how do you use that number to make a decision for your tiny, you know, maybe five tons? So really you've got to balance all of these things. The transparency is important because you need to build trust in the data; people need to understand where it comes from. The relevance is, again, are you filtering on just the resources that are important to me? And the consistency touches on: AWS says one ton versus Azure says 100 tons. You can't decide which cloud provider to go into based on these numbers, because, you know, they're marking their own homework. They've got a hundred different ways to calculate these things. And then the completeness is around: if you're only doing compute, but 90% is storage, you are missing out on loads of information. You know, you could have super accurate compute numbers for Azure, but if you've got completely different numbers for AWS and you dunno where they come from, you've not got a good GreenOps data set to be able to drive decisions or use as a catalyst. So you really need to prioritize all five of these pillars in equal measure and treat them all as a priority, rather than just go for full accuracy. Chris Adams: Brilliant. We'll make a point of sharing a link to that in the show notes for anyone else who wants to dive into the world of pillars of sustainability reporting, I suppose. Alright. Okay. Well, James, I think that takes us to time.
So just before we wrap up, there's the usual thing of where people can find you, but are there any particular projects that are catching your eye right now that you're excited about, or you'd like to direct people's attention to? 'Cause we'll share a link to the company you work for, obviously, and possibly yourself on LinkedIn or whatever it is. But is there anything else that you've seen in the last couple of weeks that you find particularly exciting in the world of GreenOps or the wider sustainable software field? James Hall: Yeah, I mean, a lot of work being done around AI sustainability is particularly interesting. I recommend people go and look at some of the Hugging Face information around which models are more electrically efficient. And from a Greenpixie side, we've got a newsletter now for people wanting to learn more about GreenOps, and in fact, we're building out a GreenOps training and certification that I'd be very interested to get a lot of people's feedback on. Chris Adams: Cool. Alright, well, thank you one more time. If people wanna find you on LinkedIn, they would just look up James Hall, Greenpixie, presumably, right? Or something like that. James Hall: Yeah, and go to our website as well. Chris Adams: Well, James, thank you so much for taking me along on this deep dive into the world of GreenOps, cloud carbon reporting and all the rest. Hope you have a lovely day, and yeah, take care of yourself, mate. Cheers. James Hall: Thanks so much, Chris. Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. Chris Adams: To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser.
Thanks again, and see you in the next episode.
Environment Variables
The Week in Green Software: Data Centers, AI and the Nuclear Question (43:50)
Host Anne Currie is joined by the seasoned Chris Liljenstolpe to talk about the latest trends shaping sustainable technology. They dive into the energy demands of AI-driven data centers and ask the big question around nuclear power in green computing. Discussing the trajectory of AI and data center technology, they take a look into the past of another great networking technology, the internet, to gain insights into the future of energy-efficient innovation in the tech industry. Learn more about our people: Anne Currie: LinkedIn | Website Christopher Liljenstolpe: LinkedIn | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter Resources: AI’s Growing Energy Appetite – The Need for Transparency [05:24] How DeepSeek erased Silicon Valley's AI lead and wiped $1 trillion from U.S. markets | Fortune Asia [17:35] The SMR Gamble: Betting on Nuclear to Fuel the Data Center Boom [22:53] AI’s Growing Footprint: The Supply Chain Cost of Big Tech Events: Webinar: Data-driven grid decarbonization | Electricity Maps - March 19 at 5:00 PM CET, Virtual Cloud Optimization 2025 – FinOps, GreenOps & AI-Driven Efficiency - March 20 at 4:00 PM GMT, Amsterdam Code Green London March Meetup (Community Organised Event) - March 20 at 6:30 PM GMT, London Green Software Ireland | Meetup - March 26 at 8:00 PM GMT, Virtual If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, GitHub and LinkedIn! TRANSCRIPT BELOW: Christopher Liljenstolpe: The US grid's gonna be capped by 2031. We will be out of power in the United States by 2031. Europe will be out first. So something has to give; we have to become more efficient with the way we utilize these resources, the algorithms we build.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne Currie: Hello, and welcome to This Week in Green Software, where we bring you the latest news and insights from the world of sustainable software. This week I'm your guest host, Anne Currie. As you know, I'm quite often your guest host, so you're not hearing the dulcet tones of the usual host, Chris Adams. Today we'll be talking to Christopher Liljenstolpe, a leading expert in data center architecture and sustainability at Cisco. Christopher is also the father of Project Calico and co-founder of Tigera, and he's a super expert in cloud infrastructure and green computing. But before I introduce him, I'm going to make it clear that I've known Chris for years, because he's worked very closely with my husband, so we know each other very well. So that might explain why we seem like we know each other quite well. Who knows. What I do know from Chris is that it's impossible to say what we'll be talking about today. We will go all over the place. But Chris, do you wanna introduce yourself? Christopher Liljenstolpe: We might even cover the topic at hand, although that is an unlikely outcome. But who knows? That would be a first, but it might be an outcome. Anne Currie: So introduce yourself. Introduce yourself. Christopher Liljenstolpe: Sure. So, as Anne said, my name's Christopher Liljenstolpe. I am currently senior director for Data Center Architecture and Sustainability here at Cisco, which means, once again, I failed to duck.
So I'm the poor sod who's gotten the job of trying to square an interesting circle, which is: how do we build sustainable data centers, and what does a sustainable data center look like? At the same time, dealing with this oncoming light at the end of the tunnel that is certainly not sunshine and bluebirds, but is a locomotive called AI. And it's bringing with it gigawatt data centers. So, put that in perspective. I mean, two years ago we were talking about a high power data center might be a 90 kilowatt rack data center, or a 100 kilowatt rack, or a 60 kilowatt rack. And about two years ago we went to, okay, it might be a 150 kilowatt rack data center, and that was up from 30 kilowatts from years ago. It took a very long time to get to 30 kilowatts. That was good. Then from two years ago to nine months ago: nine months ago it went from 150 kilowatts to 250 kilowatts. So it took us decades to get from two kilowatts to 90 kilowatts to 150 kilowatts, and then in a year we went from 150 to 250, maybe 350. Jensen last week just took us to 600 kilowatts a rack. So yeah, that light at the end of the tunnel is not sunshine. So how do we do sustainable data centers when you've got racks that need nuclear power plants strapped into each and every rack? So, you know, I'm the one who gets to figure out what a gigawatt data center looks like and how you make it sustainable. So that's my day job. And this really becomes a system of systems problem, which is usually what I end up doing throughout most of my career: put the Lego blocks together, build systems of systems, and then figure out what Lego blocks are missing and what we need to build. So, I did that with Anne's husband in a slightly different space, which was how do you build very scalable networks with millions of endpoints for Kubernetes? And now I'm doing this for data center infrastructure.
Anne Currie: Which is absolutely fascinating. So, for listeners, a bit of background on me. I'm one of the authors of O'Reilly's new book, Building Green Software. I'm also the CEO of a learning and development company, Strategically Green, with the husband who used to work with Chris. In Building Green Software, Chris was a major contributor to the networking chapter. So if you are interested in some of the background to this, the networking chapter is very high level; you don't need to know any super amazing stuff about it, and it'll ramp you up on the basics of networking. So have a read of that if you want a little bit of a lightweight background to what we'll be talking about today. But actually, what we're talking about today is not networking. It is obviously a key part of any data center, but that's not really where your focus is at the moment. It sounds like energy is more what you are caring about at the moment with DCs. Is that true, or both? It'll always be both, but... Christopher Liljenstolpe: It's both. Energy starts behaving a bit like networking at this level. And it's getting the energy in and getting the energy out as well. The cooling is actually a real interesting part of it, but we really start thinking about the energy as an energy network. You almost have to, when we start thinking about energy flows this size, and controlling them and managing them. But then there's other aspects to this as well. Some of the things that are driving this insane, I'll be right out and say it, this insane per-rack density. Why do we need 600 kilowatt racks? Do we need 600 kilowatt racks? But let's assume we do need them. Why do we need them? We need to pack as many GPUs as closely together as possible. And why do we need to do that?
We need to get them as close together as possible because we want them to be network-close for very high speed, so that we have a very high performance, closely bound cluster, so that you get your ChatGPT answers very quickly and they don't hallucinate. So that means putting lots of GPUs and very high bandwidth memory very close to one another. And when you do that in networking, you want that to be in copper, and you want that to be a very specific kind of networking that ends up using a whole lot of energy unless you pack it very closely together. So that 600 kilowatts is actually the low power variant. If we stretched further out, it would be worse by another order of magnitude, because we'd have to go into fiber. So we pack it very close. And that means we end up packing a lot of stuff very closely together, which drives a lot of power into one rack, and it takes a lot of power to get the heat back out of it again. So it would be worse if we stretched it further out, but it's partially a networking thing that's driving this, actually. So one of the levers we can try and pull: is there a better way of doing this networking to cluster these things tighter together? It always comes back to the network, one way or the other. Anne Currie: It does indeed. So although I live in a networking household, I'm not so familiar with this; I don't know how this works. Is this that the GPUs have to talk together very fast, so there's almost no elapsed transit time in messages between the machines? Is that why the networking is so important? Christopher Liljenstolpe: You wanna get as many GPUs talking as closely together as possible. More specifically, GPUs and their high bandwidth memory, so the HBM stacks, the high bandwidth memory stacks, and the GPUs. And one good question is whether this is a good architecture or not.
In an AI infrastructure, there are basically three networks that tie the infrastructure together. There's what's called the scale up network, which is the very high speed network that stitches some number of GPUs together, and that's on the order of, today, anywhere from 3.6 terabits per second, upwards to what's coming down the road, about 10 terabits a second, of what's called non-blocking network traffic between the GPUs in a scale up cluster. And that could be anywhere from eight GPUs up to, now, within the next year or two, 500 and some odd GPUs in that cluster. So in that realm, you could have up to 500 GPUs all talking to each other at 10 terabits a second, or eight terabits a second, depending on the GPU manufacturer, et cetera. And that's the highest performing part of the network. Then those clusters are talking to other GPUs in other clusters at usually around 800 gigabits a second. So that's a huge step down in performance. And then those GPUs are talking to the outside world through the servers they sit in. Usually those are packaged as eight GPUs in a server, and those servers drive to the outside world at 800 gigabits a second per server. That's how they get their data, that's how they get their requests and how they give their answers. So, 800 gigabits a second. Anne Currie: I'm gonna stop now and ask a stupid question, or rather a very simple question. So, stepping back: networks. I'm not a network expert, so I might say something totally stupid here. There are at least two very important things about networks. One is bandwidth: how much data can you get down the pipes from one place to another? And the other is latency: how long does it take to do it? So I think what you are saying there, if I understand it correctly, is AI really needs high bandwidth, and that's what's driving it.
It's not latency, it's bandwidth. Christopher Liljenstolpe: Yeah, you are correct. And people get that wrong. Because there's such high bandwidth, the latency doesn't matter as much, head-end latency, because the amount of data being moved is big and the bandwidth is high. There is a little bit of a latency hit, but high performance computing is more latency sensitive. If you've got a very high bandwidth network, the data packets are actually pretty small, so the latency isn't as big a hit. The third thing is congestion. Congestion kills an AI network. And this is the problem. So if I can take the whole model that I'm computing against and put it in that scale up domain, then everything can talk to everything at full bandwidth and there's no congestion. But remember, the GPUs that are in the high bandwidth domain, there's eight today, or maybe 72 or 36 or 256, or maybe 500 and some odd if Jensen's build is correct and some of the other things we're working on with some other vendors might be correct. So that's a lot of bandwidth. If you can't fit it all in that one domain, then they have to go over that slower link, 800 gig per GPU versus 10 terabits per GPU, to talk to a GPU in another one of those high bandwidth clusters. And all of a sudden you go from 10 terabits, or eight terabits, or three terabits even, to 800 gigabits. So that's all of a sudden a much more contended or congested network. You go from running down a motorway at two o'clock in the morning to a, you know, B road with lots of people on it. And the GPUs do this. Anne Currie: Oh yeah. Christopher Liljenstolpe: And everything slows to a crawl, and all the GPUs go basically idle. And that's what people don't want, 'cause those GPUs are very expensive. Those GPU servers are hundreds of thousands of dollars. They use a lot of power, and they're just idling, waiting for the GPU on the other side of that slow link to get back with an answer.
So you don't want the model that you're training or inferring against to be split across these things. So you want everything on that high speed link. And if you want everything on that very high speed link, that's multiple terabits per second per GPU. And to think about this: that means that if I've got eight GPUs in a server, I've got 80 terabits of bandwidth coming into that server. And if I've got, let's say, 10 servers in that cluster, that means I've got 80 terabits of bandwidth between that server and every other server in that cluster. And you do the math: that's about 10,000 cables running up and down inside that rack. So the cabling becomes interesting. There's all sorts of interesting problems here. So I cram everything in. This is why I wanna get everything crammed in as tightly as possible, so I can get as many things into that rack; it's an easier problem. And the power to put that on copper that runs maybe one meter or a meter and a half in length is less than a watt per cable, per what's called a SerDes. Put it on fiber, and I'm over a watt at least, maybe over a couple of watts. So I go from a tenth of a watt to a couple of watts, and it takes more space on the board and everything else, so we get into physics problems. That's why I need to pack it in tight. That's why I need more power in a higher density space, 'cause I wanna get everything into that one high bandwidth domain. Now, another approach might be to do away with this concept of scale out and scale up, and there's some architectures that might do that. But the main model today, the NVIDIA model, is that scale up and scale out are kept separate. One can argue whether that's a good model; it is the model in the industry today. That means the software developers have to be cognizant of that as well. And the people who design the schedulers have to be cognizant of that as well.
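The arithmetic Christopher walks through can be sketched as a back-of-envelope calculation. The lane rate and the per-lane power figures below are illustrative assumptions drawn from the conversation, not vendor specifications:

```python
# Back-of-envelope for the scale up domain described above.
GPUS_PER_SERVER = 8
SCALE_UP_TBPS_PER_GPU = 10      # terabits/s per GPU into the scale up fabric
LANE_GBPS = 112                 # assumed per-SerDes lane rate (112G)
COPPER_W_PER_LANE = 0.1         # ~a tenth of a watt over ~1 m of copper
FIBER_W_PER_LANE = 1.5          # assumed optics power, roughly 10x copper

server_tbps = GPUS_PER_SERVER * SCALE_UP_TBPS_PER_GPU    # 80 Tb/s per server
lanes = server_tbps * 1_000 / LANE_GBPS                  # lanes to carry it
copper_kw = lanes * COPPER_W_PER_LANE / 1_000
fiber_kw = lanes * FIBER_W_PER_LANE / 1_000

print(f"{server_tbps} Tb/s per server needs ~{lanes:.0f} SerDes lanes")
print(f"copper: ~{copper_kw:.2f} kW/server, fiber: ~{fiber_kw:.2f} kW/server")
```

Even with these rough numbers, moving the scale up fabric from short copper runs to optics multiplies the interconnect power by an order of magnitude per server, which is why the racks get packed so tight.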
And so this is a design that now ripples through the entire architecture, all the way up through the software stack and everything else. Anne Currie: So what you're saying is that when we talk about AI and we talk about GPUs and all that kind of stuff, and the incredible amount of power it requires, we tend not to think about the fact that actually it's the networking that requires one hell of a lot of that power. And this is not networking going across the country; this is not networking outside the data centers. This is networking inside them. Christopher Liljenstolpe: This is networking in the rack. Anne Currie: Within them. Christopher Liljenstolpe: This is a one meter, two meter diameter network, and it's tens of thousands of cables. Anne Currie: So I'm sure that something you've been thinking about a lot recently is the enormous shift that's taken place with DeepSeek coming in. How much of an effect does that have on the network side of things? Christopher Liljenstolpe: So the whole idea behind DeepSeek is you don't need to do as much, from a training perspective; I think of it as the data is sort of pre-trained. You don't need to do as much pre-training, you don't need to do as much training, therefore you don't need as many GPUs to prep your data and prep your model. So that means you don't need as big a scale up cluster to train to get ready to infer. And remember, training doesn't make you any money. If you're in this to make money, training doesn't make you any money. It's inference, it's using the model, that makes the money. And potentially inference as well might be impacted. But Jensen made an interesting point, which was: as we start doing reasoned inference, that's gonna require a lot more compute. Now inference starts looking more like training. Up until recently, inference was always one and done.
You make one pass through inference and you get the result. That's why we used to get some interesting, let's just call them interesting, results. We used to call them, you know, hallucinations. But now you make one pass through and then you sort of check it: does it make sense, does it look reasonable? And you make another pass through again, and another pass through again, and a pass through again. This reasoned inference all of a sudden starts using a lot more compute. It looks a little bit more like a training job, almost. And that now starts using a lot more GPUs, and you need more scale up bandwidth between GPUs. So it'll be interesting to see if DeepSeek benefits that reasoned inference as well; it should. The bigger question is, DeepSeek will probably only be as good as the pre-trained data it ingests, right? So this sort of becomes: do we feed our AIs with other AI data? And at some point, do we all become self-referential? Do we take AI data to feed other AI data? It's like, if all the code in GitHub is written by AIs, and then we train coding models for GitHub using AI-written code, is that a good thing or not a good thing? Anne Currie: If it's tested code. I mean, if they also write tests and they run the tests and the code works, then... Christopher Liljenstolpe: Yeah. Of course, it's sort of like having the developer write their own tests too, right? You end up with a monoculture. Anne Currie: Yeah, that is true. Christopher Liljenstolpe: You end up with a monoculture. Anne Currie: Yeah. Christopher Liljenstolpe: Or not. Or not, maybe you don't end up with a monoculture. I don't know. Now we're getting into philosophy. Anne Currie: So it's interesting. Christopher Liljenstolpe: And everyone just watched this go from infrastructure to software design to philosophy.
Anne Currie: You know, I do find the AI stuff quite fascinating. I know somebody who's a DeepMind engineer and used to work at OpenAI, and I remember them telling me years and years ago about the massive change, the switch, you know, when AI was starting to get good. I was talking to her nearly 10 years ago. I was like, suddenly it's got a lot better; why has it got a lot better? And she said it's randomness. We realized that actually if you injected a load more randomness into its decision making, it suddenly got vastly better. It was a sea change. So it's not as predictable. And it is odd, something we don't talk about a lot, that AI is based, at its heart, on the injection of randomness, which I find fascinating. Christopher Liljenstolpe: There was an interesting study: if you train AI on bad data in one domain, it will start giving you bad results in other domains as well. Anne Currie: That's interesting. Christopher Liljenstolpe: Which was really sort of, but anyway, yeah, now we're really off the rails. Anne Currie: We are, and in fact we've only got 10 minutes left, so we should go back onto sustainability. 'Cause the question I wanted to ask you: you mentioned in the bit we were talking about there, about racks, that you need a nuclear power station for every rack these days. But is that literally the case? Can this only be done through nuclear, or can it be done like Texas, who are making calls for large, flexible loads for the mega amounts of solar that they're running? Is it realistic? What do you think? Is nuclear, for AI, a prerequisite? Christopher Liljenstolpe: It is not a prerequisite, but it is probably gonna be a base load demand.
And that's because the amount, at least at this point, the amount of money you will invest if you're gonna put up anything a hundred megawatts or more of AI compute, that is a serious amount of investment. And let's also be honest, if you're talking about a 500 megawatt or a gigawatt facility, you're not pulling a substation permit, 'cause there aren't substations for things like that, you are going to jack yourself into a power plant. Because at that point, you know, a gigawatt is a power generation station, right? That is a reactor in a nuclear power station. That is, you know, a gas generator, a gas turbine in a, you know, a co-generation power plant, et cetera. It's a turbine in a major hydro, right? It is a full-scale commercial power plant unit. So there's no reason to have a substation because you are consuming a full commercial power plant. So you might as well plant it there. That's not small money. You are gonna have to guarantee a load to a power company to do that. That's one. Two, the amount you're gonna spend on the GPUs, let alone all the other infrastructure that goes around it, that is a huge capital investment. You are not gonna want that sitting idle for one minute in a year. So that is going to be a base load that will always, your shareholders are gonna string you up, that will always be running, so that's gonna be a base load. So something's gonna have to support that base load. It could be solar, but then you're gonna have to have a very big battery plant. There's one going in, in India. There's a one gigawatt facility going in for AI, and it's fully built out. It's gonna be held up by a solar plant. That solar plant is gonna be, one third of the ground is going to be solar, and the remainder is gonna be battery to hold the thing for 24x7, so they will be doing solar, but it's going to be solar plus battery. But yeah, this will be, you're gonna want this thing running all the time. So we joke about it being nuclear.
The funny thing was three years ago we were saying these small modular reactors, a hundred megawatts, that's a perfect size for a data hall. Now we're just saying, you know, go, you know, unshutter your commercial nuclear reactors because the gigawatt size commercial nuclear reactors by now are about the right size, the interesting part to that is, what do you do when you have to refuel the reactor? Because the reactors, most commercial reactors have to be shut down when you refuel. If you're jacked into a reactor, you're, what do you do when they have to shut down the reactor? That's a year process. What do you do for power? 'Cause you're probably not connected to the grid. You're connected to, like what they did in Pennsylvania. You're connected to the reactor. What do you do for power when they shut down that reactor? I hope the folks have thought about that. Maybe you still do small, like small modulars. Maybe you do 12 small modulars at a hundred megawatts each, and you sort of have an n+2. Interesting thoughts. Anne Currie: Well, that is a very interesting thought. So yeah, so you're making two fascinating points there that I have never heard made. One is that we are totally over, we've totally run ahead of SMRs, you know, all that thing we're talking. Totally. We've galloped ahead of that and yet it might actually be worth bringing them back just because of that kind of modern resilience thing of it's better to have 10 than one. You know better to have 10 small ones than one big one. Christopher Liljenstolpe: Yeah, I've got resilient reactors, and if it's molten salts, you can refuel them by just, topping off the salt tanks as you go. And you can remove the poison out of 'em as you go. So, you know, just, back the salt truck up and dump more salt in. It's a little more than that, but yeah, sort of. Anne Currie: Yeah. 
Christopher Liljenstolpe: If you're interested in bashing your head into the wall and learning about things that you never thought you'd have to learn about, this is a fun time to get into data center infrastructure because you get to do things like, okay, how do I cram a couple hundred terabits per second into a network in a rack? At the same time, talk about liquid molten salt reactors. I mean, you know, it's sort of a broad spectrum of, you know, and oh, and let's also talk about signal integrity of dielectric fluids. 'Cause we might have to send all this stuff swimming in a tank. It's, you know, you have a lot of interesting conversations in one day. Anne Currie: It sounds like you're in a pretty fun area at the moment and we thought it was fun. We thought network cloud networking was fun five years ago. That was nothing as it turns out. Christopher Liljenstolpe: Yeah, so, and one thing that's sort of interesting now is we took Scalable Sustainable Infrastructure Alliance in the Linux Foundation. We've merged it, as I'm sure you've heard, with Green Software Foundation, which, so we thought it was probably time to get the hardware guys and the software guys talking, and gals talking together because we realized that we really needed to have these, the stack not have this wall between the hardware and the software. We really needed to have the same things we were talking about before I alluded to. It's like, okay, the hardware impacts of the horror show that we've got going on. I say that in the nicest possible way to my friends doing the chips, the unique challenges that we have coming, we really need better understanding on the scheduler sides, et cetera, and how we manage that and monitor that and the impacts of that on the software side. 
So we decided to take the folks who are working on open hardware designs and making those sustainable, and marry that to the software side and the green software folks who are working on how we manage and monitor that as well. So we decided to take those two and put them together. And the first project out of that is gonna be something called Project Mycelium, which is going to be actually looking at how we build software linkages, how you manage and monitor the hardware infrastructure on the software side. Anne Currie: Named after the networks of fungus under the ground, the way that actually everything in a forest is more connected together than we'd ever realized previously, using these incredible mycelium connections, I take it. I'm guessing that's why it's named that way. Christopher Liljenstolpe: Exactly. Exactly. And a good friend of mine, who used to be the field CTO at Equinix, is gonna be running that project for me there. Anne Currie: So, yeah. Utterly fascinating stuff. So stepping back from all of this, it's a mind-blowing amount of complex new thoughts and approaches to things. And what's your view? I mean, you have a kind of 30,000, 40,000-foot view on all of these things. What are you thinking? Where's it all going? What's gonna happen? Christopher Liljenstolpe: Well, one of my jokes is: yes, AI will kill us all. The question is, will it get smart enough and realize we're the problem and actively kill us, or will it just take so much resources it will just melt all the ice caps and create a water world before it becomes sentient and kill us that way? That's it. There's truth in every joke. I think right now the path that we're on frankly is not sustainable. You know, the next logical step from this is, if we follow that train of, you know, 150, 600 and up, it's north of a megawatt a rack.
That path is unsustainable both from resources, power, but also economics. It just, we can't do that. At the going rate, the US grid's gonna be capped by 2031. We will be out of power in the United States by 2031. Europe will be out first. So yeah, something has to give. We have to become more efficient with the way we utilize these resources, the algorithms we build. We're still brute-forcing AI. We think this is all brilliant software. It's not, we're still brute-forcing the heck out of this stuff. So something's gotta give there. I think when that does, there'll be a lot of business models that might face some challenges, because there's a lot of value built on this continuing to go this way. But it needs to happen. And there's a lot of fluff as well. There's a lot of, the equivalent of pets.com, out there right now. I think we'll end up with a lot more distributed use cases for AI that don't need the same amount of power. We don't need huge inference across it. But yeah, the current trend will have to get adjusted, and somebody's gonna figure it out. Anne Currie: The old phrase. Christopher Liljenstolpe: People try it out. Anne Currie: If something can't go on, it won't, it'll stop, you know? Christopher Liljenstolpe: There will be enough economic pressure that it will drive an innovation that will fix it. Anne Currie: Yeah, it's the code. Christopher Liljenstolpe: I'm not sure how we'll mine enough copper to support building the power transmission infrastructure. So anyway, that's my doom and gloom part of this. But I think what we will end up with, by the time we're done with it, is a very efficient computational infrastructure, because it's forcing us to look at everything along the stack. Air is an absolutely horrible heat transfer fluid. Everyone's running madly down the road of liquid. Everyone's running madly down the road of higher voltage.
Which again, the way we transmit power in a data center is pretty horrible today. Everyone's wringing all the efficiencies they can outta that, because now we have to, it's just economically impossible to do it any other way. So whatever comes outta the back of this is, we are gonna have a very efficient data center infrastructure. Which is all for the better. This will probably fix the grids too, because it has to, because we're driving a very different power transmission infrastructure. So we'll fix a bunch of problems along the way. Silver lining. Anne Currie: And there is a lot of money behind it. So yeah, it is actually aligned with a lot of good things that we want and it's driving a lot of money in those directions. Yeah. It's interesting. If it doesn't kill us all, which, you know, Christopher Liljenstolpe: Yeah, and who knows? It'll probably bring back nuclear, we'll probably be able to have rational conversations about other non-carbon-emitting power sources. Anne Currie: Space-based solar power. Well, I'm desperate for it. Christopher Liljenstolpe: Maybe, yeah, maybe. Might get some countries that just recently shuttered all their nuclear plants to go back and put their cooling towers back up. Not talking about any European countries. Anne Currie: Well, I'm sure everybody's brain is completely full now, and we've had a really interesting discussion that I have utterly enjoyed. So I think we should probably draw the podcast to an end with any final comments that anybody wants to make. Everything we talked about that we can put in the show notes will be in the show notes at the bottom of the episode. Do you have any final points that you want to make? Christopher Liljenstolpe: I mean, it is fun times.
And it's not all doom and gloom, but right now there is a bit of hype, and at this point it seems like it is a train that's gonna keep on going, and it will correct. But it is leading to a lot of innovation, and that innovation will hang around. Just like when the dot-com bust happened, we will see a correction here. What people thought originally the internet was going to do, and what was gonna be delivered by the internet, didn't really happen. But even the people who originally created the ARPANET, or the people who put their money into the original late-nineties dot-com explosion, did not foresee what it is being used for now. But the world has been, you know, forever changed by that, for good and ill both, by that investment, and it's gonna be the same thing here. What we're investing in building now, we think we know what it's gonna be used for; we're wrong. Of everything we think it's gonna be used for, 5% of it will probably still be what it's being used for 15 years from now. The rest of it, we have no idea. And we'll benefit from it and we'll suffer for it. But we're building a base infrastructure, and other people will actually build on that base infrastructure and deliver things that we will have no idea about. Anne Currie: Yeah, that reminds me of a discussion that we had a few years back about why the internet survived 2020, the beginning of the pandemic, which kept the West up. 'Cause otherwise, if we hadn't been able to all stay at home and work over video conferencing, things like that... And a lot of the infrastructure that was put in place that we relied on there was to support high-definition streaming TV. People put it in so folk could watch Game of Thrones, and then Game of Thrones saved the West. It's like, who would've predicted that?
You just don't know what's gonna happen. Christopher Liljenstolpe: Exactly. Yep. Indeed. And that infrastructure actually, which we didn't talk about, was put in place because service providers made a horrible choice early on of putting in broadband that was the cheap choice, that couldn't do multicast. If they had put in multicast-capable infrastructure, they wouldn't have put in the amount of backbone infrastructure that they did, because they would've had multicast and they wouldn't have had to do the build that they did, which indeed actually helped us. So not having multicast out there actually probably saved our bacon. And it pains me to no end, because I was sitting there banging away in the mid-nineties, et cetera, like, "we need to get multicast out there. It's so much more efficient. It will save so much money." And if we had, we probably would've been in much worse shape when the pandemic hit. Anne Currie: It is interesting, that flabbiness, things like inefficient code. And inefficient code is what we've been building for the past 20 years. Most of my career, we've been building highly inefficient code, but it does mean there's a lot of untapped potential in there to improve, you know, Christopher Liljenstolpe: True. Anne Currie: Unrealized potential as a result of lazy behavior in the past. We are mining our own past laziness, and that might save us all. Christopher Liljenstolpe: Indeed. Anne Currie: On that note, our laziness and lack of foresight in the past have tended to save us in the future. It might well save us again. On that happy note, or that nuanced note, thank you very much for listening and thank you very much for being my excellent guest today, Chris. Christopher Liljenstolpe: Thank you for having me on, Anne, and thank you everyone for listening. I hope it was, if not educational, at least entertaining. Anne Currie: I'm sure it was both.
Thank you very much and speak to you on the next time I'm hosting the Environment Variables podcast. Goodbye. Christopher Liljenstolpe: Bye everyone. Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. Chris Adams: To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.…
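The multi-pass "reasoned inference" Christopher describes in this episode (generate an answer, check whether it looks reasonable, then loop) can be sketched roughly as follows. This is a generic illustration, not any real model API: `generate` and `looks_reasonable` are hypothetical stand-ins for calls into an LLM.

```python
# Rough sketch of multi-pass "reasoned inference": each extra pass is
# another full trip through the model, which is why the compute bill
# starts to resemble a small training job.

def generate(prompt: str, attempt: int) -> str:
    # Stand-in for one forward pass through a model.
    return f"attempt {attempt}: answer to {prompt!r}"

def looks_reasonable(answer: str, attempt: int) -> bool:
    # Stand-in for the "does this look reasonable?" check; here we
    # simply pretend the third attempt is good enough.
    return attempt >= 3

def reasoned_inference(prompt: str, max_passes: int = 5) -> str:
    answer = generate(prompt, 1)
    passes = 1
    while passes < max_passes and not looks_reasonable(answer, passes):
        passes += 1
        answer = generate(prompt, passes)  # one more full pass
    return answer

print(reasoned_inference("why is the sky blue?"))
```

The point of the sketch is only the shape of the loop: single-pass inference runs `generate` once, while reasoned inference may run it several times per query, multiplying GPU time accordingly.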
Environment Variables
Backstage: Green Software Patterns (11:45)
In this episode, Chris Skipper takes us backstage into the Green Software Patterns Project, an open-source initiative designed to help software practitioners reduce emissions by applying vendor-neutral best practices. Guests Franziska Warncke and Liya Mathew, project leads for the initiative, discuss how organizations like AVEVA and MasterCard have successfully integrated these patterns to enhance software sustainability. They also explore the rigorous review process for new patterns, upcoming advancements such as persona-based approaches, and how developers and researchers can contribute. Learn more about our people: Chris Skipper: LinkedIn | Website Franziska Warncke: LinkedIn Liya Mathew: LinkedIn Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter Resources: Green Software Patterns | GSF [00:23] GitHub - Green Software Patterns | GSF [ 05:42] If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn ! TRANSCRIPT BELOW: Chris Skipper: Welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I am the producer of the show, Chris Skipper, and today we're excited to bring you another episode of Backstage, where we uncover the stories, challenges, and innovations driving the future of green software. In this episode, we're diving into the Green Software Patterns Project, an open source initiative designed to curate and share best practices for reducing software emissions. The project provides a structured approach for software practitioners to discover, contribute, and apply vendor-neutral green software patterns that can make a tangible impact on sustainability. 
Joining us today are Franziska Warncke and Liya Mathew, the project leads for the Green Software Patterns Initiative. They'll walk us through how the project works, its role in advancing sustainable software development, and what the future holds for the Green Software Patterns. Before we get started, a quick reminder that everything we discuss in this episode will be linked in the show notes below. So without further ado, let's dive into our first question about the Green Software Patterns project. My first question is for Liya. The project is designed to help software practitioners reduce emissions in their applications. What are some real world examples of how these patterns have been successfully applied to lower carbon footprints? Liya Mathew: Thanks for the question, and yes, I am pretty sure that there are a lot of organizations as well as individuals who have greatly benefited from this project. A key factor behind the success of this project is the impact that these small actions can have on longer runs. For example, AVEVA has been an excellent case of an organization that embraced these patterns. They created their own scoring system based on Patterns which help them measure and improve their software sustainability. Similarly, MasterCard has also adopted and used these patterns effectively. What's truly inspiring is that both AVEVA and MasterCard were willing to share their learnings with the GSF and the open source community as well. Their contributions will help others learn and benefit from their experiences, fostering a collaborative environment where everyone can work towards a more sustainable software. Chris Skipper: Green software patterns must balance general applicability with technical specificity. How do you ensure that these patterns remain actionable and practical across different industries, technologies and software architectures? 
Liya Mathew: One of the core and most useful features of patterns is the ability to correlate with the Software Carbon Intensity specification. Think of it as a bridge that connects learning and measurement. When we look through the existing catalog of patterns, one essential thing that stands out is their adaptability. Many of these patterns not only align with sustainability, but also coincide with security and reliability best practices. The beauty of this approach is that we don't need to completely rewrite our software architecture to make it more sustainable. Small actions like caching static data or providing a dark mode can make a significant difference. These are simple, yet effective steps that can lead us a long way towards sustainability. Also, we are nearing the graduation of Patterns V1. This milestone marks a significant achievement and we are already looking ahead to the next exciting phase: Patterns V2. In Patterns V2, we are focusing on persona-based and behavioral patterns, which will bring even more tailored and impactful solutions to our community. These new patterns will help address specific needs and behaviors, making our tools even more adaptable and effective. Chris Skipper: The review and approval process for new patterns involves multiple stages, including subject matter expert validation and team consensus. Could you walk us through the workflow for submitting and reviewing patterns? Liya Mathew: Sure. The review and approval process for new patterns involves multiple stages, ensuring that each pattern meets a standard before integration. Initially, when a new pattern is submitted, it undergoes an initial review by our initial reviewers. During this stage, reviewers check if the pattern aligns with the GSF's mission of reducing software emissions, follows the GSF Pattern template, and adheres to proper formatting rules. They also ensure that there is enough detail for the subject matter expert to evaluate the pattern.
If any issues arise, the reviewer provides clear and constructive feedback directly in the pull request, and the submitter updates the pattern accordingly. Once the pattern passes the initial review, it is assigned to an appropriate SME for deeper technical review, which should take no more than a week, barring any lengthy feedback cycles. The SME checks for duplicate patterns, validates the content, and assesses the efficiency and accuracy of the pattern in reducing software emissions. They also ensure that the pattern's level of depth is appropriate. If any areas are missing or incomplete, the SME provides feedback in the pull request. If the pattern meets all the criteria, the SME removes the SME review label, adds a team consensus label, and assigns the pull request back to the initial reviewer. Then the Principles and Patterns Working Group has two weeks to comment or object to the pattern, requiring a team consensus before the PR can be approved and merged into the development branch. This thorough process ensures that each pattern is well vetted and aligned with our goals. Chris Skipper: For listeners who want to start using green software patterns in their projects, what's the best way to get involved, access the catalog, or submit a new pattern? Liya Mathew: All the contributions are made via GitHub pull requests. You can start by submitting a pull request on our repository. Additionally, we would love to connect with everyone interested in contributing. Feel free to reach out to us on LinkedIn or any social media handles and express your interest in joining our project's weekly calls. Also, check if your organization is a member of the Green Software Foundation. We warmly welcome contributions in any capacity. As mentioned earlier, we are setting our sights on a very ambitious goal for this project, and your involvement would be invaluable. Chris Skipper: Thanks to Liya for those great answers. Next, we had some questions for Franziska.
The Green Software Patterns project provides a structured open source database of curated software patterns that help reduce software emissions. Could you give us an overview of how the project started and its core mission? Franziska Warncke: Great question. The Green Software Patterns project emerged from a growing recognition of the environmental impact of software and the urgent need for sustainable software engineering practices. As we've seen the tech industry expand, it became clear that while hardware efficiency has been a focal point for sustainability, software optimization was often overlooked. A group of dedicated professionals began investigating existing documentation, including resources like the AWS Well-Architected Framework, and this exploration laid the groundwork for the project. This allowed us to create a structured approach to curating the patterns that can help reduce software emissions. We developed a template that outlines how each pattern should be presented, ensuring clarity and consistency. Additionally, we categorize these patterns into three main areas: cloud, web, and AI. Chris Skipper: Building an open source knowledge base and ensuring it remains useful requires careful curation and validation. What are some of the biggest challenges your team has faced in developing and maintaining the green software patterns database? Franziska Warncke: Building and maintaining an open source knowledge base like the Green Software Patterns database comes with its own set of challenges. One of the biggest hurdles we've encountered is resource constraints. As an open source project, we often operate with limited time and personnel, which makes it really, really difficult to prioritize certain tasks over others.
Despite this challenge, we are committed to continuous improvement, collaboration, and community engagement to ensure that the Green Software Patterns database remains a valuable resource for developers looking to adopt more sustainable practices. Chris Skipper: Looking ahead, what are some upcoming initiatives for the project? Are there any plans to expand the pattern library or introduce new methodologies for evaluating and implementing patterns? Franziska Warncke: Yes, we have some exciting initiatives on the horizon. So one of our main focuses is to restructure the patterns catalog to adopt a persona-based approach. This means we want to create tailored patterns for the various roles within the software industry, like developers, project managers, UX designers, and system architects. By doing this, we aim to make the patterns more relevant and accessible to a broader audience. We are also working on improving the visualization of the patterns. We recognize that user-friendly visuals are crucial for helping people understand and adopt these patterns in their own projects, which was really missing before. In addition to that, we plan to categorize the patterns based on different aspects, such as persona type, adoptability, and effectiveness. This structured approach will help users quickly find the patterns that are most relevant to their roles and their needs, making the entire experience much more streamlined. Moreover, we are actively seeking new contributors to join us, and we believe that a diverse set of voices and perspectives will enrich our knowledge base and ensure that our patterns reflect a wide range of experiences. So, if anyone is interested, we'd love to hear from you. Chris Skipper: The Green Software Patterns Project is open source and community-driven. How can developers, organizations, and researchers contribute to expanding the catalog and improving the quality of the patterns?
Franziska Warncke: Yeah, the Green Software Patterns Project is indeed open source and community driven, and we welcome contributions from developers, organizations, and researchers to help expand our catalog and improve the quality of the patterns. We need people to review the existing patterns critically and provide feedback. This includes helping us categorize them for a specific persona, ensuring that each pattern is tailored to each of various roles in the software industry. Additionally, contributors can assist by adding more information and context to the patterns, making them more comprehensive and useful. Visuals are another key area where we need help. Creating clear and engaging visuals that illustrate how to implement these patterns can significantly enhance their usability. Therefore, we are looking for experts who can contribute their skills in design and visualization to make the patterns more accessible. So if you're interested, then we would love to have you on board. Thank you. Chris Skipper: Thanks to Franziska for those wonderful answers. So we've reached the end of the special backstage episode on the Green Software Patterns Project at the GSF. I hope you enjoyed the podcast. To listen to more podcasts about green software, please visit podcast.greensoftware.foundation. And we'll see you on the next episode. Bye for now. …
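The "cache static data" pattern Liya mentions in this episode is simple enough to show in a few lines. This is a generic sketch, not GSF code: `load_static_asset` is a hypothetical stand-in for an expensive fetch from a network, disk, or database.

```python
import functools

fetch_count = 0  # counts how many real fetches happen, for illustration

@functools.lru_cache(maxsize=None)
def load_static_asset(name: str) -> str:
    # Stand-in for an expensive fetch (network, disk, database).
    global fetch_count
    fetch_count += 1
    return f"contents of {name}"

# 100 requests for the same asset trigger only one real fetch;
# the other 99 are served from memory, saving the repeated work
# (and the energy it costs).
for _ in range(100):
    load_static_asset("logo.svg")
```

As the episode notes, the same small change often helps reliability and latency too, which is part of why these patterns are easy to adopt.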
The Week in Green Software: Sustainable AI Progress (50:27)
For this 100th episode of Environment Variables, guest host Anne Currie is joined by Holly Cummins, senior principal engineer at Red Hat, to discuss the intersection of AI, efficiency, and sustainable software practices. They explore the concept of "Lightswitch Ops"—designing systems that can easily be turned off and on to reduce waste—and the importance of eliminating zombie servers. They cover AI’s growing energy demands, the role of optimization in software sustainability, and Microsoft's new shift in cloud investments. They also touch on AI regulation and the evolving strategies for balancing performance, cost, and environmental impact in tech. Learn more about our people: Chris Adams: LinkedIn | GitHub | Website Holly Cummins: LinkedIn | GitHub | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: AI Action Summit: Two major AI initiatives launched | Computer Weekly [40:20] Microsoft reportedly cancels US data center leases amid oversupply concerns [44:31] Events: Data-driven grid decarbonization - Webinar | March 19, 2025 The First Eco-Label for Sustainable Software - Frankfurt am Main | March 27, 2025 Resources: LightSwitchOps Why Cloud Zombies Are Destroying the Planet and How You Can Stop Them | Holly Cummins Simon Willison’s Weblog [32:56] The Goal If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn ! TRANSCRIPT BELOW: Holly Cummins: Demand for AI is growing, demand for AI will grow indefinitely. But of course, that's not sustainable. Again, you know, it's not sustainable in terms of financially and so at some point there will be that correction. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne Currie: So hello and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software. Now, today you're not hearing the dulcet tones of your usual host, Chris Adams. I am a guest host on this, a frequent guest host, Anne Currie. And my guest today is somebody I've known for quite a few years and I'm really looking forward to chatting to, Holly. So do you want to introduce yourself, Holly? Holly Cummins: So I'm Holly Cummins. I work for Red Hat. My day job is that I'm a senior principal engineer and I'm helping to develop Quarkus, which is Java middleware. And I'm looking at the ecosystem of Quarkus, which sounds really sustainability oriented, but actually the day job aspect is I'm more looking at the contributors and, you know, the extensions and that kind of thing. But one of the other things that I do end up looking a lot at is the ecosystem aspect of Quarkus in terms of sustainability. Because Quarkus is an extremely efficient Java runtime. And so when I joined the team, one of the things we asked, well, one of the things I asked was: we know this is really efficient, but does that translate into an environmental, you know, benefit? Is it actually benefiting the ecosystem? You know, can we quantify it? And so we did that work and we were able to sort of validate our intuition that it did have a much lower carbon footprint, which was nice. But some of what we did actually surprised us as well, which was also good, because it's always good to be challenged in your assumptions.
And so now part of what I'm doing as well is sort of broadening that focus from, instead of measuring what we've done in the past, thinking about, well, what does a sustainable middleware architecture look like? What kind of things do we need to be providing? Anne Currie: Thank you very much indeed. That's a really good overview of what I really primarily want to be talking about today. We will be talking about a couple of articles as usual on AI, but really I want to be focused on what you're doing in your day job because I think it's really interesting and incredibly relevant. So, as I said, my name is Anne Currie. I am the CEO of a learning and development company called Strategically Green. We do workshops and training around building green software and changing your systems to align with renewables. But I'm also one of the authors of O'Reilly's new book, Building Green Software, and Holly was probably the most, the biggest single reviewer/contributor to that book, and it was in her best interest to do so because, we make, I make tons and tons of reference to a concept that you came up with. I'm very interested in the backstory to this concept, but perhaps you can tell me a little bit more about it because it is, this is something I've not said to you before, but it is, this comes up in review feedback, for me, for the book, more than any other concept in the book. Lightswitch Ops. People saying, "Oh, we've put in, we've started to do Lightswitch Ops." If anybody says "I've started to do" anything, it's always Lightswitch Ops. So tell us, what is Lightswitch Ops? Holly Cummins: So Lightswitch Ops, it's really, it's about architecting your systems so that they can tolerate being turned off and on, which sounds, you know, it sounds sort of obvious, but historically that's not how our systems have worked. And so the first step is architect your system so that they can tolerate being turned off and on. 
And then the next part is once you have that, actually turn them off and on. And, it sort of, it came about because I'm working on product development now, and I started my career as a performance engineer, but in between those two, I was a client facing consultant, which was incredibly interesting. And it was, I mean, there was, so many things that were interesting, but one of the things that I sort of kept seeing was, you know, you sort of work with clients and some of them you're like, "Oh wow, you're, you know, you're really at the top of your game" and some you think, "why are you doing this way when this is clearly, you know, counterproductive" or that kind of thing. And one of the things that I was really shocked by was how much waste there was just everywhere. And I would see things like organizations where they would be running a batch job and the batch job would only run at the weekends, but the systems that supported it would be up 24/7. Or sometimes we see the opposite as well, where it's a test system for manual testing and people are only in the office, you know, nine to five only in one geo and the systems are up 24 hours. And the reason for this, again, it's sort of, you know, comes back to that initial thing, it's partly that we just don't think about it and, you know, that we're all a little bit lazy, but it's also that many of us have had quite negative experiences of if you turn your computer off, it will never be the same when it comes back up. I mean, I still have this with my laptop, actually, you know, I'm really reluctant to turn it off. But now we have, with laptops, we do have the model where you can close the lid and it will go to sleep and you know that it's using very little energy, but then when you bring it back up in the morning, it's the same as it was without having to have the energy penalty of keeping it on overnight. 
And I think, when you sort of look at the model of how we treat our lights in our house, nobody has ever sort of left a room and said, "I could turn the light off, but if I turn the light off, will the light ever come back on in the same form again?" Right? Like we just don't do that. We have a great deal of confidence that it's reliable to turn a light off and on and that it's low friction to do it. And so we need to get to that point with our computer systems. And you can sort of roll with the analogy a bit more as well, which is in our houses, it tends to be quite a manual thing of turning the lights off and on. You know, I turn the light on when I need it. In institutional buildings, it's usually not a manual process to turn the lights off and on. Instead, what we end up with is some kind of automation. So, like, often there's a motion sensor. So, you know, I used to have it that if I would stay in our office late at night, at some point if you sat too still because you were coding and deep in thought, the lights around you would go off and then you'd have to, like, wave your arms to make the lights go back on. And it's that, you know, it's this sort of idea of like we can detect the traffic, we can detect the activity, and not waste the energy. And again, we can do exactly this with our computer systems. So we can have it so that it's really easy to turn them off and on. And then we can go one step further and we can automate it and we can say, let's script to turn things off at 5pm because we're only in one geo. And you know, if we turn them off at 5pm, then we're enforcing quite a strict work life balance. So... Anne Currie: Nice, nice work. Holly Cummins: Yeah. Sustainable. Sustainable pace. Yeah. Or we can do sort of, you know, more sophisticated things as well. Or we can say, okay, well, let's just look at the traffic and if there's no traffic to this, let's turn it off. 
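The automation Holly describes, scripting systems off out of hours unless they are still seeing traffic, could be sketched along these lines. This is a hypothetical illustration: the function, working hours, and thresholds are invented, and a real version would call your cloud provider's API to actually stop and start instances.

```python
from datetime import datetime, time

# Working hours for a single-geo team; anything outside this window
# is a candidate for shutdown (an illustrative assumption, not a rule).
WORK_START = time(8, 0)
WORK_END = time(17, 0)

def should_be_running(now: datetime, requests_last_hour: int, is_production: bool) -> bool:
    """Decide whether a non-production system needs to stay up."""
    if is_production:
        return True  # production stays on; this targets dev/test systems
    in_hours = WORK_START <= now.time() <= WORK_END and now.weekday() < 5
    # Keep it up if someone is actually using it, even out of hours.
    return in_hours or requests_last_hour > 0

# A 2am Saturday with no traffic: turn it off.
assert not should_be_running(datetime(2025, 3, 1, 2, 0), 0, False)
# A quiet Tuesday morning inside working hours: leave it on.
assert should_be_running(datetime(2025, 3, 4, 10, 0), 0, False)
```

A cron job running a check like this every hour gives you the "motion sensor" behaviour: idle systems go dark automatically, and anything with real traffic stays up.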
Anne Currie: Yeah, it is an interestingly simple concept because, when people come up with something like this, it's, in some ways, similar analogies, a light bulb moment of, you know, why don't people turn things off? Because, so, Holly is an unbelievably good public speaker. One of the best public speakers out there at the moment. And we first met because you came and gave talks in some tracks I was hosting on a variety of topics. Some on high performance code, code efficiency, some on being green. One of the stories you told was about your Lightswitch moment, the realization that actually this was a thing that needed to happen. And I thought it was fascinating. I've been in the tech industry for a long time, so I've worked with Java a lot over the years and many years ago. And one of the issues with Java in the old days was always that it was very hard to turn things off and turn them back on again. And that was fine in the old world, but you talked about how that was no longer fine. And that was an issue with the cloud because using the cloud well, turning things on and off, doing things like auto scaling, is utterly key to the idea of the cloud. And therefore it had to become part of Quarkus, part of the future of Java. Am I right in that understanding? Holly Cummins: Yeah, absolutely. And the cloud sort of plays into both parts of the story, actually. So definitely, the things that we need to be cloud native, like being able to support turning off and on again, are very well aligned to what you need to support Lightswitch Ops. And so there, with those two, we're pulling in the same direction. 
The needs of the cloud and the needs of sustainability are both driving us to make systems that, I just saw yesterday, sorry this is a minor digression, but I was looking something up, and we used to talk a lot about the Twelve-Factor App, and you know, at the time we started talking about Twelve-Factor Apps, those characteristics were not at all universal. And then someone came up with the term, the One-Factor App, which was the application that could just tolerate being turned off and on. And sometimes even that was like too much of a stretch. And so there's the state aspect to it, but then there's also the performance aspect of it and the timeliness aspect of it. And that's really what Quarkus has been looking at that if you want to have any kind of auto scaling or any kind of serverless architecture or anything like that, the way Java has historically worked, which is that it eats a lot of memory and it takes a long time to start up, just isn't going to work. And the sort of the thing that's interesting about that is quite often when we talk about optimizing things or becoming more efficient or becoming greener, it's all about the trade offs of like, you know, "oh, I could have the thing I really want, or I could save the world. I guess I should save the world." But sometimes what we can do is we can just find things that we were paying for, that we didn't even want anymore. And that's, I think, what Quarkus was able to do. Because a lot of the reason that Java has a big memory footprint and a lot of the reason that Java is slow to start up is it was designed for a different kind of ops. The cloud didn't exist. CI/CD didn't exist. DevOps didn't exist. And so the way you built your application was you knew you would get a release maybe once a year and deployment was like a really big deal. And you know, you'd all go out and you'd have a party after you successfully deployed because it was so challenging. 
And so you wanted to make sure that everything you did was to avoid having to do a deployment and to avoid having to talk to the ops team because they were scary. But of course, even though we had this model where releases happen very rarely, or the big releases happen very rarely, of course, the world still moves on, you know, people still had defects, people, so what you ended up with was something that was really much more optimized towards patching. So can we take the system and without actually taking, turning it off and on, because that's almost impossible, can we patch it? So everything was about trying to change the engine of the plane while the plane was flying, which is really clever engineering. If you can support that, you know, well done you. It's so dynamic. And so everything was optimized so that, you know, you could change your dependencies and things would keep working. And, you know, you could even change some fairly important characteristics of your dependencies and everything would sort of adjust and it would ripple back through the system. But because that dynamism was baked into every aspect of the architecture, it meant that everything just had a little bit of drag, and everything had a little bit of slowdown that came from that indirection. And then now you look at it in the cloud and you think, well, wait a minute. I don't need that. I don't need that indirection. I don't need to be able to patch because I have a CI/CD pipeline, and if I'm going into my production systems and SSHing in to change my binaries, something has gone horribly wrong with my process. And you know, I need to, I have all sorts of problems. So really what Quarkus was able to do was get rid of a whole bunch of reflection, get rid of a whole bunch of indirection, do more upfront at build time. And then that gives you much leaner behavior at runtime, which is what you want in a cloud environment. Anne Currie: Yeah. 
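The shift Holly describes, moving work from runtime to build or startup time, can be shown in miniature. This sketch is illustrative rather than actual Quarkus code: it contrasts searching for a handler dynamically on every request with a dispatch table resolved once up front, which is the same trade of dynamism for lean runtime behaviour.

```python
# Dynamic style: search for the handler on every request.
def handle_dynamic(handlers: list, path: str, payload: str) -> str:
    for h in handlers:
        if h["path"] == path:          # repeated per-request indirection
            return h["fn"](payload)
    raise KeyError(path)

# Build-time style: resolve everything once, then dispatch is a dict lookup.
def build_dispatch_table(handlers: list) -> dict:
    return {h["path"]: h["fn"] for h in handlers}

handlers = [
    {"path": "/greet", "fn": lambda p: f"hello {p}"},
    {"path": "/echo", "fn": lambda p: p},
]

table = build_dispatch_table(handlers)  # done once, "at build time"
assert table["/greet"]("world") == "hello world"
assert handle_dynamic(handlers, "/echo", "hi") == "hi"
```

The dynamic version can cope with handlers changing underneath it; the precomputed version cannot, but in a world of CI/CD and immutable deployments that flexibility is drag you no longer need.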
And what I love about this and love about the story of Quarkus is that it's aligned with something: non-functional requirements. It's an unbelievably boring name for something which is a real pain point for companies. But it's also, in many ways, the most important thing and the most difficult thing that we do. It's like being secure, being cost effective, being resilient. A lot of people say to me, well, you know, actually all you're doing with green is adding another non-functional requirement. We know those are terrible. But I can say, no, we need to not make it another non-functional requirement. It's just another motivator for doing the first three well, you know. Auto scaling is about resilience. It's about cost saving, and it's about being green. And being able to pave rather than patch, I think, was the term. It's more secure, you know. Actually patching is much less secure than repaving, taking everything down and bringing it back up. All the modern thinking about being more secure, being faster, being cheaper, being more resilient is aligned, or needs to be aligned, with being green, and it can be, and it should be, and it shouldn't just be about doing less. Holly Cummins: Absolutely. And, you know, especially for the security aspect, when you look at something like tree shaking, that gives you more performance by getting rid of the code that you weren't using. Of course, it makes you more secure as well because you get rid of all these code paths and all of these entry points and vulnerabilities that had no benefit to you, but were still a vulnerability. Anne Currie: Yeah, I mean, one of the things that you've talked about Lightswitch Ops being related to is, well, actually not Lightswitch Ops, but the thing that you developed before Lightswitch Ops: the concept of zombie servers. Tell us a little bit about that, because not only is that cost saving, it's a really big security improvement. 
So tell us about zombie, the precursor to Lightswitch Ops. Holly Cummins: Yeah, zombie servers are again, one of those things that I sort of, I noticed it when I was working with clients, but I also noticed it a lot in our own development practices that what we would do was we would have a project and we would fire up a server in great excitement and you know, we'd register something on the cloud or whatever. And then we'd get distracted and then, or then we, you know, sometimes we would develop it but fail to go to production. Sometimes we'd get distracted and not even develop it. And I looked and I think some of these costs became more visible and more obvious when we move to the cloud, because it used to be that when you would provision a server, once it was provisioned, you'd gone through all of the pain of provisioning it and it would just sit there and you would keep it in case you needed it. But with the cloud, all of a sudden, keeping it until you needed it had a really measurable cost. And I looked and I realized, you know, I was spending, well, I wasn't personally spending, I was costing my company thousands of pounds a month on these cloud servers that I'd provisioned and forgotten about. And then I looked at how Kubernetes, the sort of the Kubernetes servers were being used and some of the profiles of the Kubernetes servers. And I realized that, again, there's, each company would have many clusters. And I was thinking, are they really using all of those clusters all of the time? And so I started to look into it and then I realized that there had been a lot of research done on it and it was shocking. So again, you know, the sort of the, I have to say I didn't coin the term zombie servers. I talk about it a lot, but, there was a company called the Antithesis Institute. And what they did, although actually, see, now I'm struggling with the name of it because I always thought they were called the Antithesis Institute. 
And I think it's actually a one letter variant of that, which is much less obvious as a word, but much more distinctive. But I've, every time I talked about them, I mistyped it. And now I can't remember which one is the correct one, but in any case, it's something like the Antithesis Institute. And they did these surveys and they found that, it was something like a third of the servers that they looked at were doing no work at all. Or rather no, no useful work. So they're still consuming energy, but there's no work being done. And when they say no useful work as well, that sounds like a kind of low bar. Because when I think about my day job, quite a lot of it is doing work that isn't useful. But they had, you know, it wasn't like these servers were serving cat pictures or that kind of thing. You know, these servers were doing nothing at all. There was no traffic in, there was no traffic out. So you can really, you know, that's just right for automation to say, "well, wait a minute, if nothing's going in and nothing's coming out, we can shut this thing down." And then there was about a further third that had a utilization that was less than 5%. So again, you know, this thing, it's talking to the outside world every now and then, but barely. So again, you know, it's just right for a sort of a consolidation. But the, I mean, the interesting thing about zombies is as soon as you talk about it, usually, you know, someone in the audience, they'll turn a little bit green and they'll go, "Oh, I've just remembered that server that I provisioned." And sometimes, you know, I'm the one giving the talk and I'm like, Oh, while preparing this talk, I just realized I forgot a server, because it's so easy to do. And the way we're measured as well, and the way we measure our own productivity is we give a lot more value to creating than to cleaning up. Anne Currie: Yeah. 
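The survey figures quoted here, roughly a third of servers with no traffic at all and a further third under 5% utilization, suggest a simple triage rule. A sketch, with the thresholds taken from those figures and the classification labels invented for illustration:

```python
def classify_server(requests_in: int, requests_out: int, avg_cpu_percent: float) -> str:
    """Triage a server by traffic and utilization, per the survey thresholds."""
    if requests_in == 0 and requests_out == 0:
        return "zombie"        # no traffic at all: safe to automate a shutdown
    if avg_cpu_percent < 5.0:
        return "consolidate"   # barely used: a candidate for consolidation
    return "active"

assert classify_server(0, 0, 0.3) == "zombie"
assert classify_server(12, 9, 2.1) == "consolidate"
assert classify_server(500, 480, 40.0) == "active"
```

Fed from whatever monitoring you already have, a rule like this turns "nothing in, nothing out" into an automatic shutdown list rather than a quarterly embarrassment.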
And in some ways that makes sense because, you know, creating is about growth and cleaning up, you know, is about degrowth. It's like, you know, you want to tell the story of growth. But I've heard a couple of really interesting tales about zombie servers since you started, well, yeah, since you started talking about it; you may not have invented it, but you popularized it. One was from VMware, a cost saving thing. It's a story I tell all the time, about when they were moving data centers in Singapore, setting up a new data center in Singapore. They decided to do a review of all their machines to see what had to go across. And they realized that 66 percent of their machines did not need to be reproduced in the new data center. And that was VMware. People who are really good at running data centers. So imagine what that's like elsewhere. But moving data centers is a time when it often gets spotted. But I will say, a differently disturbing story from a company that wished to remain nameless. Although I don't think they need to, because I think it's just an absolutely bog standard thing. They were doing a kind of thriftathon style thing of reviewing their data center to see if there was stuff that they could save money on, and they found a machine that was running at 95, 100 percent CPU, and they thought, Oh my God, it's been hacked. It's been hacked. Somebody's mining Bitcoin on this. Or maybe it's attacking us. Who knows? And so they went and they did some searching around internally, and they found out that it was somebody who turned on a load test and then forgot to turn it off three years previously. And I would say that obviously that came up from the cost, but it also came up from the fact that machine could have been hacked. You know, it could have been mining Bitcoin. It could have been attacking them. It could have been doing anything. 
They hadn't noticed because it was a machine that no one was looking at. And I thought those were two excellent examples of the cost and the massive security hole that comes from machines that nobody is looking at anymore. So, you know, non-functional requirements, they're really important. And Holly Cummins: Yeah. Anne Currie: doing better on them is also green. And also, non-functional requirements are really closely tied together. Holly Cummins: Yeah. I mean, oh, I love both of those stories. And I've heard the VMware one before, but I hadn't heard the one about the hundred percent, the load test. That is fantastic. One of the reasons I like talking about zombies, and I think one of the reasons people like hearing about it, I mean, it's partly the saving the world. But also I think when we look at greenness and sustainability, some of it is not a very cheerful topic, but with zombie servers, almost always when you discover the cases of them, they are hilarious. I mean, they're awful, but they're hilarious. And you know, it's just this sort of stuff of, "how did this happen? How did we allow this to happen?" Sometimes it's so easy to do better. And the examples of doing bad are just something that we can all relate to. And at the same time, you know, you sort of think, oh, that shouldn't have happened. How did that happen? Anne Currie: But there's another thing I really like about zombie servers, and I think you've pointed this out yourself, and I plagiarized from your ideas like crazy in Building Green Software, which is one of the reasons why I got you to be a reviewer, so you could complain about it if you wanted to early on. Holly Cummins: It also means I would agree with you a lot. Yes. Oh, this is very sensible. Very sensible. Yes. 
Anne Currie: One of the things that constantly comes up when I'm talking to people about this, and when we were writing the book, and when we go out to conferences, is that people need a way in. And it's often that, you know, people think the way into building green software is to rewrite everything in C, and then they go, "well, I can't do that. So that's the end. That's the only way in, and I'm not going to be able to do it, so I can't do anything at all." Operations and zombie servers is a really good way in, because you can just do it. Instead of having a hackathon, you can just do a thriftathon, get everybody to find a little bot running that doesn't need to be running, and instantly halve your, you know, it's not uncommon for people to find ways to halve their carbon emissions and halve their hosting costs simultaneously in quite a short period of time, and it'd be the first thing they do. So I quite like it because it's the first thing they do. What do you think about that? Is it the low hanging fruit? Holly Cummins: Yeah, absolutely, I think, yeah, it's the low hanging fruit. It's easy, it's kind of entertaining because when you find the problems you can laugh at yourself, and there's, again, there's no downside and several upsides, you know. It's this double win of: I got rid of something I wasn't even using, I have more space in my closet, and I don't have to pay for it. Anne Currie: Yeah, I just read a book that I really should have read years and years ago, and I don't know why I didn't, because people have been telling me to read it for years, which was The Goal. It's not about tech, but it is about tech. It's kind of the book that was the precursor to The Phoenix Project, which I think a lot of people read. And it's all about TPS, the Toyota Production System. It's a kind of Americanized version of it, how the Toyota Production System should be brought to America. 
And it was written in the 80s, and it's all about work in progress and cleaning your environment and getting rid of stuff that gets in your way and just obscures everything, so you can't see what's going on. Effectively, it was a precursor to lean, which I think is really very well aligned. Green and lean, really well aligned. And it's something that we don't think about, that cleaning up waste just makes your life much better in ways that are hard to imagine until you've done it. And cleaning zombie servers up just makes your systems more secure, cheaper, more resilient, more everything. It's a really good thing to do. Holly Cummins: Yeah. And there's sort of another way that those align as well, which I think is interesting because I think it's not necessarily intuitive. Which is, sometimes when we talk about zombie servers and server waste, people's first response is, this is terrible. The way I'm going to solve it is I'm going to put barriers in place so that getting a server is harder. And that seems really intuitive, right? Because it's like, Oh yes, we need to solve it. But of course, it has the exact opposite effect. And it seems so counterintuitive, because it seems like if you have a choice between shutting the barn door before the horse has left and shutting the barn door after the horse has left, you should shut the barn door before the horse has left. But what happens is that if those barriers are in place, once people have a server, if they had to sweat blood to get that server, they are never giving it up. It doesn't matter how many thriftathons you do, they are going to cling to that server because it was so painful to get. So what you need to do is you need to just create these really sort of low friction systems where it's easy come, easy go. So it's really easy to get the hardware you need. 
And so you're really willing to give it up. And that kind of self service model, that kind of low friction, high automation model, is really well aligned again with lean. It's really well aligned with DevOps. It's really well aligned with cloud native. And so it has a whole bunch of benefits for us as users as well. If it's easier for me to get a server, that means I'm more likely to surrender it, but it also means I didn't have to suffer to get it, which is just a win for me personally. Anne Currie: It is. And there's something in the little bit at the end of The Goal which I thought was, my goodness, the most amazing, a bit of a lightswitch moment for me. It's talking about ideas about stuff that basically underpins the cloud, underpins modern computing, underpins factories and also warehouses, and because I worked for a long time in companies that had warehouses, you kind of see that there are enormous analogies. And it was talking about how a lot of the good modern practice in this has been known since the 50s. And even in places like Japan, where it's really well known, I mean, the Toyota Production System is so well managed, almost everybody knows it, and every company in Japan wants to be operating in that way. Still, the penetration of companies that actually achieve it is very low; it's only like 20%. I thought, that's interesting, why is that? And then I realised that you'd been kind of hinting why it was throughout. And if you look on the Toyota website, they're quite clear about it. They say the Toyota Production System is all about trial and error. You can't read a book that tells you what we did and then say, "oh well, if I do that, then I will achieve the result." They say it's all about a culture of trial and error. 
And then you achieve, then you build something which will be influenced by what we do, and influenced by what other people do, and influenced by a lot of these ideas. But fundamentally, it has to be unique to you, because anything complicated is context-specific. Therefore, you are going to have to learn from it. But one of the key things for trial and error is not making it so hard to try something, and so painful if you make an error, that you never do any trial and error. And I think that's very aligned with what you were saying: if you make it too hard, then nobody does any trial and error. Holly Cummins: Yeah. Absolutely. Anne Currie: I wrote a new version of it, called The Cloud Native Attitude, which was all about, you know, what are people doing? You know, what's the UK enterprise version of the TPS system, and what are the fundamentals, and what are people actually doing? And what I realized was that everybody was doing things that were quite different, that were specific to them, that used some of the same building blocks, and were quite often in the cloud because that reduced their bottlenecks over getting hardware. Because that's a common bottleneck for everybody. So they wanted to reduce the bottleneck there of getting access to hardware. But what they were actually doing was built trial-and-error-wise, depending on their own specific context. And every company is different and has a different context. And, yeah, so that is why failure can't be a four letter word. Holly Cummins: Yeah. Technically, it's a seven letter word if you say failure, but... Anne Currie: And it should be treated that way. Yeah. I'm very aware that actually our brief for this was to talk about three articles on AI. 
Holly Cummins: I have to say, I did have a bit of a panic when I was reviewing the articles, because they were very deep into the intricacies of, you know, AI policy and AI governance, which is not my specialty area. Anne Currie: No, neither is it mine. And when I was reading them, I thought quite a lot about what we've just talked about. It is a new area. As far as AI is concerned, I love AI. I have no problem with AI. I think it's fantastic. It's amazing what it can produce. And if you are not playing around on the free version of ChatGPT, then you are not keeping on top of things, because it changes all the time. And it's very like managing somebody. You get out of it what you put in. If you ask it a couple of cursory questions, you'll get a couple of cursory answers. If you, you know, leaning back on Toyota again, you almost need to "five whys" it. You need to go, no, but why? Go a little bit deeper. Now go a little bit deeper. Now go a little bit deeper. And then you'll notice that the answers get better and better, like a person, better and better. It really is worth playing around with it. Holly Cummins: Just on that, I was reading an article from Simon Willison this morning, and he was talking about a similar idea: that you have to put a lot into it to get good results. He was talking about it for coding assistants, that getting good outputs is not trivial. And a lot of people will try it and then be disappointed by their first result and go, "Oh, well, it's terrible," and dismiss it. But he was saying that one of the mistakes that people make is to anthropomorphize it. 
And so when they see it making mistakes that a human would never make, they go, "well, this is terrible," and they don't think about it in terms of, well, this has some weaknesses and this has some strengths, and they're not the same weaknesses and strengths as a person would have. And so I can't just see this one thing that a human would never do and then dismiss it. You know, you need to sort of adapt how you use it for its strengths and weaknesses, which I thought was really interesting. It's so tempting to anthropomorphize it, because it is so human-ish in its outputs, because it's trained on human inputs, but it does not have the same strengths and weaknesses as a person. Anne Currie: Well, I would say the thing is, it can be used in lots of different ways. There are ways you can use it in which, actually, it can react like a person. I mean, if you ask it to do creative things, it's quite human like. And it will come up with things, and it will blag, and, certainly for certain creative things, you have to go, "is that true? Can you double check that? I appreciate your enthusiasm there, but it might not be right. Can you just double check that?" In the same way that you would do with a very enthusiastic graduate. And you wouldn't have fired them because they said something that seemed plausible, well, unless you'd said, do not tell me anything that seems plausible. Because to a certain extent, they're always enthused. And that's where ideas come from. Stretching what's known, saying, well, you know, I don't know if this is happening, but this could happen. You have to be a little bit out there to generate new ideas and have new thoughts. 
Anne Currie: I heard a very interesting podcast yesterday where one of the Reids, I can never remember if it was Reed Hastings or Reid Hoffman, was talking about AI energy use. And he was saying: we're not stupid. There are basically two things that we know are coming. One is AI and one is climate change. We're not going to try and create an AI industry that requires the fossil fuel industry, because that would be crazy talk. We do all need to remember that climate change is coming, and if you are building an AI system that relies on fossil fuels, then you are an idiot, because the big players are not. I love looking at Our World in Data and seeing what is growing in the world. If you ever feel depressed about climate change, a really interesting chart to look at is the global growth in solar-generated power. It's going up like it's not even exponential; it looks vertically asymptotic. It's super-exponential, going faster than exponential, and nothing else is developing that way. Except maybe AI, but AI from a lower point. And then with AI you've got things like DeepSeek coming out of left field and saying, "Do you know, you just didn't need to write this so inefficiently. You could do this on a lot less, and it'd be a lot cheaper, and you could do things on the edge that you didn't know you could do." So, yeah, I'm not too worried about AI. DeepSeek surprised me.

Holly Cummins: Yeah, I agree. We have been seeing this enormous rise in energy consumption, but that's not sustainable, and it's not just unsustainable in terms of climate; it's also not sustainable financially. And financial corrections tend to come before climate corrections. So what we're seeing now is architectures that are designed to reduce energy costs, because they need to reduce the actual financial costs. We get things like DeepSeek, where there's a fundamental efficiency in the architecture of the model. But we're also seeing other things. Up until maybe a year ago, the way it worked was that the bigger the model, the better the results, absolutely. Now we're starting to see cases where the model gets bigger and the results get worse. You see this with RAG systems as well: when you do your RAG experiment and you feed in just two pages of data, it works fantastically well, and you go, "okay, I'm going to proceed." Then you feed in 2,000 pages of data and your RAG suddenly isn't really working and isn't giving you correct responses anymore. So I think we're seeing an architectural shift away from the really big monolithic models to more orchestrated models. Which is kind of bad in a way, right? Because it means we as engineers have to do more work. We can't just have one big monolith and say, "solve everything." But on the other hand, what do engineers love? We love engineering. So it means there are opportunities for us. A pattern that we're seeing a lot now is that you have an orchestrator model that takes the query in and triages it. It says, "Is this something that should go out to the web, because actually that's the best place for this news topic? Or is this something that should go to my RAG model?" And so it'll choose the right model. Those models are smaller, and so they have a much more limited scope. But within that scope, they can give you much higher quality answers than the huge supermodel, and they cost much less to run.
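The triage pattern Holly describes can be sketched in a few lines. This is only an illustration: the `route_query` function, the keyword rules, and the backend labels are all invented for the example, and a real orchestrator would typically use a small classifier model rather than keyword matching.

```python
# Minimal sketch of an orchestrator that triages queries to smaller,
# scoped backends instead of one large monolithic model. All names
# here (route_query, the keyword rules, the labels) are hypothetical.

def route_query(query: str) -> str:
    """Return which backend should handle the query."""
    q = query.lower()
    if any(word in q for word in ("today", "news", "latest")):
        return "web_search"   # fresh information: go out to the web
    if any(word in q for word in ("policy", "handbook", "internal")):
        return "rag"          # private documents: retrieval-augmented model
    return "small_llm"        # everything else: a small general model

print(route_query("What is the latest news on the AI summit?"))  # web_search
print(route_query("What does our travel policy say?"))           # rag
print(route_query("Explain Jevons paradox"))                     # small_llm
```

The point of the design is that each downstream backend only has to be good within its narrow scope, which is what lets it be smaller and cheaper than one model that must handle everything.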
Holly Cummins: So you end up with a system, and again it's about the double win: a system which maybe took a little bit more work to architect, but gives you better answers for a lower cost.

Anne Currie: That is really interesting, and more aligned as well with how power is being developed, potentially. You really want to be doing more stuff at the edge, and you want people to be doing stuff at home on their own devices, rather than always having to go to, as you say, supermodels. Supermodels are bad. We all disapprove of supermodels.

Holly Cummins: Yeah. And that aligns with some of the privacy concerns as well: people want to be doing it at home, and certainly organizations want to be keeping their data in house. So they need the more organization-local model to keep their dirty secrets in house.

Anne Currie: Well, it is true. It is very hard to keep things secure, and sometimes you do want to keep some of your data in house; you don't necessarily even want to stick it on Amazon if you can avoid it. But yes, that's been a really interesting discussion, and we have completely gone off topic; we've hardly talked at all about the AI regulation. I think we both agree that with AI regulation, it's quite soon to be doing it. It's interesting. I can see why the Americans take a completely different approach to the EU. I did do some lecturing in AI ethics and legalities, and American laws do tend to be: when something goes wrong, get your pantsuit on and fix it. EU laws tend to be about: don't even do it in the first place. As you said before, close the door before the horse has bolted; American law is about bringing the horse back. But in some ways, that exemplifies why America grows much faster than Europe does.

Holly Cummins: Yeah. When I was looking at some of the announcements that came out of the AI summit, I found I have really mixed feelings about it, because I generally feel that regulation is good, but I also agree with you that it can have a stifling effect on growth. One thing that I think is fairly clearly positive, and that did seem to be emphasized in the announcements, is the open source aspect. We have open source models now, but they're not as open source as open source software in terms of how reproducible they are and how accessible their innards are. The way the AI summit is creating these bodies, these public-private partnerships, isn't anything new, but we're seeing quite a few governments coming together; the current AI announcement, I think, had nine governments and dozens of companies. It reminded me a little bit of the birth of radio. We had this resource, the airwaves, the frequencies, that nobody had cared about. Then all of a sudden it was quite valuable, and there was potentially a wild west of who could take it and exploit it commercially. And then governments stepped in and said, "actually, no, this is a resource that belongs to all of us, and so it needs to be managed": who has access to it, and who can just grab it. And I feel a bit like, even though in a technical sense the data all around us isn't all of ours, a lot of it is copyrighted and that kind of thing, if you look at the aggregate of all the data that humanity has produced, that is a collective asset. And so how it gets used should be for a collective benefit, and regulation, making sure that it's not just one or two organizations that have the technical potential to leverage that data, is a collectively good thing.

Anne Currie: Especially at the moment, we don't want everything to be happening in the US, because maybe the US is not the friendly partner that we always thought it would be. It's diversity...

Holly Cummins: Diversity is good. Diversity of geographic interests.

Anne Currie: Indeed. But it is early days. I'm not an anti-AI person by any stretch. In fact, I love AI. I think it really is an amazing thing. We just need to align it with the interests of the rest of humanity.

Holly Cummins: Yes.

Anne Currie: But it is interesting. In terms of being green, the big players are not idiots; they know that things need to be aligned. But in terms of data, they certainly will be acting in their own best interests. Very interesting. So, we are now coming to time. We've done quite a lot; there won't be much to edit out from what we've talked about today.

Holly Cummins: Shall we talk about the Microsoft article, though? Because I thought that was really interesting.

Anne Currie: Oh yeah, go for it. Yes.

Holly Cummins: So one of the other articles that we have said that Microsoft was reducing its investment in data centers, which I was quite shocked to read, because it's the exact opposite of all the news articles we normally see, including one I saw this morning saying that the big three are looking at increasing their investment in nuclear. But I thought it was interesting, because I think we always tend to extrapolate from the current state and extrapolate it indefinitely forward.
Holly Cummins: So we say demand for AI is growing, therefore demand for AI will grow indefinitely. But of course that's not sustainable, financially among other things, and so at some point there will be a correction. It seems like Microsoft has perhaps looked at how much they've invested in data centers and said, "oh, perhaps this was a little bit much; let's roll back that investment just a little, because now we have an overcapacity of data centers."

Anne Currie: Well, I wonder how much of an effect DeepSeek had on that, with everybody looking at it. This is a public story, so I can tell it, because I have it in the book: during the pandemic, the Microsoft Teams folks looked at what they were doing and asked, "could this be more efficient?" And the answer was yes, because they had put really no effort whatsoever into making what they were doing efficient. Really basic efficiency stuff they hadn't done. So there was tons of waste in that system. And the thing is, when you gallop ahead to do things, you do end up with a lot of waste. DeepSeek was a great example of: this AI thing? We can do it on much cheaper chips and far fewer machines. You don't have to do it that way. So I'm hoping this means that Microsoft have decided to start investing in efficiency. It's a shame, because they used to have an amazing team who were fantastic at this kind of stuff. Holly spoke at a conference I did last year about code efficiency, Quarkus being a really good example of a more efficient platform for running Java. The first person I had on used to work for Azure, and he was probably the world's expert in actual practical code efficiency. He got made redundant, because Microsoft at the time were not interested in efficiency. "Who cares? Pfft, go on, out." But he's now working at NVIDIA, doing all the efficiency stuff there, because some people are paying attention. I think the lesson there is that maybe Microsoft were not paying that much attention to efficiency, to the idea that actually you don't need ten data centers. It can be a very difficult change to make everything really efficient, but quite often there's a lot of low-hanging fruit in efficiency.

Holly Cummins: Absolutely. And you need to remember to do it as well, because I think it probably is a reasonable and correct flow to say: innovate first, optimize second. You don't have to be looking at efficiency as you're innovating, because that stifles the innovation, and you might be optimizing something that never becomes anything. But you have to remember, once you've got it out there, to go back and say, "Oh, look at all of this low-hanging fruit. Look how much waste there is here. Let's sort it out now that we've proven it's a success."

Anne Currie: Yes. "Don't prematurely optimize" does not mean "never optimize."

Holly Cummins: Yes.

Anne Currie: My strong suspicion is that Microsoft are waking up to that a little bit. The thing is, if you have limitless money and you just throw a whole load of money at things, it is hard to go and optimize. It's a bit like that whole thing of going in and turning off the zombie machines: you have to choose to do it. If you have limitless money, you never do it, because it's a bit boring; it's not as exciting as a new thing. Limitless money has its downsides as well as its ups.

Holly Cummins: Yes. Who knew?

Anne Currie: So I think we are at the end of our time. Is there anything else you want to say? It was an excellent hour.

Holly Cummins: Nope. This has been absolutely fantastic chatting to you, Anne.

Anne Currie: Excellent. It's been very good talking to you, as always. My final thing: if anybody listening to this podcast has not read Building Green Software from O'Reilly, you absolutely should, because a lot of what we've just talked about is covered in the book. Reviewed by Holly.

Holly Cummins: I can recommend the book.

Anne Currie: I think your name is somewhere on the book cover, with some nice thing you said about it. So thank you very much indeed. And just a reminder to everybody: links to everything we've talked about are in the show notes at the bottom of the episode. I will see you again soon on the Environment Variables podcast. Goodbye.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Environment Variables
AI Energy Measurement for Beginners (56:55)
Host Chris Adams is joined by Charles Tripp and Dawn Nafus to explore the complexities of measuring AI's environmental impact from a novice's starting point. They discuss their research paper, A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning, breaking down key insights on how energy efficiency in AI systems is often misunderstood. They discuss practical strategies for optimizing energy use, the challenges of accurate measurement, and the broader implications of AI's energy demands. They also highlight initiatives like Hugging Face's Energy Score Alliance, discuss how transparency and better metrics can drive more sustainable AI development, and how they both have a commonality with eagles!

Learn more about our people:
- Chris Adams: LinkedIn | GitHub | Website
- Dawn Nafus: LinkedIn
- Charles Tripp: LinkedIn

Find out more about the GSF:
- The Green Software Foundation Website
- Sign up to the Green Software Foundation Newsletter

News:
- The paper discussed: A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning [01:21]
- Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations [13:26]
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate | Luccioni et al. [45:46]
- Will new models like DeepSeek reduce the direct environmental footprint of AI? | Chris Adams [46:06]
- Frugal AI Challenge [49:02]
- Within Bounds: Limiting AI's environmental impact [50:26]

Events:
- NREL Partner Forum Agenda | 12-13 May 2025

Resources:
- Report: Thinking about using AI? - Green Web Foundation | Green Web Foundation [04:06]
- Responsible AI | Intel [05:18]
- AIEnergyScore (AI Energy Score) | Hugging Face [46:39]
- AI Energy Score [46:57]
- AI Energy Score - Submission Portal - a Hugging Face Space by AIEnergyScore [48:23]
- AI Energy Score - GitHub [48:43]
- Digitalisation and the Rebound Effect - by Vlad Coroama (ICT4S School 2021) [51:11]
- The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks
- BUTTER-E - Energy Consumption Data for the BUTTER Empirical Deep Learning Dataset [51:44]
- OEDI: BUTTER - Empirical Deep Learning Dataset [51:49]
- GitHub - NREL/BUTTER-Better-Understanding-of-Training-Topologies-through-Empirical-Results
- Bayesian State-Space Modeling Framework for Understanding and Predicting Golden Eagle Movements Using Telemetry Data (Conference) | OSTI.GOV [52:26]
- Stochastic agent-based model for predicting turbine-scale raptor movements during updraft-subsidized directional flights - ScienceDirect [52:46]
- Stochastic Soaring Raptor Simulator [53:58]
- NREL HPC Eagle Jobs Data [55:02]
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
- AIAAIC | The independent, open, public interest resource detailing incidents and controversies driven by and relating to AI, algorithms and automation

If you enjoyed this episode then please either:
- Follow, rate, and review on Apple Podcasts
- Follow and rate on Spotify
- Watch our videos on The Green Software Foundation YouTube Channel!
- Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Charles Tripp: But now it's starting to be like, well, we can't build that data center, because we can't get the energy to it that we need to do the things we want to do with it. We haven't taken that incremental cost into account over time; we just kind of ignored it. And now we hit the barrier, right?

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation.
Chris Adams: In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. If you follow a strict media diet, switch off the Wi-Fi in your house, and throw your phone into the ocean, you might be able to avoid the constant stream of stories about AI in the tech industry. For the rest of us, though, it's basically unavoidable. So having an understanding of the environmental impact of AI is increasingly important if you want to be a responsible practitioner navigating the world of AI, generative AI, machine learning models, DeepSeek, and the rest. Earlier this year, I had a paper shared with me with the intriguing title A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning, and it turned out to be one of the most useful resources I've since come across for making sense of the environmental footprint of AI. So I was over the moon when I found out that two of the authors were both willing and able to come on to discuss this subject today. Joining me today are Dawn Nafus and Charles Tripp, who worked on the paper and did all this research. And instead of me introducing them, well, they're right here; I might as well let them do the honors themselves. Working in alphabetical order, Charles, I think you're slightly ahead of Dawn, so can I give you the room to introduce yourself?

Charles Tripp: Sure. I'm a machine learning researcher and algorithms researcher, and I've been programming pretty much my whole life, since I was a little kid; I love computers. I researched machine learning, and reinforcement learning in particular, at Stanford, and started my own company, but kind of got burnt out on it. Then I went to the National Renewable Energy Lab, where I applied machine learning techniques to energy efficiency and renewable energy problems. While I was there, I started to realize that computing energy efficiency was an increasingly important area of study in its own right, so I had the opportunity to lead an effort there to create a program of research around that topic. It was through that work that I started working on this paper and made these connections with Dawn. I worked there for six years and just recently changed jobs to be a machine learning engineer at Zazzle, and I'm continuing to do this research.

Chris Adams: Brilliant. Thank you, Charles. Okay, so that's the NREL that some people refer to.

Charles Tripp: That's right. It's one of the national labs.

Chris Adams: Okay. Brilliant. And Dawn, I should give you the space to introduce yourself. Welcome back again.

Dawn Nafus: Thank you. Great to be here. My name is Dawn Nafus. I'm a principal engineer now in Intel Labs. I also run the Socio-Technical Systems Lab, and I sit on Intel's Responsible AI Advisory Council, where we look after what kinds of machine learning tools and products we want to put out the door.

Chris Adams: Brilliant, thank you, Dawn. And if you're new to this podcast: as I mentioned at the beginning, my name is Chris Adams. I work at the Green Web Foundation, where I'm the director of technology and policy. I'm one of the authors of a report all about the environmental impact of AI last year, so I have some background on this, and I also work as the policy chair of the Green Software Foundation's Policy Working Group. We'll do our best to make sure we link to every single paper and project mentioned, so if there are any particular things you find interesting, please do look for the show notes. Okay, shall we start? I think you're both sitting comfortably, so I'll begin. Dawn, I'm really glad you had the chance both to work on this paper and to share it with me in the first place, and I can tell from reading it that there was quite an effort to do all the research for this. So can I ask: what was the motivation for doing this in the first place? And are there any particular people you feel really should read it?

Dawn Nafus: Yeah, absolutely. We primarily wrote this for ourselves, in a way, and I'll explain what I mean by that. Oddly, it actually started life in my role in Responsible AI, where I had recently advocated that Intel should adopt a Protect the Environment principle alongside our suite of other Responsible AI principles: bias and inclusion, transparency, human oversight, all the rest of it. And the first thing that comes up when you advocate for a principle, and they did actually implement it, is "what are you going to do about it?" So we had a lot of conversation about exactly that, and really started to hone in on energy transparency, in part because, from a governance perspective, that's an easy thing to at least conceptualize: you can get a number.

Chris Adams: Mmm.

Dawn Nafus: It's the place where people's heads first go to, and of course it's a very large part of the problem in the first place, and something that you can actually control at a development level. But once we started poking at it, it was: what do we actually mean by measuring? And for what? And for whom?
Dawn Nafus: So as an example, if we measured, say, the last training run, that'll give you a nice guesstimate for your next training run, but that's not a carbon footprint. A footprint is everything that you've done before that, which folks might not have kept track of. So we were really starting to wrestle with this. And in parallel, in Labs, we were doing some socio-technical work on carbon awareness, and there too we had to start with measuring; you had to start somewhere. So that's exactly what the team did. And they found, interestingly, or painfully, depending on your point of view: look, this stuff ain't so simple. If what you're doing is running a giant training run, you stick CodeCarbon in, or whatever it is, and sure, you can absolutely get a reasonable number. If you're trying to do something a little bit more granular, a little bit trickier, it turns out you actually have to know what you're looking at inside a data center, and frankly, we didn't, as machine learning people primarily. So we hit a lot of barriers, and what we wanted to do was say: okay, there are plenty of other people who are going to find the same stuff we did, and they shouldn't have to find out the hard way. So that was the motivation.

Chris Adams: Well, I'm glad that you did, because this is what we found as well when we looked into this: it looks simple on the outside, and then it feels a bit like a fractal of complexity, with various layers that you need to be thinking about. That's one thing I really appreciated in the paper, that this was broken out like that, so you at least have a model to think about it. And Charles, maybe this is one thing I can hand over to you, because I spoke about this kind of hierarchy of things you might do: there's stuff you might do at a data facility level, right down to a node level, for example. Can you take me through some of the ideas there? For people who haven't read the paper yet, that seemed to be one of the key ideas behind it: there are different places where you might make an intervention, and that's a key thing to take away if you're trying to interrogate this for the first time.

Charles Tripp: Yeah, I think it's both interventions and measurement, or really more estimation, at any level. And it also depends on your goals and perspective. If you are operating a data center, you're probably concerned with the entire data center: the cooling systems, the idle power draw, converting power to different levels, transformer efficiency, things like that, maybe even the transmission line losses. And you may not care too much about the code level, so the types of measurements you might take there, or estimates you might make, are going to be different; they're going to be at the system level. How much is my cooling system using in different operating conditions, different environmental conditions? From a user's perspective, you might care a lot more about how much energy, how much carbon, this job is using. That's going to depend on those data center variables, but there's also a degree of: well, the data center is going to be running whether or not I run my job, so I really care about my job's impact more. And then I might care about much shorter-term, more local estimates: ones that might come from measuring the power of the nodes I'm running on, which was what we did at NREL, or much higher frequency, but less accurate, measurements that come from the hardware itself. Most modern computing hardware has a way to get hardware estimates of the current power consumption, and you can log those. And there are difficulties once you start doing that: the measurement itself can cause energy consumption, and it can potentially interfere with your software, causing it to run more slowly and potentially use more energy. So there are difficulties at that level. But there's a whole suite of tools that are appropriate for different uses and purposes. Measuring the power at the wall going into the data center may be useful at the data center, or multiple data center, level. It still doesn't tell you the whole story: the losses in the transmission lines, and where that power came from, are still not accounted for. But it also doesn't give you a sense of what happens when I take interventions at the user level; it's very hard to see that from that high level, because there are many things running on the system in different conditions. From the user's point of view, they might only care about this one key piece of their software, the kernel of this deep learning network: how much energy is that taking? How much additional energy is that taking? Very different measurements, and interventions, are appropriate for that: optimizing a little piece of code, versus maybe we need to change the way the cooling system works on the whole data center, or the way that we schedule jobs. The paper goes through many of these levels of granularity.
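As a concrete illustration of the node-level logging Charles mentions: once you have periodic power readings, an energy estimate is just the integral of power over time. The sketch below is a toy, not NREL's tooling; the sample values are invented, and in practice the readings would come from a hardware counter such as RAPL or NVML, sampled often enough not to miss spikes but not so often that the logging itself perturbs the run, which is exactly the trade-off described above.

```python
# Sketch: turn periodic (time, power) samples into an energy estimate
# by trapezoidal integration. The samples here are made-up placeholders
# standing in for readings from a hardware power counter.

samples = [(0.0, 210.0), (1.0, 340.0), (2.0, 350.0), (3.0, 220.0)]  # (seconds, watts)

def energy_joules(samples):
    """Trapezoidal integration of (time, power) samples into joules."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += (p0 + p1) / 2.0 * (t1 - t0)
    return total

print(f"{energy_joules(samples):.0f} J over {samples[-1][0]:.0f} s")  # 905 J over 3 s
```

Note that a coarser sampling interval would have missed the brief spike to 350 W entirely, which is why sampling frequency matters for accuracy.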
Chris Adams: Yeah, this is one thing that really stood out to me. It started at the facility level, looking at an entire building, with things like the power coming into the whole facility. Within that facility there might be one or more data centers, then you go down to the rack level, then to the node level, and then even all the way down to a particularly tight loop, or the equivalent. And when you're looking at things like this, there are questions about what happens if you make something particularly efficient at, say, the bottom, node, level: that doesn't necessarily have an impact higher up, because that capacity might just be reallocated to someone else, or there might be a certain minimum amount of power draw that you aren't able to affect much. These are some of the things I really appreciated having broken out, because one thing that seemed counterintuitive when I was looking at this was that things you might do at one level can actually hinder steps further down, and vice versa.

Charles Tripp: Yeah, that's right. I think there are two important findings, battle scars that we got from doing these measurements. One dataset we produced is called BUTTER-E, which is a really large-scale measurement of the energy consumption of training and testing neural networks, and of how the architecture impacts it, and we were trying to get reasonable measurements while doing this. One of the difficulties is that comparing measurements between runs on different systems, even if they're identically configured, can be tricky, because differences between systems, manufacturing variances, how warm a system is at the time, anything happening in the background or over the network, anything just a little different about the environment, can have real, measurable impacts on the energy consumed. So to compare energy consumption between runs on different nodes, even with identical configurations, we had to account for biases: oh, this node draws a little bit more power than this one at idle, and we have to adjust for that in order to make a clear comparison of what the difference was. This problem gets bigger when you have different system configurations, or even the same configuration running in a totally different data center. So that was one tricky finding. Another one, like you mentioned, is that the overall system utilization, and how it's affected by a particular job running, varies a lot with what the other users of the system are doing and how the system is scheduled. So you can definitely get into situations where, yes, I reduced my energy consumption, but that energy is just going to be used some other time, especially if the savings I get come from shortening the amount of time I'm using a resource that someone else then uses. It does mean that the computing is being done more efficiently: if everyone does that, then more computing can be done within the same amount of energy. But it's hard to quantify what my impact is. It's hard to say, right?
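The per-node idle adjustment Charles describes can be sketched as a simple baseline subtraction before comparing runs. Everything in this sketch is illustrative: the function name, node labels, and wattages are invented, and real baselines would come from measuring each node at idle.

```python
# Sketch of per-node idle-bias correction: subtract each node's own
# idle draw before comparing runs, so the comparison reflects the job,
# not which physical machine happened to host it. Numbers are made up.

idle_watts = {"node_a": 95.0, "node_b": 102.0}  # measured at idle, per node

def job_energy(node, measured_joules, duration_s):
    """Energy attributable to the job above the node's idle baseline."""
    return measured_joules - idle_watts[node] * duration_s

# The same job on two nodes: the raw meter readings disagree,
# but the corrected values line up.
corrected_a = job_energy("node_a", 150_000.0, 600)  # 150000 - 95*600  = 93000 J
corrected_b = job_energy("node_b", 154_200.0, 600)  # 154200 - 102*600 = 93000 J
print(corrected_a, corrected_b)
```

The raw readings differ by over 4 kJ purely because of the idle offset; without the correction, swapping the nodes would swap the apparent "winner," which is exactly the failure mode described above.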
Chris Adams: I see, yeah. And Dawn, I can see you nodding, so I want you to come in now. Dawn Nafus: If I can jump in a bit, I mean, I think that speaks to one of the things we're trying to bring out, or maybe not literally bring out, but make possible. Those things could actually be better aligned in a certain way, right? For example, when there is idle time, there are things that data center operators can do to reduce it: you can bring things into lower power states, all the rest of it, right? So in a way the developer can't control it, but if they don't actually know that's going on, and it's just, well, it's there anyway, there's nothing for me to do, right, that's also a problem. So in a way, you've got two different kinds of actors looking at it from very different perspectives. And the clearer we can get about roles and responsibilities, the more you can start to do things like reduce your power when things are idling. Yes, you have that problem of somebody else jumping in. But Charles, I think as your work shows, there's still some idling going on, even though you wouldn't think so, so maybe you could talk a little bit about that. Charles Tripp: Yeah, so one really interesting thing that I didn't expect going into doing these measurements and this type of analysis was, well, first, I thought, "oh great, we can just measure the power on each node, run things and compare them." And we ran into problems immediately. You couldn't compare the energy consumption from two identically configured systems directly, especially if you're collecting a lot of data, because one is just going to use slightly more than the other because of the different variables I mentioned. And then when you compare them, you're like, well, that run used way more energy, but it's not because of anything about how the job was configured.
It's just that that system used a little bit more. So if I switched them, I'd get the opposite result. So that was one thing. But then, as we got into it, once we'd figured out a way to account for these variations, we wanted to see what the impact is of running different software with different configurations, especially neural networks with different configurations, on energy consumption. Our initial hypothesis was that it was based mainly on the size of the neural network: how many parameters, how many calculations, these sorts of things. And if you look at the research, a lot of the research out there about making neural networks, and algorithms in general, more efficient focuses on how many operations, how many flops something takes, you know? And look, we reduced that by a huge amount, so that means we get the same energy consumption reductions. We kind of thought that was probably true for the most part. But as we took measurements, we found that had almost no connection to how much energy was consumed. And the reason was that the amount of energy consumed had way more to do with how much data was moved around on the computer. How much data was loaded from the network? How much data was loaded from disk? How much data was loaded from disk into memory, into GPU RAM, into the different caching levels, and even the registers? If we computed how much data got moved in and out of, say, level two cache on the CPU, we could see that had a huge correlation, almost a direct correlation, with energy consumption. Not the number of calculations. Now, you could get into a situation where basically no data is leaving cache and I'm doing a ton of computing on that data. In that case the number of calculations probably does matter, but in most cases, especially in deep learning, it has almost no connection; it's the amount of data moved.
So then we thought, okay, well, it's the amount of data moved. It's the data moving; the data movement has a certain cost. But then we looked deeper, and we saw that actually the amount of data moved is not really what's causing the energy to be consumed. It's the stalls while the system is waiting to load the data. It's waiting for the data to come from, you know, system memory into level three cache; it needs to do some calculations on that data, so it's sitting there waiting. It's that idle power draw. It could be for a millisecond or even a nanosecond, right? But it adds up if you have billions of accesses. Each of those little stalls is drawing some power, and it adds up to quite a significant amount of power. So we found that the primary driver of energy consumption, by far, in what we were studying in deep learning, was the idle power draw while waiting for data to move around the system. And this was really surprising, because we started with the number of calculations, which turns out to be almost irrelevant. And then we thought, well, is it the amount of data moved around? It's not quite that either; the data movement does cause the stalls whenever I need to access the data, but it's really that idle power draw. And I think that's probably true for a lot of software. Chris Adams: Yes, I think that does sound about right. I'm just going to see if I follow that, because I think there were a few key ideas there, and if you aren't familiar with how computers are designed they might not be obvious, so I'll try to paraphrase. We've had this idea that the main thing is the number of calculations being done. That's what we thought was the key idea. But, Charles Tripp: How much work, you know. Chris Adams: Yeah, exactly.
And what we know is that inside a computer you have multiple layers of, let's call them caches, multiple layers where you might store data so it's easy and fast to access, and that starts quite small and then gets larger and larger, but a little bit slower at each level. So you might have, like you said, L2 cache, for example, which is going to be much, much faster but smaller than, say, the RAM on your system, and if you go a bit further down you've got a disk, which is going to be way larger but somewhat slower still. Moving between these stages so that you can process the data, that was actually one of the things you were looking at. And it turned out that, while there is some correlation there, one of the key drivers is actually the chip being in a kind of ready state, waiting for that data to come in so it can process it. It can't really be asleep, because it knows the data is going to have to come in and it has to be almost anticipating at all these levels. And that's one of the big drivers of the resource use and the energy use. Charles Tripp: I mean, what we saw was, we actually estimated how much energy it took per byte to move data from, like, system RAM to level three cache, to level two, to level one, to a register, at each level. In some cases it was so small we couldn't even really estimate it, but in most cases we were able to get an estimate. A much larger cost was initiating the transfer, and even bigger than that was just the idle power draw during the time that the program executed, and how long it executed for. And by combining those, we were able to estimate that most of that power consumption, like 99 percent in most cases, was from that idle time, even those little micro stalls waiting for the data to move around.
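A toy decomposition along the lines Charles describes can make the point concrete. All the per-byte, per-transfer, and power constants below are invented for illustration; they are not the BUTTER-E estimates, only an arithmetic sketch of why the stall term can dominate.

```python
# Toy energy model in the spirit of Charles's decomposition:
# total energy = per-byte transfer energy + per-transfer setup cost
#              + idle power drawn during stalls.
# All constants are invented for illustration, not measured values.

IDLE_POWER_W = 250.0           # node power draw while stalled
E_PER_BYTE_J = 5e-12           # energy to physically move one byte
E_PER_TRANSFER_J = 2e-9        # cost of initiating each transfer
STALL_PER_TRANSFER_S = 100e-9  # ~100 ns stall waiting on memory

def training_energy(n_transfers, bytes_per_transfer):
    """Return (move, setup, stall) energy components in joules."""
    move = n_transfers * bytes_per_transfer * E_PER_BYTE_J
    setup = n_transfers * E_PER_TRANSFER_J
    stall = n_transfers * STALL_PER_TRANSFER_S * IDLE_POWER_W
    return move, setup, stall

# A billion 64-byte cache-line transfers:
move, setup, stall = training_energy(n_transfers=10**9,
                                     bytes_per_transfer=64)
total = move + setup + stall
print(f"move  {move:.2f} J")
print(f"setup {setup:.2f} J")
print(f"stall {stall:.2f} J  ({100 * stall / total:.0f}% of total)")
```

Even with generous per-byte costs, the stall term swamps the other two, which mirrors the "99 percent from idle time" finding in spirit.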
And that's because moving the data, while it does take some energy, doesn't take that much in comparison to the energy of keeping the RAM on with the data alive in it, or keeping the CPU active, right? CPUs can go into lower power states, but generally at least part of the system has to shut down, so doing that at a very fine-grained scale is not really feasible. Many systems can change power state at a faster rate than you might imagine, but it's still a lot slower than a per-instruction, per-byte level of "I need to load this data, okay, shut down the system and wait a few nanoseconds." It's just not practical to do that. So it's keeping everything on during that time that's sucking up most of the power. So one simple strategy, though it's difficult to implement in some cases, is to initiate that load, that transfer, earlier. If you can prefetch the data into the higher levels of memory before you hit the stall where you're waiting to actually use it, you can probably significantly reduce the power consumption due to that idle wait. But it's difficult to figure out how to properly do that prefetching. Chris Adams: Ah, I see. Thanks, Charles. So it sounds like we might approach this expecting some things to be intuitive, but it turns out there are quite a few counterintuitive things. And Dawn, I can see you nodding away sagely here, and I suspect there's a few things that you might have to add on this.
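The prefetching idea Charles mentions is often implemented in software as double buffering: load the next chunk of data on a background thread while the current one is being processed, so the consumer rarely stalls. This is a generic sketch of that pattern, not NREL's code; the function name and the toy `load_fn` are made up.

```python
# Generic double-buffering prefetch sketch: overlap loading the next
# chunk of data with processing the current one, so the processor
# spends less time stalled waiting on I/O.
import threading
import queue

def prefetching_reader(load_fn, indices, depth=2):
    """Yield load_fn(i) for each index, loading up to `depth` items
    ahead on a background thread so the consumer rarely stalls."""
    q = queue.Queue(maxsize=depth)
    DONE = object()  # sentinel marking the end of the stream

    def worker():
        for i in indices:
            q.put(load_fn(i))  # blocks once `depth` items are queued
        q.put(DONE)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not DONE:
        yield item

# Usage: pretend each "load" fetches a batch from disk.
batches = list(prefetching_reader(lambda i: i * i, range(5)))
print(batches)  # [0, 1, 4, 9, 16]
```

The `depth` parameter bounds how far ahead the loader runs, trading a little extra memory for fewer stalls; finding the right depth is exactly the "difficult to figure out" part Charles alludes to.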
Because, I mean, can I give you a bit of space, Dawn, to talk about some of this too? Because I know this is something you've shared with me before: there are maybe some rules of thumb you might use, but it's never that simple, basically, or you realise actually that there's quite a bit more to it than that, for example. Dawn Nafus: Exactly. Well, I think what I really learned out of this effort is that measurement can actually recalibrate your rules of thumb, right? So you don't actually have to be measuring all the time for all reasons, but even just the simple, I mean, not so simple, story that Charles told. Okay, so I spent a lot of time talking with developers and trying to understand how they work, and at a developer perception level, what do they feel? What's palpable to them? Send the stuff off, go have a cup of coffee, whatever it is, right? So they're not seeing all of that, and when I talk to them, most of them aren't thinking about the kinds of things that were just raised, like how much data are you loading at a time? You can actually set and tweak that. Folks develop an idea about that, and they usually don't think too hard about it. So with measuring, you can start to actually recalibrate the things you do see, right? I think this also gets back to why it's counterintuitive that some of these mechanics of how you actually train, as opposed to how many flops you're doing, how many parameters, are what matter. Well, at a certain level the number of flops does actually matter, right? If we actually have gigantic, I'm going to call it foundation-model-type-size stuff, and I'm going to build out an entire data center for it, it does matter. But as you get down and down and more specific, it's a different ball game.
And there are these tricks of scale throughout this stuff, right? Like the fact that, yes, you can make a credible claim that a foundation model will always be more energy intensive than something so small you can run it on a laptop. That's always going to be true, no measurement necessary. You keep going down and down, getting more specific, and you can get to, well, where our frustration really started: if you try to go to the extreme, try to chase every single electron through a data center, you're not going to do it. It feels like physics, it feels objective, it feels true, but at minimum you start to hit the observer effect, which is what happened to us. My colleague Nicole Beckage was trying to measure at an epoch level, essentially a mini round of training. And what she found was that she was trying to sample so often that she was pulling energy out of the processing, and it just messed up the numbers, right? So you can try to get down into what feels like more accuracy, and then all of a sudden you're in a different ballpark. So these tricks of aggregation and scale, and what you can say credibly at what level, I think are fascinating, but you've kind of got to get a feel for it, in the same way that you can get a feel for "yep, if I'm sending my job off, I know I have at least however many hours or however many days," right? Charles Tripp: There's also so much variation that's out of your control, right? One run to another, one system to another, even different times when you ran on the same system, can cause measurable and in some cases significant variations in the energy consumption. So it's more about understanding what's causing the energy consumption. I think that's the more valuable thing to do.
But it's easy to be like, "I already understand it." And I think there's a historical bias towards number of operations, because in old computers, without much caching or anything like that, right? Like, I restore old computers, and on an old 386 or IBM XT, it has registers in the CPU and then it has main memory, and basically how many operations I'm doing is going to closely correlate with how fast the thing runs, and probably how much energy it uses, because most of the energy consumption on those systems is basically constant no matter what I'm doing. It doesn't idle down the processor while it's not working, right? So there's a historical bias that's built up over time, focused on the operations, and it's also at the programmer level: I'm thinking about what the computer is doing. Chris Adams: What do I have control over? Charles Tripp: But it's only through actually measuring it that you gain a clearer picture of what is actually using energy. And I think if you get that picture, then you'll gain more of an understanding of how to make this software, or the data center, or anything in between, like job allocation, more energy efficient. But it's only through actually measuring that we can get that clear picture. Because if we guess, especially using our biases from how we learned computers work, we're actually very likely to get an incorrect picture of what's driving the energy consumption. It's much less intuitive than people think.
Chris Adams: Ah, okay, there's a couple of things I'd like to comment on, and then Dawn, I might give you a bit of space on this. So we were just talking about flops as a thing people are used to looking at, and it's literally written into the AI Act: things above a certain number of flops are considered, you know, foundational models, for example. So that's a really good example of what this actually might be. And I guess the other thing I wanted to touch on is that I work in kind of web land, and, I mean, the Green Web Foundation is a clue in our organization's name. We've had exactly the same thing, where we've been struggling to understand the impact of, say, moving data around, and how much credence you should give to that versus things happening inside a browser, for example. It looks like you've got some similar kinds of issues and things to be wrestling with here. But Dawn, I wanted to give you a bit of space, because both of you alluded to this idea of having an understanding of what you can and what you can't control, and how you might have a bias for doing one thing and then miss something much larger elsewhere, for example. Can I maybe give you a bit of space to talk about this idea of which things you should be focusing on, and also understanding what's within your sphere of influence? What can you control? What can't you control, for example? Dawn Nafus: Exactly. I think in a sense you've captured the main point, which is that measurements are most helpful when they are relevant to the thing you can control, right? As a very simple example, there are plenty of AI developers who have a choice in which data centers they can use, and there are plenty who don't. You know, when Charles worked at NREL, the supercomputer was there. That was it.
You're not moving, right? So if you can move, that overall data center efficiency number really matters, because you can say, alright, "I'm putting my stuff here and not there." If you can't move, there's no need to mess with it. It is what it is, right? At the same time, and this gets us into an interesting problem again, there's a tension between how you might look at it from a policy perspective versus what a developer might look at. We had a kind of, can I say, come-to-Jesus moment? Is that allowed on a podcast? I think I can. There was this question of: are we giving people a bum steer by focusing on granular, developer-level stuff, when so much actually rests on how you run the data center, right? So again, you talk about tricks of scale. On the one hand, the amount of energy that you might be directly saving, by the time all of those things move through the grid and you're talking about energy coming off the transmission cables, in aggregate might not actually be directly that big. It might be, but it might not be. And then you flip that around and you think about what aggregate demand looks like, and the fact that so much of AI demand is what's putting pressure on our electricity grid, right? Then that's the most effective thing you could do: actually get these very specific individual jobs down and down, right? So again, it's all about what you can control, but whatever perspective you take is just going to flip your understanding of the issue around. Chris Adams: So this was actually one thing I quite appreciated from the paper.
There were a few things saying, and it does touch on this idea, that you might be focusing on the thing that you feel you're able to control, but just because you're able to make one part of the system very efficient, that doesn't necessarily translate into a saving higher up in the system, simply because if the system higher up isn't set up to actually take advantage of that, then you might never achieve some of these savings. It's a little bit like when you're working in cloud, for example: people tell you to do all these things to optimize your cloud savings, but if people are not turning data centers off, at best you might be slowing the growth of infrastructure rollout in future, and these are much, much harder things to claim responsibility for, or to say, "yeah, if it weren't for me doing those things, we wouldn't have had that happen." This is one of the things I appreciated the paper making some allusions to. To be honest, when I was reading it, I was like, wow, there was obviously some stuff for beginners, but there's actually quite a lot here that is quite meaty for people thinking at a much larger systemic level. So there are definitely things experts could take away from this as well. So, I just want to check: are there any particular takeaways the two of you would like to draw people's attention to beyond what we've been discussing so far? Because I quite enjoyed the paper and there are a few nice ideas in it. Charles, if I just give you a bit of space to come in. Charles Tripp: Yeah. I've got two topics that I think build on what we talked about before, but could be really useful for people to be aware of.
So one outcome of our studying the impact of different architectures, data sets, and hyperparameter settings on deep neural network energy consumption was that the most energy efficient networks, and largely that correlates with the most time efficient as well, but not always, were not the smallest ones, and they were not the biggest ones. The biggest ones just required so much data movement; they were slow. The smallest ones took a lot more iterations; it took a lot more for them to learn the same thing. The most efficient ones were the ones where the working sets, the amount of data that was moved around, matched the different cache sizes. So as you made the network bigger, it got more efficient because it learned faster. Then, when it got so big that the data between layers, the communication between layers, for example, started to spill out of a cache level, it became much less energy efficient, because of that data movement stall happening. So we found that there is an optimum point there. And for most algorithms this is probably true: if the working set is sized appropriately for the memory hierarchy, you gain the most efficiency. Generally, as I use more data at a time, I can get my software to work more efficiently, but there's a point where it falls out of the cache and becomes less efficient. Exactly what point will depend on the software. But I think focusing on that working set size and how it matches the hardware is a really key piece for almost anyone looking to optimize software for energy efficiency. How much data am I moving around, and how does that map to the cache? So that's a practical thing.
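A back-of-the-envelope check in the spirit of Charles's advice might look like the following. The cache sizes are typical illustrative values, not any specific machine's, and the working-set formula (fp32 weights plus one batch of activations per dense layer) is a deliberate simplification of the paper's analysis.

```python
# Rough sketch: estimate the per-layer "working set" of a dense
# network (weights + one batch of activations, in fp32) and flag
# layers that spill past a given cache level. Cache sizes here are
# typical illustrative values, not a specific machine's.
L2_BYTES = 1 * 1024**2    # 1 MiB L2 (illustrative)
L3_BYTES = 32 * 1024**2   # 32 MiB shared L3 (illustrative)
BYTES_PER_FLOAT = 4       # fp32

def layer_working_set(fan_in, fan_out, batch_size):
    """Bytes touched per forward pass of one dense layer."""
    weights = fan_in * fan_out
    activations = batch_size * (fan_in + fan_out)
    return (weights + activations) * BYTES_PER_FLOAT

layers = [(784, 512), (512, 512), (512, 10)]
for fan_in, fan_out in layers:
    ws = layer_working_set(fan_in, fan_out, batch_size=128)
    level = ("fits L2" if ws <= L2_BYTES
             else "fits L3" if ws <= L3_BYTES
             else "spills to DRAM")
    print(f"{fan_in}x{fan_out}: {ws / 1024**2:.2f} MiB ({level})")
```

The idea is only to show the kind of sizing question Charles raises: nudging a layer width or batch size so the working set stays inside a cache level, rather than treating the memory hierarchy as invisible.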
Chris Adams: Can I stop you there? Because I find that quite interesting, in that a lot of the time as developers we're taught to abstract away from the underlying hardware, and that seems to be going the other way. That's saying, "no, you do need to be thinking about this. There's no magic trick." Charles Tripp: Right. And so, for neural networks, that could mean sizing my layers so that those working sets match the cache hierarchy, which is something that no one even considers. It's not even close in most architectures; no one has even thought about this. The other thing, on your point about data center operations and the different perspectives: one thing that we started to think about as we were doing some of this work was that it might make sense to allocate time, or in the case of a commercial cloud operator, even to charge, based at least partly on the energy rather than the time, so as to incentivize users to use less energy, right? To make things more energy efficient. Those can be correlated, but not always. And another piece of that same puzzle I want to touch on is that, from a lot of data center operators' perspectives, they want to show their systems fully utilized, right? There's demand for the system, so we should build an even bigger and better system. When it comes to energy consumption, that's probably not the best way to go, because that means those systems are sitting there probably doing inefficient things, maybe even idling a lot of the time. A user allocated the node, but it's just sitting there doing nothing, right? It may be more useful, instead of thinking about how much the system is being utilized, to think about how much computation, or how many jobs, or whatever your utilization metric is, you get per unit energy, right?
And you might think about it per unit carbon, right? And you may also think about how much energy savings you can get by doing things like shutting down nodes when they're unlikely to be utilized, and more about having a dynamic capacity. Like, at full tilt I can do however many flops, right? But I can also scale that down to reduce my idle power draw by, say, 50 percent in low demand conditions. And if you have that dynamic capacity, you may actually be able to get even more throughput, but with less energy, because when there's no demand I'm scaling down my data center, and when there's demand I'm scaling it up. But these are things that require cultural changes in data center operations to happen. Chris Adams: I'm glad you mentioned this, because, Dawn, I know that you had some notes about this. It sounds like in order to do that, you probably need different metrics exposed, or different kinds of transparency to what we have right now. Probably more, actually. Dawn, can I give you a bit of space to talk about this? Because this is one thing that you told me about before, and it's something that is touched on in the paper quite a few times, actually. Dawn Nafus: Yeah, I mean, I think we can notice a real gap between the kinds of things that Charles brings his attention to and the kinds of things that show up in policy environments, in responsible AI circles, where I'm a bit closer. We can be a bit vague, and I think we are at the stage, at least my read on the situation is, that regardless of where you sit in the debates, and there are rip-roaring debates about what to do about the AI energy situation, transparency is probably the one thing we can get the most consensus on. But then, back to that: what the heck does that mean?
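Charles's suggested shift from "how busy were the nodes?" to "how much work per unit energy?" could be expressed as a metric along these lines. The job records, field names, and numbers are all invented for illustration; a real accounting would also need to attribute shared idle energy to jobs.

```python
# Illustrative metric: instead of reporting utilization, report
# useful work delivered per kilowatt-hour. Job records are made up.

def jobs_per_kwh(jobs):
    """Completed jobs per kWh of total energy consumed."""
    total_energy_kwh = sum(j["energy_kwh"] for j in jobs)
    return len(jobs) / total_energy_kwh

# Scenario A: nodes kept always-on for the same three jobs.
always_on = [{"energy_kwh": 12.0}, {"energy_kwh": 11.0},
             {"energy_kwh": 13.0}]
# Scenario B: same jobs, but nodes scaled down when idle.
scaled = [{"energy_kwh": 8.0}, {"energy_kwh": 7.5},
          {"energy_kwh": 8.5}]

print(f"always-on: {jobs_per_kwh(always_on):.3f} jobs/kWh")
print(f"scaled   : {jobs_per_kwh(scaled):.3f} jobs/kWh")
```

Under a metric like this, the scaled-down operation scores higher even though its utilization number would look worse, which is the incentive flip Charles is pointing at.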
And I think we need a few more beats than are currently given to what work those measurements are actually doing. You know, some of the feedback we've gotten is, "well, can't you just come up with a standard?" Like, what's the right standard? Well, no, actually: if data centers aren't standard, and there are many different ways to build a model, then yes, you can have a standard as a way of having a conversation across a number of different parties to do a very specific thing. For example, Charles's example: if we're charging on a per-energy basis, that changes a whole lot, right? But what you can't do is say, this is the standard, this is the right way to do it, and that meets the requirement, because what we found is that clearly the world is far more complicated and specific than that. So I would really encourage the responsible AI community to start to get very specific very quickly, which I don't yet see happening, but I think it's just on the horizon. Chris Adams: Okay. Well, I'm glad you mentioned taking this a little bit wider, because we've spent a lot of time talking about this paper, but there are other things happening in the world of AI, and I want to give you folks a bit of space to talk about things you would like to direct some attention to, or that you've seen and found particularly interesting. Charles, can I give you some space first, and then give Dawn the same, to shout out or point to some particular things that people who found this conversation interesting might want to be looking at. More data. Charles Tripp: Yeah.
I mean, I think, both in computer science at large and especially in machine learning, within deep learning in particular, we've had an attitude of throwing more compute at the problem, right? And more data. The more data that we put through a model, and the bigger and more complicated the model is, the more capable it can be. But this brute-force approach is one of the main things driving this increasing computing energy consumption, right? And I think it's high time we start taking a look at making the algorithms we use more energy efficient instead of just throwing more compute at them. It's easy to throw more compute at it, which is why it's been done, and also because there hasn't been a significant material incremental cost. Like, oh, now we need twice as many GPUs? No big deal. But now we're starting to hit constraints, because we haven't thought about that incremental energy cost. We haven't had to, as an industry at large, right? But now it's starting to be like, well, we can't build that data center, because we can't get the energy to it that we need to do the things we want to do with it. We haven't taken that incremental cost into account over time, we just kind of ignored it, and now we've hit the barrier, right? And so I think thinking about the energy costs, and probably this means investing in finding more efficient algorithms and more efficient approaches, as well as more efficient ways to run data centers and run jobs, is going to become increasingly important, even as our compute capacity continues to increase. The energy costs are likely to increase along with that as we use more and more, and we need to create more generation capacity, right?
Like, it's expensive, and at some point we're really driving that energy production, and that's going to be an increasingly important cost, as well as now starting to be a constraint on what kind of computing we can do. So I think investing in more efficient approaches is going to be really key in the future. Chris Adams: There's one thing that I think Dawn might come in on here. It seems that you're talking about having more of a focus on surfacing resource efficiency as something we probably need to value, because as I understand it, it's not particularly visible in benchmarks or anything like that right now. And if you have benchmarks deciding what counts as a good model, then until energy is included in them, you're not going to have anything like this. Is that the kind of stuff you're suggesting we should have? Like, more recognition of energy efficiency as something you draw attention to, or include when counting something as good or not, essentially? Dawn Nafus: You know, I have a particular view of efficiency, and I suspect many of your listeners might as well. I think it's notable that at the moment, when we're seeing the model of the month, apparently, or the set of models, DeepSeek has come onto the scene, and immediately we're starting to see, for the first time, the Jevons paradox showing up in the public discourse. So this is the paradox that when you make things more efficient, you can also end up stimulating so much demand... Chris Adams: Absolute use grows even though it gets individually more efficient. Dawn Nafus: Yeah, exactly. Again, this is the topsy-turvy world that we're in.
And so, you know, now the Jevons paradox is front page news. My view is that yes, again, we need to be particular about what sorts of efficiencies we are looking for, and where, and not, you know, sort of willy nilly create an environment, which I'm not saying you're doing, Charles, but what we don't want to do is create an environment where if you can just say it's more efficient, then somehow, you know, we're all good, right? Which is what some of the social science of Energy Star has actually suggested is going on. With that said, right, I am a big fan of the Hugging Face Energy Star initiative. That looks incredibly promising. And I think one of the things that's really promising about it, so this is, you know, leaderboards: when people put their models up on Hugging Face, there's some energy measurement that happens, some carbon measurement, and then leaderboards are created and all the rest of it. And I think the things it's really good at, right, I can imagine issues as well, but: A, you're creating a way to give some people credit for actually looking. B, you're creating a way of distinguishing between two models very clearly, right? So in that context, do you have to be perfect about how many kilowatts or watts or whatever it is? No, actually, right? You know, you're looking at more or less comparable models. But C, it also interjects this kind of path dependence. Like, who is the next person who uses it? Right? That really matters. If you're setting up something early on, yes, they'll do something a little bit different. They might not just run inference on it. But you're changing how models evolve over time and kind of steering it towards, you know, having energy present at all. So that's pretty cool to my mind. So I'm looking forward to... Chris Adams: Cool. We'll share a link to the Hugging Face.
Do you know what they were called? I think it was initially called the Energy Star Alliance, and then I think they've been told that they need to change the name to the Energy Score Alliance, because Energy Star turned out to be a trademark. But we can definitely add a link to that in the show notes, because this is actually, I think, officially visible now. It's something that people have been working on since late last year, and we'll share a link to the actual GitHub repo, to the code on GitHub to run this, because this works for both closed source models and open source models. So it does give some of that visibility. Also in France, there is the Frugal LLM challenge, which also sounds similar to what you're talking about: this idea of paying a bit more attention to the energy efficiency aspect of this. And I'm glad you mentioned the DeepSeek thing as well, because suddenly everyone in the world is an armchair expert on William Stanley Jevons' paradox. Everybody knows! Yeah. Dawn Nafus: Actually, if I could just add one small thing, since you mentioned the Frugal effort in France: there's a whole computer science community, sort of almost at arm's length from the AI development community, that's really into just saying, "look, what is the purpose of the thing that I'm building, period." And that, you know, frugal computing, computing within limits, all of that world is really about how do we get, you know, just something that somebody is going to actually value, as opposed to just getting to the next score on a benchmark leaderboard somewhere. So I think that's kind of also lurking in the background here.
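The Jevons paradox that Chris and Dawn describe can be sketched with a toy calculation (every number below is invented for illustration; nothing here comes from the episode's data):

```python
# Toy illustration of the Jevons paradox (every number here is invented).
# Suppose a model becomes 4x more energy-efficient per query...
old_energy_per_query_wh = 4.0    # hypothetical Wh per inference, before
new_energy_per_query_wh = 1.0    # hypothetical Wh per inference, after

# ...but cheaper, faster queries stimulate demand: usage grows 10x.
old_queries_per_day = 1_000_000
new_queries_per_day = 10_000_000

old_total_kwh = old_energy_per_query_wh * old_queries_per_day / 1000
new_total_kwh = new_energy_per_query_wh * new_queries_per_day / 1000

print(old_total_kwh)  # 4000.0 kWh/day before the efficiency gain
print(new_total_kwh)  # 10000.0 kWh/day after: per-query use fell, total rose
```

Per-query energy fell fourfold, yet absolute consumption rose, which is exactly the rebound effect now showing up in the public discourse.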
Chris Adams: I'm glad you mentioned this. What we'll do, we'll add links to both of those, and you immediately make me think of something. So we're technologists mostly, the three of us, we're talking about this, and I work in a civil society organization. And just this week, there was a big announcement, a kind of set of demands from civil society about AI, that's being shared at the AI Action Summit, this big summit where all the great and good are meeting in Paris, as you alluded to, next week, to talk about what should we do about this. It's literally called Within Bounds, and we'll share a link to that. And it does talk about this, like, well, you know, if we're going to be using things like AI, we need to have a discussion about what they're for. And that's the first thing I've seen which actually has discussions about saying, well, we should actually have some concrete limits on the amount of energy for this, because we've seen that if this is a constraint, it doesn't stop engineers. It doesn't stop innovation. People are able to build new things. What we should also do is share a link to, I believe, Vlad Coroamă. I did an interview with him all about Jevons paradox, I think late last year, and that's a really nice deep dive for people who want to sound knowledgeable in these conversations on LinkedIn or social media right now. It's a really useful one there as well. Okay, so we spoke a little bit about these ones here. Charles, are there any particular projects you'd like to name check before we start to wrap up? Because I think we're coming up to the hour now, actually. Charles Tripp: I don't know, not particular, but I did mention earlier, you know, we published this BUTTER-E data set and a paper along with it, as well as a larger one without energy measurements called BUTTER. Those are available online. You can just search for it and you'll find it right away.
I think, if that's of interest to anyone hearing this, you know, there's a lot of measurements and analysis in there, including all the details of the analysis that I mentioned, where we had this journey from number of compute cycles to amount of stall, in terms of what drives energy consumption. Chris Adams: Ah, it's visible so people can see it. Oh, that's really cool. I didn't realize about that. Also, while you're still here, Charles, while I have access to you: before we did this interview, you mentioned there's a whole discussion about wind turbines killing birds, and you were telling me this awesome story about how you were able to model the path of golden eagles to essentially avoid this kind of bird strike happening. Is that in the public domain? Can we link to that? That sounded super cool. Charles Tripp: There's several papers. I'll have to dig up the links, but there's several papers we published and some software also to create these models. But yeah, I worked on a project where we took eagle biologists and computational fluid dynamics experts and machine learning experts, and we got together and we created some models based off of real data, real telemetry tracking golden eagle flight paths in many locations, including at wind sites, and matched that up with the atmospheric conditions, the flow field, like orographic updrafts, which is where the wind hits, you know, a mountain or a hill and some of it blows up, right? And golden eagles take advantage of this, as well as thermal updrafts caused by heating at the ground causing the air to rise, to fly. Golden eagles don't really like flapping. They like gliding. And because of that, golden eagles and other soaring birds, their flight paths are fairly easy to predict, right?
Like, you may not know, oh, are they going to take a left turn here or a right turn there, but generally they're going to fly in the places where there's strong updrafts. And using actual data and knowledge from the eagle biologists and simulations of the flow patterns, we were able to create a model that allows wind turbines to be sited and also operated, right? Like, under what conditions, what wind conditions in particular and what time of year, which also affects the eagles' behavior, should I perhaps reduce my usage of certain turbines to reduce bird strikes? And in fact, we showed that it could be done without significantly, or even at all, impacting the energy production of a wind site. You could significantly reduce the chances of colliding with a bird. Chris Adams: And it's probably good for the birds too, as well, isn't it? Yeah. Alright, we definitely need to find some links for that. That's going to be absolute catnip for the nerdy listeners who are into this. Dawn, can I give you the last word? I should ask, we'll add links to you and Charles online, but if there's anything that you would draw people's attention to before we wrap up, what would you plug here? Dawn Nafus: I actually did want to just give a shout out to the National Renewable Energy Lab, period. One of the things that's amazing about them, speaking of eagles, a different eagle, is they have a supercomputer called Eagle. I believe they've got another one now. It is lovingly instrumented with all sorts of energy measurements; basically anything you can think to measure, I think you can do in there. There's another data set from another one of our co-authors, Hilary Egan, that has some sort of jobs data. You can dig in and explore what a real world data center job situation looks like.
So I just want to give all the credit in the world to the National Renewable Energy Lab and the stuff they do on the computing side. It's just phenomenal. Chris Adams: Yes, I would echo that very much. I'm a big fan of NREL and their output. It's really like a national treasure. Folks, thank you so much for taking me through all of this work and diving in as deeply as we did, and referring to things that soar as well, actually, Charles. I hope we can do this again sometime soon, but otherwise, have a lovely day, and thank you once again for joining us. Lovely seeing you two again. Charles Tripp: Good seeing you. Chris Adams: Okay, ciao! Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Environment Variables
The Week in Green Software: Transparency in Emissions Reporting (53:14)
For this episode of TWiGS, Chris and Asim discuss the latest developments in emissions reporting, AI energy efficiency, and green software initiatives. They explore the AI Energy Score project by Hugging Face, which aims to provide an efficiency benchmark for AI models, and compare it with other emissions measurement approaches, including the Software Carbon Intensity (SCI) for AI. The conversation also touches on key policy shifts, such as the U.S. executive order on AI data center energy sourcing, and the growing debate on regulating the data center industry. Plus, they dive into the Beginner's Guide to Power and Energy Measurement for Computing and Machine Learning, a must-read for anyone looking to understand energy efficiency in AI. Learn more about our people: Chris Adams: LinkedIn | GitHub | Website Asim Hussain: LinkedIn | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: AI Energy Score | Hugging Face [04:04] A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning [20:00] Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure [32:10] AI datacenters putting zero emissions promises out of reach • The Register [45:30] xAI's "Colossus" supercomputer raises health questions in Memphis | TechCrunch [38:22] Events: Practical Advice for Responsible AI (February 27 at 6:00 pm GMT · London) [50:30] GSF Oslo - February Meetup (February 27 at 5:00 pm CET · Oslo) [50:52] Resources: CodeCarbon [06:00] Optimum Benchmark | Hugging Face [06:12] SCI for AI | GSF [06:40] ITU [07:07] Responsible AI Institute [10:24] EcoLogits [15:07] NREL Data Catalog [25:50] Kepler | CNCF [30:14] Environment Variables Ep97: How to Tell When Energy is Green with Killian Daly [33:52] The Problem of Jevons' Paradox in AI's Polarized Environmental Debate | Sasha Luccioni [49:32] If you enjoyed this episode then please either: 
Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn ! TRANSCRIPT BELOW: Asim Hussain: There's this assumption out there that we're trying to hunt for the right, true essentialist value of measurement, and it really isn't like that Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to another edition of This Week in Green Software. I'm your host, Chris Adams. Today, we're tackling an ongoing conversation in software today, predicting, measuring, and accurately reporting emissions data, particularly in AI. And as AI adoption skyrockets, so does its energy footprint. Putting pressure on data infrastructure and sustainability goals. So today we'll be looking at a few new reports, what's going on, and generally doing a kind of roundup of the news and recent events along this. Because it's not all doom and gloom, although there is some. I'm also joined today by my friend and frequent collaborator, Asim Hussain. Asim, can I give you some space to introduce yourself before we do our weekly, well, semi weekly, news roundup? Asim Hussain: Not so weakly, anymore. Yeah. Hi. I'm Asim Hussain. I'm the Executive Director of the Green Software Foundation. So we are a standards organization and our mission is a future where software has zero harmful environmental impacts. And you might not be surprised to hear that we believe one of the best paths forwards is developing standards through consensus of multiple organizations. 
Because through setting those standards, you can direct billions of dollars into the right places. And if you do it wrong, you can direct billions of dollars into the wrong places. So let's do it right. Chris Adams: Okay. Thank you for that Asim. If you're new to this podcast, my name is Chris. I'm the director of technology and policy now at the Green Web Foundation, which is not the same as the Green Software Foundation. It's a small Dutch nonprofit, although we are members, founding members of the Green Software Foundation, along with a number of other much, much larger technology giants. And I'm the host of this podcast and I'll also be doing my best to compile all the links and stories that we have so that if there's anything that has caught your interest as you listen to this, possibly whilst you're washing your dishes, you've got something to follow up with later. Alright! Asim Hussain: Is it time for my yearly apology for naming it the Green Software Foundation and causing this constant confusion? Chris Adams: I think it might be, but sometimes it works in our favor as well, because when people speak to us, like a scrappy startup, a scrappy kind of wacky little non profit, then they say, "oh, we've heard a bunch about you folks. Oh, we thought you were bigger," you know, so it's, we do have, it opens interesting doors. We sometimes do, I have had the odd conversation where people thought I was the Green Web Foundation. Yeah. So this is, Asim Hussain: Yeah, let's wear the hats that benefit us at any given moment. Chris Adams: Pretty much, yeah. So this is what we're going to have and I think that we are doomed to have this mix up and the fact that we are speaking to each other on a regular basis probably doesn't help us, actually. Maybe we should, I don't know, have some big dramatic fallout or something. Asim Hussain: Oh yeah, let's do like a fake fallout on the internet, yeah. Chris Adams: We're not that keen for engagement, are we, mate? 
Let's not do that, alright? Okay. So, I was going to ask if you're sitting comfortably, Asim, but I can see that you're on a standing desk, so I think you're now standing comfortably, presumably, right? Asim Hussain: At attention. Chris Adams: All right, well in that case, shall we start and look at the first story and then see where we go from there? All right, so the first story that's shown up on the radar is the AI Energy Score from Hugging Space. Sorry, Hugging Face, not hugging space, god. Yeah, so this is essentially a project that is being spearheaded by folks at Hugging Face, but with involvement from companies you've heard of, like Salesforce and so on, to essentially work out something that might be a little bit like an Energy Star for AI. Now, it's probably not called Energy Star because Energy Star is a trademark, but the general idea is, essentially, if we're going to have various AI models and things, then we should be thinking about them being efficient, and there are tools available to make this possible, actually. Asim, I know you had a chance to look at some of this, and you've had quite a few conversations with Boris Gamazaychikov at Salesforce; he's one of the AI leads there. I'm mentioning Boris because he's quite involved in the GSF. There are lots of other people involved with the Hugging Face project, but Boris is the person who we know, so that's why we've got that named. Asim Hussain: He's not, so just to be clear, he's not a member; Salesforce is not a member of the Green Software Foundation. But yeah, I've just been chatting to Boris, obviously, because one of the things we try and do is chat to everybody who's doing something in the AI measurement space, so that we can at least try and coordinate and have a common voice. That's kind of one of the things that we've been doing.
Yeah. Chris Adams: Cool, and if I understand it correctly, we'll share a link to both the GitHub.io site, the public facing site with all this information about how the Energy Score project is working, plus the leaderboard, which has various closed and open source models and actually shows how efficient they are at performing particular tasks. We'll also share a link to the GitHub repo, which actually shows how it's made, because it's using tools that you may have heard of if you've ever messed around with AI models yourself. So it's using CodeCarbon, which is pretty much the default tool that people use to work out the environmental footprint of a training run or anything like that. And I believe the other tool is Optimum, or Optimum Benchmark; I can never remember. Asim Hussain: Is that the actual benchmark tool? That's the thing that actually runs the benchmark, yeah. Chris Adams: Exactly. So this is not wacky stuff. This is stuff that you probably should have heard of or are likely to come across. And there is actually a Docker container for people who aren't able to publish their entire open models, with the idea being that you can run some of this behind the firewall, as it were, so you can then share some of the numbers back. And, Asim, while I've got you, I wanted to ask you about this, because I've been tracking the AI Energy Score project for a few months, but I know there was some work inside the GSF to create a Software Carbon Intensity for AI. Asim Hussain: Oh yeah. Chris Adams: These aren't competing, but they do overlap, and maybe you could share a little bit more to explain what these two things are, or even what this SCI for AI is in this context. Asim Hussain: And there's also others as well. So, Sir Joseph, the head of R&D, is also sitting in with meetings at the ITU, the International Telecommunication Union, and so they're working on work themselves.
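As a rough sketch of the kind of comparison the leaderboard Chris mentions enables (this is not the AI Energy Score project's actual methodology; the model names and numbers below are made up): because each model runs the same benchmark on the same fixed hardware, lower measured energy directly means a more efficient model.

```python
# Illustrative sketch of a benchmark-style efficiency comparison
# (NOT the AI Energy Score project's real methodology or data;
# model names and Wh figures are invented).
measured_wh_per_1000_queries = {
    "model-a": 12.5,
    "model-b": 40.0,
    "model-c": 7.2,
}

# Sort ascending by measured energy to build a simple leaderboard:
# same task, same hardware, so the only remaining variable is the model.
leaderboard = sorted(measured_wh_per_1000_queries.items(), key=lambda kv: kv[1])
for name, wh in leaderboard:
    print(f"{name}: {wh} Wh per 1000 queries")
```

The design choice Asim describes later, turning hardware and workload into constants, is what makes a ranking like this meaningful at all.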
There's EcoLogits from Samuel Rincé. There's other ones as well. And I probably just want to preface this by saying something, and I'm going to try and put some words to these thoughts. I've internalized a lot of how I think about measurement through conversations with others, and I want to try and get my point across, which is: there isn't one true way of measuring everything. It's not like there's one winner and one loser. It's that different measurement systems have different trade offs. They incentivize certain things, they disincentivize other things; they have broader scopes and narrower scopes. And one of the things I've realized is that it's almost impossible to create a measurement system which ticks every single box. It's almost impossible to have a measurement system which has the ability to measure a broad spectrum of stuff and yet still be consistent and repeatable, and all these other areas, all these trade offs. So yeah, I love the AI Energy Score, but there's also other ones as well. I just want to preface it by saying every single measure is designed for a particular audience and a particular problem. And I do get concerned, because there's this assumption out there that we're trying to hunt for the right, true, essentialist value of measurement, and it really isn't like that. So take all of my feedback on everything with that context in mind. So, what's really good about the AI Energy Score, do you want me to talk about it? Chris Adams: Please do. Yeah, Asim Hussain: I mean Chris Adams: I'm listening for more, because I've got some things to share, but I haven't heard that much about this.
And I know that the GSF has had these workshops going on where people have been exploring this stuff. I haven't been in those, but I know you've been inside them, and I suspect there've been some good, interesting conversations as a result. Asim Hussain: I can't dive too deep into it, because we're still in progress and we had an agreement not to, you know, give too much information about in-progress stuff. So if someone has a crazy idea, we're not going to publish it, and we'll allow people to have these private conversations. But I think there's some stuff I can share. One of the things that's come out from our conversations, one of the strongest feelings from the group, is for a measure that really has a broad scope, for a lot of different AI systems, but also for the breadth of the AI life cycle as well. So, you know, not just inference, and also not just training, but also the model as it's deployed in an infrastructure. So it's an end to end computation that includes everything across the chain, from edge devices all the way over to data preparation. And so there's various scores. For instance, there's something called the Green AI Index from the Responsible AI Institute, which is another measure, and that kind of focuses on a pretty broad spectrum. There's the AI Energy Score, which is excellent because it is focusing on just the model itself. So when you think of the life cycle, it's just focusing on the model. And they've done a great job of making it a type of measure which is consistent and repeatable. And they've done that by saying: you've got your model, here is the benchmark, you've got to run this benchmark. And you also have to run it on this particular hardware, because you can't just get a better score by running on better hardware.
You want to try and measure the model. Like, you've got to turn variables into constants to kind of get some sort of measure from that perspective. And it's really interesting, related to the next thing I'm going to talk about, the beginner's guide, a report that's coming out, because I think they did a really good job of trying to summarize different types of measurements. I think they put it as: system measurements, which are kind of very big picture, which is, I think, maybe where the SCI for AI is going to be talking about. Then there are job/application specific measurements, where you make more of those variables constants. And then there's what we call code measurements, which are: I want to measure, you know, the emissions of this piece of code. In order to do that, you really need to turn a lot of other variables into constants, so you can know, if you turned a for loop into a while loop, what the actual impact would be. And where I'd say the AI Energy Score is, in terms of that taxonomy, it lands more on the code one. And I'm saying that is the only way you can get something that is consistent, where you can actually have a model that you can really give a score to. And it does incentivize a lot of things. It incentivizes a lot of the almost code based patterns to improve model efficiency. But, because of the way it's worked, it won't incentivize other things. Like, it won't incentivize running compute in cleaner regions. Yeah, cause, Chris Adams: different kinds of energy, or different cooling, for example; you're only looking at just the code part specifically. Asim Hussain: And that's fine. Yeah, that's fine. Because if you included that, then you wouldn't be able to have a measure that is going to tell you, okay, is Llama better than DeepSeek?
They kind of just want to know that, so you need to turn these things into constants. So, it's very good from that perspective. And I think it's one of the most advanced ones. It's the best one that does its job, and it does its job, and they admit this, by having a narrow bandwidth. Chris Adams: There's one card it uses; I think it's an NVIDIA H100. I believe it's that, but I'm not sure I would know an NVIDIA H100 if it was dropped on my feet, so I need to be very clear that I'm at the limits of my expertise when it comes to hardware there. Okay, and the other thing we should probably mention is that this was one of the projects that was announced at the AI Action Summit in Paris that happened earlier this month, which had all kinds of announcements. So, in Europe, there is a 200 billion euro fund specifically for rolling out AI across Europe. There was something that was kind of like a European take on this whole ridiculous Stargate thing: a ginormous French data center thing. Asim Hussain: Yeah. Chris Adams: That was Macron doing a bit of me-too. And there was even, for civil society, a 400 million euro fund to kind of try and get an idea of the unintended consequences, or talk about how you might rein in some of the worst excesses of this new technology that's being deployed in all these places, sometimes where you're asking for it, sometimes where you might not be asking for it. Asim Hussain: So 0.2 percent of the 200 billion is for... Chris Adams: Yeah. Asim Hussain: ...which begs the question of whether this is a... Chris Adams: It does speak volumes about our priorities, about who we are serving here, I suppose, or whose needs are being prioritized when you have something like that. But yes, these are some of the ongoing conversations we actually have.
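Since the SCI keeps coming up, it may help to show the formula itself. The structure, SCI = ((E × I) + M) per R, is from the GSF's published Software Carbon Intensity specification; every number in this sketch is invented.

```python
# Software Carbon Intensity, per the GSF SCI specification:
#     SCI = ((E * I) + M) per R
# Formula structure is from the spec; all values below are invented.
E = 0.5    # energy consumed by the software for this batch, kWh
I = 400    # grid carbon intensity where it ran, gCO2e per kWh
M = 50.0   # embodied hardware emissions amortized to this batch, gCO2e
R = 100    # functional units in the batch, e.g. API requests

sci = ((E * I) + M) / R   # gCO2e per functional unit (per request here)
print(sci)  # 2.5
```

Note that because the grid intensity I is part of the score, an SCI-style measure does reward running in cleaner regions, which is exactly the dimension a fixed-hardware, code-level benchmark deliberately holds constant.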
There's just two things I want to check, because you mentioned a couple of projects that people might not be aware of that may be relevant for this conversation. So you spoke about EcoLogits. As I understand it, if you're using AI right now and you don't have a model, for example, you don't have a whole training setup, you can use something like EcoLogits to get an idea of inference. Is that the case? Asim Hussain: Yeah, I think it does have a methodology as well. So you can actually just take their methodology and, I think he actually asked us to use the word estimate, because none of this is direct measurements, right? So, estimate the emissions of a model. But they also have an API, so if you have a named model you can call the API and it will give you information about the, I do believe it's only carbon, it might be carbon and water, I can't quite remember, but it kind of gives you Chris Adams: They're French; they have five specific impact factors. There's water, ADP, like abiotic depletion, something like that. There's basically five things, and one of them is carbon, and one of them is energy, I believe. And if you're already using Claude, or you're already using OpenAI, this is just one Python package that essentially wraps the function calls you make to that API to get some of the numbers back. So, Asim Hussain: I don't think, I think EcoLogits is just for models itself, I don't think it's for, Chris Adams: Oh no, it is for inference, because we put a funding bid to the European AI Act Implementation Fund, where they were basically looking for this stuff. And the thing we realized was that if you're not doing any training, but you're just doing inference, this is one of the Python packages that will give you an idea about the numbers.
But it is very much, Asim Hussain: inference only. Chris Adams: Yeah, exactly, inference. Asim Hussain: That's one of the conversations, yeah. The biggest conversation we're having in the SCI for AI right now is whether to include training or not to include training. And one of the things about the AI Energy Score and EcoLogits is that they don't include training. The Green AI Index does include training. And, you know, it's a very, oh god, it's such a hard question; there's so much nuance to it. Chris Adams: Well, yeah, because if you're including training, then whose training are you including, right? So if I'm using, say, Llama, should some of Llama's training footprint be allocated to me, or should it not be? And we can point to existing protocols that say maybe you should, but in this case maybe that isn't right. So yeah, this is an open question right now. Asim Hussain: Well, this is where my brain is so stuck in this area. Because if you include the emissions of open source models in yours, it doesn't incentivize the reuse of models. But if you don't include open source models, if you're saying "it's open source, so I'm not going to include it," you can be a company that just goes, "I open sourced this model, so I don't have any emissions." So there's so many different ways it can go. This is a very hard question that we need to solve. I also think it's very interesting, because I suspect that us figuring out, or getting consensus on, the training question, a very nuanced discussion and conclusion to it, will actually help in many other areas of how you actually measure software. It's such a difficult question to answer that I think the solution will inform so many other areas which are slightly simpler.
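One way to see why the training question is so contentious: if training is included, its one-off emissions have to be amortized over some guessed lifetime inference volume, and that guess dominates the result. A toy sketch (every number here is invented; this is one possible allocation scheme, not a method endorsed by any of the projects discussed):

```python
# Toy amortization of one-off training emissions over inference volume.
# All numbers are invented; this illustrates the allocation problem only.
training_tco2e = 500.0            # hypothetical one-off training emissions
expected_total_queries = 1e9      # a *guess* at lifetime inference volume

per_query_inference_g = 0.5       # hypothetical operational gCO2e per query
# Convert tonnes to grams (1 t = 1e6 g) and spread over all queries.
per_query_training_g = training_tco2e * 1e6 / expected_total_queries

total_per_query_g = per_query_inference_g + per_query_training_g
print(total_per_query_g)  # 1.0 -- here the training share doubles the figure
```

Halve the lifetime-volume guess and the training share doubles, which is why reaching consensus on how (or whether) to amortize training is so hard.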
Chris Adams: It's almost as if generally accepted accounting practices, first developed hundreds of years ago, might not be all that useful for thinking about how you use open-source models and open-weight models in, Asim Hussain: Yeah, advanced technology systems. Chris Adams: Okay. Asim Hussain: It's something to do with cloning. Like, you can clone software at the click of a button; you can't clone a chip. I don't know, I haven't got fully refined thoughts on this yet. So, let's move on. Chris Adams: We'll wait with bated breath for the outputs from the workshops as you do them. All right. So, that gave us a lot of time to chat about that stuff. The other thing I'll quickly name-check from the AI Action Summit was a statement called the Within Bounds statement. I'll share a link to that. This was something the organization I'm part of worked on: Michelle Thorne, who's my colleague and normally sits next to me, was working with 120 different civil society groups to basically lay out a set of demands saying, look, if we're talking about AI and we're allocating literally hundreds of billions of euros or dollars to this stuff, can we talk about what it's for and who's benefiting from it? We'll share a link to that, because in my view it's quite well written, and it does a very good job of talking about some of the issues that we, as people in industry, might not be talking about all the time, and of showing how the rest of the world is having to respond to some of this. But the juicy one now, Asim, the one you wanted to talk about, and that we were both nerding out over, is A Beginner's Guide to Power and Energy Measurement and Estimation for Computing and Machine Learning.
This is the next story that we have, and I believe you've shared the arXiv link for this preprint. It's a really cool-looking paper, and it's publicly available for everyone right now. I think it's going to some journal, but I'm not quite sure. Asim Hussain: I thought it got published in an NREL journal. I don't know. Maybe it's not in a real journal, or maybe, now that I understand what journals are, maybe it doesn't really matter. Chris Adams: NREL here being the National Renewable Energy Laboratory of the United States of America. We've shared a link to it, and you did talk a little bit about why you like this, but can I give you a bit more space to talk about why you've enjoyed it? Because you don't need to be a beginner to appreciate it, as far as I understand, right? Asim Hussain: No, it goes into a lot of detail. I mean, it says beginner's guide; probably a little bit of imposter syndrome there, because it's very well written, so a beginner could start it, but I think it goes into very advanced topics that not many people know at all. So I'd say it goes from beginner to advanced. Yeah, I'm quite proud: Akshaya is the lead author, and Dawn Nafus is there; these are two people I worked very closely with at Intel. Very proud of this piece of work from them and the people over there. I've shared this with my team, since we're all working on thinking about how to measure energy, and it's just exciting to see how she and everybody else rationalized this all into a very easy-to-understand set of concepts. As I said before, the first thing they do to come up with this taxonomy is ask: are you measuring for a system?
Are you measuring for a job, or are you measuring for code? I think they've done a really good job of explaining the difference. They talk about measuring directly versus measuring via proxies. I love the fact that she even goes down into this idea I always bring up, that everything's a model: there's actually no such thing as a direct measurement, there's just a very advanced model. She even goes into, even if you're using a watt meter at the wall, you've really got to consider calibration. If you don't calibrate it, just like you calibrate a model, it's not going to turn out the right numbers. And it gives you a lot of cautionary tales about what to think through. I don't know if it's worthwhile going into all of it, but there's just a lot of detail about the things to consider: idle power draw, for instance. And not only that: we always knew it was challenging to measure when you're on shared infrastructure, but they go into other details where it gets even more challenging, because the information you're getting from the socket might actually contain the energy draw from the memory, and it's hard to disambiguate all of this. There are ways in which, if you're accessing memory, it increases the idle power of a CPU. There is so much great information here, and a lot of little tips as well. Chris Adams: Yeah, I would agree. If you are a beginner, there is some stuff you can take away, but there is a lot of depth inside this. I really enjoyed it too.
I enjoyed reading it so much that, actually, Dawn emailed me at the beginning of this year saying, "Hey Chris, check out this cool paper," and I really enjoyed reading it. We've actually got an interview lined up with Dawn Nafus and one of the other authors, Charles Tripp, who was writing for this and, I believe, was at NREL and then left NREL because, Asim Hussain: Because? Chris Adams: Yeah, basically, this was the way we could actually get some people speaking about it. Because since we've had a change in administration, if you're a federal employee it's much, much more difficult for you to talk about anything relating to sustainability and technology, which is a real shame, especially when it's useful to be able to draw upon the expertise of people who do this kind of stuff, right? So maybe that's a question we should ask ourselves: are we okay with the people we're asking these questions of not being able to talk to the public about this kind of stuff? But to go back to the actual paper: I agree with you, I found it really useful, and the hierarchy of interventions was really useful, because one of the key things it highlighted was where you have some control and where you don't, giving you a real chance to say, well, if I'm not able to do this, what are my options if I'm still trying to make a meaningful and measurable change? Because in many cases you do have to think about some of the trade-offs. The things you might do at a data center level to make some parts slightly more energy efficient, or more carbon efficient, can have knock-on effects elsewhere, further down the chain, basically. And this is what they do talk about. It's a really fun read if you're interested in AI.
There's so much depth, and one thing that's really quite nice about NREL specifically is that they've shared all the data to back up a bunch of this stuff. So in the podcast interview where we dive into this a bit more, we'll be sharing links to all the data sets that NREL was using when they were doing all these constant training runs to figure out what the footprint of X might be, and everything like that. It's probably one of the most useful open data sets we've seen for people who are trying to get an idea of the direct environmental footprint of using AI. Asim Hussain: I'd argue this is a seminal piece, and I imagine this is going to be essential reading for green software courses around the world. If you really want to major in green software, you should read this paper. Chris Adams: Awesome work. I don't work with Akshaya, but I guess: awesome work, Akshaya and friends. And it's probably not just for beginners, so please don't be turned off by the "beginner's" part. There's loads there. Asim Hussain: They probably put "beginner's" in to make sure the beginners read it, but advanced people might think, "I already know this. I already know TDP, so I don't need to read this." Chris Adams: Yes, by TDP you're referring to the thermal... oh, what does it stand for? Asim Hussain: I thought it was thermal design power. Chris Adams: I think you're right. This is the amount of power that gets used at certain amounts of utilization, right? So if I'm using the chip at maximum output, it's going to use this much power, but if it's only at half, it's going to be something like that.
Asim Hussain: Yeah, and it was also Akshaya who opened my eyes to understanding how these power curves work; she goes into detail here about how those power curves, the ones that tell you 10% utilization is this much power, 30% is that much, are actually made. If you read the paper, you realize those power curves are very rough estimates. You don't really know; there's no register telling you "I'm at 50%." You're just seeing how much throughput you get. Should I go into it? You basically chuck a benchmark at it and keep pushing. Say it's a website benchmark: okay, do one hit per second, okay, it's fine. You keep going until the benchmark can't go any higher, and it's now at 500,000 page views a second. "Okay, I can't seem to do more than 500,000. I must be at 100 percent utilization." That's how that calculation works. And then you think to yourself, "Okay, what does 90% utilization mean? If 500,000 was 100%, I'm just going to do 450,000 requests." That's the approximated idea of what 90% utilization means. But what it really ends up meaning is that it depends on the benchmark, because an AI benchmark will have a different energy consumption at your pseudo-90% than a database benchmark, than some other benchmark. When you look at the big benchmark suites, like SPEC's SERT and all these other ones, they're collections of different types of applications, and the power curve is the average of those. Which is why, and that's what I think the paper is saying, if you're using a power curve based off a SERT benchmark, and you're saying that's what your AI consumption is, it might not be.
You really want a power curve that has been generated by running an AI workload, because the AI workload might trigger different parts of the chip in different ways. It's very complicated. And it's one of the reasons I really like the way Kepler works. Because Kepler, Chris Adams: Sorry, I'm going to stop you there before you go on with this. The reason I'm quite happy to give some space for this is that people listening might not know that you were literally working at Intel trying to figure this stuff out while you were doing a bunch of the green software stuff. So you do have some prior art here, right? Asim Hussain: Yeah, we were basically diving into all this stuff, and I learned so much while I was over there. How Kepler works is quite interesting. Kepler is this Kubernetes-based system which does a whole bunch of things, but one really intelligent thing it does is try to figure out your energy consumption from the actual stuff that's running on the chips you're running on. It has a machine learning model: if you start Kepler off with nothing, and it doesn't know anything, it will still tell you energy numbers, but it learns, improves, and fine-tunes itself based upon, A, your actual chips, B, how your chips are configured, and C, what you're actually running on them. So you get a more accurate power reading from Kepler. One of the things I think would be great for them to do is take that out of Kubernetes, because it doesn't necessarily need to be a Kubernetes piece, but it's baked into that infrastructure. It would be generally useful everywhere.
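The power-curve idea described above can be sketched as a simple interpolation: a benchmark run yields a handful of (utilization %, watts) points, and everything in between is estimated linearly. The curve values here are made up for illustration; as the discussion stresses, a curve derived from one benchmark mix may not transfer to an AI workload.

```python
# Sketch of how a utilization-to-power curve gets used once it exists.
# The (utilization %, watts) points below are illustrative, not measured.

def interpolate_power(utilization, curve):
    """Linearly interpolate power (watts) at a utilization in [0, 100]."""
    points = sorted(curve.items())
    lo_u, lo_p = points[0]
    for hi_u, hi_p in points[1:]:
        if utilization <= hi_u:
            frac = (utilization - lo_u) / (hi_u - lo_u)
            return lo_p + frac * (hi_p - lo_p)
        lo_u, lo_p = hi_u, hi_p
    return points[-1][1]  # clamp above the last measured point

# Made-up curve: note the large idle draw at 0% utilization.
curve = {0: 60, 10: 100, 50: 180, 100: 240}
print(interpolate_power(30, curve))  # 140.0
```

The interesting part is not the interpolation but where the points come from: each one is a benchmark-specific average, so the same "50%" label can correspond to quite different real power draws for different workload types.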
Chris Adams: We will share links to both of those. And Asim, if you're able to find a link for some of this power curve nerdery, that would be very helpful, because I do know... Asim Hussain: This paper's got it, yeah. Chris Adams: Well, okay, in that case we'll use that. Because in some of the work I do outside of being on podcasts with you, I'm aware of people putting together procurement guidelines that speak specifically about this kind of stuff: please tell us what the figures are going to be for this power curve, based on these ideas here. Being able to refer to the actual literature is very helpful for people to understand why a government buyer might be asking for this, and why it's being used as one way to figure out the environmental footprint of the use of digital services. All right, we'll add some links to that one, and then we'll see what we're doing for time. Can I share one? I want to share a story from me. It's not so much about... well, it kind of is about technology. This is an executive order from the USA called Advancing United States Leadership in Artificial Intelligence Infrastructure. We've shared a link to this, and the reason I shared it is because I work in the policy working group inside the GSF, and because we speak a lot about the carbon intensity of power and stuff like that, and it's often quite rare to find really good, well-written and detailed examples of policy. This is one that, for a beautiful short period of days, was actually publicly available. Asim Hussain: I see the link is, oh no, it's a real link. No, it is the Wayback Machine. Chris Adams: It's web.archive.org: whitehouse.gov, briefing room, presidential actions, on the 14th of January.
Just before the new guy came in, there was an executive order all about, essentially, deploying AI. This was specifically about: if you're going to deploy AI on public land, and in the US there's lots and lots of federally owned public land, what kind of criteria do you want to require as a condition of people being able to put things on your land? Just the same way that people with private land can say, "you can run a data center here as long as you do X, Y, and Z," this pretty much lays out, okay, here's what you should be looking for. And it includes a bunch of really, in my view, interesting, insightful, and incisive pieces of policy. So, on the carbon intensity of power: we've spoken before on this podcast, multiple times, about how in the hydrogen sector we already have a very rigorous way of talking about how energy can really be green. We did a recent podcast interview with Killian Daly from EnergyTag talking about this idea of three pillars. Energy has to be timely: you can't have power consumed at night being greened with solar, because they're two separate times of day. Deliverable: the generation needs to be on the same grid you're consuming from, because otherwise it's not very convincing that it's really powering it. And additional: you need to have new power coming in. This executive order literally name-checks every single one of these. The actual wording they use, Asim Hussain: In terms of power, or in terms of applying that more generally? Chris Adams: This is specifically for data centers. I'll read some of the quotes from this.
Basically, as part of ongoing work, the Secretary of Defense and Secretary of Energy shall, blah, blah, blah, require that any AI data centers on a federal site will have procured sufficient new clean power generation with capacity value to meet the data center's needs. And they've literally, explicitly said it "has to be deliverable and has to be matched on an hourly basis." So those are the three things right there; they've been more explicit about "additional" elsewhere. These are the three things already in place in other industries, for the first time really laid out for how you should be doing this for AI data centers. So if you're a policymaker outside the USA, just copy this. This is probably some of the best material, particularly relating to energy policy. Asim Hussain: But does it say "shall"? You know, just for everyone who is listening, shall is a very important term; shall, in the standards space. I presume in policy... Chris Adams: You don't get to not do it, basically, is what they're saying. It's mandatory if you want to put things on federal land. Asim Hussain: You gotta. "Should" is different. So, just to check, I presume what's mandated is clean energy? Chris Adams: Yeah, "sufficient new clean power generation" is the wording they use. And later on they actually talk about what counts as clean energy, because there's a bunch of stuff in there. It's quite a long executive order, and we've had this new guy come into power who has rescinded every other executive order apart from this one, even though it's not visible. So there's some stuff inside this, Asim Hussain: There's something baked into this one. There's something which benefits something else.
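The "matched on an hourly basis" requirement is easy to show with a toy calculation: totals can balance over a day or a year even when, hour by hour, they don't, which is exactly why the timely pillar exists. All figures below are illustrative.

```python
# Toy illustration of hourly matching: clean generation only greens
# consumption in the same hour; surplus in one hour cannot cover a
# deficit in another. Figures are illustrative MWh per hour.

def hourly_matched_fraction(load, clean_gen):
    """Fraction of load covered by clean power on an hour-by-hour basis."""
    matched = sum(min(l, g) for l, g in zip(load, clean_gen))
    return matched / sum(load)

load = [10, 10, 10, 10]   # round-the-clock data center load
solar = [0, 25, 15, 0]    # daytime-only generation

# The totals balance (40 MWh generated vs 40 consumed), so an annual
# matching claim would say "100% clean". Hour by hour, only half is covered.
print(hourly_matched_fraction(load, solar))  # 0.5
```

The deliverability pillar adds the constraint that `load` and `clean_gen` must be on the same grid, and additionality that the generation is newly built, neither of which shows up in the arithmetic, only in what data you're allowed to feed it.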
Chris Adams: So there is a whole thing here about, for example: this does say, if we're going to have clean energy, we're going to call it carbon-free, and we're going to talk about not just renewables like wind and solar; they talk about the deployment of nuclear, which in America people tend to be more receptive to, in some places at least. But they even say that if you're going to have fossil generation, it needs 90 percent carbon capture, right? Now, this is a very high bar to hit, because right now there's basically nowhere, in any kind of at-scale operation, hitting 90 percent capture. So if you were to have gas, this is probably about as rigorous as you can reasonably ask. And if anyone, in the year 2025, with all the science available to us, is not saying something like this, you've got to ask: okay, who is captured here? You need to have this if you're going to be talking about the use of fossil fuels at all, and really, you probably shouldn't be using fossil fuels anyway. But this is an example of what policy does look like: if you're going to do this, do it properly. Asim Hussain: Yeah, but at the same time, I think what we're seeing, I don't know if I've got time to go into it, but the Uptime report talks about increasing demand forcing organizations' hands. There's so much demand from data centers that it's not really a question of "you've got to use clean energy." It's "you don't have the energy," or you now have to go to demand response. And that's also driving up pressure on those organizations.
They're walking back a lot of the stuff from previously, and there's a lot of fossil fuel generation being rolled out. Asim Hussain: I have not verified this at all, but today I saw something on my feed which said that Elon's, Chris Adams: You might be thinking about the xAI data center, the one in Memphis, running Asim Hussain: In Memphis, there are going to be, like, 15 gas turbines to power it. Which probably is because the utility said to him, "You're putting an unbelievable load on our grid. We do not have the capacity for you." And he probably went, "Ah, I'll build my own gas generators without asking anybody." Chris Adams: There is a bit of a story behind this. Essentially, the xAI data center was built very quickly by data center standards. And usually, if you want power for a data center, you're going to have to wait some time if it isn't already available. Basically, the approach taken was to deploy a bunch of mobile gas turbines to provide the extra megawatts of power needed. Now, the problem is these are really bad for local air quality, so you're shortening the lives of all the people who live around there, for a start. And one of the reasons you're able to do this is because, since they count as mobile generators, they're not covered by the same clean air laws. So essentially this is stuff which has a real human cost, right? This is an already marginalized and racialized community that already has very bad air and elevated cases of asthma and all the stuff like that. So there is a real human cost being paid here, and the decision has been made: "We're going to use this, because we've decided it's more important than the lives of the people around here."
So, like, that's essentially what it comes down to. Asim Hussain: But also, I'm guessing from the fact that this was an active executive order a few months ago that that data center wasn't on federal land, or something like that? Chris Adams: This is somewhat separate. For a start, in the xAI case in particular, the local laws about air quality don't apply to mobile generators. Asim Hussain: Oh. So even with this executive order, you can always get around it by just going mobile? Chris Adams: Well, this executive order came later. The xAI thing is something we saw last summer. The order was only published in January, and it was literally on the White House website for seven days before the new guy came in and took it down. It's also worth bearing in mind that executive orders are not law. So even though someone can say you need to do this, that doesn't mean it overrules existing law, for example. Absent any other law, this is what you can ask for, and this is why they're able to say, for federal land, these are the things we'd be doing. There's actually a bunch of other really good stuff inside this, in particular on air quality. As a contrast to saying, "It's okay to use this stuff, who cares whose lives are shortened?", there's a whole piece on environmental justice saying that if you're going to deploy data centers on public land, you need constant air monitoring, and you need to make it visible for everyone else to see as well. These are things you could totally take away as examples. And they've also literally said:
If you're going to deploy, you can't deploy in places which have traditionally had poor air quality, below a certain air toxics screening threshold. So basically, places which have already been harmed: you don't get to deploy in those places anymore. This is why I think it's actually quite well-written stuff, because it takes into account all these things which have been coming up again and again. So if you were trying to come up with a policy for deciding how you deploy, there is so much you can lift from this for your own corporate policies or anything like that. Asim Hussain: There are very few benefits to a local community from having a data center built near you. There are very few jobs; there's a couple of people walking around this giant warehouse, and they've sucked up all your electricity. I don't know. The data center industry needs to... it was fascinating to me: I was at an infrastructure conference last year, chatting to a gentleman from the utility sector, I won't name his name, and he said something very interesting. He believes the data center industry, and this was before he-who-shall-not-be-named entered office, so this was before all that happened, but he was saying he thinks the data center industry is headed right towards full regulation, the same way utilities are regulated. If you want to build a power plant, you can't just go, "Oh, it's gonna make me a lot of money, I'm gonna build a power plant here." You have to go through so many checks and balances. Your profit is limited; everything is limited. And he was saying, based upon the conversations that are happening: you're claiming that this technology is so fundamental to life and existence that it is therefore a commodity, therefore something similar to energy.
Energy utilities can't just say, "Ah, we're going to rack up our prices 43 percent because everybody wants it." They'd be regulated for that. So he was making a very convincing argument to me that if the data center industry is not careful, it's going to get regulated that way, and they don't want to be regulated that way. It's not fun, apparently. So I think things like this really matter. Chris Adams: Yeah. Asim Hussain: They really do matter. You have to think about it. If you're running a data center, you can't not think about the impacts on the region you're in. You've got to really put effort into being a net positive benefit to the place you're being installed in, locally as well. Chris Adams: So the argument you're making is that if you're going to present yourself as a utility, something foundational that everything runs on, then maybe you should expect utility-style profits rather than SaaS-style profits. Because the margins you might see from certain tech giant companies are around 30 percent, for example, whereas utilities might be looking at around 10 to 15 percent, with different kinds of oversight being introduced. So yes, this is a conversation we might have; I suspect it would be longer than the time we have available, but it's something we can point to. Just following on from this, you did mention this Uptime Institute report; we'll share a link to that as well. And I think we might be in a situation where we have a bit of a fight on our hands, or we might be seeing a fight taking place, because we do see it in Europe, for example, which is arguably the place where fights around data center deployment are the strongest.
We've just seen new rules published about what criteria you need to meet if you're going to connect data centers to the grid. This was published, I think, last week, and we'll share a link to it. In contrast to what we've just talked about, where the US policy was very clear and very good, we now see, essentially, a guideline saying you can connect data centers to the grid, but you need to have your own generation and you need to integrate nicely with the grid; there's no mention of climate change, no mention of the local environment or anything like that. This is likely to incentivize even more on-site fossil-based generation, absent any other criteria being in place. So we might see this being challenged. But I think I agree with you: we currently have this case where, yes, you've got all this new technology being deployed, but there's a fight where there's almost zero regulation, and it doesn't feel like it's going to last. Asim Hussain: You don't think the absence of regulation is gonna last? Chris Adams: I think what's probably going to happen is that, if things continue like this, there will be so much pushback that you'll end up with much, much more heavy-handed legislative responses. Because right now there's been this push to essentially neuter any kind of meaningful, science-based or data-informed discussion around this. All that does is play into the hands of a much, much more dramatic response later on. So if you want to deploy stuff, this does feel, long term, not very helpful for you. But then again, there's a question about how much of this we actually need to be deployed. There's probably a democratic discussion to have about that.
Asim Hussain: Well, we haven't even spoken about DeepSeek and its impact on this whole question. Going back to that conversation: that person from the utilities, their big question was, because the data center providers, everybody, is telling them "we need a lot more energy in the future," they're going, "Well, my God, do we actually put the effort in to try and roll out this new capacity, only to find out two years later, 'ah, we got it wrong, I'm sorry, we won't need that'?" They're asking the "is it BS?" question, because they really need to figure it out. And I was thinking, okay, they might have just been convinced. Then DeepSeek comes along, and now everybody's asking, "Huh, will we need this capacity upgrade?" And as soon as DeepSeek came along, everybody said, "Yeah, that's great. Now we're gonna do even more AI. We definitely need the capacity, but now we can do more with it." And you're like, well, hang on. Because there's often a thing that goes on: you have to create the hype to get the funding. If you want to convince investors to invest in your organization, you have to create the hype. What DeepSeek's done is pop that, and I don't know how much it's popped it, because only the investors know. But I wonder if it's popped it quite significantly, and whether we're going to see a significant pullback. Is Stargate really going to happen, or does it not really matter? Maybe it's just a reason to hand out 500 billion, because, you know, why not? Chris Adams: We can share a link to a good paper from Sasha Luccioni and friends talking a little bit about Jevons paradox.
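The Jevons paradox question here is, at its core, simple arithmetic: a per-query efficiency gain is swallowed if demand grows faster than efficiency improves. A toy example, with made-up numbers:

```python
# Toy rebound-effect (Jevons paradox) arithmetic: a 10x efficiency gain
# per query is wiped out if demand grows 20x at the lower cost.
# All numbers are invented for illustration.

energy_per_query_before = 10   # arbitrary energy units per query
queries_before = 1_000

energy_per_query_after = 1     # 10x more efficient per query
queries_after = 20_000         # but 20x the demand

total_before = energy_per_query_before * queries_before  # 10,000 units
total_after = energy_per_query_after * queries_after     # 20,000 units

# Despite the efficiency gain, the absolute footprint doubled.
print(total_after / total_before)  # 2.0
```

Whether DeepSeek-style efficiency gains reduce or increase the total footprint comes down to which of these two numbers, efficiency or induced demand, grows faster, which is exactly the open question discussed here.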
I've actually written a blog post about this as well, particularly for DeepSeek, to kind of make this accessible for people who are trying to understand: is this going to reduce the footprint or is it going to increase the footprint? Because there's a few different criteria you want to take into account. Asim Hussain: If it pops the bubble, it will decrease the footprint, I think. Chris Adams: That's the thing we can look into and decide. Because the flip side is that if this lowers the barrier so that more people are able to use it in more places, that can lead to an absolute increase. So there are different takes on this, and it's very much a case of, okay, this is one thing we'll share a link to. Asim, I think we've gone down a bit of a rabbit hole, so we should probably look at the events, if there's anything particular we have here. Asim Hussain: So, there's a couple of events coming up. There is Practical Advice for Responsible AI on February the 27th at 6pm in London. It's a UK event, and it's gonna be held in person in the Adaptavist offices, and it's gonna cover green AI with Charles Humble and AI governance with Jovita Tam. There's the GSF Oslo meetup happening, again, on February 27th at 5:00 PM. It is in person in the Accenture offices from 5 to 8 PM. And they're going to talk about how to leverage data and technology to drive sustainability initiatives and enhance security measures, and dive into green AI, obviously. There's going to be talks from Abhishek Dewangan and Jonny Mauland. I do apologize. Chris Adams: Sorry... Asim Hussain: If I've misread them, I'm sorry, Jonny. Sorry, Abhishek. Details in the podcast notes. And I think that's it. I'll pass over to you, Chris. Chris Adams: Yeah. Okay, then. I think that takes us to the end of what we have for this.
Asim, if there's a particular free resource you would point people to right now on green software as a final thing, what would you point people to as a parting thought? Asim Hussain: Oh, honestly, it's that beginner's guide. I don't know if it's... it is very good. I read the Beginner's Guide to Power and Energy Measurement and Estimation for Computing, and that's my last word. Chris Adams: Wow, Akshaya better be getting a promotion after this, man. So yes, I agree. It was a really fun read. If you want to basically sound knowledgeable about AI, this is probably the most useful thing to read. And that's as someone who's written a report all about the environmental impact of AI ourselves, where we work. All right, Asim, it's really lovely to see you again, mate. Thank you so much for coming on. I hope the people who did listen to this were able to stay with us and we didn't get too self-indulgent. And if we did, please do tell us and we'll make sure that we don't do it too much next time. Otherwise, I'll see you in one of the future episodes of This Week in Green Software. Thanks, mate. Asim Hussain: See you later, mate. Chris Adams: Toodle-oo! Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
How to Tell When Energy is Green with Killian Daly (1:00:37)
In this episode, host Chris Adams is joined by Killian Daly, Executive Director of EnergyTag, to explore the complexities of green energy tracking and carbon accounting. They discuss the challenges of accurately measuring and claiming green energy use, including the flaws in current carbon accounting methods and how EnergyTag is working to improve transparency through time-based and location-based energy tracking. Killian shares insights from his experience managing large-scale energy procurement and highlights the growing adoption of 24/7 clean energy practices by major tech companies and policymakers. They also discuss the impact of green energy policies on industries like hydrogen production and data centers, emphasizing the need for accurate, accountable energy sourcing and we find out just how tubular Ireland can actually be! Learn more about our people: Chris Adams: LinkedIn | GitHub | Website Killian Daly: LinkedIn | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter Resources: GHG Protocol [09:15] Environment Variables Podcast | Ep 82 Electricity Maps w/ Oliver Corradi [32:22] Masdar Sustainable City [58:28] If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn ! TRANSCRIPT BELOW: Killian Daly: We need to think about this kind of properly and do the accounting correctly. And unfortunately, we don't do the accounting very well today. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. 
Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. When we write software, there are some things we can control directly. For example, we might be able to code in a tight loop ourselves, or design a system that scales to zero when it's not in use. And if we're buying from a cloud vendor, like many of us do now, we're often buying digital resources, like gigabytes of RAM and disk, or maybe virtual CPUs, rather than physical servers. It's a little bit less direct, but we still have a lot of scope to control the impact of our decisions, and what kind of environmental consequences come about from them. However, if we look one level further down the stack, at how the energy powering our kit is sourced, our control is even more indirect. We rarely, if ever, directly choose the kind of generation that powers the data centers that our code runs in. But we know it still has an impact. So if we want to source energy responsibly, how do we do it? If you want to know this, it's a really good idea to talk to someone whose literal job for years has been buying lots and lots of clean energy, who is intimately familiar with the standards involved in doing so, and who has spent a lot of time thinking about how to make sure you can tell when the energy you're buying really is green. Fortunately, today I'm joined by just that person, Killian Daly, the Executive Director of the standards organization EnergyTag. Killian, it's really nice to have you on the pod. Thanks for coming on. Killian Daly: Yeah, thanks. Thanks very much for having me, Chris. Great to be on the pod, and I'm an avid listener also. So it's always nice to contribute. Chris Adams: Thank you very much.
Killian, I'm going to give you a bit of space to introduce yourself. I've just mentioned that you're involved in EnergyTag, and we'll talk a little bit about what EnergyTag does. Because I know you, and because I met you maybe three years ago, I figured it might be worth just talking a little bit about our lives outside of green software and sustainability. So, we were in this accelerator with the Green Web Foundation talking about a fossil free internet, and you were talking about EnergyTag and why it's important to track the provenance of energy. I remember we were asked about our passions, and you told me about surfing, and I never thought about Ireland as a place where you would surf, because I didn't think it was all that warm. So can you enlighten me here? Because it's not the first country I think of when I think of surfing, and when you said that I was like, "he's having a joke, right?" Killian Daly: Yeah. Well, I do like to joke, but this is not actually one of the jokes. Well, it doesn't need to be warm to surf. You just need to have waves, I suppose. So, yeah, it's something since I was really very young. I've always gone to the west coast of Ireland. Beautiful County Clare, near the Cliffs of Moher. Maybe people know of them. And so we go every year. And my cousins, since a very young age, started surfing. We just, you know, saw these big waves and there's other people out there, surfing, bodyboarding, and we're like, "Hey, let's try that out. That looks really cool." So, yeah, since I don't know, 6 or 7 years old, I've been going there every year, in summer, also in winter. Me and my cousins also go at New Year's and get into the frigid cold Atlantic. And, yeah, it's magic, really.
If you have the right wetsuit, you can get through anything. Chris Adams: So there's no such thing as bad weather, just bad clothing, and that also applies to wetsuits. Killian Daly: Yeah. It couldn't apply more. And obviously, in winter, you get the biggest swells, right? So actually, people probably don't know it, but Ireland has some of the biggest waves in the world. On the west coast of Ireland, you have really massive 50, 60 foot waves. You can really get some sort of all-time surf there. So, yeah, it's one of our better kept secrets. Chris Adams: I was not expecting to learn how to go totally tubular on this podcast. Killian Daly: Yeah, Chris Adams: Wow, that's, yeah, that's... Killian Daly: It's not for the faint of heart, but yeah, I would definitely recommend it. Chris Adams: Actually, now that you mention that, and going back to the world of energy, now that people talk about Ireland as the Saudi Arabia of wind, and it being windy AF, then I can kind of see where you're coming from with it. It makes a bit more sense. So yeah, thank you for that little segue, Killian. Okay, so we've started to talk a little bit about energy. And I know that the organization you work for right now is called EnergyTag. But previously, as I understood it, you worked in other organizations, and you've been working as a kind of buyer of energy, so you know a fair amount about actually sourcing electricity and how to do that in a responsible way. And when we spoke about this before, you mentioned that, "yeah, I'm used to buying significant amounts of power" in your previous life.
Could I just like, could you maybe talk, provide a bit of a kind of background there, and so we can talk a little bit about context and size, because that might be helpful for us talking about the relative size that tech giants might buy and so on, and how much of that is applicable. Killian Daly: Yeah, sure. Yeah, so, I've been thinking about energy for a long time, even before my professional career studied energy and electrical engineering since I was 18 years old and did a master's in that, also. And then obviously in my working life as well. I've been basically always in the energy sector. So before EnergyTag, I was basically overseeing the global electricity portfolio, and the procurement of electricity for a company called Air Liquide, which is basically a large French multinational that produces, liquid air. So, oxygen, nitrogen, all the different parts of air which are, essential, feedstocks into various industries, and they consume a lot of electricity. So, the portfolio my team oversaw was about 35 to 40 terawatt hours of electricity consumption. Chris Adams: Okay. Killian Daly: Yeah, it's a lot, it's more than my home country, Ireland. It's about the same as Google and Microsoft Chris Adams: put together, yeah. Okay, so, wow. And Killian Daly: So, it's pretty big stuff. And obviously, when you're working on something like that globally, looking at various electricity markets operating in 80 countries in these huge volumes, I suppose you, kind of learn a lot about what it means to buy power. Chris Adams: I guess if you're looking at something which is basically as much power as an entire country, then there's going to be like country sized carbon emissions, depending on what you choose to power this from. And I guess that's probably why you, I mean, we, have ways of tracking power. 
There are ways of tracking the carbon emissions from things like this, like the GHG Protocol, which is the kind of gold standard for talking about some of that stuff. And this is something that I think you have some exposure to. I remember us sitting down one time and you were telling me there's a thing called scope 1 and there's a thing called scope 2, and that scope 2 was actually a relatively new idea. Could you maybe explain, to someone who's heard of carbon footprinting and knows there's a thing called scopes, why would anyone care about scope 2 in the first place? And how did it come about? Because it seems like it's not intuitive for most people when they start thinking about carbon footprints and stuff like that. Killian Daly: Yeah. I think the obvious first thing you need to take into account when you think of a company's emissions is, well, what are they burning themselves on site? Do they have gas boilers burning gas? Are they burning coal to produce electricity? So that's, I think, very intuitive and obvious. But actually that is not the end of the story. And there's actually a very funny anecdote, a true anecdote, from the legendary Laurent Segalen, who does the Redefining Energy podcast and is a general energy guru. He was actually involved in the creation of a lot of the carbon accounting standards that are used today, this Greenhouse Gas Protocol standard, which is basically used by over 90 percent of companies now to report their carbon emissions. It is the Bible of how carbon accounting works, right? And so, 20 years back, he was down in Australia, visiting an aluminum smelter. On site, they were explaining, "this is a very low carbon product. We hardly burn any fossil fuels on site. This is incredibly clean production."
Chris Adams: The aluminium here, right? Big chunks of aluminium. Okay, right. Killian Daly: Aluminum, aluminum smelting. So, one of the biggest metallic commodities that we have, very energy intensive. And so, he was there on site and just saw these big overhead wires coming in from yonder, from somewhere, right? And he said, hang on, what are those big cables above? And they were like, "oh, yeah, that's the electricity," obviously driving the smelter, because aluminium, it's all about electricity. That's what powers an aluminium production facility. And so he said, well, hang on, where is that coming from? They're like, "oh, no, don't worry about that. That's not our responsibility." Well, it absolutely is, right? So you need to think about where is that electricity coming from? How is that being produced? And in that case, it was coming from a very large multi-gigawatt coal power plant right next door. Chris Adams: Okay. All right. So I thought you were gonna say it was maybe something clean, like a hydro power station, but no, just a big, fat, dirty, great coal-fired power station was the thing generating all the power for it. Killian Daly: Absolutely. So, that's just a bit of an anecdote about why it's so important to think about what we call scope 2 emissions, the emissions of the electricity that I'm consuming. Because especially as we electrify the economy, right, more and more emissions are going to become scope 2 emissions. They're going to be related to someone else either burning fossil fuels to produce electricity to give to a consumer, or, ideally, using clean energy sources to generate that electricity without carbon emissions. We need to think about this properly and do the accounting correctly. And unfortunately, we don't do the accounting very well today. Chris Adams: Alright, so previously, there wasn't even this notion of scope 2 in the standard
, you might have just had direct, and then maybe this kind of bucket of indirect stuff, which is really hard to measure, so you're not going to really try to measure it. And okay, so, I remember actually reading about some of this myself, and I always wondered, like, where do some of these figures come, where do, where does even the notion of a protocol like this come from? And one of the things I realized was, particularly with the GHG one, was that they're like, when I listened to Laurent Segalen speaking about some of this, he was basically saying, yeah, this was essentially like Shell, the oil company, who basically said, "we have a way of tracking our own emissions." And, why not use that as a starting point for talking about how we do carbon accounting? And then, scope 2 was a new concept. That was one of the things that they were kind of pushing for. But I suppose this kind of speaks to the idea of, who's in those rooms for those working groups to kind of, that is going to totally change the framing of how we talk about some of this. And I guess that's probably why this, is this a little bit like why you started talking and getting involved with things like EnergyTags so you could take part in those discussions? Because it feels if this is what we're going to use to define how we do this or how we do that just like you have people talking about okay BP had an impact of changing how we think about carbon footprints from, from an individual point of view. But you do need people involved in that conversation to say, "actually, no, that's possibly not the best way to think about this, and there are other ways to take this into account." I mean, is this why you got involved in the EnergyTag stuff? Killian Daly: Yeah, it's one of the main reasons, because I used to do, so, work for one of the world's largest electricity consumers. And so I was responsible for calculating all of the electricity emissions for that company, right? Like doing the scope 2. 
And so I read the Greenhouse Gas Protocol back to front. That was how all the calculations were done. That's what qualified clean and not clean, right? And I remember thinking, "this is an insanely influential document," right? It's kind of in the weeds. It's kind of staid, maybe, to some people, but I was... Chris Adams: A kind of tedium around it, here. Killian Daly: Yeah. But the more I've gotten involved in things like regulation and conversations like that, it's in the annexes, it's in the details, that the big decisions are often made. So I remember thinking back then, this is insanely influential, and some of the ways that we're allowed to claim to consume clean energy are, frankly, disconnected from reality in a way that is just not okay, right? As in, this is far too weak. And definitely, I thought, someday I'd love an opportunity to be able to say, "hang on, can we fix this, please? Can we do this differently? Can we start to respect some sort of basic realities here?" So, yeah, it was definitely one of the drivers of why I joined EnergyTag, which is obviously a nonprofit that has as its mission to clean up accounting, right? To clean up the way we think about electricity accounting. So, yeah, obviously it's a great honor, I suppose, to be part of those ongoing discussions in the Greenhouse Gas Protocol update process. Chris Adams: So, we spoke before about how there was no scope 2 at all, right? So the bar was on the floor. Right, and then we introduced the idea that, oh, maybe we should think about the emissions from the electricity. So that was kind of a bit of a leap forward, by one person pushing for this, that otherwise wouldn't have been in the standard at all, right?
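The scope 2 idea being discussed here can be put in concrete terms with a toy calculation. This is a minimal sketch of the location-based approach, where emissions are consumption multiplied by the average emission factor of the local grid; the function name and all figures are invented for illustration, not taken from the GHG Protocol itself.

```python
# Minimal sketch of a location-based scope 2 calculation.
# All figures below are invented for illustration, not real data.

def scope2_location_based(consumption_kwh: float, grid_factor_g_per_kwh: float) -> float:
    """Scope 2 emissions in kg CO2e: consumption times average grid intensity."""
    return consumption_kwh * grid_factor_g_per_kwh / 1000.0

# A site drawing 10,000 kWh from a grid averaging 400 gCO2e/kWh:
print(scope2_location_based(10_000, 400))  # 4000.0 (kg CO2e)
```

The market-based method the conversation turns to swaps that grid-average factor for factors derived from the consumer's own energy contracts, which is exactly where the matching rules get contentious.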
And I just realized actually now that you mentioned that, we spoke about oil firms being very involved in this and being very organized in this, and I remember people talking about Shell, that's what you use, and how much, and I'm just realising, oh Christ, Shell's in the Green Software Foundation as well. We should, that's something I didn't really think so much about, but they're also there too. So they are organized. Wow. So let's move on. So maybe we could talk a little bit about scope 2 here. The thing I want to kind of get my head around is I'm like, can you maybe talk me through some examples of where this doesn't, this falls down a little bit, where might be a little, stretching your, you spoke about the physicality, the physical reality. where does it need a bit of work, or need some improvement that you're looking to do, looking to address in EnergyTag, for example? Killian Daly: Yeah, so basically, one way of doing scope 2 accounting is basically looking at the energy contracts or the electricity supply, contracts that companies have and saying, well, where are you buying your energy from? How are you contracting for your power? Right? And there's a kind of a number of fundamental issues. One of them is around the temporal correlation, or between when you're consuming electricity and when the electricity you're claiming to consume is being produced. And today, right, we actually allow an annual matching window between production and consumption. And put in simple terms, what that means is that you can be basically solar powered all night long, right. You can take solar energy attributes from the daytime and use them at nighttime, or you could take them from the daytime in March and use them at nighttime in November. At any other time of year. And this just does not make sense, right? Chris Adams: Not physically how the science works for a start. 
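The annual-versus-hourly matching problem described above can be made concrete with a toy calculation. In this sketch, with invented hourly numbers over a simplified four-hour "day", a steady load looks 100 percent solar powered under annual matching even though only half of its consumption is matched in the hours the solar actually generated.

```python
# Toy comparison of annual vs hourly certificate matching.
# Each list holds MWh per hour for a simplified 4-hour "day"; invented numbers.

consumption = [10, 10, 10, 10]   # steady load, day and night
solar_certs = [0, 25, 15, 0]     # solar certificates, only generated mid-day

# Annual matching: totals over the whole period are allowed to cancel out.
annual_matched = min(sum(solar_certs), sum(consumption)) / sum(consumption)

# Hourly matching: a certificate only counts in the hour it was generated.
hourly_matched = sum(min(c, g) for c, g in zip(consumption, solar_certs)) / sum(consumption)

print(f"annual matching: {annual_matched:.0%}")  # 100% "solar powered"
print(f"hourly matching: {hourly_matched:.0%}")  # 50% actually matched
```

The gap between the two figures is the nighttime consumption that annual matching lets you paper over with daytime certificates.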
Maybe if I can just dive into that a little bit in a bit more detail because you've mentioned this idea of certificates for example or like claiming like that and as I understand it if I am running a solar farm right I'm generating two separate things. I'm generating power but I'm generating the kind of greenness so these are two independently sellable things which will sometimes be bundled together. That's how I might buy green energy. But under certain rules, they're not. They can be separated. So it's like the greenness that I'm moving or I'm buying and kind of slapping onto something else to make it green. Is that? And if it's at the same time, it's kind of okay. If it's from totally separate times of day, you do like you mentioned where you're saying this thing running at night runs at solar, is running on the greenness from a solar farm, which is stretching the, well, our imagination, I suppose, and your credulity, I suppose. Okay, so that's one example of this is something that you wanted to get, wanted to get fixed. Are there any other ones, or things that you'd point people to, because Killian Daly: I think you know the. The other, the other aspect, I think that's pretty, problematic in today's standards is so we've talked about time and the other big one is space, right? Today we allow consumers to claim to use green energy or clean energy over vast geographical boundaries that really don't respect the physical limits of the grid. So, for example, the whole U. S. is considered to be one region, right? So you can buy green energy attributes produced in Texas and say that you're using them in New York. So you could be 100 percent power by Texas solar in New York. Or if you're in Europe, Europe is considered of one region. So you have really absurd cases where you can be powered by Icelandic hydro in Germany, and Iceland has never exported any electricity to anyone. There's no cables leaving Iceland. So, that just doesn't make sense. 
And this has real consequences because what we're trying to do is obviously drive consumers to buy green energy. If they're doing it in this way, then they're kind of, in some cases, pretending to buy green energy rather than actually going and buying green energy and incentivizing more production of green energy and clean flexibility that's needed to integrate that solar and wind, at every hour of the day. So, that time and space kind of paradigm is maybe a good way of thinking about, some of the fundamental issues here. There are other ones. I don't know how far we want to go into the rabbit hole, but that's two very high level, and hopefully very kind of understandable examples of the problems we have with today's carbon accounting. Chris Adams: Yeah, I think I understand why that would be something we would address, and so presumably this is the thing that EnergyTag's looking to do now. You're basically saying, well, the current system is asking you to make quite spectacular leaps of faith. And there are certain places where you do want to do leaps of faith and be super creative, but accounting might not be where you want to be super creative or super jumpy. That's not always where you want to have your innovation. So that's, this is, so you're saying, well, let's actually be, make this more reflective of what's really happening in the world. So that we've got like some kind of solid foundation to be working on. So, Exactly. Killian Daly: And just maybe on that point, this is not what we advocate for is not, it's not anything radically new, to be honest, because the way electricity markets work today, the way electricity utilities deliver power to customers, just you know, let's say pure gray electricity on electricity markets. It is based on fundamental concepts of time matching. Power markets work on a 16, sorry, a 60, 30 or 15 minute, like balancing period. In Australia, it's 5 minutes. In Europe, there's things called bidding zones. 
So that's the area over which you can buy and sell electricity. And all of this is to capture these fundamental physical limits of the power system. You have to balance it in real time. And there's only a certain amount of grid capacity. And so you need to recognize areas over which it's reasonable to trade power or not. So all we're saying is, make the green energy market much more like the real power market. So we're actually, if anything, trying to make it a bit more common sense, whereas today, we're quite detached from some of those basic limits. Chris Adams: Ah, I see. Okay. So in fact, there are some comparisons you could plausibly make. There's a push right now for people to treat environmental data with some of the same seriousness as financial data, and to apply some of the same constraints. It sounds like something a little bit like that. So if people have to take into account the physical constraints when they're purchasing the actual power part, they should think about applying the same ideas when they're thinking about the greenness of it as well. You can't kind of cheat, even if it makes it a bit easier, for example. Killian Daly: Yeah, well, exactly. And, ultimately, what are we trying to do here? Is the purpose so that certain consumers can say that they have no emissions, or is the purpose to set up an incentive system so that when those consumers actually do say they have no emissions, they've gone through all of the challenges of grid decarbonization? So they've bought renewables. They've invested in storage. So, fine, you can consume solar power at nighttime if you put it in a battery during the daytime. They're thinking about demand flexibility. Are they consuming a bit less when there's less wind and sun? These are hard challenges, right?
We need to do a lot more of those types of things, and a proper accounting framework will make sure that, in getting to zero, you have to think about and tick all of those boxes. Whereas today, you can just be 100 percent solar powered, and obviously that's just not going to lead to the grid decarbonization in the real world that we want to see. Chris Adams: Maybe if you're in space it might work, but mostly no. Okay. Killian Daly: Mostly, no. Yeah, Chris Adams: Okay, so we've spoken a little bit about why there are some problems with the existing process, and hinted at some ways you could plausibly fix this. Could you talk me through some of the key things that EnergyTag is pushing for in that case? Because it doesn't sound like you're trying to do something totally wacky. It's not like you're asking for a dramatic change, like saying you're not allowed to split the greenness from power or stuff like that. It sounds like you're still working inside the current ways that people are used to buying power at the moment, right? Maybe you could tell me how it's supposed to work under the newer schemes that you're working with. Killian Daly: Yeah. So basically what we're advocating for is that, if you're gonna claim to use green energy based on how you contract for power, then, well, you have to temporally match, right? So you can only claim to use green energy produced in the same hour as your consumption. Not in the same year. Okay, that's number 1. Number 2 is we need to think about the deliverability constraints, right, and this geographical matching issue. And what we're saying is that, for example, Europe is not a perfectly interconnected grid.
And so you shouldn't be able to claim you're consuming green energy from anywhere else in Europe. You should be doing it in the same bidding zone or, at least... Chris Adams: There needs to be some physical connection to make delivery possible. Okay. Killian Daly: Or, fine, you can go across a border, but you have to show that the power actually did come across the border, and that you're not importing 10 times more certificates than real power between 2 countries, right? So we need to have those limits put in place. And another thing that we think is important is that there needs to be some sort of controls on individual consumers just buying a load of certificates, for example, from very old assets, and totally relying on those to be 100 percent green. For example, if I'm in Germany, right, and I just sign a deal with a hydro power plant that has existed for 100 years, and I'm time matched, and I'm also within Germany, spatially matched, and I'm claiming to be 100 percent renewable... Chris Adams: It's not speeding the transition if it's a hundred years old. That feels like it's stretching the definition of being an agent of change. Okay. Killian Daly: So that's the third thing in this 3-pillar framework, as we sometimes call it, and that is very important. I think for an existing consumer, it is legitimate to claim a certain amount of that existing power, but that must have a limit, right? You can't just be resource shuffling, saying "well, I'm the one who's taking all the green energy," while everyone else is left with the fossil; that needs to be controlled also. Chris Adams: All right. I think I follow that. So basically, timely: it has to be more or less the same time, right? Deliverable: you need to be able to demonstrate that the power could actually be delivered to that place. So deliverable there.
And this other one was additionality: we need to transition, so you can't look at something that is 100 years old or 50 years old and say "I'm using that, I'm fine." There is this notion of bringing new supply on stream to displace, or move us away from, our current fossil-based default, which is not great from a climate point of view, right? Killian Daly: Exactly. And a really good friend of mine at the Rocky Mountain Institute, Nathan Iyer, smart guy, we've worked a lot on US federal policy topics, has a really good analogy about this stuff: BYOB, right? For these 3 pillars. When you're going to a party, you need to bring your beer to the party on time. You can't bring it yesterday; you need to bring it when the party is happening. You need to bring it to the party, not to another party. And it needs to be your own beer; you can't just be taking someone else's. It's a bit simplified, but it's a good analogy for what we're trying to get at here. If we get everyone to start thinking that way and acting on those fundamental principles, obviously we're going to end up being much more effective in deeply decarbonizing our power systems. Chris Adams: So, decarbonization of the grid communicated through the power of carbonated beverages, basically. Wow! Killian Daly: What could be better? Chris Adams: Well, it's topical; at least it's still talking about CO2, just on slightly different scales. I quite like that, actually. I might borrow that one myself. Okay. So, there's one thing that you mentioned then.
So this notion we spoke a little bit about before, the idea of greenness that can be split: you're still keeping that. You're not saying there's a ban on selling power that is unbundled from it; that flexibility is still a key idea. Someone who isn't familiar with it might say, "why do we even have this idea of being able to separate these in the first place? Doesn't this make things much more complicated?" I might be going down into the weeds, but is there a reason for that? Is it just that it's such a big change, that it's really hard to get people to shift to a new way of doing things? What's the thinking around that part? Killian Daly: Well, basically, any time you want to claim or have a contract, whether that be an unbundled contract or a bundled PPA contract, Chris Adams: Power Purchase Agreement, right? Killian Daly: Yeah, a long-term power purchase agreement, for example. So any time you have a contract for a specific type of electricity, you need an accounting mechanism, or a tracking mechanism, that sits on top of the grid and allocates generation to consumption. Because the way the grid actually works is that electrons are just oscillating around the place; there's not really a methodology to physically trace that this individual electron started here and went there, right?
And so, much like power markets do, and they have mechanisms for contractually allocating power between different buyers and sellers as long as it's matched in time and space, that's a fundamental premise of how our power markets work, we're basically borrowing that concept but attaching the greenness attribute, Chris Adams: Ah, Killian Daly: and saying, "provided that this system of detaching greenness from the power is respecting temporal and geographical matching requirements, deliverability requirements, sufficiently, then that should be the basis of legitimate green claims," and that essentially creates a market mechanism for financing renewables. If you don't do that, then you cannot have a green power market, basically. You don't have a way of differentiating buyers who have contracted for green power from those who are not doing anything. For example, a few years ago in Air Liquide, we didn't look at what contracts we were sourcing. We just did this location-based accounting, where you take an average of all the generation in the grid. Which is another way of looking at electricity emissions, and a very valid way of doing it. But one disadvantage it has is that it basically leaves all consumers passive. They have no incentive to do anything in terms of driving electricity decarbonization. So that's why we need these mechanisms of essentially having tracking systems. Chris Adams: Oh, okay, I see. So if there's no recognition, if I'm working at a large company, why would I choose to buy something green if I can't be recognized for doing that green step? And so the downside of the location-based approach is that, yes, it gives you one single answer, but it takes away this idea that organizations, which honestly have massive amounts of resources, can influence or speed up a transition.
It seems to be trying to respect that reality, or at least acknowledge that this is what we expect of organizations if they're that powerful. Killian Daly: And, I know you've had Olivier Corradi from Electricity Maps on before; they've done some very good blog series on this topic. They obviously have insanely deep knowledge of grid emissions; there's really no one better that I've come across. And they did a nicely simplified explanation of this stuff. You have the location-based method, which is maximizing physical accuracy, and then you have the market-based method, which is trying to maximize incentives and financing. And this 24/7 accounting framework that we're advocating for is basically trying to make those things meet in the middle, right? Today we have a market-based system that is too focused on, I would say, flexibility, making it easy for people to say they're green, and so it has led to very valid criticism. What we're trying to do now is bring that market-based mechanism back closer to the physical realities of the grid, Chris Adams: Oh, I see. Killian Daly: but keeping the incentive system, because if you don't have that, then I don't really see the point in even doing the exercise. Chris Adams: Okay. So there's two things that I wanted to dive into a little bit there. It sounds like this whole notion of not having these things tied to each other is to reflect the fact that people have all these complicated ways to purchase power in the first place.
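The location-based versus market-based distinction discussed here can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and every number below are invented for the example, not taken from any standard.

```python
# Illustrative sketch of the two Scope 2 accounting methods discussed:
# location-based (physical grid average) vs. market-based (contracts count).
# All numbers are invented for illustration.

def location_based_emissions(consumption_mwh: float,
                             grid_avg_intensity: float) -> float:
    """Every consumer gets the same grid-average intensity (tCO2/MWh),
    regardless of what power they contracted for."""
    return consumption_mwh * grid_avg_intensity

def market_based_emissions(consumption_mwh: float,
                           contracted_green_mwh: float,
                           residual_mix_intensity: float) -> float:
    """Contracted green MWh count as zero-carbon; the unmatched remainder
    gets the residual mix intensity (grid average minus claimed green power)."""
    unmatched = max(consumption_mwh - contracted_green_mwh, 0.0)
    return unmatched * residual_mix_intensity

# A consumer using 1,000 MWh on a grid averaging 0.3 tCO2/MWh:
print(location_based_emissions(1000, 0.3))      # 300.0 tCO2, whatever the contracts say
print(market_based_emissions(1000, 800, 0.35))  # 70.0 tCO2, thanks to green contracts
```

The sketch shows the trade-off in the conversation: the first function is physically accurate but gives a passive consumer no lever, while the second rewards contracting for green power, which is what 24/7 matching then tightens up.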
So in my world as a cloud engineer, I might buy computing by the hour, but I might also buy it in advance for three years, for example, for a lower price, and that provides a bit of stability for whoever's running my server. This is an example of me having multiple different ways of buying something, and essentially some of that unbundling is trying to capture the fact that there are all these complicated ways to arrange to pay for something, and this is one way we can use to value some of the flexibility and stuff you said before. So, for example, you spoke about how you can't run something on solar power at night, right? But if you had a battery, you could capture that power and then use the battery, a bit like a time machine, to run at night. But that's more expensive than just making some claims, so you need some way to recognize the fact that it takes a battery and a bunch of extra smarts to run something at night. That's what you're trying to go for with that, right? Killian Daly: Yeah, exactly. And again, basing things on how power markets already have contractual ways of allocating power between generators and consumers. I think the biggest issue with unbundling, so, selling the energy attributes and the power to different people, actually, the fundamental problem, is the lack of time matching and deliverability requirements. That's where unbundling has gone wrong. Because it said, "we're going to take the green attribute from this energy in Norway, and we're going to allow it to be used at any time of year, anywhere in Europe." That's insane. That's where it starts to get completely insane. I don't have any particular problem with producing it in one hydro plant and selling the power into a power pool,
and then that being consumed in Norway in the same hour. That's literally how short-term power markets work: everyone bids into a common pool. So why not just put the attributes into the same pool? They all have the same properties anyway, so it doesn't make a difference, and it's the only way you're ever going to have liquidity, right? So I don't see any fundamental issue with that. The fundamental issue is with the annual matching and the Chris Adams: the stretching of the physics beyond breaking point, essentially. Killian Daly: And I think that's why unbundling has got such a bad name, right? And I think that's actually been fair, but I do think that it's not bundling or unbundling that's necessarily the issue; it's kind of the Chris Adams: lack of those three pillars you mentioned. Okay, gotcha. Thank you for indulging me as I went down that path, because I didn't know the answer to that, and I've always been wondering. Okay, so, we spoke about this thing called EnergyTag. We've spoken a little bit about how it's supposed to work and how it's basically an improvement on some of the approaches before. Maybe we could talk a little bit about who's using it. Is anyone adopting it? Because this sounds like a cool idea, but there are many cool ideas that no one is paying attention to, and I suspect that would be quite a demoralizing conversation if that was the case. So, who's using this, and are there any big-name adopters you might point people to? Killian Daly: Yeah, so two of the leading ones that come to mind immediately, especially for software folks like yourselves, are Google and Microsoft. They have 24/7 clean energy targets by 2030. Basically, they're committing to buying clean power for every hour their data centers are consuming electricity, everywhere in which they're operating.
So they're two of the most advanced, ambitious corporate climate commitments, in terms of Scope 2 electricity procurement at least. And they're obviously two major buyers. And they've been signing some really interesting deals as well. There are gigawatts now of these 24/7, or close to 24/7, PPAs signed, 80, 90 percent firmed portfolios of renewables, and that's game-changing, right? That's something we've seen emerge in the last few years, where traditionally the way of buying renewables has been "I'm going to buy a solar contract, and I'm going to blend that into whatever I'm buying elsewhere." And that's fine, right? But it's only giving you maybe 20 percent of your electricity on an annual basis. Now we're seeing new contract structures that are blending together solar, wind, batteries, and getting maybe 80, 90 percent of a flattened, Chris Adams: So that's what you mean by firmed, then. Firmed is this idea that, if it's not firmed, I'm going to buy the same amount totally without thinking about when it's matched; but if it's firmed, then I'm taking the steps necessary so that I can make a much more credible claim that the power I'm using is coming from generation or from stored amounts of power, or something like that. Ah, Killian Daly: And as I said, there are gigawatts of deals done already to date. Are there people doing this hourly matching stuff? Yes, absolutely. Check out our website. There are 30 projects there, with millions of megawatt hours of hourly matching being done, by 40 organizations or so across 5 continents. So this is not rocket science, right? This is literally taking meter data, which is very common, hourly consumption and generation data, matching those things together, and seeing where we're at. You could do it in an Excel file with three columns if you wanted. So it's absolutely demonstrated, and leaders are doing it.
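The "Excel file with three columns" Killian describes, hour, consumption, and matched green generation, amounts to something like the following sketch. The meter data here is invented for illustration; only the matching rule (per hour, you can count green generation only up to what you actually consumed) is what the conversation describes.

```python
# Hourly (24/7) matching score from meter data: for each hour, green
# generation counts only up to that hour's actual consumption.
# The data below is invented for illustration.

hours = [
    # (consumption_mwh, contracted_green_mwh)
    (10, 14),   # midday: surplus solar, but only 10 MWh can be matched
    (10, 6),    # evening: solar tailing off
    (10, 0),    # night: nothing delivered
    (10, 12),   # windy morning: another capped surplus
]

matched = sum(min(use, green) for use, green in hours)   # 26 MWh
consumed = sum(use for use, _ in hours)                  # 40 MWh
score = matched / consumed
print(f"Hourly CFE score: {score:.0%}")  # 65%
```

Note that simple annual matching on the same data would compare totals (32 MWh of green against 40 MWh consumed) and claim 80 percent, which is exactly the "fake 100 versus real 70" overstatement discussed later in the episode.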
Is everyone doing this? Is this now the status quo way of doing it? No, absolutely not. And that's what we work every day to try to change. So we're still, I would say, relatively in the early days of this transition, but as far as I'm concerned it's kind of inevitable, for credibility reasons, transparency reasons, and also for pretty fundamental economic reasons. Companies going out there and committing to buy loads of energy that is unmatched to their consumption profile are leaving themselves open to a lot of risks. Say you just buy a load of solar that has no connection to how you actually consume electricity. You're leaving yourself open to a lot of the volatility that we're seeing in electricity markets today, a lot of super high prices in the evening, for example, when your solar contract is not delivering you anything. Then what do you do? You have all this gas volatility and exposure. So it's not just about decarbonization; it's also about things like electricity price hedging. There are various fundamentals that mean we are going to move in this direction. Chris Adams: Okay, so if I understand that final point you've made: if I want to do this kind of matched thing, for example, one of the advantages of doing a longer-term deal is that there's a degree of stability. So let's say one country decides to invade another country and makes gas prices go through the roof; I'm somewhat insulated from all that, so it's not going to make it impossible to pay my own bills, for example. And we've seen examples of that over the last few years. So there's a bit of insulation from that kind of stuff. Yeah. Killian Daly: Exactly. So now we do get into contracting mechanisms here.
Basically, if you sign one of these PPAs and you commit, let's say, to a 10-year fixed price for power, and you're committing to a firmed profile, let's say 90 percent matched, that has a very significant hedging value. It means you've basically fixed a lot of your power price, so no matter what happens, if there's a massive spike in gas prices and power prices go through the roof, you're protected against that. We actually worked on a really interesting study on this about 18 months ago with Pexapark, who are PPA analysts, and they basically showed that a 10 megawatt consumer in Germany could save over 10 million euro in the best of cases, and at least millions of euro in a given year, by signing these 24/7, or close to 24/7, power purchase agreements with clean electricity assets. Because one advantage clean energy has in an ever more uncertain world is that the costs are basically known up front. You know how much money you need to build a wind turbine or a battery up front; it's all capex heavy. And that means renewables can basically give you a fixed price up front where, honestly, gas cannot, because most of their costs are operational: it's about buying the gas when you need it. Chris Adams: And there's a constant flow, right? Okay, I guess with the sun, it's not like there's a Mr. Burns style blackout of the sun. If you're relying on something that no one has control over, no one can blockade the wind or blockade the sun. That's where some of the stability is coming from, right? Killian Daly: Yeah, exactly. So you have those things, and you know that those fuel sources basically don't cost anything, right?
So all your costs are in construction and materials, things you basically know largely up front, and that does enable you to provide long-term contracts, typically way beyond the terms that fossil fuel generators can offer. And for the consumers willing to take that long-term price risk, it can offer really significant hedging benefits over the alternatives. Chris Adams: Like buying on the spot market, as it were, or buying something on the regular market. Okay. All right. So, you mentioned a few large companies doing that stuff, and outside of technology, I think it's the federal government; it sounds like you said one or two things which are quite interesting. There is this idea that 100 percent is obviously really good, and that's what you want to head towards. But given there are some places that aren't shooting for 100 percent straight away, they might be going for 50 or 60 percent or something like that; is that something that is okay to start at? Because I think I heard the US government had a plan for something about this by 2030 or something. Killian Daly: Yeah. So, we started the conversation talking about accounting, and I think the first thing you need to do is get the accounting right, so that when you say 50 it means 50, and when you say 100 it means 100. Because if you're saying 100 and it means 50, then, well, you're screwed, right? You have a bad system. So, actually being at 70 percent renewable, but saying that out loud, Chris Adams: 70%. Yeah. Killian Daly: and addressing the basic fact that you're only there, that's much better than saying "I'm 100 percent renewable" on some annualized basis and misleading people about where you're at with decarbonization.
Chris Adams: So it's better to be a real 70 than a fake 100, basically, yeah? Killian Daly: Yeah. And you have electricity suppliers, like Good Energy in the UK, Octopus Energy in the UK; most of the electricity suppliers in the UK, in fact, are now offering these hourly tariffs. Chris Adams: I thought it was only one or two that did that. Whoa. Killian Daly: Now, I think this year it'll become more of a norm, where they will offer this alongside their hundred percent renewable tariff. None of those hourly tariffs are going to start off being a hundred percent renewable, but it's bringing that extra bit of transparency, which I think is great. And the likes of Good Energy are already offering this to thousands of customers, right? This is not just the Googles and the Microsofts with their long-term targets; this is already being offered to thousands of customers around the world, because electricity suppliers are basically doing all the work and just giving the consumer the number on a dashboard saying, "this is how much matching you have." If you look at the Octopus Energy example, it's quite interesting. They have a tariff called Electric Match for some of their B2B customers, and they're basically reducing your price of power when you're more matched. That's quite cool, yeah: they're charging you less the more your demand is matched to their generation, right? And I think that's quite a cool gamification of this. They're saying, basically, try to consume when there's more wind and sun in the UK; you'll be more matched and we'll cut your rates, because obviously it costs them less to deliver that in the first place. So that's the type of cool mechanism. Chris Adams: So, I swear, every single time I speak to energy people, they say, "oh yeah, the price is totally changing."
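A tariff in the spirit of the Electric Match idea described here, where the unit rate drops as your hourly match score rises, could look something like this. To be clear, this is a hypothetical sketch: the base rate, the discount, and the linear shape are all invented for illustration and are not Octopus Energy's actual terms.

```python
# Hypothetical matched-tariff sketch: the better your consumption lines up
# with the supplier's green generation, the lower your unit rate.
# BASE_RATE and MAX_DISCOUNT are invented numbers.

BASE_RATE = 0.30     # price per kWh at zero matching, invented
MAX_DISCOUNT = 0.25  # up to 25% off at a perfect hourly match, invented

def unit_rate(hourly_match_score: float) -> float:
    """Linear discount as the hourly match score goes from 0.0 to 1.0."""
    return BASE_RATE * (1 - MAX_DISCOUNT * hourly_match_score)

for score in (0.0, 0.5, 0.9):
    print(f"match {score:.0%}: rate {unit_rate(score):.4f}/kWh")
```

The design point it illustrates is the incentive Killian describes: because matched demand is cheaper for the supplier to serve, the price signal rewards shifting consumption to windy and sunny hours.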
Then I think one level up, when we're paying for cloud, and it's the same price all the time. Someone's making a bunch of money off us doing all the carbon-aware computing stuff, because if the price is going low, I would expect to see those numbers go low. This feels like something we might want to have a conversation about inside the tech industry, because if there are savings being made here, it would be nice if those were passed on, I suppose. So, all right, go on. Killian Daly: I think, very importantly, one fundamental truth that we're going to see, and it's already the case in some parts of the world, is that the more renewables you have, the more volatility you're going to have in power prices. And the more flexible you can be in your consumption, the more economically rewarding it is going to be. If you can consume at the times of day when there's loads of wind and sun, power prices are going to be very low and you're going to get rewarded for that. If you can only be baseload, then that is going to cost you. Chris Adams: Ah, okay. That's a useful thing to take into account. And so, we spoke before about Scope 2 and stuff like that, and you spoke about this idea that you're defining this standard. Now, EnergyTag is a standard in its own right, but, as I understand it, it's not like you're stepping outside of this. You are still engaging with the protocols and all the stuff like that right now, yeah? Killian Daly: Basically, yeah. EnergyTag is a nonprofit; we do a couple of different things. We're obviously focused on this area of electricity accounting, electricity markets, better green energy claims, and all that.
And so one of the things we do is we have a voluntary standard for hourly energy tracking, because one of the blocking points we have today is that the way we do this tracking with these energy certificates tends to be on a monthly or even an annual basis globally, and sometimes we don't have the information on the certificates to do this hourly matching. So we're trying to debottleneck that particular technical issue, and think about how we track through storage, doing some novel things there. So we have a standard for that, but that's only one of the building blocks of this much larger question of how companies do electricity accounting, or carbon accounting more generally. Our standard is there to work on that specific topic, but actually a lot, if not most, of what we do today is working on policy advocacy around the world, working on global standards, and basically advocating for those to change. Because ultimately it's the meta-levers, regulations and standards; once they change, then we're just there to help technically put that all together with voluntary standards as long as they're needed. But it's not our aim to be the world's next Greenhouse Gas Protocol. That's really not in our wheelhouse. What we want to do is make sure that global standards and regulations are as good as possible. Chris Adams: Oh, I see. Okay, so let's go for a concrete example of this. In Europe, if you want to do a hydrogen project, which is in some ways a bit like an AI project, in that it's a building that uses loads and loads of power in one place, really dense, right? If you're going to make green hydrogen, you're taking water and adding loads of electricity to split it, and that's incredibly energy intensive. So if you want the green hydrogen to be green, you probably want to only use green energy.
And one of the things you told me about before was: yes, we won that fight, so that if people want to get any of the subsidies from the government to do this green energy thing, they need to have that three-pillars style approach, right? That's an example of your strategy, yeah? Killian Daly: Yeah, so this is actually what really brought me into EnergyTag. It wasn't a Greenhouse Gas Protocol thing; basically, I worked at Air Liquide, one of the world's largest hydrogen producers, right? And so I got put onto this topic a few years ago, which I found incredibly important and fascinating, and maybe not well enough understood. When we're going to produce hydrogen using electricity, we need to really make sure that the electricity is squeaky clean, because of the efficiency issues and losses that you just inherently have with electrolysis. Just to give a quick example: Jesse Jenkins' lab at Princeton University, a guy called Wilson Ricks, who is a rock star of power system modeling, they modeled this, right? And they showed that in the US, if you use today's carbon accounting rules, this annual matching stuff, and you built out a hydrogen sector based on those rules, you would have hydrogen that is twice, maybe even three times, as bad as today's fossil fuel hydrogen production. And you'd be calling it clean and subsidizing that production. Totally insane; literally wasting money. And it's really important: billions of dollars of subsidy are going to go into hydrogen in Europe and in the United States. So we worked a lot with NGOs, advanced companies, and other partners to advocate for these strong requirements on green electricity sourcing for hydrogen, both in the US and also in Europe, and we won on both fronts. Chris Adams: Oh, the US one as well! Killian Daly: Yeah. Yeah. So both of those are legislation in Chris Adams: place. They're in!
Yay science! Killian Daly: Yeah, that's the legal way now to qualify for the tax credit in the US. In Europe, there's a phase-in period on the hourly part to 2030, so in 5 years or whatever. But anyway, projects built now have to be designed to comply with that. Chris Adams: If you know it's going to be in the law in five years, you're just going to make sure you... Killian Daly: You're going to start doing it now, right? More or less. Yeah, so obviously this is hundreds of millions of tons of CO2 per year on the line between good and bad rules, and that's a concrete example of why these things matter, right? Accounting sounds boring sometimes. I definitely thought it was boring before I realized, "oh my God, I'm working for a huge power consumer and this is changing everything." So yeah, it's definitely super important that we get this stuff right. Chris Adams: Okay, so it sounds like you've done the work with Air Liquide and essentially laid the groundwork to move from fossil-based hydrogen to, hopefully, a greener way of making hydrogen, which ends up being used in all these places. And now, you said Google and Microsoft have about the same power usage as Air Liquide in a single year; that might have changed, but back then. So it looks like we're seeing some promising signs over here. If we want to see more of that, what do we need at a policy level? Do we need government saying, "if you want to have green energy for data centers, you need to be at least as good as the hydrogen industry"? Because what you've described for hydrogen sounds awesome, but I'm not aware of that in the IT sector yet. That's something I haven't seen people doing. Killian Daly: That is also coming, right?
So hydrogen has just been the first battleground, or the first place, I think. Interestingly, on the 14th of January, just before the inauguration of Donald Trump as US president, the Biden administration issued an executive order, which hasn't yet been rescinded, basically on data centers on federal lands, and in that they do require these 3 pillars. So they do have a 3-pillar requirement on electricity sourcing, which is very interesting, right? I think that's quite a good template. And we definitely need to think about, okay, if you're going to start building loads of data centers in Ireland, for example: 20 to 25 percent of electricity consumption in Ireland is from data centers. That's way more than anywhere else in the world in relative terms. There's a big conversation at the moment in Ireland about "okay, how do we make sure this is clean?" How do we think about procurement requirements for building a new data center? That's a piece of legislation being written at the moment. And how do we also require these data centers to report their emissions once they're operational? So the Irish government is also putting together a reporting framework for data centers, and the energy agency, the Sustainable Energy Authority of Ireland, SEAI, published a report a couple of weeks ago saying, you know what, they need to do this hourly reporting based on contracts bought in Ireland. So I think we're already seeing promising signs of legislation coming down the road in other sectors outside of hydrogen, and data centers is probably an obvious one. Chris Adams: So people are starting to win. Wow, I didn't realize that. I knew there was a bit of buzz about an executive order, but I didn't realize that set the precedent.
So, yeah, we should do what that massive industry over there is doing, because that's now the new baseline; that's where the bar should be. We should do that as well, basically. Killian Daly: Exactly, because what those hydrogen rules actually are, what the whole debate was actually about, is: what is clean electricity procurement? What does it mean to use clean electricity? That has been defined now in hydrogen rules, and it can be copied and pasted to any large new load. If you want it to be clean, we already know the answer. It's in legislation. Chris Adams: It's how to tell when energy is green. Killian Daly: MIT, the IEA, the who's who of energy experts have all modeled this, and they've all found that this is the way to do it. So there's a template there, right? And if you're going to go against that, then, well, obviously you're sacrificing the integrity of your accounting schemes. Chris Adams: Wow! We spoke about how to tell when energy is green, and we seem to be ending on a high; I didn't realize we'd actually got to that. That's really awesome. You've really made my day, Killian. Thank you so much for coming on and diving into the minutiae of carbon accounting for electricity, but also ending it with a slightly less depressing piece of news, which I'll take in this current political climate. Killian Daly: Just to interject before I say goodbye, it's good to end on a positive note, I suppose, in this mad world we live in. There was a project announced recently, and I think people should go check it out, in the Middle East, in the UAE, where for the first time they're going to deliver basically around-the-clock solar power.
So 1 gigawatt of solar, all night long, because they're building a massive battery and a huge solar farm, and basically all year round it's going to deliver green electricity at under 70 US dollars per megawatt hour, which is extremely competitive. So I think solar and storage, what they're going to do together, is going to change the world, right? I really think that is going to happen faster than people think. They're going to start to kill gas. So, yeah, I think green energy economics, despite what politicians will want to do with their culture wars, will hopefully, at the end of the day, answer some of the questions we're trying to solve here. So, yeah, thanks so much for having me on. It's been a real pleasure. Chris Adams: Brilliant, thank you so much for that, mate, and may the fossil age end. That's so cool to actually see; I totally forgot about the Masdar thing, which is the city. Yeah, and we'll share a link to that so people can read about it, because if you care about, I don't know, continued existence on this planet, then it's probably a good one to read about. Killian, this has been loads of fun. Thanks a lot, mate, and next time I'm in Brussels I'll let you know, and maybe we can catch up and have a shoof or something like that. Take care. Killian Daly: Yeah. A hundred percent. Thanks. Bye. Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Environment Variables

This episode of Backstage focuses on the Impact Framework (IF), a pioneering tool designed to Model, Measure, siMulate, and Monitor the environmental impacts of software. By simplifying the process of calculating and sharing the carbon footprint of software, IF empowers developers to integrate sustainability into their workflows effortlessly. Recently achieving Graduated Project status within the Green Software Foundation, this framework has set a benchmark for sustainable practices in tech. Today, we’re joined by Navveen Balani, Srinivasan Rakhunathan, the project leads and Joseph Cook, the Head of R&D at GSF and Product Owner for Impact Framework, to discuss the journey of the project, its innovative features, and how it’s enabling developers and organizations to make meaningful contributions toward a greener future. Learn more about our people: Navveen Balani: LinkedIn Srini Rakhunathan: LinkedIn Joseph Cook: LinkedIn Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter Resources: Impact Framework | Green Software Foundation [00:00] The SCI Open Ontology | Green Software Foundation [04:27] SCI for AI - Addressing the challenges of measuring Artificial intelligence carbon emissions | Green Software Foundation [06:57] SCI Guidance [12:07] CarbonHack [13:03] Impact Framework Github Page [17:58] IF Explorer [20:18] IF Community Google Group [23:42] Events: Kickstarting 2025: A Community-Driven Sustainable Year (February 13 at 5:00 pm CET · Utrecht) : [24:21] Advocating for Digital Sustainability (February 19 at 6:00 PM GMT · Hybrid · Brighton) : [25:10] Day 0: MeetUp Community GSF Spain (February 20 at 6:00 PM CET · Online) : [25:33] Digging Deeper into Digital Sustainability (February 20 at 6:00 pm AEDT· Melbourne) : [25:59] Practical Advice for Responsible AI (February 27 at 6:00 pm GMT · London) : [26:27] GSF Oslo - February Meetup (February 27 at 5:00 pm CET · Oslo) : [26:46] If you enjoyed this 
episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, GitHub and LinkedIn! TRANSCRIPT BELOW: Chris Skipper: Hello, and welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I'm the producer of this podcast, Chris Skipper, and today we're excited to bring you another episode of Backstage, where we peel back the curtain at the GSF and explore the stories, challenges and triumphs of the people shaping the future of green software. We're no longer gatekeeping what it takes to set new standards and norms for sustainability in tech. This episode focuses on the Impact Framework, also known as IF, a pioneering tool designed to model, measure, simulate, and monitor the environmental impacts of software. By simplifying the process of calculating and sharing the carbon footprint of software, IF empowers developers to integrate sustainability into their workflows effortlessly. Recently achieving graduated project status within the Green Software Foundation, this framework has set a benchmark for sustainable practices in tech. Today, we have audio snippets from Navveen Balani and Srinivasan Rakhunathan, the project leads, and Joseph Cook, the head of R&D at GSF and product owner for Impact Framework, to discuss the journey of the project, its innovative features, and how it's enabling developers and organizations to make meaningful contributions toward a greener future. And before we dive in, here's a reminder that everything we talk about will be linked in the show notes below this episode. So without further ado, let's dive into the first question about the Impact Framework for Navveen Balani. Navveen, the Impact Framework has been described as a tool to model, measure, simulate and monitor the environmental impacts of software.
Could you provide a brief overview of how this works and the inspiration behind creating such a framework? Navveen Balani: Thank you, Chris. And thanks to all the listeners for tuning in. Let's first understand the problem we're solving with the Impact Framework. Software runs the world, but its environmental impact is often invisible. Every CPU cycle, every page load, every API call contributes to energy consumption, carbon emissions, and water usage. Yet, without the right tools, measuring and managing this impact remains a challenge. This is where the Impact Framework comes in. It's an open source tool designed to transform raw system metrics, like CPU usage or page views, into tangible environmental insights, helping organizations take action. Built on a plugin based architecture, it allows users to integrate, customize, and extend measurement capabilities, ensuring scalability and adaptability. More importantly, the Impact Framework helps realize the Software Carbon Intensity specification, making sustainability reporting transparent, auditable, and verifiable. Every calculation, assumption, and methodology is documented in a manifest file, ensuring that impact assessments are replicable and open for collaboration. At its core, the Impact Framework is built on a simple yet powerful idea: if we can observe it, we can measure its impact. And once we can measure it, we can drive real change, reducing emissions, optimizing resource use and building truly sustainable software. Chris Skipper: What were some of the most significant technical or organizational challenges you faced during the development of the Impact Framework, and how did you and the team overcome them? Navveen Balani: The Impact Framework wasn't just built, it evolved. It was shaped by real world challenges, lessons learned, and the need for a scalable, transparent way to measure software's environmental footprint.
The foundation of the Impact Framework was laid through previous projects and ideas, starting with SCI Open Data, which tackled the lack of reliable emissions data, and SCI Guide, which helped organizations navigate different datasets and methodologies. Another critical component was the SCI Open Ontology, which defines relationships between architecture components, establishing clear boundaries for calculating measurements. Alongside these foundational efforts, real world use cases from member organizations applying software carbon intensity measurement played a crucial role. These practical implementations tested SCI in diverse environments, refining methodologies and ensuring that SCI calculations were not just theoretical, but applicable and scalable across industries. But data alone wasn't enough. We needed to scale measurement across thousands of observations. Sustainability assessments had to be continuous, automated, and seamlessly integrated into software development. This led to key innovations like aggregation, which enables organizations to condense vast amounts of data into meaningful, structured insights, rolling up emissions data across software components to provide a holistic, system wide view. Technology, however, was just one piece of the puzzle. Adoption was equally critical. To accelerate real world impact, we opened up the Impact Framework to our annual Carbon Hackathon event, where teams worldwide built projects that pushed its capabilities. This was a turning point, validating its flexibility and refining it through community driven development. At its core, the Impact Framework is built on transparency. Unlike black box solutions, every input, assumption, and calculation is fully recorded in a manifest file, making assessments auditable and verifiable. This commitment to openness has been crucial in building trust and driving adoption. Chris Skipper: Looking ahead, what are the next steps for the Impact Framework?
Are there specific new features or partnerships on the roadmap that you're particularly excited about? Navveen Balani: That's a great question, Chris. Looking ahead, the Impact Framework is entering an exciting new phase with a major focus on expanding measurement capabilities for AI. Right now, we're working on the SCI for AI specification, which extends software carbon intensity to both classical AI and generative AI workloads. Measuring AI's environmental impact comes with a new level of complexity. AI isn't just another software workload. The environmental footprint varies significantly depending on whether you're training a model from scratch, fine tuning a large language model, or simply using an AI API like ChatGPT or Gemini. Each scenario has different compute demands, memory requirements and energy consumption patterns, making standardized measurement both challenging and essential. Through the Impact Framework, we aim to tackle this by developing new plugins and contributions that enable precise measurement of AI related energy use, hardware efficiency, and emissions across training, fine tuning, and inference workloads. These capabilities will evolve collectively through community participation, with researchers, developers, and organizations contributing to refining methodologies, expanding data sets, and ensuring that AI measurement remains transparent, auditable, and standardized. This collaborative approach will allow organizations to quantify, compare, and optimize their AI workloads, making sustainability a key consideration in AI deployment. Beyond AI, we are also exploring new partnerships to further enhance the Impact Framework's adaptability. Collaboration with cloud providers, software vendors, and sustainability researchers will be crucial in ensuring that the framework evolves alongside industry needs. Our goal is to make environmental impact measurement not just an option, but a fundamental part of software and AI development at scale.
Chris Skipper: Moving on, we have some questions for Srini. Srini, IF emphasizes composability and the ability to create and use plugins. Could you explain how this innovative approach has enabled more accurate and flexible environmental impact calculations for different types of software environments? Srini Rakhunathan: Absolutely. The Impact Framework's emphasis on composability and the use of plugins is actually a game changer for environmental impact calculations. The framework is highly modular, allowing users to create and integrate various plugins. What it means is you can tailor the framework to fit the specific needs of your software, and it doesn't matter what type of software you have, whether it's cloud based, on-prem or hybrid. What is also advantageous is that the plugin ecosystem covers a wide range of tasks. For example, it has plugins for data collection, it can do impact calculation, it can do reporting. It can also do very specific tasks like math functions and aggregation functions. What this means is you can mix and match plugins to create a mashed-up pipeline that reflects your environment, whether you are running your software on the web, in the cloud or on mobile, it doesn't really matter. As long as you know what your software boundaries are, you will be able to combine these plugins and create your own pipeline, if you will. And that pipeline will help you create your calculation pipeline, which can either run one time, run as a batch, or run based on certain triggers. What it also means, if you notice, is that there are also manifest files, and we will be talking more about them later in this conversation, which ensure that you have a repeatable way of calculating. You mash up these different plugins, you create a pipeline, you embed it in a manifest file, and it's repeatable.
So I think this framework's composability and plugin capability can help you make very accurate impact calculations. Chris Skipper: How have collaborations with organizations like Accenture and Microsoft, as well as the open source community, contributed to the success of the Impact Framework? Are there any standout moments or partnerships you'd like to highlight? Srini Rakhunathan: Thanks, Chris. That's a great question. The cornerstone of the success of the Impact Framework has been collaboration, and this has been ongoing from the time the project was conceptualized. Bear in mind that when Navveen, who's also with us, and I, along with Joseph and Asim, started thinking about the project, the initial vision was very different. We started off with something called SCI Guide, where we wanted to collate datasets across the open source community to help calculate emissions from software. We built the SCI Guide, and that transitioned into something called CarbonQL, which is a primitive version of what we see today in the Impact Framework: how do we make it easier for users or developers to calculate emissions from software? The learnings that Navveen, Joseph, Asim and I went through to come up with the initial version of the Impact Framework, and the amount of work that the team has put in to get it to graduation state, is amazing, and it speaks volumes about the collaboration that has gone into building the tool. One particular highlight I want to call out: every year, GSF organizes what is called the CarbonHack. In 2024, the CarbonHack focused on getting the open source community to come and build tools on top of the Impact Framework, either extensions of the tool, or content, or newer areas where the Impact Framework can be used.
And you would be amazed at the amount of contributions that came in; newer use cases were also identified that looked at calculating impacts not just for carbon, but for water and other resources. And that's great. That, I believe, was a standout moment for the tool. Chris Skipper: The IF documentation highlights the use of a manifest file and a CLI tool to calculate environmental impacts. Could you walk us through how these tools work and how they lower the barriers for developers to adopt sustainable practices? Srini Rakhunathan: Definitely, we can talk about both the CLI tool and the manifest file. These are cornerstone capabilities built into the Impact Framework, and they help us calculate environmental impacts. The manifest file contains a description of the software's infrastructure boundary, encoded in the standard YAML format. It contains every component that is part of the software, whether it's front end, middle tier, back end, database or API, encoded with the hardware used, the utilization and the telemetry involved, so that it can serve as the input to the Impact Framework CLI tool that calculates emissions. The use of the file enables transparency and rerunnability: anyone can re-execute the manifest file, and everyone will come up with the same calculations. The second piece is the CLI tool. It's a command line tool, which means it can run in any environment. It processes the manifest file and computes the environmental impacts. The way it works is that developers pass the path of the manifest file to the CLI tool, and it takes care of the calculations. The tool has capabilities for phased execution, which allows efficient and flexible use of the framework.
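To make the manifest-plus-CLI workflow described above more concrete, here is a heavily simplified sketch of what such a manifest might look like. The field and plugin names below are illustrative assumptions, not copied from the official IF schema, so treat this as the general shape rather than a spec; the Impact Framework documentation has the real format.

```yaml
# Illustrative sketch only -- not the official Impact Framework schema.
# The plugin names ("cpu-energy", "grid-carbon") are hypothetical.
name: demo-web-app
description: One back-end component inside the software boundary
initialize:
  plugins:
    cpu-energy:
      method: CpuEnergy     # hypothetical plugin turning utilisation into kWh
    grid-carbon:
      method: GridCarbon    # hypothetical plugin turning kWh into gCO2e
tree:
  children:
    backend:
      pipeline:
        compute:
          - cpu-energy
          - grid-carbon
      inputs:
        - timestamp: '2025-02-13T00:00:00Z'
          duration: 3600        # seconds observed
          cpu/utilization: 50   # percent
```

The CLI then takes the path to a file like this and writes the computed impacts back out, which is what makes a run repeatable: anyone re-executing the same manifest should arrive at the same numbers.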
Chris Skipper: And finally, what lessons have you learned from working on this project that might benefit other teams looking to build tools or frameworks for sustainability in tech? Srini Rakhunathan: Thanks for asking this question. At an overall level, I would like to respond by focusing on two aspects: the first is the execution model, and the second is the technical design. On the execution model, this project is a good example of how open source collaboration works. The team used GitHub extensively, and most of the meetings were asynchronous. The engineers, the product managers and everyone who worked on the project worked through GitHub and collaborated extensively using the open source tools available, which is a great model for scale. The second aspect we should look at from an execution model perspective, and which is a success story here, is how the team used customer feedback as input to make the product better. There were constant sessions with many customers, engaging with them to understand their requirements for a tool that could help them calculate emissions, and feeding that back into the backlog to make the tool better. The second set of lessons learned is on technical design. Here I would call out the whole concept of building a plugin ecosystem and making it composable: you deliver a set of plugins to the community as a base framework, and then you allow extensibility. That's a great model for tools that use sustainability as a calculation engine. And then the second piece, which is equally important, is documentation.
You need extensive, well-written documentation that helps anyone coming on board understand the framework and start building a new plugin as quickly as possible. If you go to the IF GitHub site, you will find a link to the docs page, and if you read through the docs, they are very self-explanatory and will allow anyone who's interested in building a plugin to do that in the fastest possible time. So those are, in my mind, the lessons learned, both from the execution model and the technical design aspect. Chris Skipper: Moving on, we now have some questions for Joseph. Joseph, the Impact Framework recently achieved the status of a graduated project under the GSF. What does this milestone mean for the project, and what were some of the key factors that led to its graduation? Joseph Cook: The Impact Framework graduation was a huge milestone because it represents the moment when the project is considered sufficiently mature that it no longer needs to be incubated and can instead largely be handed over to the community. We consider the software to be feature rich and stable enough that people can integrate it into their systems. In order to graduate, the project had to meet a quite stringent set of requirements, including demonstrating that Impact Framework had real world users, that we had addressed community requests and bug reports, that we had suitably comprehensive test coverage, and that the documentation and the onboarding materials were all fit for purpose. Now that milestone has passed, development activity is going to be much more ad hoc and driven by the community, rather than following a development roadmap defined by the Green Software Foundation. Our efforts at the GSF will now go into driving adoption instead. Chris Skipper: How does the Impact Framework engage with the broader tech community to encourage adoption?
Can you tell us what steps the GSF is taking to include the community as part of the IF development? Joseph Cook: Impact Framework is used by all kinds of organizations, but it also has a thriving open source community. And most of the discussion with the community happens on GitHub, either through issues or on the discussion board. But we also have a Google group where we share updates and collect feedback. Open source development on Impact Framework is really fundamental. It's really baked into the very core of the project. Instead of trying to ship Impact Framework with all the built in features to connect to thousands of different services and systems that people want to measure, we instead focused on making it really easy to build plugins, and then encouraged an open source community to develop, where people create their own plugins for all the features that they care about, and share them with each other on our Explorer website, which is like a free marketplace for Impact Framework plugins. This model actually makes the Impact Framework much more robust and much more stable because we have a much greater diversity of voices influencing what Impact Framework can do and what it can connect to. It decentralizes the development of the project without compromising the core software, and it also means that our small development team doesn't shoulder the burden of maintaining a huge code base with lots of different brittle connectors to third party APIs and services. And going forward, we want to keep this community thriving and see thousands more Impact Framework plugins listed on the Explorer. Chris Skipper: How do you see the Impact Framework setting new benchmarks for environmental responsibility in tech? Are there specific metrics or practices that you believe will influence industry standards? Joseph Cook: Impact Framework is a lightweight piece of software for processing what we call manifest files. 
These are YAML files that follow a simple format capturing the architecture of the system that you're studying, all the observations that you've made about that system, and all of the operations that are applied to your data. I like to refer to these files as executable audits, because they mean that you don't just report emissions numbers anymore, you actually show your working too. And this enables the community to fork and modify your manifests and challenge you. And through iteration, you can come to crowdsourced consensus over your environmental reports. We would love to see this radical transparency become the gold standard for environmental impact reporting for software. Not only that, but manifests can be the basis for experimentation or forecasting, and help decision makers to assess the environmental benefits of implementing some change. Imagine you're challenged about why you chose some specific action. Your manifests are your evidence. And we think this combination of transparency, reproducibility, composability, and openness is a unique selling point for Impact Framework, and it could transform the way projects and organizations report their emissions and introspect their own operations. Chris Skipper: For listeners who are interested in getting involved with the Impact Framework, what are the ways they can contribute or support the project? Are there specific skills or areas where the community can make the most impact? Joseph Cook: If you would like to get involved in Impact Framework, there are many ways to do so. If you're a developer, you can head to the GitHub, where we have plenty of open issues, including some specific good first issues to help people get started. If you want to build plugins, then you can download our template and use that to bootstrap your way in, and then submit your plugin to the Explorer using a simple typeform on our website.
We always appreciate updates to the documentation too, and if you're interested in integrating Impact Framework into your systems, you can always reach out to research@greensoftware.foundation to discuss it with us directly. We're always happy to help. If you just want to test the water or you have general questions about Impact Framework, you can start discussions on our GitHub discussion board or communicate via our Google group, IF-community@greensoftware.foundation. Chris Skipper: Awesome. I'd like to thank Navveen, Srini, and Joseph for their contributions to this episode. Before we finish off this episode, I have a few events that need announcing. Starting us off, we have an event happening today, the date of publication of this episode, February the 13th, 2025, at 5 pm CET in Utrecht, Netherlands. Any Netherlands based listeners, you're invited to a Green Software Community Meetup today from 5 pm until 8 pm at Werkspoorkathedraal. Join us for a free in person event to kickstart a more sustainable year in tech. You'll hear insightful talks about reducing your software's energy footprint, scaling down for greener computing and building a grassroots digital sustainability movement. This is a great opportunity to connect with like-minded professionals, share ideas, and be part of a growing Dutch community that's dedicated to building a greener tech future. Food and drinks are provided free of charge. Next up is an event in Brighton in the UK, happening on February the 19th from 6 pm to 8 pm at Runway East, which features Senior Digital and Sustainability Manager for OVO, Mark Buss, speaking about the challenges of advocating for digital sustainability within his company. The talk will also be live streamed, so we will have a link in the show notes below for that.
Next up, for any Spanish listeners, we have the first ever meetup of the Green Software Community in Spain, happening online at 6 pm on February the 20th. Día Zero: Comunidad MeetUp Green Software Foundation España will be a chance for you to discuss how to collaborate with other people passionate about climate change and green software, and we'll have a link to that in the show notes below too. Next up, down under in Australia, on February the 20th at 6 pm AEDT in Melbourne, we have Digging Deeper into Digital Sustainability: how to design and build tech solutions. This will be happening at ChargeFox. Katherine Buzza will be talking about the impact that software is having on the world's carbon emissions, and how to align your career in tech with the decarbonized future we can all play a role in creating. Next up, another UK event, on February the 27th at 6 pm GMT in London: Practical Advice for Responsible AI will be held in person at the Adaptavist offices, with talks about green AI from Charles Humble and AI governance from Jovita Tam. Click the link below to find out more. And finally on our events list, GSF Oslo will be having its February meetup on the 27th of February, in person at the Accenture offices, from 5 pm until 8 pm. Come along to find out how leveraging data and technology can drive sustainability initiatives and enhance security measures, and dive into green AI, with talks from Abhishek Dewangan and Johnny Mauland. Details in the podcast notes below. So that's the end of this episode about the Impact Framework project at the GSF. I hope you enjoyed the podcast. To listen to more podcasts about the Green Software Foundation, please visit podcast.greensoftware.foundation, and we'll see you on the next episode. Bye for now!
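The composable plugin pipeline that Srini and Joseph describe in this episode can be sketched in a few lines of TypeScript. To be clear, this is a simplified illustration, not the real Impact Framework plugin API (see the IF docs for the actual contract): the execute-style interface, the plugin names, and the field names below are all assumptions made for the example.

```typescript
// Simplified sketch of a composable plugin pipeline, in the spirit of the
// Impact Framework's design. NOT the real IF plugin API; illustrative only.
type Observation = Record<string, number>;

interface Plugin {
  // Each plugin maps a set of input observations to enriched outputs.
  execute(inputs: Observation[]): Observation[];
}

// Hypothetical plugin: estimate energy (kWh) from CPU utilisation and a
// thermal-design-power figure carried in each observation.
const cpuEnergy: Plugin = {
  execute: (inputs) =>
    inputs.map((o) => ({
      ...o,
      "energy-kwh":
        (o["cpu-util"] / 100) * o["tdp-watts"] * (o["duration-s"] / 3600) / 1000,
    })),
};

// Hypothetical plugin: convert energy to carbon using a grid intensity.
const carbon: Plugin = {
  execute: (inputs) =>
    inputs.map((o) => ({ ...o, "carbon-g": o["energy-kwh"] * o["grid-gco2-per-kwh"] })),
};

// A pipeline is just plugins applied in order: the "mash-up" Srini describes.
function runPipeline(plugins: Plugin[], inputs: Observation[]): Observation[] {
  return plugins.reduce((data, p) => p.execute(data), inputs);
}

const out = runPipeline([cpuEnergy, carbon], [
  { "cpu-util": 50, "tdp-watts": 200, "duration-s": 3600, "grid-gco2-per-kwh": 400 },
]);
console.log(out[0]["carbon-g"]); // 0.1 kWh * 400 g/kWh = 40 g
```

Because each step only reads and writes plain observation records, plugins written by different people compose freely, which is the property the Explorer marketplace relies on.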
Environment Variables

In this episode, we go behind the scenes of the Carbon Aware SDK, a groundbreaking tool enabling developers to reduce software emissions by running workloads where and when energy is greenest. Featuring insights from Vaughan Knight, chair and project lead of the SDK, the episode dives into its origins, real-world applications, challenges, and milestones, including early contributions from UBS and Microsoft and its recent 1.7 release with NPM and Java libraries. Learn about how the SDK supports Software Carbon Intensity (SCI) metrics, practical examples of carbon-aware workload scheduling, and the roadmap for expanding developer resources and geolocation-based solutions. Learn more about our people: Vaughan Knight: LinkedIn Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter Resources: Carbon Aware SDK If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn !…
Environment Variables

In this episode of Environment Variables, host Chris Adams sits down with Mark Bjornsgaard of Deep Green to explore a transformative approach to data center design and sustainability. Mark shares insights into how Deep Green reimagines traditional data centers by co-locating them in urban areas to provide heat reuse for facilities like swimming pools, district heating systems, and industrial processes. They discuss the challenges of planning and policy, the rise of high-density computing driven by AI, and the potential for data centers to become integral components of community infrastructure. Tune in to learn about the intersection of digital innovation and environmental responsibility, and how new business models can turn waste into opportunity. Learn more about our people: Chris Adams: LinkedIn | GitHub | Website Mark Bjornsgaard: LinkedIn | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter Resources: Mark Bjornsgaard on LinkedIn: Dell's OCP Solutions Propel AI Innovation [07:52] Civo [37:31] Real Time Cloud | GSF If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, GitHub and LinkedIn! TRANSCRIPT BELOW: Mark Bjornsgaard: The government does need to legislate. There is just not enough structure and there's not enough impetus for people to do the right thing. But also, particularly in the UK, planning is a huge, huge hurdle. I never really understood that until we'd been working at Deep Green on building data centers. It is breathtaking how Kafka-esque the planning system in the UK is. It's just beyond insane. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation.
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Okay, Mark, a few years back, when people were asked what a data center was, if they knew what one was at all, they might talk about some kind of room or cupboard full of a few machines, maybe in a rack inside an unused room inside a building, for example. But these days, in the 2020s, people are more likely to talk about a warehouse full of hyperscale servers in a building which is maybe the size of a football field or larger, the kind of thing run by massive firms like Google, Microsoft and Amazon. Now, as I understand it, you work with data centers too, but they can take a rather different shape and interact rather differently with the built environment. So for those who've never heard of Deep Green, give a brief introduction to your approach to building data centers and how that changes the way they interact with the surrounding area and communities. Mark Bjornsgaard: Yeah. So, as you say, most data centers are built in the middle of nowhere, and the vast majority are built without heat reuse. So the vast majority simply eject the heat that comes out of the computers. Data centers, we know, use two to four percent of the world's electricity supply, and computers themselves are incredibly efficient electric heaters. So 97 percent of the electrons that go into a computer come out as heat. So you've got us as a species, in a climate emergency, taking two to four percent of the world's electricity supply, converting it into heat, and then ejecting it into the atmosphere, which 10 years ago might have sounded plausible or even necessary.
But in a world, as I said, in a climate emergency, that doesn't look so clever. So the difference between Deep Green and most other data centers is that we are building the data center where the heat can be reused. It's very hard to transport heat, but relatively easy to transport electrons, so you take the data center to where the heat's required. That's what we do. We build smaller data centers and co-locate them where heat's required. That might be a laundry, it might be a distillery, it might be food production, it might be antibiotic production, it might be a swimming pool, but more often than not it's what's called a district heating system: large centralized heat networks that, through super insulated pipes, supply heat to large areas of different cities. We're not very good at district heating systems and heat networks in the UK specifically, but the government is certainly planning for us to get a lot better at them in the years to come. So that's where we're anchored. You don't build data centers in the middle of nowhere, you build them where they're required. There's a further caveat and a kind of context to this, I suppose. Up until the point where AI started to become part of our everyday lives, those normal data centers weren't on very much. They're only on 20, 30 percent of the time, and they don't actually generate very good waste heat. So you can certainly forgive the great and the good of the data center industry for not trying too hard to reuse heat in the old world. But in the world that's coming, where we've got these incredibly dense racks of NVIDIA and other chips utilising a huge amount more energy than data centers previously did, those racks are on 70, 80 percent of the time, and they're generating an enormous amount of heat, and the heat's relatively high grade.
It's not high grade heat as it's classified in industrial terms, but it's good low grade heat. So at this point, the ability to reuse heat becomes a real thing. And that's why we exist. Chris Adams: Ah, I see. Okay, so there's a couple of things I'd like to unpack if I may. So the first thing you said was, okay, data centers, if they were going to be built in a kind of hyperscale way, you're looking for cheap land, and that's why they're often miles away, and probably near things like, say, a grid connection or fiber connection or something like that, all right? So that was one of the previous approaches, but the downside of that is that, well, you might have all this heat, but no one's able to use it, so you just vent it into the sky, so it's basically wasted. Another way you could do this is you can actually build these where they interact more, where they're more complementary to the urban fabric, as it were, and then you can use that heat. But one of the things that's been stopping that before is that the data centers might have generated some heat, but it wasn't good enough heat. So, you said low grade, and when you talk about low grade heat, that's like maybe 40 degrees, 50 degrees? Like, maybe you could expand on what that might mean, because for people who've never heard of the world of heat reuse, they don't know what high grade heat or low grade heat might be, or what some of these uses might be, for example. Mark Bjornsgaard: Yes. Yeah. So, as you say, low grade heat in industrial settings can be as high as a couple of hundred degrees. So when you say a data center is going to be producing heat at 45, 50, 55 degrees, then that doesn't sound very warm at all. That said, 30 percent of all of the economy, 30 percent of all of industry, can use that very low grade heat.
So for example, a swimming pool very reliably loses a degree of temperature every hour. And it only needs to be 30 degrees. So if you're trying to push heat from one side of a heat exchanger into another, if you've got pool temperature water at 25 degrees on one side of the heat exchanger, and you've got, you know, our heat at 55 on the other side, then heat flows the right way. When it comes to district heating systems and heat networks, the old ones, again, it was quite difficult to plug data centers into them, because those old heat networks were quite high heat. They needed heat at 80, 90 degrees. So if you were a data center and you said, I'll give you heat at 35 degrees, it really wasn't that useful. Now, fifth generation district heating systems, the ones that we're building in the UK and the ones that are beginning to be built elsewhere in the world, they can use very much lower temperature heat, because the buildings themselves are better insulated. So the whole thing, what we think of as industrial ecology, starts to make sense, because lots more offtakers can use this relatively low grade heat. Chris Adams: Ah, I see. And you also said one other thing about this being one of the flip sides of massively more dense compute, and this is one thing we've spoken about before. People talk about, okay, there is worry about data centers, or AI data centers, being massively more dense. Like, I think I saw you share a link on LinkedIn which kind of blew my mind. Like, some of these new racks from Dell can have like half a megawatt of Mark Bjornsgaard: Half a megawatt per rack. Chris Adams: And like, I couldn't really picture what that was. I know it's more than 30 times what you might have for an enterprise data center rack.
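As an editorial aside, those rack-versus-boiler numbers can be checked with some back-of-envelope arithmetic. This is only a sketch using the figures quoted in the conversation (97 percent of input electricity emerging as heat, a 10 to 20 kilowatt domestic boiler, a 7 to 12 kilowatt conventional rack); none of it comes from Deep Green's own specifications.

```python
# Back-of-envelope check on the rack-vs-boiler comparison made in the
# conversation. All figures are the speakers' approximations.

HEAT_FRACTION = 0.97          # share of input electricity emerging as heat

def heat_output_kw(rack_power_kw: float) -> float:
    """Waste heat (kW) from a rack drawing the given electrical power."""
    return rack_power_kw * HEAT_FRACTION

dense_rack_kw = 500           # the half-megawatt Dell rack mentioned
heat_kw = heat_output_kw(dense_rack_kw)

boiler_kw = (10, 20)          # typical domestic boiler range quoted
enterprise_rack_kw = (7, 12)  # typical conventional rack range quoted

print(f"Heat from one 500 kW rack: {heat_kw:.0f} kW")
print(f"Equivalent domestic boilers: {heat_kw / boiler_kw[1]:.0f} to {heat_kw / boiler_kw[0]:.0f}")
print(f"Times a conventional rack: {dense_rack_kw / enterprise_rack_kw[1]:.0f} to {dense_rack_kw / enterprise_rack_kw[0]:.0f}")
```

Running this gives roughly 485 kW of heat from one rack, somewhere in the region of 24 to 48 domestic boilers, which lines up with the "30, 40, 50 times" range mentioned.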
So like, that's quite a lot of energy there. But like, can you maybe just, like, what does half a megawatt even look like for most people? Because it's really hard to Mark Bjornsgaard: It's really, yeah, it is, it's sort of so vague, it's very hard to get your head around, isn't it? So, I always like to think of it in terms of the boiler on your wall at home. So that's going to be about 10 to 20 kilowatts, right? Your boiler at home. So, on the basis that 97 percent of the electrons that go into it come out as heat, that 500 kilowatt rack is producing anywhere between 30, 40, 50 times more heat than the boiler on the wall of your house. And so, an unfathomable, you know, amount of heat. Then if you look at it in the context of a normal data center, if you go into a conventional data center now, you might have rack densities of between 7 and 12 kilowatts a rack. So you're talking about, again, 20, 30 times the density of compute in a single space. Now for us, we love that, because we have the opposite problem of every other data center. We're space constrained, not power constrained. So if we can go to a swimming pool and we can heat a very large swimming pool with only two racks of gear, like a megawatt, that for us is amazing, because we spend much less money on building a data center: fencing, security, containers, fire suppression systems, all the other gubbins that you'd have around a data center. When you compress them and squidge them down, you make them much easier to deploy in the fabric of our communities and society. And then you get these really crazy stats, where I was in a data center in Sacramento a couple of weeks ago, and you've got this massive data hall, it's meant to be one and a half megawatts. It is one and a half megawatts of power, but the whole hall is empty.
There are just three or four racks, just at the end of the hall, because those racks are 130 kilowatts a rack. And so they've built a data center, the physical shell of the data center, for those rack densities, but they don't need all of that space. So actually, what's going on at the moment in the data center industry, we believe, is this sort of giant misallocation of capital, where people are building data centers in the old way, when they actually should be building them for the world that's emerging, which is these really high density racks that look nothing like conventional data centers. Chris Adams: So, okay, that's interesting, and I'd like to come back to some of the things you said there about what the implications of massively more dense compute might actually be. But you also said a few interesting things about this idea of community involvement and things like that. Because one thing that I've never heard anyone else in the data center industry, or even the kind of tech IT industry, talk about was this idea of borrowing the notion of a social license to operate. This is an idea that people talk about in, say, fossil fuels and oil majors and stuff like that. And you said, well, this is one way that we can essentially keep that social license to operate, by actually offering a much, much more equitable deal with the communities we're trying to integrate with, rather than having this kind of standoffish approach. Maybe you could talk a little bit more about that, because I don't really hear people saying that much about data centers. They usually say, "well, you should be grateful, because without us, you wouldn't have your cat pics," and so on.
It does feel like it's kind of missing a huge part of why people might push back against data centers, or, you know, whatever the deal is when someone comes in and says, "hey, can we build a bunch of digital infrastructure in your part of the world," for example. Mark Bjornsgaard: Yeah, I mean, as you say, we talk a lot about a social license to operate, because we believe that in the future you will get more and more pushback from communities around having data centers in their backyard, because you've got these huge sheds which are hogging and clogging transmission grids. So these transmission grids get built with public money, and then a commercial enterprise dumps itself down there and says, "well, I want 100 megawatts," and then suddenly you realize that half the streets in the area can't put in heat pumps, because there's no more grid capacity in the substations, or they can't have electric cars. So we think that social license to operate will be increasingly important in the future. No doubt. But the other, I guess, flip side of this is that data centers don't really employ anyone, right? I think the data center industry is a bit naughty when it says, "oh, you know, we're going to build a data center, we're going to employ 4,000 people." It's like, that's actually not true. You might employ 4,000 people while it's being built, but the reality is, once a data center is up and running, the number of people who have to be employed in the actual vicinity is very low. But if you build a data center and then you say, "I'm going to reuse the heat with an aquaculture park or a distillery or a laundry," suddenly you then produce genuine net new jobs in a local area.
So not only is the environmental bit of the social license talk very important, we think increasingly data centers are going to be looked on as having to be good citizens in terms of, you know, employment and doing the right thing with the community. And we've already seen a lot of this, right? We've had moratoriums on data centers in the Netherlands and in Ireland and Singapore. We think we're in this sort of grace period in the transition. In the next three to five years, the number of electrons is going to become very constrained. We're not actually yet in the bottleneck, but in the next three to five years we're going to start going into that period of time where there just genuinely are not enough electrons to go around. And we are going to have to make genuine choices about what we do with scarce electrons. And at that point, we believe, if you're a data center and you're not doing the right thing, then, you know, at the very least your operations are going to be severely curtailed, stroke, you're going to be in the midst of a full scale culture war, which you just don't want to go anywhere near. Right? Chris Adams: Okay, so you said a couple of things which I think might be worth exploring or diving into there, because one of the key things I'm getting from you is that, yes, you might be able to force some changes through quickly, but we need this transition itself to be sustainable, and if you push through some changes now, you'll end up with so much pushback that you won't be able to sustain that rate of change as we move away from fossil fuels to a society based on electrification in many cases. Mark Bjornsgaard: That's exactly it. Yeah, exactly. So, yeah, I think what we see is that we are energy and software folk, and we're venture capitalists by trade.
We don't take the data center industry at face value. What we see is 70 percent of the UK's total energy budget being the heating of spaces. So we're looking at it from the other end of the telescope. We're saying, well, what's the fastest, quickest way of heating all our shops and offices and factories? And the reality is, the quickest, fastest way of doing that is using computers as electric heaters. The fact that they happen to be there as data centers is almost, you know, just a happy circumstance for us. We're solving what we see as the meta problem, if you like, and seeing what tools and capabilities we have to be able to solve that problem. Chris Adams: Okay, all right, so this is actually one thing that you... Because I think this is the thing that some of us forget about when we just think about IT. Like, okay, there's other transitions, other changes that need to take place, and before you came on to this, I remember I saw you did a talk about these four wicked problems related to climate. And I wonder if you might expand on some of that, because I think it's quite a useful context to help people who are thinking about their role as a technologist. But, okay, like, why would you even care about heat reuse, and why would you care about anything other than just the efficiency of your code directly, rather than this kind of wider, more systemic view, for example? Mark Bjornsgaard: Yeah, of course. We all see our worlds in terms of what's in front of us, and that's completely understandable. As you say, we frame heat reuse and the electrification of heat in the context of what we think of as four wicked problems. And these wicked problems make up roughly about 50 percent of the entire transition.
So if we solve these four problems, then we will have tackled somewhere around 50 percent of the challenge of the transition. And those problems are: the heating of spaces, so all of our homes and offices; the industrial use of heat, so all industrial processes need to be decarbonized and kind of electrified; and then controlled environment agriculture and what's going on with how we grow stuff. The sustainability movement is rapidly casting its eye across agriculture and realizing that how we feed 8 billion people on this planet, something like 70 to 80 percent of all of our food, is intensively farmed and based on fossil fuels. And then the fourth wicked problem is carbon sequestration. So how do you actually sequester carbon out of the atmosphere? That is also a problem around heating. If you take those four wicked problems, they can all be somewhat or completely solved with data center heat, with low grade heat. And so we're sitting there saying, well, look, if those data centers are going to be built anyway, if we already need to spend between 10 and 20 percent of our entire electricity budget for our country on data centers, then all logic says you build those data centers where you can use the electron twice. The electron can do its funky thing in the data center, we can have all that utility, and then, so long as you've done it in the right way, like we're doing it, you can just pass on 97 percent of that electron in the form of heat, for it to then be used in those four wicked problems. So to us, there's a sort of beautiful, immutable logic there, particularly in a world where you haven't got enough electrons. If you had bountiful, you know, fusion, fission, whichever the good nuclear bit is, if you had a bountiful electricity supply, then you might not be that bothered.
But the reality is, in the next 10, 20 years, we're going to be so constrained by the amount of electricity that we have, we're going to have to get really good at being as efficient as we can. Chris Adams: And I suppose, I mean, I'm calling you from Germany, where almost all of our heating is still coming from combustion, burning gas and stuff like that, for example, which is expensive. And even when you look at the UK, gas, I think, is the largest source of heating in the UK by quite a long way. And these are two things which are volatile, and where you're exposed to all kinds of changes in prices and things like that. And this is one thing that we probably do need to move away from. So that seems to be one of the approaches that you're looking at here, I suppose. This is one thing I should ask you about then, because we spoke a little bit about this being a thing that is valued, and this is like a shift in the role that digital infrastructure plays in wider society. We've also spoken about how, in the UK, there is this goal to essentially have, as close as possible, a fossil free grid by 2030, which would basically mean getting rid of a bunch of this heating from burning fossil fuels, right? Now that's a really ambitious goal. And like, as someone who grew up in the UK, I'm like, "wow, this is really cool." I'm really impressed by that kind of ambition. And it's also one thing we've seen where a number of larger providers have basically said, "well, this 2030 goal, it was a nice idea, but the moon has moved," to quote Microsoft president Brad Smith, saying, "oh yeah, we're not pushing for 2030 anymore."
And I kind of feel like, if there is this goal of 2030 in the UK, for example, and we have very similar goals in other parts of the world, what needs to happen at policy level to actually make this possible for the actual data center or digital infrastructure there? Because right now, I'm not aware of the kind of support, or how policy values this kind of different way of thinking about the role that digital infrastructure plays. But we have seen, with the new government in the UK, they do seem to be very keen on having a massive rollout of infrastructure. So, what's the deal here? How do we square this circle, basically? Mark Bjornsgaard: The declaration of data centers as critical infrastructure isn't quite as good news as it looks. That is predicated on regulatory capture, and if you declare data centers as critical infrastructure, you can then basically ride roughshod over any local objections. So the fact that the Labour government announced that isn't necessarily a good thing. It's probably the opposite. In Europe, we've got the EED, the European Energy Efficiency Directive, which effectively says that, certainly in Germany, by 2028 you won't be able to build a new data center without reusing 20 percent of the heat. So there is already some sort of regulatory framework out there that's saying, "you've got to do the right thing. You've got to use green electrons. You've got to reuse the heat." So that's good. The reality is, as we all know, governments probably have to use carrot and stick. So you probably have to do a little bit more stick and a little bit more carrot. Those people who are being good citizens and reusing heat should get some brownie points and should get some economic benefit from that. And those who aren't, increasingly, should be penalised.
I mean, now you'd expect us to say that, because obviously we're on what we think of as the right side of history. So I think the short answer is the government does need to legislate. There is just not enough structure, and there's not enough impetus for people to do the right thing. But also, particularly in the UK, planning is a huge, huge hurdle. I never really understood that until we'd been working with Deep Green, you know, building data centers. It is breathtaking how Kafkaesque the planning system in the UK is. It's beyond insane. It's crazy. So you've got regulations like: your lease from a council on a district heating system means that you only got that lease because you said you'd use green energy. If you put a data center within the environment of your district heating system, because we've got generators that kick in, you know, for redundancy and resiliency, that then means that you're in contravention of your lease. So instead of somebody just going, "yeah, that's a shit idea, let's not do that, put a cross through that," it's an unfathomably complicated year long process. For one pool we're trying to qualify, we've had to resubmit planning seven times. So this is just, I mean, it's beyond rank stupidity, it's just a madness in this country, in the UK at least. We hate success in this country. We just hate success. This will be the third business that we develop in the UK and then scale in the US, because in this country, yeah, we just can't get out of our own way. It's really sad. And, you know, everyone says, "oh, we'll try and change." It's like, it's very simple. You either want people to do this or you don't. Do you know what I mean? No amount of meetings or nice coffees or platitudes or strongly worded emails. Do you know what I mean? Like, it's very fucking simple. Can I build a data center or not? If I can't, then I can't.
You know what I mean? Like it is, yeah. So it's very difficult to do here in this country. And I suspect in a lot of Europe it is too. So we need government to get out of its own way and clear a path for us. Chris Adams: So you said a couple of things that I think maybe we could just go into in a bit more detail before we move on. Because you mentioned one regulation, the Energy Efficiency Directive, which is ideally one of the drivers of transparency for people operating digital infrastructure. Like, you know, for you to comply with this, you need to be able to list information like the carbon intensity of the power, how clean the power is, for example, how much of it is coming from, say, fossil fuels, how much water you're using, and things like this. And presumably these are some of the metrics that you might be able to look good on, as it were, or this kind of way of building infrastructure might look a bit better on, for example. Like, if you're reusing some of the heat, I suppose, then does that have an implication for how much water might be used, for example, and things like that? Mark Bjornsgaard: Yes. And you've got to be very careful that it's not whack-a-mole. You know, you might use evaporative cooling and drop your PUE, the Power Usage Effectiveness, the energy use of your data center, but then you massively increase the amount of water you use. So there is a balance. There is a balance to be struck across all of these metrics. That's why there isn't one perfect kind of measure, if you like.
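For readers who haven't met these metrics: PUE (Power Usage Effectiveness) is total facility energy divided by IT equipment energy, with 1.0 as the theoretical ideal, and the usual companion metric for the water side is WUE (Water Usage Effectiveness), litres of water per kilowatt-hour of IT energy. WUE isn't named in the conversation, and the numbers below are purely illustrative, but a small sketch shows the whack-a-mole trade-off Mark describes:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total site energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_kwh

it_load = 1_000_000  # kWh of IT energy over some period (illustrative)

# Air-cooled facility: more cooling overhead energy, no evaporative water use.
air_cooled = {"pue": pue(1_500_000, it_load), "wue": wue(0, it_load)}

# Evaporative cooling: less overhead energy, but significant water consumption.
evaporative = {"pue": pue(1_150_000, it_load), "wue": wue(1_800_000, it_load)}

print(air_cooled)
print(evaporative)
```

A facility can report a better (lower) PUE while its WUE gets dramatically worse, which is why no single metric tells the whole story.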
Certainly in our case, we don't use any water. The way that we cool, the direct to chip cooling and the types of cooling we use, we don't use any water. And, you know, as far as I understand, and I'm not a techie expert in this area, using water is really a question of just how much margin you're prepared to sacrifice. It is perfectly possible to cool a data center without using any water. It's just that you make a small amount more money on each data center if you use water. And again, the great and good of the data center industry could always be good environmental citizens. They could choose to use no water and just make a little bit less money. Okay. Chris Adams: Ah, so you said something quite interesting there. So you're using essentially liquid cooling. As I understand it, liquid cooling in cars is way more efficient than air cooling, which is why we've moved over. Presumably it's the same kind of idea here. So that would result in a more efficient system that you'd be looking at using here. Okay. And that helps me understand how that might actually fit into heating a swimming pool or something like that. So you've got an efficient way to move the heat from one place to another place, and, like, the whole point about, you know, people using water for heat storage and stuff like that, it makes total sense. I can see why you'd have a nice chunky kind of heat sink, I suppose. And if these are the things that you're doing, then I suppose there's a chance to be more transparent with the kind of figures you're using for this. So this might be, okay, that's interesting.
All right, so if I could, I'd like to ask you a little bit about this AI question, because the approach you're describing here, of having lots and lots of distributed, smaller data centers built into the kind of fabric around us, seems quite a bit different to the massive, centralized, gigascale data center paradigm that people talk about. So I want to ask, I've always assumed that you need to have massive centralized data centers to do some of the kind of AI workload stuff, because you need to have these things networked with each other. The way you're describing it sounds like that might not be the case. You know, the things not being in the same building might not be the showstopper that people initially thought it was. Could you maybe talk a little bit about this? Because this suggests a kind of post cloud way of thinking about computing, for example. And I want to ask, like, do you actually need a mega cluster? Or is there an alternative that you're suggesting here? Mark Bjornsgaard: The truth is, at the moment you need the mega clusters. So when we think of training large language models, at the moment those mega clusters generally need to be all in one place. The trouble is, as data centers grow bigger and bigger, and as you build gigawatt data center campuses and even larger, when we think of the trillion dollar cluster, the amount of compute we're going to need to kind of enable artificial general intelligence, I think we're going to need something like 100 gigawatts of power, right? A 100 gigawatt data center. Now, when you start to build data centers at these sizes, you actually start to have a distributed problem anyway, because each node running a version of the model is physically so far away from the other nodes. You've got a distribution problem almost by default, by size.
If that makes any sense. So we've certainly got to be better at networking the architectures around large language models. And there isn't very much academic research on this; there is a bit. We're doing a lot of work with NVIDIA and Nokia around this. The Chinese, we think, are doing a lot more work around this than other people, which is in itself interesting as we see a race to AGI emerging. So certainly the networking between data centers is going to become increasingly important. In the last six months, you've seen Microsoft spending billions laying massive fiber pipes between its AI data centers, because, you know, even a 100 megawatt data center needs to be physically clustered with other 100 megawatt data centers. But that's all in the world of training. That's where the models are learning, and that's great, and that's going to go on. The world that will emerge is obviously mostly going to be inference. So when you think of a world of AI in 10 years' time, actually 90 percent is going to be inference, 10 percent is going to be training. So at Deep Green, we're not necessarily trying to win the large language model, massive cluster game. What we're building is the compute substrate for the future, where there will need to be thousands of megawatts of smaller data centers, smaller cluster sizes, much closer to where we all live and work. This compute substrate will be required in the future. Chris Adams: Okay. All right. So basically, what I think I'm taking away from that is that it's almost like a typology of different kinds of digital infrastructure that you might think about. So rather than one model being inherently better than the other, you probably need different setups, depending on the different kinds of roles that you might actually be having.
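One physical reason inference favours smaller data centers nearer to users, as described above, is propagation delay. Light in optical fibre covers roughly 200,000 kilometres per second (a standard rule of thumb, about two-thirds of the speed of light in a vacuum), so distance sets a hard floor on round-trip time no matter how fast the hardware gets. The distances below are illustrative, not figures from the conversation:

```python
# Best-case network round-trip time, considering only the speed of light
# in glass fibre (~200,000 km/s); real paths add routing and queueing delay.
FIBRE_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over a fibre path of the given length."""
    return 2 * distance_km / FIBRE_KM_PER_MS

# An in-city edge site versus progressively more distant facilities.
for label, km in [("same city", 25), ("same country", 400), ("another continent", 6000)]:
    print(f"{label:>17}: {min_round_trip_ms(km):.2f} ms minimum round trip")
```

A few tens of kilometres adds a fraction of a millisecond; a few thousand adds tens of milliseconds, which is one reason interactive inference benefits from compute close to the user.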
And you can kind of see people talking a little bit about this with the whole idea of edge computing. But it sounds like for certain things, there may be a world where you do have big box, Walmart-style, out of town data centers doing certain things, and you may have to accept that you're not able to use some of the waste heat, or you may need to co-locate things to use that and have some kind of clusters. And I guess in China you can see some examples of people co-locating energy generation with industry and things like that. But then there's this other end of the scale, which is a more distributed thing, and that's something that you're looking at building: the kind of data centers that might actually integrate with, say, cities, where they're closer to where the compute is actually being used. But you're trying to go for a more integrated approach by making as many of the waste outputs something that can be reused by other people, for example. Because presumably there's a cost to heating a swimming pool, like it's non zero, and if you've got the heat coming from what you're using, then that's an economic benefit, something that you might write into, like, community benefit agreements and things like that. Mark Bjornsgaard: Yeah. If you think about some of the inference use cases that are already emerging, whether that's, you know, you interfacing or chatting, maybe your kids are talking to a chatbot and trying to learn about something, and they've got some rendering, some visualization, which takes a lot of GPU compute.
It is better that those GPUs are located somewhere closer to where the user is, particularly in the US, or other countries, not just the US, but, you know, across Europe and other large land masses. You want the compute to be physically closer to people, you know, where they're living and working. So that is very important. But of course, that world is just emerging. That said, there's already a lot of refinement training. There are already a lot of people who are taking the outputs of the very large language models, applying their own data to them, and then refining and training them. And then there's a whole bunch of other use cases around medical science and fluid dynamics and all the other stuff that the robots are going to do for us. That world is now, as we know, emerging fast. That's the world that we're really building for: smaller compute clusters, much closer to where people live and work. And then, as you say, you start to change the economics of how society works. You know, in the UK, we're spending 1.5 billion pounds heating our swimming pools every year. Really, we shouldn't be spending anywhere near that, because those pools should be being heated by recaptured heat, if we allow ourselves to build the data center infrastructure in the right way. The interesting thing about the UK particularly, and other countries, is that there's lots of fiber in the ground. So when we first started building data centers, people talked about them following the fiber. Now, data centers don't really need to do that. There's plenty of fiber around. You can pretty much build a data center wherever you like. Now people are saying they're following the power. But the third generation, the third phase of data center development, we see, is people following the heat.
So first of all, you went to where the fiber is, then you went to where the power is; that's the era we're in now. But very quickly, you're now going to build data centers where the heat's required. Chris Adams: I see, where there's presumably, like, an offtaker who would use that heat and then be in favor of something being set up in their neighborhood or as part of their project, because they're getting a benefit from it. Okay, so you said one thing that was, I think, quite interesting there: okay, there's loads of fiber, there's more fiber than we thought, like all this kind of dark fiber from 20 years ago, the last boom and bust, and people might reuse some of that. And some of this could feel a little bit academic, or it might feel a little bit like, "okay, what's happening in the future?" But as I understand it, some of this stuff is like, what if I'm a developer and I think, "oh, this is kind of cool." I like the idea of actually being able to run infrastructure, run my code or run my applications somewhere like this, in this kind of environment, because I think it's maybe more interesting. And if I can have the same convenience and the same kind of experience as a developer deploying code, then, you know, I might try this out. Is it something that people can use? I mean, if I'm used to deploying things into, like, virtual private servers or Kubernetes, is there something like that? How do I actually try out or use some of this stuff, for example? Mark Bjornsgaard: Yes, it's because we are just a dumb data center operator. We are making the capacity of our data centers available, the physical space in our data centers, for people like Amazon and Microsoft and Google and loads of other people to come and put their kit in our data centers. 
So the minute you put your kit in our data center, then it will be doing something useful with the heat. So as you say, there are a few cloud providers who are already partnering with us. Our main partner, who have been incredibly supportive to us for years, is a platform called Civo. So yeah, again, a UK business paying UK tax. If you as a developer want a cloud service that is every bit as good as AWS or Google or Azure, and you want it to be green, then just go to Civo. Civo are using our data centers. So you as a developer, you shouldn't have to make any compromises at all, right? You shouldn't have to worry about any of this stuff. This should all be abstracted away, and in time it will be, where you can just be assured that when you're running code, it's being run in the most sustainable way possible. Now, part of the problem with the large clouds is that their reporting, their ESG reporting, their sustainability reporting, is pretty shonky, stroke, complete bullshit. So I think that's part of the problem, that a lot of cloud services at the moment aren't really taking this very seriously. And it is certainly very hard as a developer, or as an end user of a cloud platform, to know how green or not your cloud is. The reality is, any cloud platform that claims to be green just by using green electrons is ignoring 90 percent of the problem, right? 90 percent of the carbon in a data center is in the kit itself. The scope, what's called scope three, the carbon that has been used to manufacture the computers themselves. So however much you jump up and down and say, "I'm doing really well because I'm buying green electricity," that's pretty much. 
I mean, it's not Chris Adams: 10 percent rather than the other, Mark Bjornsgaard: Exactly. So really, as we all get better at this and as reporting becomes better and people start to come down on greenwashing, as developers, as a whole community, we will have much, much better visibility about how green our clouds really are. But the reality is, a green cloud comes down to the carbon in the compute and what you're doing to mitigate and reduce and remove that carbon. Chris Adams: Okay. Alright, so there's one project that we work on in the Green Software Foundation that may be relevant for this, called the Realtime Cloud Project, where there is an effort to basically work out the carbon intensity on a kind of per-hour basis for every single region that we have. It would be wonderful to have groups like Civo or people like that share something like this, because the whole effort is to have some standardized datasets, some standardized numbers that you can trust and you can optimize for. And if what you've described is basically saying that running stuff inside infrastructure here is essentially somewhat fungible compared to running in other infrastructure, then if you're able to reflect that in a lower carbon intensity, or lower embodied energy, or lower water usage, or any of the other metrics that are available, that feels like a useful thing to allow people to do. And it sounds like that is something people can do today, rather than this being a conversation about 2026 or 2027, by the sounds of things. Mark Bjornsgaard: Well, to be clear, we're still, we're bringing our capacity online now. 
So we're about a year in since raising the money from Octopus, sort of designing, building, and now getting shovels in the ground and actually getting the first wave of data centers built. So we've deliberately not said anything about this, because we didn't want to be kind of part of the problem. We want to be very much part of the solution. Whatever we will be reporting next year, you know, we'll be holding our hands up saying, this is as good as it gets at the moment, and we're going to improve it. But I think it's incumbent on all of us to be very transparent about that. No one's trying to be perfect. No one's going to get kind of shot down for not being perfect. I think it's much more about the attitude you bring to it as a business rather than being, you know, "this is the law and I'm telling you it's like this" when we all know that's not true. But I think it's much better to be more tentative about it and say, "look, we don't know everything, but, you know, we think our scope three is this, and we are removing it using these removals." And if somebody says, "I don't like those removals, I think they're nonsense," then you say, "well, okay, but we are paying, you know, $250 a tonne for that carbon, so they're not complete bullshit." You know what I mean? In this next phase, it's all about hopefully not giving each other too hard a time, but actually getting a bit more transparency and a bit more clarity on where we are, because only then can we start chipping away at it, right? Chris Adams: Yeah. And like in the UK, we have very clear targets, at the very least, like 2030 to get there, for example. Mark Bjornsgaard: Quite, which is incredibly short Chris Adams: It's very, it's like, it's almost tomorrow, isn't it? Yeah. Mark Bjornsgaard: I'm so old that the years pass like days these days, but yeah, five years doesn't feel very long at all, frankly. Yeah. 
Chris Adams: I can definitely sympathize with that, because we are a non-profit focusing on a fossil-free internet by 2030. So that is very acute for us as well. All right, Mark, I've really enjoyed chatting with you, and I've learned a bunch from this wandering through the world of digital infrastructure, and we're just coming to the end of the time. So I want to ask, are there any projects or things you want to point people's attention to, or if people want to find out more about the work you're doing, where should they be looking, for example? Mark Bjornsgaard: Yeah. If you're a developer, go to Civo. They're amazing people. It's an amazing platform, as I said. And the fastest, quickest way of supporting us is by using Civo. Or buying Hewlett Packard Enterprise GreenLake AI. Whenever you buy HPE kit in the UK, and hopefully the US, you will have the option to land it in a Deep Green data center now. So increasingly, developers and businesses can make green choices just by searching out our partners. You'll almost certainly never come to us directly. You're going to be consuming cloud services via a third party, but asking your cloud service providers to land that kit in our data centers is the fastest, quickest way of helping us. Yeah. Chris Adams: Brilliant. Well, in that case, I'll speak to other friends to see if there's a way to filter cloud providers for "heats swimming pools" as one of the features when I'm looking for my cloud computing in future. Mark, this has been fun. I really enjoyed it. Thank you so much for making the time, especially given, like, getting hit with COVID last week and everything like that. So once again, thank you for this, and yeah, this is great. Take care of yourself and have a lovely week. All right, Mark. Mark Bjornsgaard: Thanks very much for having me. Thank you. 
Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please, do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Environment Variables: Finding Signal Amongst the Noise in Carbon Aware Software (35:10)
In this episode of Environment Variables, host Chris Adams is joined by Tammy Sukprasert, a PhD student at the University of Massachusetts Amherst, to dive deep into her research on carbon-aware computing. Tammy explores the concept of shifting computing workloads across time and space to reduce carbon emissions, focusing on the benefits and limitations of this approach. She explains how moving workloads to cleaner regions or delaying them until cleaner energy sources are available can help cut emissions, but also discusses the challenges that come with real-world constraints like server capacity and latency. Together they discuss the findings from her recent papers, including the differences between average and marginal carbon intensity signals and how they impact decision-making. The conversation highlights the complexity of achieving carbon savings and the need for better metrics and strategies in the world of software development. Learn more about our people: Chris Adams: LinkedIn | GitHub | Website Thanathorn (Tammy) Sukprasert: LinkedIn | GitHub | Google Scholar Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: On the Limitations of Carbon-Aware Temporal and Spatial Workload Shifting in the Cloud | Proceedings of the Nineteenth European Conference on Computer Systems [03:25] On the Implications of Choosing Average versus Marginal Carbon Intensity Signals on Carbon-aware Optimizations | Proceedings of the 15th ACM International Conference on Future and Sustainable Energy Systems [22:12] Resources: Tammy's GitHub [19:00] CarbonScaler : Leveraging Cloud Workload Elasticity for Optimizing Carbon-Efficiency | Proceedings of the ACM on Measurement and Analysis of Computing Systems [33:19] If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! 
Connect with us on Twitter, GitHub and LinkedIn! TRANSCRIPTION BELOW: Tammy Sukprasert: With that one hour job with perfect knowledge of one year, we can reduce the carbon emission of the whole world by 37%. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to Environment Variables, where we bring you the latest insights and updates from the world of sustainable software development. I'm your host, Chris Adams. One of the oft-repeated quotes when people talk about sustainability in software is that if you can't measure it, then you can't manage it. And when it comes to working out the carbon footprint of a software application, a significant portion of the footprint comes from what we refer to as the carbon intensity of the electricity in use, i.e., how green it is. And there are various steps you can take to make the same application, using the same code, greener by running it where the grid is greener. So if you were to choose to run it in Iceland, that's one example. Or you can choose to run the application at different times when the grid is greener, like when the sun is in the sky and your solar panels are working away. But how much greener can they get? And what else might we need to think about when trying to adopt ways or ideas like this? Enter our guest for this episode today, Tammy Sukprasert, a PhD student at the Laboratory of Advanced Software Systems and Sustainable Computing Lab at the University of Massachusetts Amherst. 
Tammy recently authored the paper On the Limitations of Carbon Aware Temporal and Spatial Workload Shifting in the Cloud, which examines how shifting computing workloads across time and space can help cut emissions. Tammy, we're going to spend a bit of time talking about why you chose to work in this field. But to begin with, can I give you a bit of space to introduce yourself and what you do? Tammy Sukprasert: Hi, Chris. Thanks for having me here. I'm Tammy Sukprasert. I'm a PhD student from the University of Massachusetts Amherst. I work on cloud and edge computing with a specific focus on decarbonizing computing. I'm currently calling you from Amherst, Massachusetts, and it's nice out here. Chris Adams: Cool. That's nice. It's snowing in Berlin, so I'm a little bit jealous, actually. Hi folks. If you are new to this podcast, my name is Chris Adams. I am the Director of Technology and Policy at the Green Web Foundation. I'm also one of the chairs of the Green Software Foundation Policy Working Group, and the host of this podcast. Now, before we dive into the conversation with Tammy, if you're listening to this for the first time, here's a quick reminder. We will try to link to all the papers and all the projects on GitHub, and there will be extensive show notes as well as a transcript if there's anything you particularly missed. And I think that's pretty much it. Tammy, are you sitting comfortably? Tammy Sukprasert: Yep. Nice. Chris Adams: In that case, I guess I'll begin. All right. We've linked to this in the show notes, but the paper title, On the Limitations of Carbon Aware Temporal and Spatial Workload Shifting in the Cloud, does kind of give a clue about what this research might actually be about. But for those who are new to this idea, would you mind bringing listeners up to speed about what workloads are, and what workload shifting is, when we talk about carbon aware computing? Tammy Sukprasert: Sure. 
So to understand what workload shifting is, we need to have some idea of why we can shift the workload in the first place. So carbon intensity is based on the contributions of the different energy sources in the electric grid, right? At different points in time, the demand changes, so there is a different contribution from different sources. That's why there's variation in carbon emissions. So there will be a high carbon period and a low carbon period. And because of that, instead of running the workload during the high carbon period, you can actually schedule the workload to a lower carbon period or a lower carbon region. So for some workloads, you can delay the start time. The workload could be machine learning or some batch jobs. And instead of running right away when it was dispatched during the high carbon period, you can delay the start time and run it during the low carbon period. And at the same time, there is another type of workload that you can move or shift around. That could be a web request or an inference request. And instead of running your workload in your own region, you can look into other locations that have lower carbon intensity and migrate the workload to run it there. Chris Adams: So let's say I'm using, like, maybe a chatbot, or I'm using something like maybe ChatGPT, and I am in, say, Germany. Maybe it's dark, it's not very windy and it's not very sunny, for example, and most of the power is coming from coal being burned on the grid. Rather than my request being served in Germany at the same time, it could plausibly be, say, forwarded to somewhere else in the world, as long as it's fast enough. So it might get forwarded to, say, Denmark, which is super windy instead. And that would mean that it would be slightly greener, for example. That's what you were referring to when you spoke about the inference. 
And then the other thing you mentioned before was like a machine learning job or a video encoding thing. That's something that I might not be seeing myself, but it's something that probably needs to happen within, like, a few days or something like that. So it's important, but it's not urgent. And because there's a bit of flexibility, I can choose when to do that to minimize the environmental impact of the extra demand being put onto the grid. I think that's what you're saying there, right? Tammy Sukprasert: Right. So it's basically aligning your job schedule with the low carbon period. Yeah. That's the key idea of the shifting. Chris Adams: Gotcha. And then, so you spoke about two ideas: one is doing something through time, that's the temporal thing, like I either bring it forward or wait till later. And then there's a spatial idea, which is me just moving it somewhere else. It might be happening at the same time, but it might be happening in Denmark, for example, or Iceland, rather than in Germany. Yeah? Tammy Sukprasert: Yes, that's correct. Chris Adams: Okay, cool. So we've got a good idea about what some of this might be. And a question I might ask is, why is this interesting to you? How did you end up finding out about, or even wanting to research, this in the first place? Tammy Sukprasert: Yeah. So there are many works that look into the benefits of carbon reduction based on time shifting or spatial shifting, but they happened in limited settings, i.e., a small number of regions or specific types of jobs. So people only look into spatial shifting, or people only look into temporal shifting, or maybe they only look into a small number of regions. But we were wondering, what if we look into both spatial and temporal, and with the big picture of the whole world? 
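The temporal and spatial shifting being described here can be sketched in a few lines of Python. This is an illustrative toy, not code from Tammy's paper or simulator; the region names, forecast values, and deadline are all invented for the example:

```python
def best_slot(forecasts, deadline_hours):
    """Return the (region, hour) pair with the lowest forecast carbon
    intensity among all slots that start before the deadline."""
    return min(
        ((region, hour)
         for region, series in forecasts.items()
         for hour in range(min(deadline_hours, len(series)))),
        key=lambda slot: forecasts[slot[0]][slot[1]],
    )

# Hourly carbon-intensity forecasts in gCO2/kWh (all numbers invented).
forecasts = {
    "coal-heavy-region": [520.0, 480.0, 450.0, 500.0],
    "hydro-region":      [45.0, 44.0, 46.0, 45.0],
    "solar-region":      [300.0, 120.0, 60.0, 280.0],
}

region, hour = best_slot(forecasts, deadline_hours=4)
print(region, hour)  # the hydro-backed region is cleanest in every hour
```

Searching across both dimensions at once is what "time and space" shifting means in practice: a deferrable job compares every allowed hour in every allowed region and picks the greenest slot.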
So instead of looking into a few regions, we look into the 123 regions that we have in our data set, and we want to see what is the broad impact of temporal and spatial shifting as a whole. Chris Adams: I see. Okay. So thanks, Tammy. So for this research paper, as I understand it, you decided to see what kind of savings you really can achieve with things like carbon aware computing, and a little bit about what kind of conditions might be necessary for these savings to be possible. So would you mind expanding on some of this? We can start simple first, and then we can work our way up. So yeah, let's see, what were the first things we started with? What was the ideal scenario for the savings? And we can go from there. Tammy Sukprasert: All right. So with the current state of the world, right, the average carbon intensity is about 368 grams per kilowatt hour. And to achieve as much savings as possible in terms of carbon reduction, you would want to migrate your workload to Sweden, which is the region with the lowest carbon intensity in our data set. And migrating all the workload to Sweden, you can actually achieve 96 percent carbon reduction for the whole world. Chris Adams: Okay, so what you're talking about there is you've basically gone from an average figure for carbon intensity of electricity to much, much cleaner electricity. And in this kind of ideal scenario, that's what you've essentially done: you've moved all of the computing jobs to the cleanest possible electricity. This is the ideal scenario. So where do we go from here then? For example, are there other constraints and things we know we need to take into account when doing this? Tammy Sukprasert: Great. So of course, Sweden cannot take all the workloads in the world, right? So we were like, okay, instead of just moving everything to Sweden, what if we have capacity constraints? 
So we look into the scenario where every region in the world has an idle capacity of 50%. We're trying to be generous here, because we want to understand the impact of the idle capacity on carbon reduction, right? So with every region having 50 percent idle capacity to absorb jobs from other regions, not everyone can migrate to Sweden, right? Some regions have to migrate to somewhere else. So, with that, the savings drop from 96 percent global reduction to 51 percent. Chris Adams: Okay. Tammy Sukprasert: If not everyone can go to Sweden. Yeah. Chris Adams: All right. That's still not bad. And when you're talking about capacity, you've used the word region here, and for region, I think that's like a cloud region, like, say, AWS West or something like that. And there's maybe a certain amount of reserve capacity they have to hold back. So the idea is that different cloud data centers have a bunch of spare capacity, and that's what they'd be using to move everything there, right? So, okay. Well, we never actually talked about latency constraints Tammy Sukprasert: as well, right. So let's say, for example, for a web request, you need some service level objective, or SLO, to be respected, right. And so we look into that as well. So now we have capacity constraints, so the scenario gets more and more realistic, right? From 96% you added a capacity constraint, and now the saving drops from 96% to 51%. And we also look into a more realistic case where we think about web requests that have some latency constraint, where there's some service level objective that has to be respected. 
And so on top of the capacity constraints, where we achieve 51%, we added a 50 millisecond latency constraint, and that further reduced the carbon savings to 31%. So in the real life scenario, we are really far from the 96% that we want to aim for, right. Chris Adams: So if I understand that correctly, basically the speed of light is fast, but it's not infinite, and therefore there are certain parts of the world where you definitely need to get a response back in time. And that's why you've introduced this kind of 50 millisecond budget. So your ping, your request, has to come back in that kind of time budget. And that basically places a second constraint. And even with these two constraints, this is essentially talking about the carbon emissions that can be reduced by moving things to the various regions that are available, based on the capacity of all these other places, like Sweden and then the next cleanest one and the next cleanest one. That's what you're referring to there. All right. Okay. I think I understand that part. And honestly, 31 percent still sounds pretty good. But if we look at the figures for, what, 2%: if we're looking at maybe a hundred million tons of CO2 each year, then 30 percent of that is 30 million tons. That's not bad. That's more than at Google, for example. So, okay. That is interesting, then. So this is one of the high level findings you found, assuming you could do this in these kinds of decreasingly idealized scenarios. And eventually we get to a point where, okay, this is actually something that you might plausibly try adopting, or you might be kind of advocating for in certain regions, for example. Tammy Sukprasert: Right. Yeah. The point that we're trying to make is that as you add more constraints, the gap between the ideal case of 96% and your achievable goal widens. 
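The capacity and latency constraints discussed above can be captured in a small greedy placement sketch. Again, this is a made-up illustration rather than the paper's methodology; the regions, intensities, latencies, and slot counts are all hypothetical:

```python
regions = {  # carbon intensity (gCO2/kWh) and idle capacity in job slots
    "sweden":  {"ci": 20.0,  "idle": 1},
    "france":  {"ci": 55.0,  "idle": 1},
    "germany": {"ci": 380.0, "idle": 2},
}
latency_ms = {  # round-trip latency from a source region (invented numbers)
    ("germany", "sweden"): 30,
    ("germany", "france"): 20,
    ("germany", "germany"): 1,
}

def place(source, budget_ms):
    """Send a job to the lowest-carbon region that still has idle
    capacity and meets the latency budget; fall back to the source."""
    candidates = [
        dest for dest, info in regions.items()
        if info["idle"] > 0
        and latency_ms.get((source, dest), float("inf")) <= budget_ms
    ]
    dest = min(candidates, key=lambda d: regions[d]["ci"], default=source)
    regions[dest]["idle"] -= 1
    return dest

# Two jobs from Germany under a 50 ms SLO: the first takes Sweden's only
# free slot, so the second has to settle for France.
first = place("germany", 50)
second = place("germany", 50)
print(first, second)
```

This is the mechanism behind the shrinking savings figure: once the cleanest region's spare capacity is exhausted, or a destination misses the latency budget, jobs fall back to dirtier regions and the global reduction drops below the ideal.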
So that's what we're trying to show in this paper. Chris Adams: Okay, cool. And when you're talking about the regions here, these are largely the regions that are inside ElectricityMaps. Was it the ElectricityMaps dataset, or was it just the list of all of the regions for the biggest cloud hyperscalers? I wasn't quite sure when we were looking at this, 'cause there's a list of them, right? Tammy Sukprasert: Right. So we used a dataset from ElectricityMaps. Shout out to ElectricityMaps. Thank you for the dataset. The dataset has 123 regions worldwide, right? But on the dataset, we grouped them up, we filtered the regions that overlap with the cloud regions, and looked exclusively at the results for the cloud regions. Chris Adams: Ah, I see. So you created this way to make these comparisons basically by saying, maybe there's one data center, which we see in the cloud, like, say, Amazon AWS West, which a lot of people refer to as, like, Oregon West 1. And because we know that a dataset of carbon intensity from ElectricityMaps says, yes, this is Oregon, you've been able to look at the numbers in that way, right? That's what some of this is referring to. Tammy Sukprasert: Yeah, so we did a mapping between the ElectricityMaps data and the location of the cloud region. Chris Adams: Okay. All right. And when we're looking at those numbers there, you mentioned this figure of 96%. Was that looking at just location, or was that looking at anything to do with time as well? Because I wasn't quite sure about that part. Tammy Sukprasert: So the 96 percent is just spatial shifting. We have a separate result for temporal shifting, where every region in the world can schedule their workload based on one year ahead data. So everyone in the world can schedule their workload if they know about Chris Adams: perfect forward knowledge. Yeah. Tammy Sukprasert: Yeah, perfect knowledge for one year ahead. 
And with that, we look at the extreme case, the most ideal case, where the workload is a unit job, a one hour job, to understand what is the best case scenario for temporal shifting, right? So with that one hour job, with perfect knowledge of one year, we can reduce the carbon emission of the whole world by 37%. Chris Adams: That's just temporal, not looking at location as well, right? Tammy Sukprasert: Yes. So we have the results for temporal shifting: if we give every region perfect knowledge of their carbon intensity a year ahead to plan their workload, what is going to be the best scheduling scenario for temporal shifting, right? So with everyone having perfect knowledge for a year, you can reduce the carbon emission of the whole world by 37%. Chris Adams: Ah, okay. So you're looking at around maybe 30 percent when we were looking at purely locational, and it's relatively similar when we're looking at just purely time. But these are relying on a kind of visibility that people don't really have a lot of the time. Okay. So the next question I'm kind of asking is, is it possible to look at time and space together, to get an idea of what the savings might be from that then? Tammy Sukprasert: Yeah. So we also look into that in our paper. So if you look at spatial and temporal shifting combined, the result actually shows that spatial shifting dominates the carbon reduction. This is simply because when you move the workload to the lowest-carbon region possible in your data set to achieve the savings, that region is already low in carbon intensity, so time shifting doesn't make much of a difference. Chris Adams: Ah, I see. Okay. So basically the clean regions tend to be clean most of the time anyway, rather than kind of spiking up and down, for example. That's what it seems like you're suggesting there, right? Tammy Sukprasert: Right. 
It still varies, but the variation between the high carbon period and low carbon period is relatively small. Chris Adams: Okay, well, that kind of makes sense. 'Cause now that you lay it out like that, I didn't really think about it until you framed it that way, but, like, Iceland is usually green because it's running on geothermal, which is pretty steady. And even when you look at, say, Sweden, for example, there's wind and everything like that, but there's lots of hydro and stuff like that. So again, it's not nearly as spiky as, say, Germany, where we're the land of coal and solar. We have lots of coal, which is high carbon intensity, and lots of solar, which is very low carbon intensity. And flicking back and forth between these things means that we might have big swings, but on average, it's not particularly low compared to Iceland or Sweden, for example. Huh. Tammy Sukprasert: Correct. Yeah. Chris Adams: Oh, right. Wow. In retrospect, it kind of seems obvious, but things are only obvious when you look at it like that. And one thing you shared with me before we spoke was that, if people wanted to kind of explore some of these calculations, is this online somewhere? Is it like a GitHub repo or something where you can poke around at some of these things? Tammy Sukprasert: Yeah. So all the simulations in this paper are open source. So please check my lab website, my lab GitHub, for the simulations. Chris Adams: Okay, cool. All right. I think I've got the link here. So there's literally a repo called decarbonization potential. That's the one you're referring to here, right? On GitHub. Tammy Sukprasert: Yes, that's correct. Chris Adams: Brilliant. Okay. We'll definitely add that in the show notes for people who aren't, like, frantically exploring this themselves; it's right there. Okay. 
So that was one of the first pieces of research. Essentially, there are some savings that can be made. It's around the 30 percent mark in a kind of perfect world with location, and sort of about the same with temporal. And if I understood it correctly, combining the two doesn't deliver massively more savings than that, right? It's still never more than half, with this kind of intervention that you could possibly make, right? Tammy Sukprasert: Right, yeah, combining the two doesn't give you double the benefits, because the benefits are dominated by spatial migration, not so much the temporal, if you combine them together. Chris Adams: Okay. Thank you. I'm really glad you actually spoke about this, because we now have some of the numbers to basically talk about the fact that, yeah, we still need to do other things. You can't just leave your code and make no changes. That might get you some of the way, and if you're looking at temporal, it'll get you 37 percent of the way in a perfect world. But you still need to make some other changes if you want to reduce the environmental footprint further. Brilliant. Okay. Thank you for that. So we talked about some of the savings you can get in your previous paper, the fact that there's maybe around the 30 percent figure. If you can move everything through space, you get around maybe 30-ish percent savings. If you have perfect forward knowledge for the year, then it's maybe slightly higher than 30%, but it's in the same kind of ballpark. And if you were to look at moving all of your computing jobs through time and space, you can't just double this number. It's still going to be meaningful, more than 30%, probably less than 50%. So that's one of the figures that we have. We'll share a link to the GitHub repo for people who are curious about this, so if they know what jobs they ran last year, they can see what kind of savings they could have achieved. 
So that's one thing. And we've spoken so far about some constraints that we have, but there's a few more constraints that we need to take into account. So for example, so far we've been talking about how many spare servers we have, like data center capacity, inside this. But there are other constraints that we need to think about too, which are a little bit further down the stack, as it were. So there may be a limited amount of green energy, at which point, when you have more demand than that, you might need to have some other forms of generation come on stream. And this is something that I think you explored in one of the other papers. So maybe we could talk about that. So, okay, this other paper that you spoke about, maybe you can just let us know the name and then we'll see where we go from there. Tammy Sukprasert: Right, so this paper is titled On the Implications of Choosing Average vs. Marginal Carbon Intensity Signals on Carbon-Aware Optimizations. Basically, average versus marginal for carbon-aware optimizations, right. So this paper came from the fact that, okay, people have been suggesting, let's shift the workload through time, let's shift the workload to different locations, but we never actually agreed on which carbon intensity signal to use for carbon-aware optimization. So as the title suggests, there are two types of carbon intensity signals that are mainly used, namely the average carbon intensity signal and the marginal carbon intensity signal. For the average carbon intensity signal, just think of it as a snapshot of the grid at that point in time, right? And the way it's calculated is the weighted average of the generators' carbon emissions, weighted by their production. Chris Adams: Okay. So let me just check, I just want to stop you there, to make sure I'm keeping up with you. So there's two ways you can measure carbon intensity, like how green electricity is.
And this first one, this average one, is basically saying, well, I've got maybe two coal fired power generators and one wind farm, so therefore I'll apply double the weighting to the coal versus the one wind farm. That's a simplified version, but that's essentially how you work out an average figure, right? Tammy Sukprasert: Right, right, but the marginal carbon intensity signal is different. The way it's calculated is the carbon intensity with respect to the change in demand. So let's say, as you said, you have two coal generators and one wind farm, but the next unit of demand is going to be served by a gas generator. Then the marginal carbon intensity signal is the carbon intensity of that gas generator. Chris Adams: I see. Okay. So rather than looking at the average, it's almost like the consequences of me doing a particular thing. That's what we're looking at there, right? Tammy Sukprasert: That's correct. Chris Adams: Okay. So now we've got this. I hope, if you're listening and you're struggling, this is really hard. So thank you for staying with us so far. So this is what we were looking into. And, as I understand it, this incentivizes different actions: if you were looking at this, you might choose to move things to a different region, or choose to run a computing job or do something at a different time. That's been my understanding of this. Is this what you looked into then? Tammy Sukprasert: Right, so the paper looks into the fact that if you follow one signal as a scheduling signal, you might end up with more carbon emissions from the perspective of the other signal. Yeah, so it turns out you cannot just follow one signal and hope that you will do well from the other signal's perspective as well. Chris Adams: Ah, okay. All right. So this adds another layer of complexity to this then.
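The distinction Tammy draws here can be made concrete with a toy grid. This is a minimal sketch assuming illustrative emissions factors; real factors vary by plant and by grid, so treat the numbers as placeholders:

```python
# Rough, illustrative emissions factors in gCO2/kWh; real values vary
# by plant and by grid, so treat these as placeholders.
FACTORS = {"coal": 900, "gas": 450, "wind": 10, "solar": 40, "hydro": 20}

def average_intensity(mix):
    """Average signal: a snapshot of the whole grid. `mix` maps each
    source to its current production in MW; the result is the weighted
    average of emissions factors, weighted by production share."""
    total_mw = sum(mix.values())
    return sum(FACTORS[src] * mw for src, mw in mix.items()) / total_mw

def marginal_intensity(marginal_source):
    """Marginal signal: the intensity of whichever generator serves
    the next unit of demand, regardless of the rest of the mix."""
    return FACTORS[marginal_source]

mix = {"coal": 2000, "wind": 1000}   # two parts coal, one part wind
print(average_intensity(mix))        # weighted toward coal
print(marginal_intensity("gas"))     # next MW comes from gas
```

The same grid snapshot can therefore yield two very different numbers, which is exactly why the two signals can incentivize different scheduling decisions.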
So if I understand it, I could be following one, and that gives me some idea here, but there are certain places where they can be different, where they can have different signals. So some places might be the same, but there are certain parts of the world where I might have quite radically different signals between these two. That's what I think I'm hearing from that. Tammy Sukprasert: Right, because the two carbon intensity signals are calculated so differently. So within one region, the signals are generally not correlated. So when you schedule for one signal, let's say, for example, I'm using the marginal carbon intensity signal as a scheduling signal, right? And I place a workload in this low carbon period based on marginal, but within the same time period, someone else is looking from the perspective of the average carbon intensity signal, and they'll be like, "Hey, I wouldn't place my workload here because it's a high carbon period right now." So you get some conflicting decision making. Chris Adams: And, presumably, when you were doing this research, were there particular parts of the world where you see wild spreads between these two signals? Like, some places are quite safe, right? Tammy Sukprasert: So in the paper, we looked into Arizona and Virginia for this kind of conflicting scheduling. So Arizona has a fluctuating average carbon intensity signal, but a really flat marginal, and vice versa for Virginia. So let's just take Arizona, for example. If you want to schedule based on the marginal carbon intensity signal, you wouldn't do anything, because it's flat. You can just place a workload wherever you want. But if you want to schedule the workload based on the average signal, you'll be like, I would place my workload at this particular time slot, because it has the lowest carbon intensity during the day. Chris Adams: Ah, I see. Okay.
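The Arizona situation, where the two signals are uncorrelated, can be illustrated with two made-up hourly traces (not the paper's real data): the hour a scheduler picks under the marginal signal can be one of the highest-carbon hours under the average signal.

```python
# Two made-up hourly traces for one region (gCO2/kWh). As in the
# paper's Arizona example, the average signal fluctuates while the
# marginal signal is nearly flat, and the two are not correlated.
average  = [600, 580, 300, 250, 240, 500, 590, 610]
marginal = [450, 448, 452, 455, 449, 451, 447, 453]

best_by_avg = min(range(len(average)), key=average.__getitem__)
best_by_marg = min(range(len(marginal)), key=marginal.__getitem__)

# A scheduler following the marginal signal picks hour `best_by_marg`,
# which the average signal regards as a high-carbon hour.
print(best_by_avg, best_by_marg, average[best_by_marg])
```

With numbers like these, two schedulers acting in good faith on different signals would make exactly the conflicting decisions Tammy describes.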
So this suggests that you're going to need to be really explicit about which kind of signal you're following. And there are certain parts of the world where you're more exposed to the differences between these, for example. That's what I think I'm hearing there. Wow, that sounds, yeah. Sustainability in software does not get easier. Okay. So that's one of the things we were looking at here. And it sounds like you've spent quite a lot of time looking into this whole field then. And, presumably, when people are taking their first steps to trying to work out the environmental impact of software, for example, would you suggest, is there like an order of things you might start with? Cause these feel like relatively advanced, high-level, complicated calculations here, and is it possible to look at the environmental impact of software without this straight away? Like, can you add this a little bit later, perhaps? Maybe there's some rules of thumb or some approaches you might suggest, as a researcher who has looked into this and tried to understand the environmental footprint of some software, like, "well, okay, you might want to just look at the total amount of energy used or the total amount of resources used first, before you look at, say, this carbon aware stuff. And if you can look at carbon aware, then maybe look at location first," or something like that. Cause this feels kind of exciting, but this also feels like it gets complicated very, very quickly. Tammy Sukprasert: So when I started working on carbon intensity signals, I found that the average carbon intensity signal is easier to understand, simply because you just look at the overall picture of the grid and you take the average of the energy sources, right? But marginal carbon intensity was an interesting concept for me.
You look into the carbon emissions based on the change in demand, but I was having a hard time understanding this, because in a practical sense, I feel like it's going to be challenging to understand which power plant is actually serving my compute workload. Like, it's not transparent enough. Chris Adams: I see. So there's almost like a counterfactual you're comparing it against. I think we spoke about this before, there's like a power stack, right? Like, yes, I've stopped pulling power from the grid, for example, but how do I know that no one else has started pulling power from the grid at the same time? Is that what you're getting at there? Tammy Sukprasert: Right. For marginal carbon intensity, for me, the concept is actually good. Like, you're responsible for the carbon emissions that you triggered, right? But in reality, you don't know which power source is serving your demand, and whether next time it will be served by the same source. So for example, I plug in my laptop, and maybe my laptop is powered by coal. But then, let's say, Chris, you unplug your laptop, right? Now the demand decreases. Is my laptop still powered by coal? I don't have that visibility. So... Chris Adams: Ah, I see. Okay. Alright. That makes a bit more sense. And I think I follow the reasoning for why you might start with one before starting with the other one, because I think I agree with you on that. I found the average a bit easier to get my head around too. And marginal does sound really cool, but I don't think I'd be very confident explaining it to other people. My experience seems to echo yours, actually. I'm glad you said that, because I did wonder if it was just me, and that does make it a bit easier for me too.
I feel a bit better about myself now, actually. Thanks for that, Tammy. Okay. So, this has basically been your day job for the last few months, diving into the world of carbon signals and things like that. Is this some of the continued research you're doing, or are you looking into other fields now, beyond software carbon intensity and working out the potential of carbon aware computing here? Tammy Sukprasert: So I'm still working on carbon aware computing stuff. Currently I'm working on a web service that harnesses renewable energy, and I have to think about how we should handle the workload when there is no renewable energy available. Chris Adams: Okay. All right. So one thing this does seem to suggest is that if we're just looking at carbon here, that's not showing us the whole picture. And even when we just look at carbon, we can end up with difficult or conflicting signals. So it may be that we, as software engineers, need to expand the way we think about the next layer down and say, like, are there other things to take into account beyond just looking at marginal or looking at average? Maybe there's something else we need to do, or another way of thinking about the grid and how our interactions as software engineers work with it, and how that can have an impact there. Tammy Sukprasert: Right. So I think we need to move beyond the static signal and instead maybe look into other characteristics to take into consideration when doing carbon aware optimization. Maybe, as a future direction, we would agree on some other signal, one that captures both the long term impact of the grid, like the average carbon intensity signal, and the instantaneous change in carbon intensity, like the marginal.
So yeah, apart from optimizing for carbon efficiency, as a community, I think everyone should keep in mind that we need a better metric to capture these carbon emissions. Chris Adams: Okay. Thank you for that. Tammy, this was a ride for me. Every single time I come to trying to understand the environmental footprint of software and think I understand it, there's a whole nother layer to this. And you've really opened my eyes to this. Tammy, if people are interested in this field, are there any other projects or work that you've read about recently that you'd like to draw people's attention to? Tammy Sukprasert: Yeah, I think you should look at Carbon Scaler. I think that's one of the things I'd Chris Adams: Oh. Tammy Sukprasert: recommend people to check out. Chris Adams: Okay, we'll have to share a link to that, because that's totally new to me. I'm not aware of that one, actually. Tammy Sukprasert: So yeah, it's a system that reacts to the available carbon intensity and scales the workload based on that. So you don't have to shift the workload. Chris Adams: Okay. All right. And if people want to find out more about the work that you're doing, where should people be following? Is there maybe a website, or are you on LinkedIn? Like, what's the best place to direct people's attention if they wanted to follow up and read some of the work that you've been publishing and talking about here today? Tammy Sukprasert: Yeah, so I'm on LinkedIn. You can search my name, Tammy Sukprasert, or T. Sukprasert, yeah. Chris Adams: Brilliant. All right. Well, Tammy, thank you so much for giving us some of your time and sharing what you've learned here. It's been absolutely fascinating. And we now finally have some numbers about what we can achieve with carbon aware computing. At least we have some numbers now to work with. So thank you once again for this, and I hope you have a lovely week. Cheers, Tammy.
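The Carbon Scaler idea Tammy mentions, scaling a workload with carbon intensity rather than shifting it in time, can be sketched roughly as follows. The thresholds and scaling rule here are invented for illustration and are not from the actual system:

```python
def replicas_for(intensity, low=200, high=500, max_replicas=8):
    """Scale a batch job's replica count down as grid carbon intensity
    rises: full scale below `low` gCO2/kWh, a single replica above
    `high`, linear in between. Thresholds are invented for illustration."""
    if intensity <= low:
        return max_replicas
    if intensity >= high:
        return 1
    frac = (high - intensity) / (high - low)
    return max(1, round(1 + frac * (max_replicas - 1)))

print(replicas_for(150), replicas_for(350), replicas_for(600))
```

The appeal of this approach is that the job keeps making progress in every period; it simply makes more progress when the grid is clean, without any need to predict or wait for a low-carbon window.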
Tammy Sukprasert: Chris, cheers. Chris Adams: Hey everyone. Thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode!
Environment Variables
The Cloud and the Climate: Navigating AI-Powered Futures (35:39)
Environment Variables host Chris Adams is joined by Jo Lindsay Walton, a senior research fellow at the Sussex Digital Humanities Lab and co-author of the report The Cloud and the Climate: Navigating AI-Powered Futures. They delve into the intersection of climate and AI, exploring the environmental impact of AI technologies and the challenges of decarbonizing the ICT sector. Jo discusses key takeaways from the report, including the importance of understanding AI's direct and indirect impacts, the nuanced roles of big tech companies, and strategies for critically assessing claims of AI-driven sustainability. This insightful conversation highlights the need for interdisciplinary approaches and robust collaboration to navigate the complex relationship between technology and climate action. Learn more about our people: Chris Adams: LinkedIn | GitHub | Website Jo Lindsay Walton: LinkedIn | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: The Cloud and the Climate: Navigating AI-Powered Futures [01:15] Microsoft files patents for carbon capture and grid-aware workload scheduler - DCD [07:54] Potential of artificial intelligence in reducing energy and carbon emissions of commercial buildings at scale | Nature Communications [16:30] Resources: Digital Humanities Climate Coalition | Data Culture & Society [02:08] Breakdown of carbon dioxide, methane and nitrous oxide emissions by sector - Our World in Data [10:29] The climate impact of ICT: A review of estimates, trends and regulations [10:51] If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn ! TRANSCRIPT BELOW: Jo Lindsay Walton: There's this great metaphor that Arvind Narayanan and Sayash Kapoor have in their book, AI Snake Oil. 
They say, "imagine we just talked about vehicles. We didn't talk about bicycles or cars or buses or trains. And we tried to talk about the climate impact of vehicles." It would be very difficult to do. And that's essentially what AI discourse does, right? Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. Like seemingly everyone else in the industry, we've been talking about AI a fair amount recently, and earlier this year, in September, the Sussex Digital Humanities Lab published their report, The Cloud and the Climate: Navigating AI-Powered Futures. It's not a small report, weighing in at around 190 pages, and it has a number of key messages we'll be exploring in this episode. Also, Jo Walton, one of the previous guests from back in September 2023, was one of the authors of this report, and he was nice enough to make some time to join us today on the pod. So, Jo, thank you so much for coming onto the pod again. Can I give you a bit of time to introduce yourself and what you do day to day, for people who missed the last episode back in September? Jo Lindsay Walton: Hello. Yes. Thanks. Thanks so much for having me. So I'm a senior research fellow in arts, climate, and technology at the Sussex Digital Humanities Lab.
My day to day is 90 percent playing with my cat, but I am also part of the Digital Humanities Climate Coalition and the newly launched Climate Acuity Initiative, which does facilitation and CPD training around climate and tech in hopefully fun ways, involving storytelling and games and things like that. And yeah, it's really nice to be back on this wonderful podcast. I feel like the host of SNL. Chris Adams: Thanks. So, just before I check, when you say CPD, CPD is continuing professional development, for people who want to build understanding of climate into their professional life, right? Is that what it is? Jo Lindsay Walton: Yeah, that's it. Exactly. And really, I guess, part of my work is at the intersection of climate and technology, but I'm not personally super technical. Most of your listeners probably have a lot more technical knowledge than I do. What I am really interested in is communicating around these issues, and education as well. So I'm raising them for all the stakeholders for whom they might be important. Chris Adams: Brilliant. Okay. And, on the subject of other three letter acronyms, I've just had my cat walk in myself, so if he walks across while we're recording, please do forgive him. That's just what he does sometimes. Okay. Folks, if you are listening and you've never heard my voice before, I am Chris Adams. I am the executive director of the Green Web Foundation, which is one of the members of the Green Software Foundation. The Green Web Foundation is a nonprofit based in the Netherlands, focused on reaching an entirely fossil free internet by 2030. I also work as one of the policy working group chairs inside the GSF, as well as being the host of this podcast. All right, then. So just before we dive in, if we speak about a particular paper or a report or a link, we will add these in the show notes.
And if there's something you're missing, please do send us an email or get in contact with us, because we do our best to keep these available as useful resources for people. All right, then. Jo, are you sitting comfortably? Jo Lindsay Walton: Very comfortably. Thank you. Chris Adams: All right, then. I think I'll begin. All right, then. So before we dive into the report and some of the key takeaways, which we'll be going into in more detail, can we talk a little bit about why you decided to put some time into this report in the first place and how this entire project came about, please? Because, as you said yourself, you're a researcher in the School of Media, Arts and Humanities at Sussex University, and this came from the Digital Humanities Climate Coalition. Now, most software developers, when they think about AI and reports, it might be something that's within the industry. So I want to give you a bit of space to talk about why it's interesting or why it's relevant to have people who aren't inside technology, who aren't practitioners per se, talking about some of this. Because I think there are a couple of perspectives that you might have that are worth making clear for people, or some techniques that humanities people might have that developers or techies might not be so cognizant of. Jo Lindsay Walton: Oh yeah. Absolutely. So the report, as well as the DHCC toolkit, which is an online resource, they're very much community projects, and they have an open source ethos, and part of that is an aspiration to interdisciplinarity. The report itself is a kind of stretch goal or spin off from a small Innovate UK project that I was doing with GreenPixie, who are this fabulous cloud carbon data company. And we were basically exploring how to talk to a wider set of stakeholders about the cloud and about the climate.
So not just IT people, but also, for example, chief sustainability officers, people who need to know about this stuff but might not be quite so up to speed on the technical detail. And over the course of that, it became apparent to me that there was a gap for an accessible resource that didn't oversimplify, and that really tries to be a bit holistic. Can you really understand one bit of it without understanding the big picture? Can you, I don't know, understand how your little piece of software that you're trying to optimize is going to have an impact without thinking at least a little bit about carbon accounting and the greenhouse gas protocol and carbon offsetting? Can you really understand how green a data center is without understanding a little bit about how energy gets into the grid and then gets into a data center, and the kind of energy procurement rules around that? Chris Adams: Okay. I hadn't actually realized that you'd been working with GreenPixie. And just for people who are new to that term, GreenPixie is a UK-based SaaS provider of, essentially, carbon accounting tooling for the cloud. So if you're using Amazon's or Microsoft's or Google's cloud carbon calculator, they provide something very similar, but with a more open methodology that allows them to be comparable to each other. I didn't realize that you two had been working together on that. That's cool, actually. Jo Lindsay Walton: So, I mean, that collaboration particularly informed the GreenOps section of the report. But as you allude to, there is this attempt throughout the report to also bring in DHCC type perspectives, that kind of humanities flavor, really drilling into the details of the cultural factors. So not just how we communicate things, but also how we imagine things, I guess. Big tech and tech communities don't just have direct impacts. They also shape the way that we imagine the future.
So Google is not in the business of building kind of direct air capture, giant reverse hairdryers that are sucking carbon out of the sky. That's not something they do, but they do influence the way we think about technology and climate. And so they also influence the way that we think about things like greenhouse gas removal technologies. Chris Adams: Although, earlier on this year, we saw that Microsoft patented, actually, some of the use of some particular things around carbon capture in data centers to use some of the waste heat to actually separate captured carbon, so it can be actually stored in other places. So, there's maybe more links than we actually had, yeah, exactly. Jo Lindsay Walton: That's really interesting. Chris Adams: Yeah, I'll share, we'll share a Jo Lindsay Walton: link in the show notes for sure. Chris Adams: Definitely. All right. Okay. So that gives me a good idea and then provides a bit of context to where this was and for people who are not used to the UK, Innovate UK is one of the government funding agencies that has provided some of the funding for some of this. So that's where that has come from. All right. And so maybe we should talk a little bit about the report. So there's a number of takeaways. In fact, I counted more than five when I was running through the report. So there was a lot there, right? And there are some things which probably don't need too much attention because we're, because of the listenership. So for example, we probably won't spend too much time dwelling on one of the takeaways being we're in a climate crisis or the other one bang, yeah, that digital has a physical basis. These are things that we can assume that people have internalized already, right? But there was actually some nuance to this because. While people do talk about that, the kind of magnitude of the numbers might not be something that people are quite so comfortable about. And also, it's an area of contention in many cases, many places. 
And as someone who's been looking at a lot of the literature, I figured it might be interesting to have a bit of space to talk about one of the other takeaways, which you shared was basically: the ICT sector is not a leading contributor to global warming, but it still must decarbonize rapidly. Now, I think it'd be useful to unpack some of this, because a lot of the time, a lot of the stories do talk about data centers as this new monster or new kind of media baddie, for example. And it seems like you've got a more nuanced take on this, and I wanted to give a bit of space to allow you to talk about some of that. Jo Lindsay Walton: Yeah, I mean, coverage of the drivers of global warming is totally out of proportion to what those drivers actually are. We've seen data centers in the mainstream media quite a lot recently, so I think maybe that's falling victim to that a little bit. Where do most emissions come from? Food production, for example, is absolutely huge. And we hear a little bit about food miles. But food miles are not a massive part of it. A much bigger determinant of the impact is, "has the food come from a cow or from a nut?" Constructing and heating and lighting homes, road transport, fugitive emissions, fossil fuel companies basically being a little bit sloppy as they extract these fossil fuels and letting them escape. There's a lovely breakdown on Our World in Data, which maybe we can put in the show notes as well, although it's a little bit dated now. And ICT? What is the impact of ICT on global warming? I'd like to offer a provocation, and hope that maybe one of your listeners can prove me wrong. I think nobody knows. I think nobody knows ICT's impact on global warming. There's that 2021 Freitag et al. estimate that gets quoted quite a lot, but it's been a very busy four or five years. I feel like I've lived through the AI singularity. And there's more complexity than that, right?
When you factor in secondary and tertiary impacts, both good and bad, from the digital, then you're in the realm of deep uncertainty. There is unlikely to be any expert consensus. Even so, despite that complexity, it's not controversial that tech needs to decarbonize along with everything else. It's all hands on deck. Everybody's on board with that. All the big companies have these ambitious pledges. What's concerning me a little bit is how that discourse is shifting. So for example, Microsoft in 2020 set out its pledge to achieve net zero. Chris Adams: Moonshot. The zero carbon moonshot you're referring to. Jo Lindsay Walton: Yeah, yes. And we talk about that term, moonshot, in the report, actually, cause it's an interesting metaphor. And the moon is now said to be Chris Adams: Five years further away. Yeah. Jo Lindsay Walton: The moon has moved five years further away. So actually, I think that's incorrect. I think the moon has been vaporized. The moon, as in Neal Stephenson's science fiction novel Seveneves, the moon no longer exists. The target has already been missed. And that happened this year. "Okay, how is that possible?" you're asking. Does Microsoft have a time machine? How can they fail their net zero pledge of 2030 in 2024? Well, that's the way that net zero pledges work. They are about cumulative emissions. They're not about a snapshot of emissions at a particular date. They are about the pathway from the date of the pledge to zero staying within a given emissions budget, right? So you could draw a descending line graph, and it's about the area under that line, not about the point at which the line intersects the axis. And to their credit, Microsoft absolutely was transparent about this back in 2020. They showed the linear descent to zero. And by my estimates, that budget was burst sometime this year, maybe now, maybe as we are recording this podcast. The concerning bit is that this isn't being talked about more openly.
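Jo's point about cumulative budgets can be made concrete with a small calculation: a linear descent to zero implies a fixed budget, the area of a triangle under the line, and flat or rising emissions burn through that area well before the target date. The numbers below are invented purely for illustration and are not Microsoft's actual figures:

```python
def budget_exhausted_year(start_year, end_year, start_emissions, actual):
    """A pledge that descends linearly from `start_emissions` (in
    start_year) to zero (in end_year) implies a cumulative budget equal
    to the area of that triangle. `actual` lists actual annual emissions
    from start_year onward; returns the year the budget is burst, or
    None if it never is."""
    budget = start_emissions * (end_year - start_year) / 2
    cumulative = 0.0
    for i, emitted in enumerate(actual):
        cumulative += emitted
        if cumulative > budget:
            return start_year + i
    return None

# Invented numbers: a 2020 pledge to reach zero by 2030 from 100
# units/year, while actual emissions instead grow about 10% a year.
rising = [100 * 1.1 ** i for i in range(10)]
print(budget_exhausted_year(2020, 2030, 100, rising))
```

With these toy numbers, emissions that grow rather than shrink exhaust the whole decade's budget only a few years in, which is the shape of the argument Jo is making, independent of the specific figures.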
It's much more this discourse, as you say, of "okay, now we have AI." In 2020, we didn't know about that, but now we have AI, and AI has these sustainability benefits. Okay, so if that's the argument, if that's the implied case for emissions increasing, let's be very clear about that. Are we saying that it is prudent to increase emissions from the tech sector for the next few years? Are we saying that the tech sector has been doing the right thing emissions-wise for the past few years, because those emissions, on a robust methodology, are shown to be more than offset by the sustainability benefits that they can provide on an appropriate timescale? Chris Adams: We'll be touching on that a little bit later, but, alright. Okay. Thank you, I appreciate you providing a bit of extra context on that. And just to check I understand: on working out the environmental footprint of the ICT sector, when people talk about the direct impact, you said there's a primary, and then secondary and tertiary. Presumably you're talking about there being a direct impact, but also an impact from what you enable with that computing and stuff like that. Is that what you're referring to with that primary, secondary and tertiary stuff? Jo Lindsay Walton: Yeah, absolutely. So you and I are on a Zoom call now. If we weren't on the Zoom call, I probably would have ridden to you on a giant lump of blazing coal. Or some more carbon intensive mode of transport. And those are very complex calculations to do. You have rebound effects, where things look like they're providing efficiencies, but those efficiencies are mitigated or more than offset by increased volume. It's complicated stuff. Chris Adams: Okay, cool. All right. Thank you for clarifying that part there. Okay. There's another thing I wanted to give a bit of time to, actually, which was this one.
You said, given that we just spoke about cloud giants, one of the takeaways was that none of the cloud giants is a monolith. So this is a more nuanced take than the "big tech bad, big tech good" framing we often see in the discourse, because that's a very simple and attractive way to talk about it, but it sounds like you're going for something a bit more sophisticated there, a bit more multidimensional. Maybe we could spend a bit of time trying to see what you were getting at there, or what the report was really trying to get across to people. Jo Lindsay Walton: Yeah. I think climate invites us to really reflect on our roles in our professional lives and other aspects of our lives, and sometimes to challenge and push back on the parameters that are set for us in those roles. And that may mean that your company is pushing a particular line, or your boss is pushing a particular line, but there is a practical incentive and, frankly, an ethical duty to be critical about that and to step outside of the boxes that you're asked to perform in. And definitely these companies are huge companies. There's a great diversity of knowledge, and a great diversity of politics, really, within any particular big tech company, never mind between tech companies as well. So in the realm of the greenhouse gas protocol and how we do carbon accounting, there's a lot of disagreement within big tech: between, on the one hand, Amazon and Meta, who want one particular set of rules as the greenhouse gas protocol is revised, and Google and perhaps Microsoft, who would like to see it go another way. I think we look at this a little bit in the report. We look at a Nature article that is largely authored by Microsoft researchers, and spend a little bit of time on a hopefully good natured roast of the estimate of the carbon impact of AI, where the methodology just isn't really fit for purpose.
If you drill, really drill, drill, drill baby, drill down into the details, you find that it is based on one back-of-envelope estimate by Vijay Rakesh, who is really a stock market analyst, who said that he expected NVIDIA to deliver a hundred thousand AI servers in 2023. It's not a sufficient basis for estimating the global impact of AI, but that's hopefully not the main point, because there's a bigger part of this article, which I think speaks to your question about companies not being monoliths, and about trying to build alliances for progressive and robust climate policy that cut across your loyalty to a particular company. The proposal of this article is that AI researchers should work more closely with the climate modeling community, and that AI should be integrated into the IPCC's shared socioeconomic pathways and integrated assessment modeling. Which is, I have mixed feelings about that. The closer collaboration sounds really great. It does feel like in that particular article, there isn't yet a very deep understanding of how those climate models work. They're not really scenarios. They're more like building blocks for scenarios. And to some extent, they already do build in the possibility of technological change. So you could go down a rabbit hole as to whether or not AI is already priced into these models. I think what it speaks to is a certain kind of nervousness here, like, okay, so we are big tech, we are AI, we're presenting this AI-powered future, and we're increasing our emissions, and we're doing this on the basis that we think, we believe, that AI is going to unlock all these fantastic sustainability benefits. But can somebody please check our working? We recognize that we may have conflicts of interest. We need to do this in a more collaborative way. We need to have all kinds of expertise, and we need to have more independent voices. I think that's what that article is ultimately calling for. Chris Adams: Okay. 
So there was one thing you said that you were getting at there, which was the idea that cloud giants not being a monolith isn't just within the cloud giant. If you think about it horizontally, like Meta and Amazon having one point of view. And I think you're referring to the emissions-first versus the 24/7 kind of fight about how do you count energy as green? Because the current process has a few significant issues, so basically there are people trying to work out a new approach, and you have two camps. So that's one thing you were talking about. And then there's almost one within each company. Like there are different people who have different drivers inside that. If you just assume that someone's working for, say, Amazon, that ends up being a very lossy way of talking about, okay, what are they doing? And what might the drivers be, for example? Jo Lindsay Walton: Absolutely. And some of those disagreements might not be so visible, for obvious reasons. People have to be tactful and work in constructive ways with their colleagues. I mean, to respond to that, I think that I share a bit of alarm about timescales, and about solutions being proposed that aren't immediately referred back. If you've got any kind of plan to do with the climate, check it against IPCC timescales. We are supposed to, in the next six years, cut carbon emissions by more than half, four of which are going to be under a Trump administration. And I would definitely celebrate that kind of all-hands-on-deck approach, where everybody's doing everything they can in their role, and maybe rethinking their role and creating alliances. At the same time, I also think we need a little bit of reflection on which hands are actually on deck. Are there problems that aren't owned by anybody, risks that are not being addressed by anybody? 
And I think, in the AI space, there has been talk of pauses and moratoriums, not always for the best reasons, but I do think these are really important tools in our toolkit. Rather than, "okay, we're going to just keep doing what we're doing and hope to sustainabilize it as quickly as we can," actually saying, "maybe we need to pause this, and maybe we can pick up where we left off, but we need to pause it while we're gathering more data, or we're greening our energy supply, or we're building capacity," or whatever it might be. I wrote an article about this in the fantastic Branch magazine, called Pause. I just realized this morning, I should have called it, after Andreas Malm's How to Blow Up a Pipeline, How to Blow Up an AI Pipeline. But yeah, I went with something else for that. Chris Adams: Yeah. All right. All right. Okay. Thank you for that. Let's move to the next one. Cause you spoke a little bit about AI and, in the report, you actually spend a bit of time talking about sustainability. Basically the sustainability of AI, but also AI for sustainability, right? And these being two somewhat different things. Now, we talk about the sustainability of AI on this podcast quite a lot. So we talk about how to use more efficient algorithms, or how to clean the energy, and some of the steps you might take. And obviously the report talks about that, but there's also something that you discuss in terms of the claims about AI for sustainability goals, and you also raised, like, "these are some of the red flags you might be looking for." So could you say, are there any specific questions you might use, or anything you'd draw to people's attention, when they're trying to navigate claims about AI for sustainability? Like, "yes, there's a massive energy footprint, but the upside is this, for example, and these are the upsides that we're delivering." 
Jo Lindsay Walton: Yeah, absolutely. And all that kind of sustainability of AI stuff is extremely exciting. And, as you say, we touch on that in the report. AI for sustainability. There's this great metaphor that Arvind Narayanan and Sayash Kapoor have in their book, AI Snake Oil. They say, "imagine we just talked about vehicles. We didn't talk about bicycles or cars or buses or trains. And we tried to talk about the climate impact of vehicles. It would be very difficult to do." And that's essentially what AI discourse does, right? We don't, on a regular basis, make these kinds of fine differentiations in public discourse, in journalism, in conversations with friends. So right before the show, actually, we were talking about acronyms, and I tried to come up with an acronym of the things that you might want to ask when you find a claim that AI is delivering some kind of sustainability benefit. So the first thing to consider is maturity. That might be technology readiness level, whatever it may be. Often the claim is inflated. It says something is already happening when actually what we see is that there's been a study that says it might work. It could be rolled out commercially, scaled up in five, 10, 20 years, whatever it might be. So maturity is one. Then additionality. So AI is said to be responsible for delivering this sustainability benefit. Well, do your best to identify which bit the AI is responsible for. Often an AI sustainability project will involve data collection and analysis, and then some kind of efficiency gains from that. What could have been delivered with more traditional data-analytic methods? And then generative or discriminative, or some other type of AI. What kind of AI are we talking about here? These are often conflated. Is it even machine learning at all? Is it some cool new thing like, I don't know if it's new, but active inference, for example? And how big is the model, and so on? What kind of AI are we looking at? 
And then finally, adaptation versus mitigation. So these are the two broad categories of climate action that most climate scientists will recognize. And they're interrelated and they overlap in various ways, but mitigation is really about decarbonizing, Chris Adams: Yeah. Adaptation is building the seawalls because the sea levels have risen, and mitigation would be switching out of fossil fuels and using greener energy, for example. Jo Lindsay Walton: Absolutely. As you can imagine with AI, if the AI has a problematic carbon footprint but delivers substantial adaptation benefits, that again is a very hard calculation to do. You can't simply subtract one from the other. The acronym unfortunately came out as MAGA, which has already been taken. So I'll keep working on it. Chris Adams: Okay. All right. I don't know how far that's going to go. I'll be honest. But all right then, so that's one of the things you were speaking about, this idea that these are two separate things, and it's worth being aware that there are different ways you can critically engage with some of these claims. And I think I get where you're going with some of that now. And I've realized that I'm basically an Englishman in Germany, speaking to an Englishman who's in the UK. And this was a report that came from a UK research unit. And obviously there's a UK research focus on this, but we're also in a scenario where there is a new government in the UK who have very aggressive goals of, like, decarbonizing the entire grid by 2030. 
So we spoke about the 2030 target before, and this is one where there is a goal to decarbonize the grid by 2030 and reduce national carbon emissions by more than 80 percent by 2035. So in many ways, this is a similar kind of moonshot thing we have here. But the government is also very gung-ho right now on the increased deployment of data centers around the UK as one of the drivers for growth, for example. So I wanted to ask you, when you look at this, do you see these goals as complementary or compatible? And are there any specific areas of attention for the UK that policymakers should be thinking about if they want these goals to be possible, for example? Because, yeah, it sounds like there's probably a lot of nuance to it, and this is something that you've been having to navigate, or have had to think about. Jo Lindsay Walton: Yeah. And I mean, I don't know. I really don't know. And I wonder what guests we will need to assemble on your show to solve this question. It's definitely an interdisciplinary type question, right? We need people who can think about the counterfactuals, the opportunity costs. If data centers are not expanding at this particular rate in the UK, what's happening in that alternative universe? In the report, there's a quite upbeat section lead-authored by my colleague, Benjamin Sovacool, which is all about the wonderful things data centers can do to be more efficient and environmentally friendly. And so from a UK perspective, you can see those things going together. Yes, we're going to be a leader on net zero. We're also going to be a leader on data centers. And we're going to do that by having the greenest, the best, the most efficient data centers. Microsoft is shifting from concrete and steel to a special new timber. There are new, exciting innovations happening all the time. As a thought experiment. 
If we were building global data center infrastructure from scratch, knowing everything that we know, how would we design it? Maybe you can get some experts on your show and ask them this. I've heard it said that data centers are these fabulous heat generators that just happen to be able to do computation as well. One of the reviewers of the report said that. And so we should really go in hard on small and medium data centers woven into the fabric of our urban environments. Anne Currie, with whom we did that previous really fun episode about data centers on the moon and various things, Anne has said that a key consideration is that you really don't want to be competing with other local energy needs. So this is a contrasting view. You don't want to be displacing demand into carbon-intensive generation and then claiming that you have these wonderful green credentials. So then the question is really, where in the world would you locate a data center, and the green energy to power that data center, where it otherwise wouldn't be used for anything else? How will data center expansion in the UK affect data center expansion in the EU, or in Trump's America? Who is doing all this? This is the real question for me. Who is thinking about these things? I mean, I'm here and glimpsing how huge and complicated a question it is. Who is doing this difficult, holistic, joined-up thinking, including thinking through those second and third order effects? Are policymakers in the UK thinking in those terms? Is SECR reporting going to have any impact? The Environment Agency, they like the detail and the nuance, but their remit has tended to be a bit more narrow. Their budget has been absolutely slashed under the Conservatives. Is the onus on civil society to work through consultations and local planning authorities on a kind of data-center-by-data-center basis? Is it maybe up to Environment Variables? Maybe it's on you. 
Chris Adams: Well, what I can share with you is that we've got someone who's leading one of the distributed data center companies to give their side of the story in a future episode, precisely to talk about that. You mentioned that quote about AI: imagine if we only spoke about vehicles. I wonder if there's a similar, comparable way of thinking about data centers, right? Like, if we only think about data centers as one thing, rather than there being a typology: at one end these giant, gigantic, out-of-town hyperscale data centers, gigawatt scale, and at the other end ones which are not the same at all, for example. Maybe there's a need for a different strategy to think about what kinds of data centers make sense in what circumstances. So maybe you want to have certain kinds of computation, like you mentioned, inside the urban fabric, and there are certain things where you don't want to have it there, because you might have a different use for that capacity. This makes me think of China, actually. So China does have something along these lines, where there's a really aggressive target to get lots and lots of computation out of relatively old data centers and into much more advanced, centralized, hyperscale facilities, which are being paired with the kind of energy bases where significant amounts of clean generation are being put together. So you're co-locating hyperscale data centers with the kind of generation that you have. So there are different approaches, and maybe there's something that you might see like that in the UK. I don't know, but maybe that's someone we should speak to. And if you're listening to this podcast and you know who is thinking about that, please do suggest them, because we'd like to cover that in a bit more detail. 
All right then, you've spoken about two of the things that I think we, I'd like to just, if I can, jump into. You mentioned SECR, I don't know, could you maybe expand on who that is or what that is for people who aren't familiar with that acronym? Jo Lindsay Walton: Oh, Streamlined Energy and Carbon Reporting regulations. Chris Adams: okay, all right, so that's basically UK government has that data centers above a certain size have to report, basically, right? Jo Lindsay Walton: Companies, yeah. Chris Adams: Oh, okay, got it, okay, thank you. All right then, okay, so we've touched on quite a few, we've gone into a number of different areas for this and we're coming up to time. So I guess to ask you, you've spent this time and you've put a labor of love into this report, for example, but that came out in September, in the last, in the kind of subsequent months. And are there any, is there any kind of, what work is exciting you? What things do you want to, are you looking at, you think, "this is really exciting, I wish more people would, who are interested in sustainable software, I wish they would look at this," for example. What's on your radar these days? Jo Lindsay Walton: Well, it's been a very kind of busy and strange couple of months. So just even as you say that, it just reminds me how quickly these things move. Basically, I feel like I'm a little bit behind and I need to listen to some podcasts and click on some LinkedIn links and, bring myself up to speed. I continue to be delighted by the work of the Green Software Foundation. I'm a big fan of your podcast. The GARP Climate Risk podcast is one that I like. Top three podcasts, the other one would be the Bunta Vista podcast, but that's not actually about climate and environment. That's just people getting high and reading news stories. I'm interested in further collaborative work at a smaller scale with individual kind of companies and organizations. 
We've been doing a little bit of work with cultural heritage organizations, thinking about their carbon impact. The focus of that work is under the rubric of Climate Acuity, which we've recently launched. It's connected to the DHCC in that we have a workshop that we do called the Digital Sustainability Game. So I'm excited about continuing to iterate that work with all the constant barrage of developments that happen week by week in this space. Chris Adams: It's pretty exhausting. I can definitely share that. I struggle to keep up myself, and this is pretty much my job. Jo Lindsay Walton: I think, yeah, I think we do need to take a break every now and then. Pause, moratorium. Chris Adams: Okay. On that note, we're coming up to time actually. So Jo, thank you so much for coming onto this and providing extra context to the report. If people are curious, where should they be looking if they wanted to read this report themselves? Jo Lindsay Walton: It will be in the show notes, or if you type in The Cloud and The Climate AI powered or Navigating AI-Powered Futures, I think it should pop up. Chris Adams: As the first result in pretty much all the search engines. Jo Lindsay Walton: I hope so anyway, otherwise something's very wrong. Chris Adams: Well, in that case, folks, that's what to look for then. All right then. Well, Jo, thank you so much for coming on to this. This has been really fun. And let's do this again, maybe next year. Let's continue this tradition that every 12 months, we have you come on and tell us what you've been up to. Jo Lindsay Walton: I would absolutely love that. Thanks so much for having me. Chris Adams: All right. Thanks, Jo. Have a lovely afternoon. All right. And take care of yourself. Bye! Hey everyone! Thanks for listening! Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please, do leave a rating and review if you like what we're doing. 
It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.
Environment Variables
Green Networking with Carlos Pignataro 39:45
In this episode of Environment Variables, Anne Currie welcomes Carlos Pignataro, a leading expert in sustainable network architecture, to explore how networks can balance energy efficiency with performance and resilience. Carlos shares insights from his career at Cisco and beyond, including strategies for reducing emissions through dynamic software principles, energy-aware networking, and leveraging technologies like IoT and Content Delivery Networks (CDNs). They discuss practical applications, the alignment of green practices with business interests, and the role of multidisciplinary collaboration in driving innovation. Tune in for actionable advice and forward-thinking perspectives on making networks greener while enhancing their capabilities. Learn more about our people: Anne Currie: LinkedIn | Website Carlos Pignataro: LinkedIn | Website | GSF Champion | IETF Profile Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: Challenges and Opportunities in Management for Green Networking [16:50] Architectural Considerations for Environmental Sustainability [22:22] Sustainable Network Operations (SNO) [13:54] Environmental Sustainability Terminology and Concepts [20:03] Resources: IETF [24:21] Internet Research Task Force (IRTF) [24:54] E-Impact Workshop | GitHub Green Software Practitioner If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, Github and LinkedIn! TRANSCRIPT BELOW: Carlos Pignataro: Many of the things that we do with a little bit of spin hugely benefit the bottom line by cutting down costs, and hugely benefit the world by lowering emissions. I'm going to run this batch job whenever there's renewable. I'm going to turn off the lights and the APs, the access points on the ceiling, automatically when there's no presence. 
Basic things that make a difference. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne Currie: Hello, and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Anne Currie, and today we're joined by Carlos Pignataro, a leading voice in sustainable network architecture. Carlos has over 20 years of experience in technology, innovation, and strategic thinking, holding key roles at Cisco, including head of technology and data for Cisco's engineering sustainability. Currently, he's a mentor in residence at Duke University New Ventures and an adjunct faculty member at NC State University. And he is the founder and principal at Blue Fern Consulting. In this episode, we'll dive into Carlos' work on architecting networks for environmental sustainability. He's contributed to cutting-edge research on how networks, which are critical to our connected lives, can balance energy efficiency with performance and resilience. How can the networking industry innovate without compromising the planet? What trade-offs do we have to consider? Things like uptime and speed, against, or as well as, environmental responsibility. Let's explore these questions and more with Carlos. So welcome, Carlos. Can you tell us a little bit about yourself? Carlos Pignataro: Thank you very much, Anne. I am so excited to be here, first of all, and I'm happy to tell you a little bit about myself, even though you covered a lot; that was a very kind intro. I appreciate it. My favorite number is 27. My favorite color is aquamarine, just for completeness. 
But on an equally serious note, important for my introduction is to say that I have passions for technology and for sustainability. And I look at tech as a means to an end, and making things better is one of the ends. I've been involved, as you mentioned, in building networks, computer networks, internet infrastructure, data centers. I have also been involved in other uses of technology, such as technoconservation, or conservation technologies: protecting endangered species, such as rhinos in different countries in Africa or in India, from poaching, using IoT, Internet of Things, technology. I've been involved in building data models and information models for recycling and overall circularity. So I try to build my overall experiential breadth by trying different aspects of technology, and sustainability is one I'm super passionate about. Anne Currie: Excellent. That's very good to hear. And that's why you're here today. So just a little bit about myself, because I'm not always the host; I'm a guest host for Environment Variables. My name is Anne Currie. I've been in the tech industry for about 30 years. I am one of the co-authors of O'Reilly's new book, Building Green Software, which actually covers a lot of what Carlos and I will be talking about today, cause there is a networking chapter in it. And I strongly recommend it to everybody who's listening to the podcast. I am also the CEO of the learning and development company Strategically Green. We do a lot of workshops to help people push knowledge and engagement with green subjects within their own businesses. So look me up on LinkedIn if you're interested in that kind of thing. So before we dive into today's content, a reminder that everything we talk about today will be linked to in the show notes below. So feel free to have a look, read through, read as we go, read afterwards, whatever, but the data is all there. 
So, let's kick off with an introductory question, Carlos. How did you become a Green Software Foundation champion? What are your goals as a champion? Carlos Pignataro: Thank you very much. And it's a very meaningful question. When I looked at green software champion, when I was exposed to the acronym, to the three words, actually, and the GSF acronym, I couldn't help but look at each one of the words and understand that each one had a very profound meaning for me. Number one, green is something that encapsulates what you and your book and all of us are trying to push our industry, and our different industries, towards. Software is the core of what we really do day in, day out, and our expertise, where we can actually make a difference, is where we live. And champion, I really always wanted to be a champion. I tried soccer. I tried tennis. In Argentina, which is where I come from, I have very below-average soccer scores. No, jokes aside, champion is such an important word because I look at it as the first followers, and I am very moved by the way in which we create a movement, what Derek Sivers describes as being the first follower. And these champions are like reflectors. This is a little bit of networking: BGP route reflectors of the message. So we all have insertions in different parts of the ecosystem, whether it's within corporate, within different software, different repositories, whether it's different standards bodies, and being a champion means being a follower and reflector of that larger message, and echoing it within all the different places that we live. So it really aligns very deeply with my passions, and frankly, it's something that I see in you as I follow you as well. I see the three words very clearly, and I don't know how you feel, Anne, but to me it's always a little bit of a work in progress. 
What have we achieved? I'll play that back to you a little bit as a question. But to me, it's like we have achieved a lot of awareness, and we have achieved a lot of education, and we have achieved the realization that there's a lot more to actually do as a champion. What is it, I don't know, Anne Currie: I think you're very right there. We've done a lot. Things are a lot better than they were 10 years ago. Vastly better. And I feel constantly buoyed up by that, but there's just so much to do still. I mean, still, when I talk to most people outside of the immediate movement, folk don't realize that actually there is a lot of good stuff that the tech industry can do to cut carbon emissions. I speak to folk in tech all the time who are really interested in climate change and doing their bit, and they often focus directly on individual change they can make, like becoming a vegan or something like that, which is fine. But that's as nothing compared to the far more scalable change they can make through their jobs. Through being a software engineer, there's an enormous change that we can make. There's an enormous improvement that we can make there. And it's nice when people realize, because folk do want to fix things. They do want to make a difference. And a lot of folks don't realize. Carlos Pignataro: A hundred percent. It's so interesting; as you were saying that, I was picturing the concept of bringing your whole self to work, and the fact that we are individuals with a set of values, and we can project them in the different personas that we have. And for me, it was actually part of my professional growth, a realization early on as an engineer, that I could actually bring my values to work as well. And exactly like you said, that's when the difference really compounds. It's not like I have a nights-and-weekends green version. I can apply that in my day job, not only nights and weekends. 
And also, what I find super monumental, and I applaud what you all do and what we do here, is that there are so many more resources available for anyone who says, "yeah, I want to learn, and apply it." Anne Currie: Yes. Oh yeah. The number of resources is really taking off, which is amazing. I like what you're saying about bringing your whole self to work. The good thing about building green software and being more carbon aware at work, if you are a tech company, is that, I mean, a lot of people feel, "Oh, well, it's a bit unprofessional, because I'm bringing something to work, and actually I'm asking the business to do something that is not in their best interest to do." When it comes to cutting carbon emissions, it is in the interest of the business to do that. It cuts costs. It makes the business more resilient. It makes it more future-proof against the way that the energy system is going. Eventually, folk are going to have to be ready to run on renewables, because that is the new form of energy. I mean, people can do this in a way that is against the interests of the business. If you decide that you're going to rewrite all these systems in Rust, that might well be against the interests of your business. But cutting your energy use, cutting your carbon emissions in half, by just being more clever and smart and modern about the way you operate systems, so they're more efficient and more secure and more cost effective? That is in the best interest of your business. That's not going against it. And I think we do need to keep constantly hammering that message home. Carlos Pignataro: Absolutely. It's a message from my corporate tenure, within big tech. That's one of the key messages that resonated inside the company, resonated with customers: what is good for the world and good for the business, and finding those things. And that's one that actually I don't think is a marketing tag. I run numbers on, "shall I do this? 
Does it make sense or not?" And just like you say, many of the things that we do with a little bit of spin hugely benefit the bottom line by cutting down costs, and hugely benefit the world by lowering emissions. I'm going to run this batch job whenever there's renewable. I'm going to turn off the lights and the APs, the access points on the ceiling, automatically when there's no presence. Basic things that make a difference. Anne Currie: Yeah. If you go back to, I would say, almost the inception of the modern thinking about being more efficient in data centers, it's the work that Google and Sun were doing at the beginning of this century around containers and orchestration, the use of that precursor to Kubernetes, their Borg orchestration system. That wasn't about cutting carbon emissions. That was about cutting the cost of operating systems and improving the resilience of operating systems. Those two things are completely aligned with carbon reductions. It's the way of being more efficient, being more resilient, usually adopting modern operational practices like auto-scaling. Deliver all the things. Carlos Pignataro: And I'll tell you one thing, if you don't mind, allow me to extrapolate from what you're saying. The idea of actually having green software principles really visible in our workspace, with real awareness, is so incredibly important, because the virtualization concepts that you're talking about, Kubernetes and so on, are things that we apply in the networking world, things that we apply to any type of SFC, software function virtualization, NFVs, et cetera. 
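Carlos's example of running a batch job whenever there's renewable energy available can be sketched in a few lines of Python. This is an illustrative sketch only: the hourly carbon-intensity numbers below are invented, and in practice you would pull a real forecast from a grid-data service such as Electricity Maps or WattTime rather than hard-coding one.

```python
# Hypothetical hourly grid carbon-intensity forecast (gCO2e/kWh).
# These numbers are made up for illustration; real values would come
# from a forecast API for your grid region.
forecast = {
    0: 420, 1: 410, 2: 395, 3: 380, 4: 350, 5: 310,
    6: 260, 7: 210, 8: 170, 9: 140, 10: 120, 11: 110,
    12: 115, 13: 130, 14: 160, 15: 200, 16: 250, 17: 300,
    18: 340, 19: 370, 20: 390, 21: 400, 22: 410, 23: 415,
}

def best_start_hour(forecast, duration_hours):
    """Pick the start hour that minimises average carbon intensity
    over a contiguous run of `duration_hours` (wrapping past midnight)."""
    best, best_avg = None, float("inf")
    for start in sorted(forecast):
        window = [forecast[(start + i) % 24] for i in range(duration_hours)]
        avg = sum(window) / duration_hours
        if avg < best_avg:
            best, best_avg = start, avg
    return best, best_avg

start, avg = best_start_hour(forecast, 3)
print(f"Run the 3-hour batch job starting at {start:02d}:00 "
      f"(avg {avg:.0f} gCO2e/kWh)")
```

With this toy forecast, the scheduler lands the job on the late-morning solar peak; the same shape of logic underlies real carbon-aware schedulers.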
And trying to bring a more dynamic, software-style approach to how we think about things that are traditionally more "I have a big router and a big switch and a big antenna." Softwarizing the thinking, if you will, and doing that with principles that are within the reach of all of us, like some of the courses on green awareness. Building those principles into any type of software practice is such a win/win. Anne Currie: Yeah, absolutely. There's no reason for people to go, "Oh, I love the idea, but it's actually going to hurt my business." If it hurts your business, you're not doing it right. If you think it's going to hurt your business, don't do it that way. Do it a way that is materially aligned with your business. That will scale better, will deliver more value, and you'll actually make it happen. So there is no reason why being green should hurt. If it does, you're probably doing it wrong. Stop. Carlos Pignataro: And I tell you, one of the things that we often think about, and that I thought about very much in my CTO roles or when I do standardization, is how we can actually make some of these things codified, reusable, repeatable, so that someone can actually learn them and use them, whether through a standard or a certification, and do that in the context of not only the architecting of software and networks, but also the operationalization of networks. One small example that I can share is work that we're doing with Alex Clemm on sustainable network operations, which is an IEEE SIG, or special interest group. And the interesting thing there is that when we look at the overall life cycle of networking and software, we have the use phase within the life cycle, which focuses on the operational aspects of networks. There's a lot to gain there in terms of managing energy efficiency, carbon awareness and carbon efficiency.
So, within our different insertion points in the life cycle, some are earlier, closer to manufacturing networking equipment or designing chips, some are closer to architecting and actually designing networks and operations, and then "plus-plusing" networks, right? Like updating and upgrading equipment in a more circular, sustainable way. Each one of those areas has a very strong sustainable benefit that we can actually bring. These are the reasons, and honestly, Anne, hearing you talk is one of the reasons, going back to your earlier question, why I got really drawn into this green software champion concept. Anne Currie: It's interesting. I'm going to talk about something you wouldn't necessarily speak to specifically at the moment, but one of the things that interests me, and we mention it a lot in Building Green Software, is that networking is one of the few areas of the tech industry where there's already been a lot of thought that is directly aligned with energy saving. Networking has a concept of, is it bits per watt or watts per bit? Carlos Pignataro: Yeah. And the interesting thing there is that there's been a lot of research, and there's still ongoing discussion, into how useful metrics like energy over throughput, watts per bit or watts per megabit, really are, which I think is actually useful in many areas. It's still an ongoing discussion in some standards groups, because if you take, for example, a core router, a piece of equipment from the guts of the internet, and you have it powered up without traffic, it is consuming about 80 percent of its peak power. Anne Currie: Which is astonishing, isn't it? Carlos Pignataro: It's crazy. And then at full traffic, about 95, right? So the proportionality has a small slope. Anne Currie: So actually let's move on to some of our discussion points from today.
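The proportionality figures Carlos quotes above, roughly 80 percent of peak power at zero traffic and about 95 percent at full load, can be put into rough numbers. A sketch, with invented absolute wattage and link-capacity values:

```python
# Sketch of (lack of) energy proportionality in a router, per the
# figures quoted in the conversation: ~80% of peak power is drawn at
# zero traffic, ~95% at full load, so power barely tracks throughput.
# Absolute wattage and capacity numbers are invented for illustration.

PEAK_POWER_W = 1000.0   # hypothetical fully-loaded power draw
IDLE_FRACTION = 0.80    # power at zero traffic, as a fraction of peak
FULL_FRACTION = 0.95    # power at 100% traffic

def power_at_load(load):
    """Linear interpolation of power draw between idle and full load (0..1)."""
    return PEAK_POWER_W * (IDLE_FRACTION + (FULL_FRACTION - IDLE_FRACTION) * load)

def watts_per_gbit(load, capacity_gbps=400.0):
    """Watts-per-throughput metric; it diverges as load approaches zero."""
    throughput = capacity_gbps * load
    return power_at_load(load) / throughput if throughput else float("inf")

for load in (0.1, 0.5, 1.0):
    print(f"load {load:>4.0%}: {power_at_load(load):6.0f} W, "
          f"{watts_per_gbit(load):.2f} W per Gbit/s")
```

The watts-per-bit metric looks terrible at low utilization precisely because idle power dominates, which is why consolidation and sleep states matter so much.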
So we're going to be talking about two papers that you were involved with, co-authored and contributed to. The first one is an article entitled Challenges and Opportunities in Management for Green Networking, which explores the environmental impacts of networking technology. Basically, it says networking is great: there are loads of things that come out of networking that can cut travel and could significantly contribute to reducing carbon footprints. Networking is an amazing thing. But it also uses a lot of power. So if there's some way that we can keep all the wins and reduce some of the losses and some of the waste, that would be fantastically good. But at the same time, we don't want to lose any of the wins. So do you want to talk a little bit about the article? Carlos Pignataro: Absolutely. Thank you so much. I'm going to start, Anne, by honing in on a couple of keywords that you mentioned, which are important in my mind. And the first one is trade-offs. It's potentially tempting to say, "I'm going to move a part of the system to another part, which resides outside the boundary conditions, and therefore things are more green." And that's one of the learnings that I've seen. I've worked in a number of technologies in my career, and environmental sustainability has been the newest one by far. We have to look at the system with broad boundary conditions. It's not enough to say "I'm going to replace all the lights with LEDs," because we have material extraction to actually make the new light bulbs, and we have to dispose of the old ones. So if we look in the broader sense, we really have trade-offs. Imagine, to make it very practical, that we have a link between two routers or two devices. If we add more links, we can have more redundancy. And if we add more routers to actually duplicate that, and more links between them, we have even more redundancy. But that redundancy or resiliency improves asymptotically.
It gets to a point at which I add more, and while my carbon emissions continue going up linearly, there's no benefit to redundancy; in fact, the system can become a little bit more brittle. So one of the concepts that we talked about a little bit is that we have a multi-goal scenario. We are optimizing for two different goals at the same time: one is resiliency and performance and traditional business metrics, and the other is sustainability. The main area where I feel that makes such a strong difference is in moving a lot of these processes to automated processes. That's where we can actually get to an optimal point in sustainability while lowering the extra links for redundancy, based on the needed traffic at the time, or the seasonality of the traffic or of the requirement. So that's one of the key pieces of that work. And let me explore, if you don't mind, another area of this paper, which I think is very relevant and important, which really is how we define terms. There's a quote attributed to Socrates that the beginning of wisdom is the definition of terms. And one of the things that I found is that in such a multidisciplinary field, in which you have people coming only from environmental sciences and people who are only tech, there's sometimes a little bit of an impedance mismatch in the dialogue. So even simple things like saying, for example, sustainable something versus something for sustainability, right? Do we have sustainable AI, meaning AI systems that are sustainable in themselves? We call that the footprint. Or AI systems that are AI for sustainability, meaning the output of the system can actually help you with sustainable outcomes? We call that the handprint, right? So the concepts of footprint and handprint are not necessarily well understood within the networking and software spaces, in my experience.
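The redundancy trade-off Carlos describes above, resilience saturating while emissions grow roughly linearly with equipment, can also be sketched numerically. The per-link failure probability and power draw below are invented for illustration:

```python
# Sketch of the redundancy/carbon trade-off described in the
# conversation: availability with N independent redundant links
# saturates as 1 - p**N (p = per-link failure probability), while
# power draw grows roughly linearly with N. All figures are invented.

LINK_FAILURE_PROB = 0.01   # hypothetical chance a single link is down
LINK_POWER_W = 250.0       # hypothetical power draw per link

def availability(n_links, p=LINK_FAILURE_PROB):
    """Probability that at least one of n independent links is up."""
    return 1.0 - p ** n_links

def power(n_links, w=LINK_POWER_W):
    """Power draw grows linearly with the number of links."""
    return w * n_links

for n in (1, 2, 3, 4):
    print(f"{n} links: availability {availability(n):.6f}, power {power(n):.0f} W")
# Going from 2 to 4 links barely moves availability, but it doubles
# the power: diminishing returns on resilience, linear growth in
# energy and emissions.
```

This is the shape of the multi-goal optimization he mentions: past a small N, each extra link buys almost no resilience while its power and embodied carbon cost stays constant.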
And fundamentally, when you reduce the footprint, you want to improve or enhance or grow the handprint. Anne Currie: Yeah. Yeah, I agree. It's not a well known phraseology. We use this in Building Green Software. It is a bit counterintuitive, in that footprint is bad and handprint is good. It's the reverse of the football analogy, where hand is bad and foot is good; in green software, we talk about footprint, foot, being bad, and handprint, hand, being good. Carlos Pignataro: That's right. In football you would get a red card, and in green software you're going to get a green card. Anne Currie: Indeed. Indeed. I don't love the terminology, because I think it's a tad confusing, but it is well known, and that's what we're talking about here, which is fine. Carlos Pignataro: Thank you. No, for sure. And I'm going to bring back something that you said before, which is that the networking industry has been thinking about many sustainability aspects for quite some time, and at different levels: at the chip level, as I was mentioning, and at the power efficiency level. There are metrics, because you cannot improve what you cannot measure. So we have power efficiency metrics for data centers and so on. And that becomes more and more important, because the amount of electricity that gets consumed by networks, and particularly data centers, these days keeps growing and growing. There are many things that we can learn from the way in which we design protocols. Part of the work that I've done historically is protocol design within the Internet Engineering Task Force, the IETF, reading RFCs and things like that. And there's very interesting work on protocols for the Internet of Things. And that is interesting, Anne, because IoT, Internet of Things, devices, just by their use case, need to minimize power consumption and need to stay alive for years and years on battery power.
So protocol definitions evolved so that protocols are a lot less chatty. There's a lot less back and forth and waking up, and devices can actually go into different levels of sleep, the same way that we have sleep levels in the laptops that you and I are using right now, and on the phones and handhelds that we have. There's less of that in some of the networking areas, and that is an active area of research and development: how to bring dormant states and less chatty protocols into some of the networking arena. Anne Currie: That is something that I really liked in this paper. So stepping back for a minute, there's the IETF and the IRTF, which are both parts of the same organization. The IETF tends to focus on designing protocols, and the way it does that is captured in its famous definition of how it approaches things, "rough consensus and running code," which is based on the fact that it's very hard to tell what's going to work in networking, because it's so complicated, until you actually get something out there and see whether it works or not. And the IRTF, which is behind a lot of what these papers are, is about research and thinking. But it's hard to marry those two things, I would imagine, in many ways, because the whole point is that you can't necessarily just think through which network protocols are going to work and which are not. You almost have to suck it and see. But one of the things that you pointed out in your paper, which I liked, was that there are already protocols out there that are working, that are achieving what we want to achieve here. There are already protocols for the Internet of Things which are out there working. And it would be great if people looked at them and said, "What works there and what doesn't work there?" "How can we apply that learning in other environments?" I think that's what you're saying. Carlos Pignataro: A million percent.
That is, in a much more eloquent way, a couple of the very important points that we try to convey in the paper. Thank you very much, Anne, for explaining the IRTF, the IETF, the I-star organizations. I wasn't sure how boring that would be, but it was actually very useful. You put both of them in context. Anne Currie: Well, what we know works in terms of delivering good stuff in human history is to look at stuff that works in a similar but not identical field and say, let's steal some of those ideas. And something else I liked in your papers was when you talked about CDNs. The ideas in CDNs, Content Delivery Networks, if people aren't familiar with them, are, I think, among the most interesting architectural approaches in networking, and in technology above the network as well. And there were tons of ideas that feel like they could be borrowed. And you said the same thing in your paper. What's your thinking on CDNs? Carlos Pignataro: Yeah, it's exactly that. Thank you for it, because the main point is exactly that. There are many areas where there's already been deployment, not only research and development and test and QA, but deployment, that can be applied to new things that we need to do today, right? So spot on on that; IoT is one of them. Because in my experience there's not necessarily a very smooth bridge between research and standardization and actual deployment; getting to running code is a little bit bumpy. So having examples and use cases that work, that we can apply to the problems that we have today, is critical. When I look at the most dynamic and complex networks, I really look at CDNs, because it's a network that is actually focused on delivering content, and in a CDN it's incredibly critical to, number one, replicate content near the receiver, right?
So that you don't have to stream transatlantic, but also don't over-replicate if there aren't a lot of listeners and receivers. So the equations can really help you to minimize the overall end-to-end system electricity consumption and maximize efficiency, just through what to replicate, where to replicate it, and at what times we do this, when we have a signal that the electricity feed we have is coming from renewables. It's one of the systems that really gives you the flexibility to implement all of the things that we discuss in the paper. And if you allow me again to extrapolate a little bit more, I frankly think that when talking about green and sustainability, we can actually extrapolate further and look more into what nature does, and try to understand and replicate that in some of our systems. The fact that our laptops hibernate: first there were animals, that's where the word comes from, hibernating through the winter in saving mode, right? It was a bear in saving mode, and now we have a laptop in saving mode, and next we're going to have a data center cluster in saving mode, potentially. And in many ways, if we look at the amount of energy that our brain uses versus what an LLM system uses, there's clearly a huge, ginormous opportunity for improvement. Anne Currie: Yeah. Yeah, that's true. It's interesting that you mentioned LLMs and AI there, although not directly, because I'm very interested in networking and LLMs and how those are going to be merged together in a green way. As I said, I'm a huge fan of CDNs. So as you said, CDNs use buffering effectively close to users to do two things. It means that if you've got somebody in London and they want to look at a huge asset that is served from the US, maybe an episode of Game of Thrones. I still use Game of Thrones as an example, even though it's a bit out of date.
It's better to have one copy of that episode move over the Atlantic and be cached somewhere local to the user in the UK, and then all the users there take it over a shorter distance, rather than each having to take it all the way from the US, as you said. There's another benefit, which is that you can move that giant asset at times when the internet is not busy, so it flattens load. Something you talked a lot about in your papers was the efficiency of flattening peaks in load, peaks in demand. It's much better if you can find some clever way of spreading demand out so that you don't have peaks that you have to provision for, because peaks mean you need more equipment and more of everything; it's not as resilient and it's more expensive. So CDNs are fantastic from that perspective, but AI inference feels like it could potentially benefit from CDNs as well, if you could cache responses to common questions so that you weren't having to run everything. Sorry, I'm taking you off networking now. I don't know if you've had any thoughts about AI and networking and CDNs or anything like that. Carlos Pignataro: I work a lot on AI these days, and I definitely have thoughts. Number one, thank you for the bunny trail; the dialogue takes us where things are relevant, right? The way in which we actually train systems today can be immensely improved, whether that's by some mechanism of incremental caching, so that you don't have to relearn everything as you tweak the model, and things like that. CDN-like? Absolutely.
And there's another way in which I really think about it, particularly with a couple of startups that I'm either working on or following: do we want the typical Swiss army knife, B2C, business-to-consumer model that can actually solve everything, and for which we need three cities' worth of electricity to train? Or do we define more constrained SLMs instead of LLMs, small language models that are a lot more domain specific, more domain shifted, and potentially more B2B, business-to-business? And regardless, going back to sustainable X versus X for sustainability, I always like to do two-by-twos, or X and Y. And I think that AI as a broad technology, from machine learning to computer vision, has gone significantly into AI for sustainability. We have Google Maps today that can actually give me the most sustainable, fuel-efficient route, and when I go to book my flights, I see the carbon emissions of each one of those. So there's a lot of AI that is applied for sustainability. But there has not been enough, or I should say, there's an enormous upside and opportunity for, sustainable AI, right? So, in the backwards lingo that we were using: a lot of handprint, not enough reduction of footprint. And there are a lot of methods that we know from other domains, like CDNs, that we can apply to inference and the training of models. Absolutely. Please, let's do. Anne Currie: Yeah, I mean, in many ways, it feels like that is the lesson to take away from the internet, which is that it's really hard to make things work. Working code absolutely is king. So if you can take someone else's working code and apply it in your situation, that's a great idea. Carlos Pignataro: Exactly. Exactly. A million percent. And one thing, Anne, that I'm going to mention now, because I fear that with the way we're choosing conversational forking paths, I'm going to forget. Because it's important.
It's a call to action. One of the things that you mentioned, and I mentioned, is how many more resources and materials now exist. And really my call to action is to go to learn.greensoftware.foundation and start with the green software practitioner curriculum. It is such an incredibly well packaged set of modules that goes through a lot of the fundamentals. It provides a lexicon, it demystifies, it talks about carbon. And one of the important ways in which I want to make this actionable is to really encourage any listener to go there; it's super easy: learn.greensoftware.foundation. Anne Currie: I totally second that. And I apologize, Carlos, we've gone all over the place in our discussion today, so you're quite right to stop me and make sure that very important message got through. So, I mean, we are now coming towards the end. Is there anything else that you want to tell us? There's tons of interesting stuff in your two papers that is well worth reading. You don't have to feel that it's too difficult; the papers are quite accessible. Is there anything that we haven't talked about that you would like to talk about? Carlos Pignataro: Thank you, Anne. First of all, thank you for actually reading the papers, and not only reading, but really reading, because you actually distilled some of the fundamental principles that we, and I, wanted to convey. More than anything, what's needed to drive green software is our full commitment, and bringing, like you were talking about, like I was mentioning, our values and whole selves to the work we do, finger to keyboard. This is a very multidisciplinary, nuanced area. And after leading technology and data for the engineering sustainability team at Cisco, we really don't know what we don't know. And for me, learning and thinking humbly in every conversation is a fundamental goal.
One of the things that I love about the approach that the GSF is taking with the SCI is that it's data driven, right? Let's get metrics; let the data, as opposed to myths, drive the conversation. And let's continue to stay together, because the ecosystem is multidisciplinary and we all learn a little bit from each other, reusing and leveraging code and principles, which you talk about so very well in your book, and bringing those principles and that code into our insertion points within the ecosystem, whether that's at the vendor, or when we're cloning a public repository and making some changes, or when we're thinking about the next networking protocol or networking operations. Anne Currie: Yes, very true. So I think that's a very good place to be finishing up, because we've pretty much come to the end of our episode. And I have one final question for you, which is: where can listeners go if they want to find out more about you? Obviously links to the papers will be in the show notes, and I strongly recommend you read them, but where else can people find out about you? Carlos Pignataro: Hey, thank you very much, Anne. LinkedIn is an easy place to go, and I'm always open to any connection and any messages. You can also check out my website, bluefern.consulting, which has my email and my contact details. And I don't just say that: if you reach out, I respond. So, super happy to continue the conversation and continue engaging. Anne Currie: Excellent. That's very good to hear. I'm sure that lots of our listeners will reach out and talk to you, but certainly they should be reading your papers and connecting with you on LinkedIn, or following your LinkedIn. So thank you very much for being on this episode. It's been a fascinating episode, a deep dive into networking, and it's been very interesting because a lot of the concepts also apply to non-networking software.
All the ideas and the overlap with CDNs, which I really think are the concept best suited to environmental sustainability and aligning with renewable power in the long run. And a final reminder to all our listeners that the resources for this episode are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of Environment Variables. So see you all soon. Bye for now. Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode!
The Quantum Entanglement of Software Sustainability with Wilco Burggraaf (52:16)
In this episode of Environment Variables, host Anne Currie speaks to Wilco Burggraaf, a lead green practitioner and architect at HighTech Innovators, for an engaging discussion on integrating sustainability into software development. Wilco shares his journey into green software, the inspiration behind his innovative workshops, and his efforts to build a vibrant green tech community in the Netherlands. The conversation explores his articles on the Software Carbon Intensity standard, the complexities of balancing micro and macro sustainability goals, and the synergy between FinOps and green software. Tune in for actionable insights and strategies to make greener choices in tech while aligning sustainability with business goals. Learn more about our people: Anne Currie: LinkedIn | Website Wilco Burggraaf: LinkedIn Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: Use of the Software Carbon Intensity (SCI) and Impact Framework (IF) Tools [39:35] The Quantum Entanglement of Software Sustainability: Navigating the Micro and Macro Scales of Carbon Footprint Measurement [41:55] Is this Green IT / Green Software at its Core? [47:13] Events: Green Software - The Netherlands | Meetup [50:48] Resources: Software Carbon Intensity (SCI) Specification Project | GSF [41:14] Green Software Maturity Matrix [46:16] If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter , Github and LinkedIn ! Transcript below: Wilco: At some point I came to the conclusion, like, okay, we can measure a lot of things, we can have all these metrics but at some point the numbers are not going to change outcomes. Decisions do. Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. 
In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field, who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne: Welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host today, Anne Currie. And joining us is Wilco Burggraaf, lead green practitioner and green architect at HighTech Innovators. Wilco brings a wealth of experience in software development, been over 20 years in the industry, and is an active Green Software Foundation champion, and we'll be talking a lot about that today. So he cares a great deal about integrating sustainable practices directly into the code and architecture of software, helping to make greener choices not only possible, but essential in tech. And in this episode, well, this episode really is the Wilco show. We will be talking about three articles that he's written on LinkedIn and what they mean and what people should learn from them, what he's learned on his journey in becoming a Green Software Foundation champion, a green software practitioner. So yes, he has a lot of interesting thoughts on integrating software sustainability at the lowest, the deepest level, the lowest micro scale, the code level scale, and the macro scale. Bizarrely today, we're going to be talking about those in the reverse order, focusing on his articles on the micro scale first, and then moving over to the macro scale, which I, and I'm a big fan of macro scale. So that'll be interesting when we get there. Actually, I'm a fan of all the things, but I'm a big fan of starting at the macro scale. So yeah. So Wilco's going to be talking about his articles. 
And he'll also be talking about his experience using the SCI, the Software Carbon Intensity standard, and the Impact Framework, because I'm very keen on his thoughts about whether they're useful or not, why they're useful, and what they add to the software development process. So, welcome, Wilco. Can you tell us a little bit about yourself? Wilco: Yeah. Hi, Anne. Yeah. Thanks for having me. Big fan, by the way. I love the book you worked on, Building Green Software. So I'm Wilco, 41, married with no kids, and I live in the Netherlands. We have an Airedale Terrier named Iron. And although the country I live in is small, with only 18 million inhabitants, I grew up in the South, near the coast, on a factory plant tied to the coal industry, in the eighties. And my dad was a night guard, so we lived on the factory plant. And yeah, when you come out of bed and you smell the stench of chemical processes in the air, and when the wash was hanging out to dry in beautiful weather, the coal dust came down on the clothes. If I look back on it now, that was kind of weird, but then it was normal. That was home for us. So I deeply love nature. I spend a lot of my time on hikes of two or three hours in the forests and the heathlands, and that's only 10 minutes from my home. So yeah, I love living here in the South, in what we call the nature. Anne: That's great. That's lovely. And a really interesting backstory: coal was your nemesis, your laundry's nemesis, from a very early age. Wilco: Yeah. It's always a story that my mom tells people, because yeah, a lot of people who didn't experience that cannot have an understanding of how it must have been. Anne: That is really interesting. That is a very interesting backstory. So my backstory is not quite so interesting. So my name is Anne Currie. I am, as I mentioned, one of the co-authors of the new O'Reilly book, Building Green Software.
And I said that in the last podcast, and I'll say it again: if you care about this kind of stuff, if you're listening to this podcast, Building Green Software from O'Reilly is a really good book to read to get cracking. And it's not particularly techie; it is useful for everybody. So if you're a product manager, if you're a marketing person, you can read it and understand it. And it's a good place for you to kick off, because I think a lot of the changes that we're going to need to make to build green software actually start with product managers, not necessarily with techies, but that's another interesting point. I'm also the CEO of the learning and development company Strategically Green. And we do workshops, as Wilco also does workshops. We'll be talking a little bit about that later, but we do workshops to get your company started on getting people to understand what it is to be green, and to kick off some interest and excitement, as well as helping you build some internal expertise. So if you want to do any of those things, hit me up on LinkedIn. Before we dive in, a quick reminder that everything we talk about in this podcast today will be linked to in the show notes below the episode. So you can go and read it and follow along as you listen to the podcast. So back to you, Wilco. I think the place for us to start is: what started this off for you? What kicked it off? What led you to transition into green IT, and how has your journey evolved over time? Wilco: Well, only 10 months ago, it's not even that long ago, I dived into green IT and sustainable coding, starting with no background in green. In IT, of course, I have 20 years of experience. And now I'm progressing to discussing things with university professors. So it went kind of quickly. And also, since March this year, I'm a co-founder and co-host, alongside Pini Reznik, I think a familiar person for you, of Green Software Meetups in the Netherlands.
Anne: And you've had a lot of success with meetups in the Netherlands, which is really good. So, what role do you see your current work playing in the larger mission of sustainability? Wilco: Well, maybe a fun detail: I work in secondment, which for some countries is not a familiar thing, but it means I'm contracted out to various companies. And this year I'm working with the National Databank for Flora and Fauna as a solution architect, and together with a fantastic team, we're making hundreds of millions of biodiversity observations publicly accessible to everyone in the Netherlands. And that is something really cool. And we're on track to reach our first major release in the new year. Anne: That is very cool. That's very good. And that quite interests me; it links back to something you said on LinkedIn when I was talking about the last Environment Variables, where I was talking to Stefana Sopco, who also lives in the Netherlands. And you pointed out, another Dutchie. Which, you're quite right, there's a lot of interest in this in the Netherlands. Do you have a feeling for why that is? Wilco: I hope I helped a bit with that over the last half year. But no, of course, that's just a joke. When I started at the beginning of this year, I was looking on Google, searching for information, and information was hard to come by. And at some point I was thinking, yeah, of course, there are books and podcasts, Green IO and Environment Variables; that is where I found a lot of the information that I needed. But at some point I was like, okay, so maybe I need to talk to people to gather more information. And when I was searching on LinkedIn for people who knew more about green IT and green coding and green software, I found out that there were all these kinds of bubbles. Yeah, in the Netherlands we call them bubbles: like you have 20, 30 people working on a certain topic.
And also, at the same time, we were thinking, okay, how can we build a community for the meetups? And I was like, yeah, the only thing that I can do is connect to these people, make them aware that the other bubbles exist, and keep on doing that. And I found another group, and another group, and eventually there are, I think, right now close to 2,000 people in the Netherlands busy with this topic. But a lot of those people are not aware of each other. So you have to think about people working on CSRD and monitoring; people in FinOps who are really interested in sustainability; people who are like, "yeah, we need to measure not only emissions, but also nitrogen and other things, and PFAS," is that how we call it in the Netherlands? So yeah, I don't know if it's because of a trend, or because a lot of people now, with CSRD, are looking at "okay, how do we need to do this?" But yeah, there's a lot of activity in the Netherlands. Anne: That is really interesting, and there's a lesson there for anybody who wants to grow a community: you went out, found all the small communities and hooked them together. That's an incredibly valuable thing to be doing. Wilco: And it's also cool that there's an organization, the National Coalition of Digital Sustainability, it's a little bit different, the acronym, in the Netherlands, and they have already been busy with this topic for more than 10 years. And then when I was doing my thing on LinkedIn, I found out that there was another meetup group, from a bank and a consultancy company, who were already busy doing meetups the year before. But they weren't even aware, sometimes, of the other organizations, and also of the Green Software Foundation; and there's also, of course, the Cloud Native Computing Foundation, where you have a sustainability group.
And I'm not even talking about things like Climate Action Tech and that kind of organization or group. Anne: And of course, actually trying to link together these groups is incredibly valuable. So, we have actually met in person; we met at a Green IO conference in London in September, which was great. And that was a very good way of getting a whole load of people in Western Europe, basically, to all connect together, have a drink and see one another face to face. Very effective. So... Wilco: Yeah, it's very inspiring to see other practitioners and also other perspectives, from UX to GreenOps to, yeah, all the different roles. Because that is something that is so clear, and this is also maybe eventually, if we go to the macro level, why it's hard to implement: sustainability hits so many fronts within a company or an organization; there are so many roles. Where, if you start thinking about "okay, what am I actually doing?", you see the impact of what we're doing, from the boardroom down to DevOps teams or members of DevOps teams. And it's cool to see that all those people then come together at such a conference. Yeah. Anne: Yes, it is true. And I've said this many times before: everybody's being bonded together by having the same goal, which is reducing carbon in the atmosphere. An intrinsic goal that's, you know, doing good. It's improving the world. And it does mean that you can share common ground with people you wouldn't necessarily have shared much common ground with previously. Wilco: Yes. Anne: So, I mean, you said you've only been interested in green software for about 10 months, and you've certainly done an awful lot during that time. How did you get started? How did you get started organizing meetups, particularly? Wilco: Well, in January, my former boss introduced me to the Green Software Foundation website, and I immediately noticed two things.
So CarbonHack24, the hackathon, was on the website, and the company I work for really loves hackathons. So I formed a group of volunteers. And besides that, and I will come back to that later, there was also a lack I saw on the website: there were no meetups in the Netherlands. So I reached out to Asim for advice, and he connected me with a group of Green Software Foundation employees and Green Software members in the Netherlands, including Pini Reznik, and together we started planning. And by April, we had our first meetup. And my team even won the CarbonHack24 Best Contribution, which is crazy if I think about it, and which was such an incredible motivator. And each step I took, from organizing meetups to winning the hackathon, felt like a chance to make a meaningful impact. Anne: Which is fantastically good. That is good. But you didn't stop there, did you? You became a green software champion, which is a new Green Software Foundation project to build up people who know more and can go out and shout about green software. How did that happen? Wilco: So by May, after hosting two meetups and writing over 10 articles on green software, I felt certain that this was my calling, right? I felt so much passion and fire. I mean, I think through all the content I create and all the conversations, that was kind of clear. And I discovered the Green Software Champion program on the Green Software Foundation website, and I knew it was the right path to amplify my impact. Because I believe that if you have recognition from a certain organization, then especially multinationals and big organizations are like, "hey, this is something that we maybe need to take more seriously." Not because of me, but because, "hey, there's something going on here." And yeah, fast forward four months, and we've organized five meetups in just over seven months.
And with the sixth one on the way on the 22nd of November, with the Green Waves Hackathon at TU Delft, that's a university in Delft. And I've now written over 150 articles on LinkedIn, collaborated with professors to bring green theory into practice, and I'm still doing that, and have given six talks so far. So, with more plans, each step has deepened my commitment to building a sustainable tech community. Anne: Which is absolutely fascinating. It is amazing how much you've done in 10 months. So, what next for Wilco and the green IT community? Wilco: Yeah, what is next? That is a good question. So what I really try to do is to follow a certain path. When I started gathering information, I kind of found out, okay, there are already decades of research done and a lot of information, but to some degree, we have a hard time transferring this information to other developers, and we are kind of stuck. So for me personally, I was really invested in: "okay, how can I make this first stepping stone, making this a thing that other people can understand?" And that's why I started to invent what I now call EQUAL: Energy, Quality, Utilization, and Load. So the idea is that you have an application with a certain algorithm or a certain logic that everyone understands. So, kind of a loop. And if you have this loop: a few important things in green coding and green software are, okay, how can we, based on utilization, estimate, not exactly, but estimate the energy, and then eventually relate that to emissions? So I started understanding: hey, wait, utilization to some degree is like the number of threads that the CPU is running. Also, of course, based on the cores. So, 50%.
You would maybe expect, if you have, like, 12 cores, that at 50% six cores are running, but it is not necessarily the truth, because frequencies, of course, can be higher and lower, and there are some things going on. But if you start, okay, so the number of threads: let's say 12 cores, so with hyperthreading you can have 24 threads. So if you have a loop, you can start playing around with two threads, four threads, eight threads, 12 threads. So that's the first parameter you can give to this loop that I placed in the API. Then the next one is the number of iterations. So, do I want to do a small test? And the funny thing is with one line, one normal line of code: I can make a line of code that just puts the gas pedal on the CPU at 200 percent for an hour, for one line of code. If you take an average line of code, it's most of the time so insignificant for a CPU that, if you have a loop running in this very small time, 10 million iterations are quickly over; that's very fast. So my EQUAL starts with 10 million, and it goes eventually into the billions of iterations. And then the third parameter of EQUAL is the use case. So you can place in the iteration just an i++, or any use case you kind of want. And then, when the loop is running, I start, asynchronously, measuring the utilization of the cores at a very high time resolution. Like 10ms, 20ms, 30ms, so very small. And then, after the whole loop is done, I take those samples and eventually connect them back again to the traces of the code. And then you can see a few things happening here. So what you can see happening is: if you reduce the number of operations happening on the CPU, yeah, of course, your utilization will probably be lower, and your energy use too. But there's also another thing: because of how the CPU works, sometimes you will see unexpected behavior.
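The measurement loop Wilco describes, a configurable thread count, an iteration count, a pluggable use case, and an asynchronous utilization sampler, might be sketched roughly like this. All names here are illustrative assumptions, the sampler uses a stdlib-only process-CPU-time proxy rather than Wilco's kernel-level data, and note that CPython's GIL keeps pure-Python threads on roughly one core, so this is a sketch of the measurement pattern, not of his actual tool:

```python
import threading
import time

def sample_utilization(stop_event, samples, interval=0.02):
    # Background sampler: per-interval process CPU time / wall time,
    # mimicking EQUAL's high-resolution asynchronous utilization sampling.
    last_cpu, last_wall = time.process_time(), time.monotonic()
    while not stop_event.is_set():
        time.sleep(interval)
        cpu, wall = time.process_time(), time.monotonic()
        samples.append((wall, (cpu - last_cpu) / (wall - last_wall)))
        last_cpu, last_wall = cpu, wall

def run_equal(threads=4, iterations=2_000_000, workload=lambda i: i + 1):
    # Run `workload` for `iterations` spread over `threads` worker threads,
    # while the sampler records utilization asynchronously.
    samples, stop = [], threading.Event()
    sampler = threading.Thread(target=sample_utilization, args=(stop, samples))
    sampler.start()
    per_thread = iterations // threads

    def loop():
        for i in range(per_thread):
            workload(i)  # the pluggable "use case" parameter, e.g. an i++

    workers = [threading.Thread(target=loop) for _ in range(threads)]
    start = time.monotonic()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    elapsed = time.monotonic() - start
    stop.set()
    sampler.join()
    return elapsed, samples

elapsed, samples = run_equal(threads=2, iterations=1_000_000)
print(f"elapsed={elapsed:.2f}s, {len(samples)} utilization samples")
```

A real version would replace the `process_time` proxy with hardware counters or RAPL-style energy readings and export the samples to Prometheus, as Wilco describes.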
So you start to play around with these use cases, and you think, "Hey, this should be more efficient," and you start rerunning it, and then you're seeing things happening like, "Hey, wait a minute, this use case works more efficiently energy-wise on 12 threads than on 18 threads. How does that make sense?" Well, that is something I tried to figure out, but this is what I place in a demonstration style. Because if you demo this, and you show this loop, and everybody understands the loop, and you show it in the user interface, with Prometheus and eventually the graphs that you show to everyone, then it becomes clearer: "Oh, okay, wait a minute. Besides time efficiency and compute power, there's also this third dimension, energy consumption, and it sometimes has a different effect than we expect." And I started, okay, so I can eventually use this in talks, and in use cases for blogs. And this is also what the workshop that I'm going to give from January is built on. So this, for me, is the future. And then my estimation model, which I just created on proc data, which is of course kind of built into your Linux kernel, with just dumps, is now not the most perfect model, but this is the reason why I'm in contact with, especially, the University of Groningen: to eventually make this model better with socket measurements and real measurements. And yeah, Anne: So that all sounds very good. So basically, you're working on a tool that helps people measure, or at least proxy-measure, their carbon emissions through energy use, then tune it and improve it. And I'm guessing that there are several advantages to that tool. If you work with that tool to deliver the same functionality using less energy, your application will run faster. As you say, CPU cycles are another proxy for energy use. So is that commonly what happens?
It improves the performance of the application? Wilco: Well, if you say performance in time, well, this is a funny question. So if your focus is on performance in time, sometimes if you make your code quicker, it starts using more energy. And then the question is: I have this value; is it okay for a user to wait on it, or does it need to be very fast? And there's also a difference in performance that, in the cloud, on a server, is almost impossible to control: if you only use a few cores on a CPU, it often has a much higher CPU frequency, so the exact same code will probably be quicker than when 80 percent or a hundred percent of the CPU's cores are active. Because, especially at a hundred percent, because of heat, the frequency goes down and it kind of becomes slower. And this is what I say: of course, if you lower the amount of code or operations going to the CPU, it will eventually be more efficient. But there is also this thing going on that the CPU sometimes has a 20, 30, 40 percent influence based on the state it's in. And yeah, your code can have some influence on it, but it's more in a different way: how many threads am I spinning up, or how many things are going on on this server that I'm running my code on? And yeah. Anne: So yeah, I see now why you, when we go over onto your second article, talk about trying to balance these micro, line-level changes with a more macro perspective. It gets quite complicated, and you don't always know what's going to work until you try it. So obviously, you know, the whole point of running this tool will be to make a more energy-efficient application within the high-level goals of your SLAs. But I'm imagining it's also quite fun; it's quite a good thing for a hackathon, quite a fun thing for developers to play with. Wilco: Yeah. Yeah. That.
And also, you can just replace it; that's what I keep saying every time: the loop is just for demonstration purposes, so that people understand it. But I use this whole logic in an API, and you can just put your own code in there. That's the whole thing that we're going to do with the workshop. So people will build their own API, and then, with the same process of asynchronously measuring what's happening when you run this code, you will see funny things going on when you're waiting, or when things are connecting to a database or to another API, based on how things are programmed. So, are you waiting with a loop that is pushing your CPU high? Or are you using smarter mechanics, so that there is a drop? And is a drop sometimes something you want? Because if you want to be very efficient with your resources, you kind of want to maximize utilization around 80%. Well, I don't want to jump to conclusions too fast yet, because I think we still need to figure out what the patterns are, and what are good patterns and bad patterns, but yeah. Anne: Yes, because as you say, and again this leads back to the macro-micro picture: in certain circumstances it's definitely the right thing, if you're waiting on an API call or something, to say, "right, I yield all the threads and everything running on the machine to somebody else to use while I'm waiting," so that the machine is still highly utilized whilst I'm waiting for my API call. But that relies on you having a design or an architecture, which might be within your application, but it might be within your operational decisions. You know, are you multi-tenant? Is there somebody else, some other company or some other application, that is going to be able to pick up and use the machine while you've yielded? But if you're just waiting around, then that's less good: the machine just goes to waste during the time that you're waiting.
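The "yield while you wait" pattern Anne describes can be illustrated with cooperative async I/O. This is a minimal sketch with hypothetical task names, using Python's asyncio as a stand-in for any async runtime: while one task waits on a (simulated) remote call, the event loop lets other work keep the process busy instead of busy-waiting:

```python
import asyncio

async def fetch_remote(delay=0.1):
    # Stand-in for an API or database call: awaiting yields the event
    # loop while waiting, instead of spinning the CPU in a busy-wait.
    await asyncio.sleep(delay)
    return "response"

async def other_work(results, n=5):
    # The "other tenant" that makes use of the wait time.
    for i in range(n):
        await asyncio.sleep(0.01)
        results.append(i)

async def main():
    results = []
    # While fetch_remote waits on I/O, other_work runs concurrently
    # on the same thread, keeping utilization up without extra cores.
    response, _ = await asyncio.gather(fetch_remote(), other_work(results))
    return response, results

response, results = asyncio.run(main())
print(response, len(results))
```

As Anne notes, whether this helps in practice depends on whether anything else (another task, another tenant) is actually there to use the yielded time.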
So you're right in saying that there's so much context to this. Wilco: Yeah. Yeah. And, okay, I'm really getting excited about this topic, because I'm instantly thinking: yeah, of course, in the cloud, on the server, maybe they don't have full control. But one thing we know: the grid is getting fuller, the electricity grid. And one of the things some universities have done research on is: how can we better optimize the devices that we have? And one of the things that we in the Netherlands have is a lot of people with solar panels, and we're not optimally using the electricity that is generated by those solar panels. So if you think about the following: which devices can you most easily, how do you say, charge from the solar panels? Then it's mobile devices, or tablets, or maybe a laptop with a good battery lifetime. So, originally we've always been like, "okay, we need to move logic to the server because it's more secure and you cannot manipulate it." But if you start thinking, "hey, we need to optimize the devices better": what if we start using WebAssembly in a better way? So things in your browser, or on an IoT device, or in this case on mobile or tablet, and use that green energy, especially if someone is smart enough to charge not at night, but during the day, when the solar panels are active; that is always an important catch. And of course the solar panels have some embodied carbon, but yeah, still, there are so many cool things you can do with this. Anne: It is really interesting that renewables, unlike, you know... you aren't going to run your own coal-fired power station. Nobody apart from Wilco has ever lived on a coal-fired power station. But solar particularly is a very distributed technology. There are lots of people; I've got solar panels, and when they're running and the sun's shining, I've got more power than I know what to do with.
When I got it installed, I said to the chap who was installing it, "What should I do with this?" And he said, "Oh, well, there are times when you are going to have more power than you know what to do with, so make sure that you use it." Historically, it's always been very inefficient to heat your water with an immersion heater rather than using a gas burner or something like that; it's a very inefficient way to heat the water. But if you've got free energy that's otherwise just going to go to waste, heat your water with an immersion heater. He said, "Get a swimming pool." Not that I did get a swimming pool, but get a swimming pool and heat that up. Some of this has changed the way we need to think. We still talk about green software very much from the perspective of efficiency, improving efficiency and reducing waste. But I would say, even more importantly than that, it's about doing more when the sun's shining. You know, don't forget efficiency; you might want to write applications that are very efficient but operate in totally different ways at night and while the sun's shining. Yes. Wilco: Yeah. I would even dare to challenge the following. If the sun is shining brightly in the day, what I see in the Netherlands a lot, if you look at the electrical grid, especially also on windy days, and somehow, I don't see the relation yet, is that the industry seems to be working harder in general. So you still see the gas turbines in the Netherlands emitting a lot of emissions. So it's very sunny, and if you go to electricitymaps.com, to the Netherlands, and you look there, you will see the solar panels generating a lot of energy, and sometimes of course also the wind turbines, but so are the gas turbines. And that is mainly because there's a higher demand, or there's instability on the net. And so you could even start...
And that's why I think that carbon awareness is a very complex topic. Because are you going to do a weather forecast and then run, but then find out that maybe the grid was emitting more than you expected? And lately, the last few days, we had a lot of emissions in the Netherlands. Or are you instead going to try to optimize the devices we already have, that maybe run on green electricity? There's no perfect answer in this. We need more data, we need more access, but I understand it from a security standpoint: even with electricity grids, I mean, they want to give us the information, but they're scared of terrorist attacks, of the information that we want to use for good being used for bad. But yeah... Anne: I mean, quite often grids just don't have the information yet. For carbon awareness, we are a long way from having really good data. So I always tend to say: it's perfectly reasonable to pick a proxy, because actually the difficult thing is designing systems that can respond; that's going to take years to do. So you can, in many ways, pick a proxy now, even if it isn't great. Design a system that is responsive to that proxy, and then, as that proxy gets better, your system will get better. So you might be saying "actually, that proxy's terrible" now, but getting the data is often somebody else's problem. Do put pressure on suppliers and energy grids and everyone to provide good data to us, but in the meantime, the big job for us, the thing that's going to take us a long time, is redesigning our systems to be able to respond to that data. So that's things like thinking about what your graceful downgrade options are for when the grids are very dirty, and having big, latency-insensitive tasks that you can move to when the sun's shining. The Texan grid is doing a lot of good work on that.
And I talk, again, I'm sorry, I talk endlessly about large, flexible loads. So the Texan grid is putting out a call to industry for large, flexible loads which you can run, which are latency-insensitive, where it doesn't matter when they run. But they can run when the grid is full of solar, because Texas is quite rightly putting in a whole load of solar panels: it's very hot in Texas, it's very sunny in Texas, and there's a lot of desert. So they want something to run on that solar power. It's very sad at the moment that the people who are really responding to it are the Bitcoin miners, but AI is another potential customer with large, flexible, very CPU-intensive loads. Wilco: Yeah. Oh, but this is perfect, because, okay, you kind of influenced me when we talked in London to think more about how you can react better to renewables in a more flexible way. Because I started thinking about it, especially if you put it in the following perspective. Most front-ends have a lifetime of two, three years; there are of course always shorter and longer. Back-end systems often have a longer lifespan. So if you build something today, in 2024, and it runs five to six years, that means it still runs in 2030, when we should have reached our big first milestone. So if you're not building your software today so that it can adapt, so that it can be flexible, you'll have to refactor things in the future, or you'll get into a stuck position. Because most of the time, especially with big systems, the more you build, the more dependencies you get, and the harder it becomes to eventually change things once that foundation is set. So, yeah, I really like that idea: although we maybe don't have all the right answers now, and maybe the situations are not always perfect, it doesn't mean that you shouldn't start thinking and implementing in this way.
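The "pick a proxy and build a system that responds to it" idea might be sketched as a minimal carbon-aware scheduler. The job names, the threshold, and the single-number intensity signal below are all illustrative assumptions; in practice the signal would come from a grid-data service such as Electricity Maps, and the threshold would be tuned per region:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Job:
    name: str
    deferrable: bool              # latency-insensitive work that can wait
    action: Callable[[], None]

def schedule(jobs: List[Job], carbon_intensity: float,
             threshold: float = 300.0) -> Tuple[List[Job], List[Job]]:
    # When the grid proxy is dirty (above threshold, in gCO2eq/kWh),
    # run only latency-sensitive jobs and defer the flexible loads
    # to a cleaner window; otherwise run everything.
    ran, deferred = [], []
    for job in jobs:
        if job.deferrable and carbon_intensity > threshold:
            deferred.append(job)
        else:
            job.action()
            ran.append(job)
    return ran, deferred

jobs = [
    Job("serve-requests", deferrable=False, action=lambda: None),
    Job("nightly-batch", deferrable=True, action=lambda: None),
]
ran, deferred = schedule(jobs, carbon_intensity=450.0)
print([j.name for j in ran], [j.name for j in deferred])
```

Even with a poor proxy, the structure is what matters: once the system can defer work on a signal, a better signal can be swapped in later, which is exactly Anne's point.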
Anne: Absolutely. I think it's going to take ages to do this; it's a completely different way of thinking about it. I mean, there are tools out there that already exist that can help you get into this way of thinking. I'm a huge fan of spot instances on AWS or Azure, or preemptible instances on GCP, because they're a kind of mini version of Texas's large flexible loads. It's a small flexible load: what you're saying is, "I've got this load, it's flexible, run it. I don't really mind where it runs. I don't have any particular SLA associated with it." And the clouds use it to improve operational efficiency, which is good for green as well. But in the future, I can see those loads matching exactly what we're going to need in order to shift work earlier or later in time. Wilco: Yeah, I really believe in being, to some degree, as much in control as possible, because it's easy to let some other company or SaaS solution fix things for you. Especially from a board perspective, it sounds like a good idea: okay, we're working together with hyperscalers and we're doing things serverless, especially if you're a big organization. And then, mainly, if we do serverless, they are kind of responsible for making sure that the utilization in the background is well organized. And I find this always very interesting, because, yeah, to some degree that's true. For consumers, normally, if you go to Azure and you're using serverless, I think they can really optimize it very well. But if you are a very big multinational, where they already reserve a certain space for you in the data center, and you're running serverless, I'm very curious, because we don't have the information now: do they indeed reach that higher utilization because you work serverless, or do you have virtual machines where once in a while some function comes by, it runs, and it's done?
So that's why, and you mentioned spot, that's why I like, and not because I have stocks in them, because that's not possible, I think, but that's why I love Kubernetes and Cloud Native thinking so much. That's also why I really like to check out, every time, what's happening in the Cloud Native Computing Foundation environment, because of course that is where Kubernetes is very active. Because I strongly believe that if you are in control as far as possible, not only can you better measure what's happening, to some degree, although you're doing some estimates; I think it's also a good idea from a security perspective; and I think also from the willingness to be responsible, because nine times out of ten you're also responsible for the value that's running in your cluster. And just outsourcing the company's core values, or the product values, to somewhere else: it's possible, but you're also giving some control away. Yeah. And this is something that I think a lot about, but I think I will never have all the answers, but, yeah... Anne: I think you're very right to be skeptical about serverless running on-prem rather than in the cloud. Because it does feel to me, and I touched a little bit earlier on multi-tenancy: when you're not doing something, what's somebody else doing on the same machine? A lot of these tools like serverless work really well because you're in a multi-tenant environment. So the classic example is: if you're in the cloud, you might be sharing physical resources with a company that has a very different demand profile to you. Say you are a retailer; I used to be head of IT for an online retailer. Then all of your resources are assigned to your demand. So if there's a peak, then you have to provision for your peak. And that means that a lot of the time, say Christmas might be your peak, you have to provision for Christmas.
And then most of the year, the machines are underutilized, because you had to provision for Christmas. If you move into a multi-tenant environment, one of the things that the hyperscalers attempt to do is to pair up, or not just pair up but group, users on machines in such a way that they have different demand profiles, so that not everybody has correlated demand at Christmas. If you're a retailer, you might be sharing a machine with a training company: the retailer is very busy at Christmas and the training company is very quiet at Christmas. So rather than needing to provision for the peak, you are getting better utilization of those machines over time. And serverless is a little bit of an example of that: the win is with multi-tenancy, and multi-tenancy is easier in the cloud than it is on-prem. Now, having said that, there are some multinationals that are so big and have such complicated systems internally that they are effectively their own multi-tenant. I mean, Google is a hyperscaler, but forgetting GCP for the moment and looking at Google's internal tools and applications: they are their own multi-tenant; they have enough variability in what individual tools are doing that they can act to keep their machines fully utilized all the time. They're kind of designed for that. But most companies are not at that level and haven't quite designed for that yet. So I agree: serverless internally, on-prem, for a small enterprise probably doesn't buy you that much. Wilco: And there's also another perspective on this, and I heard this in a conversation with a bank, and that was also very inspiring for me. The fun thing is, everything we just talked about can also work in harmony. So if you always have a baseline utilization, you could run that on-prem, where you know, "okay, with these applications we always have activity, so we have a certain utilization." And what they say is that the big thing is, they have it in control.
They know exactly the energy usage, and through the energy usage also the negative impact and emissions and embodied carbon. But when they have peaks, they overflow to the cloud. So they're like, "okay, if we have a peak, then we go there." And you could sometimes maybe do the same, if you want to save costs, towards the devices using your application, to some degree. So there are ways, and this is complex, but I think that is a way. And there is also a topic that we had in the past that nobody's really talking about anymore, because it's complex: distributed computing, of course. And that's another pattern that could have a part to play in this. So, yeah... Anne: Yeah. I mean, distributed computing is potentially very well aligned with this whole thing of demand shifting and shaping, and saying, you know, "actually, I've got a thing here; we need to treat it as essentially asynchronous." And of course, asynchronousness, or asynchronicity, is a really key part of designing distributed systems well. Completely synchronous distributed systems are often a little bit of a distributed monolith. You often don't get the same breadth of view... Wilco: A funny thing I heard: yeah, on YouTube you can indeed see, like, serverless monoliths, microservice monoliths, there are the... So sometimes you think, "okay, this is a good start if we design it that way," but if you create so many dependencies, then you're still, on an abstract level, creating a monolith. Yeah... Anne: Yeah, except a more difficult one. Because the value of a monolith is that it's quite simple, and, well, yeah... Monoliths and microservices and distributed systems all have their place. It's always a matter of choosing the right tool for the job. That is efficiency 101: choose the right tool for the job. Wilco: Oh yeah. A hundred percent. Yeah. Anne: So we have chatted so long, we've got hardly any time to actually go over your articles.
So, let's see, we were going to talk about three articles. One was about your use of the SCI and the Impact Framework. Wilco: The thing that I would like to say on that one is that sometimes maybe people think that I'm only interested in energy, and that is absolutely not the case. It's only that I came to this conclusion, not that you have to do it that way, but to some degree because we're still also figuring out how to best measure emissions based on the grid. And the other thing is embodied carbon. So we can do a lot with lifecycle assessment data, and hopefully also the correct information we get from our scope 3 suppliers. But to some degree, and this is what I always keep saying: if you know at what moment, at what location, your software was running, on what kind of resource type, on what kind of hardware, and you log that down... The most important thing is that energy is something you cannot historically get back: you compute, and it's gone. And we still have things to figure out, with hardware disproportionality: okay, how do we really measure it? But I really strongly believe that if I can help to get it logged down somehow, we can, to some degree, historically get it back. Okay, we knew that at that moment in time; even with Electricity Maps, with historical data, we can get the emissions for that moment, that location. And also, with the information we gathered from past procurement information or supplier information, we can get the embodied carbon right, and that kind of stuff. So that is the main reason why I really focus on that E of energy in the SCI, the Software Carbon Intensity, you know. Anne: The thing I like about the E is that it's something software developers can affect. You know, it's not that we don't care about the other things; we're just trying to focus on where we can make a material impact. And that is somewhere we can really make an enormous impact.
Our ability to change things in other areas is more limited, and that's why we started the Green Software Foundation: to find ways that software engineers, and the people related to software development, so product managers, testers, that kind of thing, can improve things. It doesn't mean that we don't care. It means that we're looking at where we can have an impact.

Wilco: Oh, and the quantum entanglement article. That came from the idea... I'm a big nerd and I like to know a lot of things, and that also counts for physics. I watch a lot of YouTube videos, science videos too, and a lot of them kept coming back to the concept of, okay, we have Einstein's, er...

Anne: Relativity. Yeah.

Wilco: Relativity, I still get it wrong. The theory with which you can eventually calculate what's going on in a black hole. And we have quantum mechanics with quarks and all that cool stuff. And when they try to bring these theories together, they have all these issues where things don't really match up. And that I found really interesting, because before I started my journey I had some information from the holistic-view perspective, the boardroom perspective, if you think from top to bottom. They are more interested in compliant data: the data they're using has to be valid, has to be compliant, and also streamlined, so that eventually they can report on it and use it internally to create policies and that kind of stuff. But if you talk to software engineers, we're really like, "yeah, we're missing data for calculating the correct energy consumption if we run software on this CPU or this GPU, and we're still figuring things out," at a very low level. On a boardroom level, though, they're probably not that interested in how correct the CPU or GPU figure was, as long as it is correct enough for the reporting.
So then the other thing it really connects to: at some point I came to the conclusion, okay, we can measure a lot of things, we can have all these metrics, but at some point the numbers are not going to change outcomes; decisions do. That was kind of a big moment for me. So at some point there has to be a sustainable decision-making process, from bottom to top or the other way around, where those worlds connect to each other. And that's what I tried with this article for the first time, a few months back: thinking, "okay, how can I connect these worlds together? What would the steps be?" The main outcome for me from this whole thought experiment was that at different levels of the organization you have different requirements for the data you want, different tools you probably want to use, and different reasons for using them. And yeah, that is still something I'm working on. One thing where I think everything comes together is a simple concept: you set up a big, organization-wide IT resource list, or even a resource list in general. In this list you have your mobile phones, your laptops, but also your cloud resources, and with infrastructure as code we could even generate those entries. If there are multiples of the same resource, you just put a count after it. Then for each resource you would say: you're using so many kilowatt-hours a year, this is the footprint, estimated, benchmarked, or real-time. But also: this is the security need, because the security people are in the same boat as us; they also really want more information. That would be a great starting point, because from that perspective you can eventually roll it up into information at the higher levels of the organization, and you can also connect it to the really nitty-gritty things at the bottom. But yeah.
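The organization-wide resource list Wilco describes can be sketched as a simple inventory structure. All figures, names, and the security field below are illustrative assumptions, not benchmarks.

```python
# Sketch of an organisation-wide IT resource list: every resource type
# (laptop, phone, cloud instance) with a count, an estimated annual
# energy use per unit, and room for other per-resource facts such as a
# security classification. Numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    count: int
    kwh_per_year: float    # estimated, benchmarked, or measured per unit
    security_level: str    # security teams want the same inventory

    def total_kwh(self) -> float:
        return self.count * self.kwh_per_year

inventory = [
    Resource("laptop", 120, 60.0, "managed"),
    Resource("mobile phone", 120, 7.0, "managed"),
    Resource("cloud-vm-small", 35, 350.0, "restricted"),
]

org_total = sum(r.total_kwh() for r in inventory)
print(org_total)  # annual kWh estimate across the whole list
```

A single list like this is what lets the same data roll up to boardroom reporting while staying connected to the per-resource detail engineers care about.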
Anne: I was very interested in your quantum versus relativity piece. I have my degrees in theoretical physics, but I will say, when I read it I thought, ah, I find this quite an interesting analogy, because of the two I was a fangirl of neither; I was always a fangirl of classical physics. And I'd say: remember that enterprises operate at the level of classical physics. I think it's actually a really good analogy that enterprises are thinking about "is my data compliant? Are my bills ridiculous? Are my systems staying up?" In the Maslow's hierarchy of needs, or the Maturity Matrix, the Green Software Maturity Matrix from the Green Software Foundation that Pini and I run together, the first thing you need to do is get your operations right. And that is classical physics. It's: are you paying too much? Are you over-provisioned? Do you have a whole load of machines that you're not properly monitoring and using anymore? Maybe they're security holes, so you've got security problems there, you've got financial problems there, and you've got waste there as well. You've got loads of carbon as a result.

Wilco: Yeah. That's the one I forgot on my list: waste. That also needs to be on there. But yeah.

Anne: I love relativity, I love quantum physics. But for most enterprises, I would say start with classical physics: really just focus on getting your basic ops good, do those thrift-a-thons. Your last article, which we're not going to have time to go over, but which I would strongly recommend people read, is about the alignment of FinOps and green software. And I would say that the alignment of FinOps and green software is your classical physics. And that's awesome: it works for every single enterprise, nothing fancy is required, and it's totally aligned with the desires and goals of your business.
No one is going to complain that you saved them money.

Wilco: Yeah, and I think that's very important, because a lot of people say, "why are companies not starting?" or "I have this idea, why is no one picking it up?" And I think we also have to be honest with ourselves: if you invest money in something, you want it to have a certain maturity, and especially if you buy something, you want to know that it's going to work and that it will have the value you expect. And I really believe that together, the whole field can bring it to that maturity level. The Maturity Matrix itself is a different story. But a lot of people keep focusing on "yeah, but if you do this, you have lower costs." I think the most important thing for getting a company interested in doing what we need to do for sustainability is to make the implementation more frictionless and lower risk. If you can do it in a way that doesn't have to be perfect, but is easy to implement and to use, I think companies will start doing more. I strongly believe in that.

Anne: I do. People really care about being green; they really do care about the environment. They just don't know that there are changes they can make through their work as software engineers that will make a huge difference. And if they do think there are changes, they tend to think those changes are misaligned with their company goals. Quite often they are, because people think, "oh, I'll just rewrite everything in Rust," and that would generally be misaligned with the company's goals.
But going through and making sure that you're not over-provisioned in your data centers, that you turn off stuff that's not in use, that you're being cost-minded: that's totally aligned with your business goals. And it's also aligned with being green. I think we really need to raise awareness of that.

Wilco: Yeah. And that's what I can really bring it back to. I mentioned time efficiency: something has to be fast, but you want to do it with the lowest possible energy consumption, and then the most important thing is: what is the value that what you're doing is going to bring? That's something we've struggled with for a while, because with agile we really tried to define customer value and business value. If we figure out those three together in sustainability in a better way, we can really make some jumps.

Anne: And with that, I think we'll need to end, because we have been talking for ages and it's been absolutely fascinating and I've really enjoyed it. So thank you very much indeed, Wilco. Where can people find you and get involved in your meetup communities?

Wilco: Well, mainly I'm active on LinkedIn under my name, Wilco Burggraaf. I try to post new content every two days. You can also look on meetup.com and find the group under Green Software Meetup Netherlands. And another thing: from January I'm starting my workshops, so if you're located in the Netherlands and you're interested, reach out to me.

Anne: Excellent. Thank you very much for coming on this episode. It's been really fun. And if anybody wants to contact me or chat to me: I also do workshops, which are not the same as Wilco's; I'd say we run complementary workshops. You can also reach me through LinkedIn.
And a final reminder: we didn't really talk through the resources for this episode, but they are good background for our discussion, and they're quite easy and pleasant to read. So have a look at the links to Wilco's posts, follow Wilco on LinkedIn, and read his articles; they're very, very good. Thank you very much, and I will see you all in the next episode. Goodbye for now.

Wilco: Bye.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing; it helps other people discover the show, and of course we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.