Does AI really use less carbon than a human?

First of all, what a stupid question.


Before we start: This publication is entirely independent, and publishes monthly articles about technology, culture and society along with occasional extra content.

Each of these articles takes a while to research and write, and while it’s enjoyable, it is time consuming and bears an opportunity cost. If you enjoy this content or find it interesting, and would like to support this publication, please consider subscribing for AUD$5 per month, or AUD$30 for a whole year. You’ll get access to articles early, and be able to directly suggest topics to talk about.

If not, please enjoy, and share with someone who you think may also enjoy this content.


Photo Credit: Brett Jordan from Unsplash

Ah, jaysus, here I go, talkin’ about AI again.

For those who aren’t aware, when I’m not writing witty-yet-digestible prose to a loyal, tolerant worldwide fanbase of dozens, I work in environmental design. More specifically, I work in the environmental design of buildings. One of the things that building developers are increasingly aware of¹, and interested in, is the carbon use of a building over its entire lifecycle. In the developed world, operational energy use has improved to the point where the carbon used in constructing a building represents a big enough share of its total carbon consumption that we should be thinking about minimising it. Considering where building energy efficiency was in the early 90s, that’s a pretty amazing achievement. Obviously, it’s not enough considering the way we’re still treating the earth like it’s a rotisserie chicken, but it’s still an achievement.

Life cycle carbon is a really thorny thing to calculate, though. You need to consider how materials were mined, how they were transported, how they were manufactured, how they were shipped to site, how they were installed, and how often they need to be replaced. And you need to do that for every material or product in your building, for as long as the building is expected to exist.

I have former colleagues who are on the boards that define what it means to measure lifecycle carbon. There are industry bodies like MECLA and NABERS figuring out the standards to use and methodologies for using them. There are scope documents and definitions and discussion of units and all sorts of stuff, and all of these documents are written and reviewed by serious people with in-depth, specific knowledge.

Turns out, though, that if you’re a particular breed of bonehead, you can just fabricate a bunch of stuff and call it a carbon analysis, because I found this paper (published in Nature², no less!) saying that, ackshually, AI uses 1,500 times less carbon than a person!

I guess this paper is the kind of mold-infested brick-headed thing that you think is a good idea to write when you outsource all your thinking to linear algebra.

Come along, we’re gonna dunk on some techbros.

AI nonsense.

I’m not going to do a line-by-line debunking of this paper. First of all, it’s really not worth it, because the work within is not serious. This is a poorly researched article; a blind swipe by a non-expert at a field where serious people are doing good work in an attempt to legitimise or minimise the harms that this technology is doing. I’m going to point out a few of the critical research failures, to provide a sample of the kind of wrongheaded thinking that the article shows, but the goal of this article is not to prove this particular paper wrong in isolation.

Instead, I want to talk about the attitude that proponents of generative AI systems seem to be operating under. This is broader than this one paper, but the thing that I’m seeing from GenAI people is a disdain for the expertise of other people.

And it sucks.

So, as I point out some of the (very funny) flaws in this paper, I want you to ask yourself why the authors didn’t approach, say, embodied carbon experts, or writers, or illustrators, or any of the people with actual knowledge in the areas their discussion touches on. Moreover, I want you to consider why someone felt the need to construct this poorly researched paper in the first place.

For transparency, when I’m talking about embodied carbon and lifecycle analysis, I’m going to be referring to BS EN 15978. This is the model used in construction, for buildings specifically. It’s got a diagram that looks like this (which I pinched from here):

To be clear, we need to get to “Cradle to Cradle”, yet we still suck at “Cradle to practical completion” in most cases.

Obviously, buildings are not the same as technological systems, so this diagram and methodology don’t map one-to-one onto how a software product would be scoped. However, the fundamentals are the same. There is an upfront stage (stage A), a use stage (stage B) and an end-of-life stage (stage C), and that structure is universal when trying to scope out life cycle carbon: figure out how much it takes to build, figure out how much it takes to maintain, and then figure out how much you need to throw out (there’s a rough sketch of this just after the list below). So, I concede that the methodology I’m more familiar with is not the same one used to scope tech. However, I think drawing a broad comparison here is still valuable for two reasons:

  • The errors in the methodology of the AI paper are more fundamental than getting into the nitty gritty about scoping. I can point to where they’ve failed to define things appropriately without being concerned about the specifics of the borders of scoping. If I tell you that I need a picture of a boat, and you draw me a picture of an aeroplane, we’ve got bigger problems than whether I meant a schooner or a sloop.
  • I’m not submitting my substack post to a fucking scientific journal.
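To make that stage structure a bit more concrete, here’s a minimal sketch of how a whole-of-life figure gets assembled. This is my own illustration: the A/B/C grouping follows BS EN 15978, but every category and number below is invented for demonstration, not real project data.

```python
# Minimal illustration of whole-of-life carbon scoping.
# The A/B/C grouping follows BS EN 15978, but every category and number
# here is an invented placeholder, not real project data.
lifecycle_kgco2e = {
    "A_upfront": {                       # product + construction stages
        "materials": 850_000,
        "transport_to_site": 40_000,
        "construction_activities": 110_000,
    },
    "B_use": {                           # maintenance, replacement, operation
        "maintenance_and_replacement": 300_000,
        "operational_energy": 1_200_000,
    },
    "C_end_of_life": {                   # demolition, waste processing, disposal
        "demolition_and_disposal": 60_000,
    },
}

total = sum(sum(stage.values()) for stage in lifecycle_kgco2e.values())
print(f"Whole-of-life estimate: {total:,} kgCO2e")
```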

With that out of the way, let’s talk about what I think is wrong with this article.

(Don’t) Consider the Human

The first thing that set alarm bells off for me when I skimmed this paper was the way they calculated the emissions of a human.

In the results section where they calculated the carbon impact of human vs AI writing, they have two methodologies.

In the AI writing section, there’s a discussion of how they’ve calculated the amount of carbon that the AI uses while generating a page of text. There’s a bunch of estimates there that, to be fair, do a decent enough job of scoping out upfront amortised energy use associated with training, followed by the energy use of system queries.
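To give a sense of the shape of that calculation, here’s my own back-of-envelope sketch of the amortised-training-plus-inference approach. The structure is the point: every number below is a placeholder assumption of mine, not a figure from the paper.

```python
# Sketch of "amortised training + per-query inference" carbon accounting.
# Every value is a placeholder assumption for illustration, not the paper's figure.
TRAINING_ENERGY_KWH = 1_300_000       # assumed one-off energy to train the model
PAGES_OVER_MODEL_LIFETIME = 1e9       # assumed pages generated before retirement
ENERGY_PER_PAGE_KWH = 0.003           # assumed inference energy per generated page
GRID_KGCO2E_PER_KWH = 0.4             # assumed grid carbon intensity

amortised_training_kwh = TRAINING_ENERGY_KWH / PAGES_OVER_MODEL_LIFETIME
kgco2e_per_page = (amortised_training_kwh + ENERGY_PER_PAGE_KWH) * GRID_KGCO2E_PER_KWH
print(f"≈ {kgco2e_per_page * 1000:.2f} gCO2e per page")
```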

Following that, there is a rather less impressive discussion of human writing productivity, which includes this hilarious quote:

An article in The Writer magazine states that Mark Twain’s output, which was roughly 300 words per hour, is representative of the average writing speed among authors. Therefore, we use this writing speed as a baseline for human writing productivity.

My brother in Christ, you couldn’t find an author more representative of contemporary writing practice than Mark Twain³? The dude died in 1910! I think it’s fair to say that the technical improvements we’ve made since then have potentially improved writing throughput. He wasn’t exactly sitting around with a MacBook and a copy of Scrivener.

Anyway, they then take our anachronistically exhumed author and do some back-of-napkin numbers to figure out how much carbon it takes to write. I’ll talk more about the source they use in the next section, but it’s bizarre to read the difference in stringency between the two calculations. AI is measured, in grams, to three significant figures. Humans, by comparison, are handed an annual estimate in the tons, which is then applied to Mark fucking Twain. Despite this off-kilter measurement of carbon intensity, that’s not the main problem with this analysis.

No, the main problem is this:

YOU DON’T INCLUDE HUMAN CARBON IN A CARBON CALCULATION.

This paper measures the consumption of AI, entirely autonomously and in great detail. It talks about the size of the database, the time taken for a query, all that jazz. In the world of this paper, AI simply springs fully formed from the ground, builds itself and then distributes itself to the platonic ideal of a querying body, whereupon it answers questions put to it by the aether. Bust out the ouija boards, folks!

Then we get to the human, and it mentions the power consumption of a laptop and the energy consumed by the human. Suddenly, the (suspiciously sizeable) impact of a human is folded in, and wouldn’t you know it, it’s just so much larger than what AI would consume.

I shouldn’t really have to say this, but the reason we don’t include the carbon that humans consume in carbon analysis is that it’s taken as a given that we still want the humans to be around even if they aren’t in that place specifically. If you take humans away from a system, those humans still exist somewhere, so you’re not actually removing human carbon in any meaningful sense. Certain books like The Population Bomb do posit that getting rid of people is the best way to address environmental issues, but I’m of the opinion that global genocide isn’t a great move. Call me a softhearted lib.

If we were to include the human in this equation (again - you wouldn’t), you would have to include the way a human interacts with the AI. How many words long is the query? How long does the human think before typing its query? How long does the human take to read the text that has been produced? Even when AI is being used, there are human interfaces with the program that are time consuming.

If you are going to include the human, you’d want a really robust source for how much carbon a human uses. Speaking of…

Read Your Sources

In the aforementioned part of this paper, the authors assume the following:

the emission footprint of a US resident is approximately 15 metric tons CO2e per year, which translates to roughly 1.7 kg CO2e per hour.

This is described as though it’s the amount of carbon a human uses per hour, and it’s thrown out without much comment. The source of this is here, and we’ll come back to it. First of all, let’s just consider this number:

1.7 kg of carbon dioxide per hour is a lot. Carbon exhalation is what humans do. We take all the caloric stuff in our bodies, we mix it with the water we drink and the air we breathe, and carbon dioxide is part of the stuff we breathe out. This is a lossy equation, which is why we have to eat, and why eating less means we will lose weight⁴. A simplistic calculation posits that the basal metabolic rate of a 25-year-old female human is about 1,550 calories a day. Another simplified calculation⁵ is that 3,500 calories is the equivalent of 1 pound (0.45 kg) of fat. So if you reduce your caloric intake by 3,500 calories below your total energy expenditure, you will lose 0.45 kg of weight.

You might see where I’m going with this.

Carbon doesn’t just come from nowhere. When we exhale carbon, or excrete it in other ways, we lose that mass from our body. In order for us to emit 1.7 kg (about 4 lbs) of CO2e per hour, we would be losing 1.7 kg per hour from our bodies. To avoid just losing that weight, we would have to consume a commensurate amount of calories, which we could express as a basal metabolic rate.

For this to be true, instead of a basal metabolic rate of 1,550 calories a day, our 25-year-old female subject would be burning roughly 14,000 calories per hour. That’s about 24 Big Macs, by the way, or 160-odd bananas if you don’t like junk food. Per hour.

I hope she’s hungry.
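If you want to check that napkin maths, here it is end to end, using the same simplified conversions as above (see footnotes 4 and 5):

```python
# Napkin maths: if a body really "emitted" 15 tonnes of CO2e a year,
# how much would it have to eat? Same simplifications as the text above.
CO2E_PER_YEAR_KG = 15_000             # the paper's per-capita figure, in kg
HOURS_PER_YEAR = 8_760
KCAL_PER_KG_OF_FAT = 3_500 / 0.45     # ~7,800 kcal per kg, per the simplification above

co2e_per_hour_kg = CO2E_PER_YEAR_KG / HOURS_PER_YEAR       # ~1.7 kg/h
kcal_per_hour = co2e_per_hour_kg * KCAL_PER_KG_OF_FAT      # ~13,000 to 14,000 kcal/h

print(f"{co2e_per_hour_kg:.2f} kg CO2e per hour")
print(f"≈ {kcal_per_hour:,.0f} kcal per hour, against a basal rate of ~1,550 kcal per day")
```

Depending on how loosely you treat the conversion you land somewhere between 13,000 and 14,000 calories an hour, which is the same absurd ballpark either way.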

Well may you ask, then: where does this data come from, and why is it so off-base?

To pay this paper the smallest modicum of respect, they at least sourced their data. It’s here, and it doesn’t take long to see where they fucked up. See if you can spot it!

‘Murica

That’s right: this graph is not measuring how much carbon a human uses by just existing - it’s measuring the CO2 emissions per capita of burning fossil fuels. All this graph is doing is taking the total carbon emissions of the entire United States, and dividing it by the population.

So yeah, the entire industrial complex of the United States economy does, in fact, use a shitload of carbon per capita. That’s not the same thing as the carbon intensity of a person, though; that’s the carbon intensity of everything. Which, ironically, includes things like AI datacentres.

Oh-ho! Hoist by their own petard.
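And to be clear about what that per-capita figure actually is: you can reproduce something very like it in one line from round numbers. The inputs below are my own approximations (roughly 5 billion tonnes of fossil CO2 a year across about 330 million people), not the paper’s or the graph’s actual data:

```python
# Reproducing a ~15 t/person/year figure from round numbers (my approximations,
# not the paper's or the graph's actual inputs).
US_FOSSIL_CO2_TONNES_PER_YEAR = 5_000_000_000   # roughly 5 billion tonnes
US_POPULATION = 330_000_000                      # roughly 330 million people

per_capita_tonnes = US_FOSSIL_CO2_TONNES_PER_YEAR / US_POPULATION
print(f"≈ {per_capita_tonnes:.1f} tonnes CO2 per person per year")
```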

This is probably the most egregious example of the fundamental misreading of data that I saw in this paper, and it’s emblematic. It feels like the people who wrote this either didn’t understand or didn’t want to understand their pool of references, and were just grabbing things that seemed supportive without digesting them.

And sure, I was in high school once; I can relate to a frantic scramble to find a citation that seems like it supports your thesis. But this isn’t high school. You can’t do this and still be taken seriously.

If we assume a spherical cow in a vacuum…

Physicists will often take shortcuts when describing information. The phrase “assume a frictionless spherical cow in a vacuum” is a joke describing the way physicists discard a lot of real-world information to make a problem tractable.

The important thing about making assumptions, though, is that the thing you are assuming has to either be grounded in reality, or be understood to be modifying the outcome. The spherical cow is the latter; assuming that the air conditioner will be turned on all the time in an office is the former.

One of the key assumptions in this paper is neither of these:

Assuming the quality of writing produced by AI is sufficient for whatever task may be at hand, AI produces less CO2e per page than a human author

No, dude, you don’t get to assume that AI is just going to produce work of sufficiently high quality! The outcome has to be appropriate, otherwise the work is not done. You don’t get to pat the glorified bogosort on the back for spitting out text; the text has to be good for the task at hand.

They handwave this by saying:

Both human- and AI-produced text will likely need to be revised and rewritten based on the human authors’ sense for how effectively the text expresses the desired content. Since this revision process exists in both human and AI-assisted writing, we feel it is beyond the scope of this analysis.

Which just moves the problem back a single step: assume that humans and AI are just as good as each other at editing work. Again, what evidence has been shown that this is the case? It is central to the argument, yet at no point do the authors demonstrate that AI-generated work is equivalent to the work of a creative human. They do mention in the limitations section that “writing an in-depth, heavily-referenced, original article on a niche scientific topic is currently beyond the capabilities of an AI”, but they a) assume that at some point AI will be able to do it, and b) assume that other tasks performed by AI are already human-equivalent.

Proponents of generative AI tools constantly talk as though the things being produced by the machine are, by dint of their method of generation, task appropriate, when it’s just patently not true. The banal word salads flooding the social media accounts of self-described entrepreneurs and the overlong emails that clog my inbox are proof of that, and the confused miasma of smoke-vendor marketing around AI demonstrates that generative AI is still a solution in search of a problem.

Show me an AI that is actually as creative, thoughtful and inspired as a human who has skills in the area the AI is trying to replicate, and then we can talk about carbon.

Better yet, don’t.

What’s the Point?

What we’ve seen, then, is an article that is fundamentally incorrect. But what’s the endgame here? Why even engage in research like this if it’s so shoddy?

Generative AI companies are very busily trying to apply the “disruption” model to pretty much anything they can get their hands on. From illustration to writing to coding and everything in between. The arrogance it takes to walk into an industry and declare that you’ve automated all their processes and solved all their problems is staggering, and it’s happening everywhere right now. You can’t so much as log on to LinkedIn without seeing some wet towel of a human being bloviating about the need to adapt or die before the oncoming AI revolution.

But it doesn’t work, because people with expertise can tell. Authors can spot AI writing from a mile off, because they’ve honed the craft of writing. Illustrators can spot AI art a mile off, because they’ve honed the craft of illustrating. Coders are constantly having to deal with the mess that results after a session of “Vibe Coding”.

And this is why people with barely more than a passing knowledge of how to do a carbon analysis can look at an AI bro’s attempt at it and immediately clock how laughably bad it is.

Because people who have expertise in a field are better at that field than the robots who don’t have that expertise.

I could go on and on about how bad this paper is, but it kind of doesn’t matter. The reality is, this is not a paper that will be taken seriously by the people doing the work of reducing carbon emissions, because they’re the ones with the expertise.

No, this is an article for AI sycophants to wave in front of skeptics, likely without having read beyond the headline themselves, or for genAI companies to be able to drop into quarterly reports and attach green stickers next to.

It’s in-group messaging, intended to corral support from people who are already convinced. There are a lot of people who are already aboard the AI hype train, but the voices shouting it down have got specific issues that proponents need answers to. Papers like this one provide a rebuttal. It’s not a good rebuttal, as my discussions have shown, but it’s a rebuttal nonetheless. And if they are able to loudly wave papers like this in front of a public that lacks the esoteric knowledge discussed therein while experts are distracted by doing actual meaningful work, then the court of public opinion shifts an iota toward AI being a good thing, actually.

But it isn’t. AI is still a stupid, dangerous technology being implemented with thoughtless idiocy. It lowers people’s ability to think critically, clogs up the internet with meaningless bilge, redirects water infrastructure away from crucial housing developments, can’t do basic maths, drives people to psychosis, and the end goal of its creators is to replace human labour and destroy livelihoods for the sake of profit margins. And it does it all while not achieving the things it claims to achieve, and while being operated by people who don’t know enough about the things they’re generating to know better.

D minus, and if I catch you using ChatGPT in my class again I’ll send you to the faculty head for discipline.


Thanks for reading. If you find this valuable, please consider a free or paid subscription, or share this with friends and family who may find it enlightening or useful. See you again soon.


  1. By which I mean, “one of the things developers are being forced into by legislation”

  2. More specifically, and more importantly, it’s Nature’s “Scientific Reports” section: a high-volume, pay-to-publish subsidiary of Nature that claims to be peer-reviewed, but based on some commentary I found, there are some issues there.

  3. For some reason, Mark Twain is an author whose name gets wheeled out to bolster ideas a lot, including an oft-misattributed quote: “Twenty years from now you will be more disappointed by the things you didn’t do than by the ones you did do. So throw off the bowlines! Sail away from safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover!” This doesn’t particularly sound Twain-ish, and with good reason: it’s from a book written in 1990 by an author named H. Jackson Brown Jr, who attributes the phrase to his mother. It seems to have been popularly misattributed in the late 90s by an ad campaign (what else) for a sailing company (what else). Sorry, Mr Brown Jr’s Mum, but it appears your words are only worth a damn if a rich man said them a hundred years ago.

  4. There’s lots of complicated science here and I’m doing a simple calculation to demonstrate something, don’t @ me.

  5. Again there are complexities around this. I’m making a point, and the complexities wouldn’t go near the numbers that would be required to make the point I’m making untrue.