Hello! I’m back to writing after some unexpected delays; last year dropped a trifecta of personal + family + health challenges on me, so I was busy learning some human lessons. (TL;DR: things can always get much, much worse; remember to feel the joy and acknowledge the privilege of having an able body, a supportive partner, or a nurturing family!) Shout-out to the empathetic friends in my life, and thanks to all of you readers for sticking around 🖤🖤
There’s also just a lot going on, huh?

BUT ANYWAYS: on my AI + product strategy POV research project, I did 6 more great interviews exploring the gaps that came up in the first round (grounded theory research style). I’m working to synthesize those, but I thought I’d share some initial thoughts and links.
(After this project is out the door, I’m going back to the podcast format because it’s much easier and more satisfying to share things as I go. But these bigger questions required a more proper qualitative research process.)
The biggest blocking question for me and AI is ethics.
Can AI be used ethically?
I believe that AI in some form is here to stay, based on the huge investment into and decent output from the tools — even though companies are still struggling with the business models.
And the AI industry needs diverse activists and user advocates to help shape its development, not just progress-fixated scientists and businesspeople. Protesting outside the system is not always the best way to change it — we also need reform from inside.

So it’s more about HOW we engage with the tech, not IF.
Where do we draw the line? For each technology, my mental model is a set of stage-gate questions.
For me, the decision is not just “AI” but specific technologies and providers. I’m currently avoiding AI image generators, but starting to embrace some of the LLM tools (mostly Claude) for specific low-risk, high-value scenarios.
My two biggest ethical concerns are climate impacts and creator impacts. These are the stage-gate questions preventing me from accepting or evangelizing many of the tools.
Let’s start with climate.
1. Is the system environmentally sustainable?
On the environmental side, I’m curious about two things: the resource cost of AI, and the comparative scale of that cost.
Driving cars and flying to Europe are not Earth-friendly, but people do those things all the time. Yes, we want to avoid net-new resource hogs with unclear value. But some of the value is clear, and some of the resource usage can be minimized.
So. How much water and electricity is AI using, both now and in future projections?
I’ve heard very conflicting information. We know that the initial training of an LLM is hugely expensive, but querying the trained model is far cheaper per use. Some negative reports:
UNEP: “AI has an Environmental Problem”
"A request made through ChatGPT, an AI-based virtual assistant, consumes 10 times the electricity of a Google Search, reported the International Energy Agency."
WaPo: “A bottle of water per email: the hidden environmental costs of using AI chatbots”
“A single 100 word email generated by an AI chatbot using GPT-4 requires 519ml of water, more than 1 water bottle.”
HBR: “The Uneven Distribution of AI’s Environmental Impacts”
“In many cases, adverse environmental impacts of AI disproportionately burden communities and regions that are particularly vulnerable to the resulting environmental harms”
Gary Marcus: “Eric Schmidt’s Risky Bet on AI and Climate Change”
“The case that AI will do serious harm to the environment if we continue on the current path is actually much stronger. The only potent trick anyone seems to have is scaling, and the more scaling the big tech companies do, the more power they will consume.”
But then you see rebuttals or additional information like this:
Hard Fork: “A.I.’s Environmental Impact”
The stat about each LLM query using a pint of water is an extrapolation from Dr. Sasha Luccioni’s older work; the actual number depends on many factors (and she dislikes accounting at the individual level rather than the systemic one)
There’s not enough data to measure the environmental impact of AI, because none of the companies are releasing it (though that secrecy is itself not a great sign)
RAND: “What DeepSeek Really Changes About AI Competition”
“Its V3 model matches GPT-4's performance while reportedly using just a fraction of the training compute.”
If this is true, and future training costs aren’t as horrible, it changes the math in these discussions significantly
Andy Masley: “Using ChatGPT is not bad for the environment”
“It is extremely bad to distract the climate movement with debates about inconsequential levels of emissions.”
The most shocking article to me: it argues that the carbon cost of individual AI use is minuscule next to a Zoom call, a 4K video stream, a flight, or the leaking pipes in the US, small enough to ignore at the individual level
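To make that scale argument concrete, here’s a tiny back-of-the-envelope sketch in Python. It combines the IEA’s 10x-a-Google-search ratio quoted above with the oft-cited (dated, and contested) ~0.3 Wh-per-Google-search figure, then compares the result against something we can compute from plain physics: boiling a liter of water for tea. Every AI-side number here is an estimate, not a measurement.

```python
# Back-of-the-envelope: per-query AI energy vs. boiling a kettle.
# All AI figures are contested estimates pulled from the articles
# above; treat results as order-of-magnitude only.

GOOGLE_SEARCH_WH = 0.3   # Wh per search; Google's oft-cited 2009 figure (assumption)
CHATGPT_MULTIPLIER = 10  # IEA ratio quoted in the UNEP piece above

chatgpt_query_wh = GOOGLE_SEARCH_WH * CHATGPT_MULTIPLIER  # ~3 Wh per query

# Physics baseline: heating 1 L (1 kg) of water from 20 C to 100 C.
# Energy = specific heat (4186 J per kg*K) * mass * temperature delta,
# converted from joules to watt-hours.
kettle_wh = 4186 * 1.0 * 80 / 3600  # ~93 Wh

print(f"One ChatGPT query: ~{chatgpt_query_wh:.1f} Wh")
print(f"Boiling 1 L of water: ~{kettle_wh:.0f} Wh, "
      f"or ~{kettle_wh / chatgpt_query_wh:.0f} queries")
```

By these (very rough) numbers, one kettle boil costs roughly thirty queries’ worth of energy, which is the kind of individual-level comparison the Masley piece leans on. As Dr. Luccioni notes, the systemic picture is a separate question.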

Who do we believe? It’s all overwhelming to a layperson. Many of the parties who understand the field well enough to judge have business interests compromising their perspectives, and the landscape is constantly evolving.
So here’s where I’m currently at.
What do we do?
For people building or using this technology:
Reduce unnecessary GenAI usage. Don’t use it like a calculator! Don’t auto-load it on search results pages! If you don’t find it useful, don’t use it! (ht Dr. Sasha Luccioni on Hard Fork)
Make algorithms more efficient. Measure the system! Reduce its demand for energy! Reuse components where feasible! (ht UNEP; see their other recs if you’re at the policy-making level. One way to start measuring is sketched just after this list.)
Focus on the corporate/governmental level. (See the UNEP guidelines.) The phrase “carbon footprint” was popularized by BP to focus people’s attention on themselves, not the big polluters. Yes, it’s nice to do the actions that convey and reinforce our ideals, like recycling. But the SCALE of change we need is systemic, and requires collective action.
Subscribe to scientists and journalists. First off, Gary Marcus; you’ll learn far more from him than from me. He’s a consistent and informed AI skeptic. Also Casey Newton for a more moderate or positive view. I’ve learned a lot from watching them argue in public. Subscriptions help good journalism continue to exist.
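On the “measure the system” point, open-source tools make a decent first pass. Below is a minimal sketch using the codecarbon Python package to wrap a workload and estimate its emissions; the package infers hardware power draw and regional grid carbon intensity, so treat its output as directional, not exact. (The workload function here is a stand-in, and the tracker options shown are assumptions; check codecarbon’s docs for your setup.)

```python
# Minimal sketch: estimating the CO2 emissions of an ML-ish workload
# with the open-source codecarbon package (pip install codecarbon).
# Output is an estimate based on power draw + grid carbon intensity.
from codecarbon import EmissionsTracker

def run_workload() -> int:
    # Stand-in for your real training or batch-inference job.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="llm-experiment")
tracker.start()
try:
    run_workload()
finally:
    emissions_kg = tracker.stop()  # kg of CO2-equivalent (estimate)

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Even rough numbers like these let a team compare runs over time, which is the real point of “measure the system.”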
I would love to hear other great sources you’ve found. And any other thoughts!
2. Was the source material ethically acquired?
I’ll have to finish this part in the next post. But I hope licensing models can create a win-win situation, and that public-domain models like Public Diffusion will overtake the exploitative, career-ending ones. That’s definitely not what’s happening yet.
So, can AI be used ethically?
Environmentally, I think yes. But let’s keep both of those stage-gate questions in our product strategy docs, and dig into the creator question too.
If it’s actually no — well, Bill Gates predicted that the 3 industries remaining strong in this new world will be AI specialists, energy, and biology. So off to the wind farm / science camp I go.
Thanks for reading.