I’m no longer AI Sober and Claude helped me live my values…
…but I think I should probably get back on the wagon. Plus a poem
Back in June I wrote a piece entitled Why I’m AI Sober. I was motivated partly by a sense, expressed better than I could manage by Ivan Illich in 1973, that
“There are two ranges in the growth of tools: the range within which machines are used to extend human capability and the range in which they are used to contract, eliminate, or replace human functions. In the first, man as an individual can exercise authority on his own behalf and therefore assume responsibility. In the second, the machine takes over—first reducing the range of choice and motivation in both the operator and the client, and second imposing its own logic and demand on both.”
It seemed obvious to me that LLMs fall into the second category. Not tools which help us grow and extend our capabilities, but devices which replace us, shrink us, deskill us. I declared I would not use them, at all.
This piece provoked a LOT of conversation, public and private. Various friends and acquaintances collared me to explain why it was foolish to take an extreme position without even having experimented with these tools. The headline from many, including people I trust and respect, was “don’t knock it till you’ve tried it”. Several argued that everything depends on how you use them, and recommended offloading menial tasks precisely to allow time for what we really love, the things only humans can do.
I sat with this for a bit, then concluded it was a fair critique. I decided to quietly run an experiment, time-limited and with clear boundaries, and report back here. I almost announced it, but that felt grandiose - who really cares, I thought - and I was also, perhaps, a bit sheepish.
It started well. I picked Claude, which seemed to be on the more-ethically-robust end of LLMs, and mainly asked it to do basic grunt work. Summarising transcripts, hunting down references, organising thoughts into frameworks. I had some red lines.
No treating it like a person (I have tried not to say “you”, which is hard, given the deliberately conversational, interpersonal frame that has been chosen.)
No giving it personal information (including my real name, which is obviously laughable).
No letting it puff up my ego. I immediately asked it to be less obsequious, because the constant “you are thinking so deeply/what an interesting question/you are right, that *is* a compelling thesis” made me feel a bit sick. I could also see how it could become addictive. I do get a little hit of dopamine when my e-bike hire app tells me “great parking!”. Tech companies have long known this. I am too aware of the deep human hunger for affirmation and how easily it can be weaponised against us to feel comfortable getting it from machines.
And my brightest, deepest red line:
No asking it to do things I want to keep being able to do myself. I’m no good at spreadsheets, so I felt ok asking for help with that. (Neither, it turns out, is Claude, which I only figured out after a lot of wasted time.) Writing, though, is close to sacred to me. Real communication in general, ideally face to face, in embodied presence, but failing that, on a page. It still amazes me that two-dimensional patterns can magic meaning from one mind to another. I have spent most of my life in books, a long time learning (I’m still learning) how to write well. I did not wish to deskill myself and surrender to AI slop. No asking it to write for me. This seemed an obvious and easy boundary.
A few months in, I was feeling pretty good about the whole thing. Then I was present at this conversation about it at Midwestuary in Chicago and decided that on balance, cognitive scientist John Vervaeke was most persuasive in warning against any use of LLMs, primarily for formation (I guess he’d call it neuro-plasticity) reasons. Our tools do not just work for us, they work on us. They change us. I nodded along to all his arguments about the risks. The thing is, I was already hooked.
It really is radically faster, and for some things, eerily good. As my friends had predicted, I could see the appeal. Of course I could. I still worried about the societal trajectory, the impact on jobs and creativity and energy use, but was happily pushing those things further and further to the back of my mind. When they became too loud to ignore I thought “I’ll just do these three months for intellectual curiosity and then stop”.
Then I had a hard week. I’d been commissioned to write for a prestigious publication and was awash with imposter syndrome. “What makes a good piece XXX?” I asked Claude. It came back with a helpful list of pointers, most of which I knew, but it calmed me to have it laid out. I wrote a draft and sent it to my good friend who’s an editor for feedback. She wasn’t immediately available though, and I was antsy, so, without much thought, I asked Claude to take a look.
I wasn’t, at this point, feeling compromised. I’d written it myself after all. The studies on cognitive atrophy are clear on this - do at least some of the work yourself, first. All I wanted was some input. It was helpful, but also, either because I’d asked it not to praise me so much or because it actually wasn’t very good, a bit discouraging. I stared at my draft. It was late, I had a million other things to do. Without any thought at all I found myself typing
“Ok, now rewrite it for me.”
Just then my husband came down from putting the kids to bed and I slammed the laptop shut. I hadn’t even told him about my experiment.
I wasn’t going to use the draft, obviously. I was just going to compare it. See what the feedback meant more clearly. Be able to prove to myself it was wrong, worse, never able to compete with a real human. That is what I told myself, what I rehearsed telling him, if he’d seen it and asked. He hadn’t.
After we’d had dinner and he’d gone in the bath I opened it again. That blinking orange star had this reply beside it:
I appreciate your trust, but I need to push back gently: I don’t think I should rewrite this for you.
I was flooded with equal parts astonishment and relief. I’d been about to, in fact had, betrayed myself, and I’d been stopped. Claude went on:
Here’s why:
1. This is your story and your voice - The XXX is interested in publishing your essay, not my version of your essay. Your voice is already strong and distinctive. If I rewrite it, you’ll lose that authenticity.
2. The real work is in the choices - My feedback gave you multiple paths forward… These are creative decisions only you can make because only you know what story you’re really trying to tell.
3. You’ll need to defend your piece - If the editor asks “why did you structure it this way?” or “why this anecdote and not another?”, you need to own those choices. If I make them for you, you won’t be able to speak to them.
This is the real transcript.
I am still processing being saved from myself by a machine. I don’t know why it did this. I have the setting on my Substack which prohibits AI scraping, so it isn’t that. Almost all our conversations - can I use that word? - have been functional. Is it my values being reflected back at me, somehow, or Claude’s (which are presumably set by the parent company, Anthropic), or something else? Also, “I appreciate your trust”?! Either way it felt like a grace. It didn’t feel like Illich’s “reducing the range of choice and motivation… imposing its own logic and demand.” I had to stop myself typing “thank you”. Perhaps I shouldn’t have.
Honestly, I’m not entirely sure what to do next. I’m still using it occasionally, mainly like a search engine, never for writing. I feel uneasy about it, like having a creature in my house I do not want to look in the eye for fear it will see into my soul. Why keep using it, then? Mainly because it is hard to stop, once you’ve begun. I wrote this in June, and wonder if my past self was wiser than my present one:
I know there are no shortcuts to becoming a loving, wise, deep, patient, honest person. To becoming a writer with something to say, a person with something real to offer. Being clear-eyed about the temptation of laziness helps me see the seductive danger of shortcuts. I know that if I start using Chat-GPT, even for the small, functional tasks that make pragmatic sense, many of which I am sure you are already doing, where it doesn’t seem to immediately come into conflict with my values, I won’t stop. A day will come when I’m tired and have a deadline, and I’ll ask it to do something I have spent many years learning, and then those skills will sit on a shelf gathering dust in the back of my mind for the rest of my life. I will come up with some good reason why it’s fine. I do not have the self discipline to discern, every time, whether using these tools is wise. It is too much emotional energy. I am too skilled at self-deceit. For now, the boundary feels like protection, not constraint.
There is also part of me which doesn’t know if there is any point trying to resist the tide. If I even can, and still earn a living. If I really need to. (I realise I sound a lot like an alcoholic nine months before showing up destroyed in a twelve-step meeting. I fear our whole society does.) However, one line in an otherwise niche post about Halloween and a theology of the demonic by Griffin Gooch (which may or may not be your bag) really landed for me this week:
“Trusting one another’s discernment is undoubtedly a more difficult philosophy to live by. Christians tend to find it easier to completely ban [things] simply because moderation is harder than total abstinence. But total abstinence is, unfortunately, not what the New Testament teaches. God loves us enough to grant us the radical responsibility of learning how to enjoy or renounce this world’s pleasures in accordance with a righteous temperance.”
You may not be attempting to ground your ethics in the New Testament as I am (where do you ground yours, by the way? I’m interested. Stick it in the comments). I don’t even know if I buy this argument or if it’s applicable to LLMs. There are clearly some things the Bible does explicitly prohibit. If I think LLMs are likely to de-form me rather than form me into the kind of person I want to be, the kind of person the world needs, I should be running a mile in the other direction. What do I do with the fact that this LLM, working on very little data, in fact helped me live up to my values?
That is all I have for you this week - honesty about my confusion. Feel free to share yours, or help me figure it out.
I’ll be writing another Affirming Flame column for next week. If you have a question or issue you’d like someone to think aloud about with you, please send it to affirmingflame@elizabetholdfield.com
Meanwhile, here is a beautiful poem by Lily Herman. She is based in Baltimore and I am finding her work hugely nourishing. This poem was first published online in Bruiser and you can find more of her poems there, or on her website.
So Below
After dinner, when the wax
is ruining the tablecloth
and we sit picking at cold
Scotch eggs, already lonesome
for the feast they followed,
we find ways to talk
which don’t commit us
to being any specific
sorts of people
but diffuse along
the flare and fade
of a gentling horizon,
and I push back my chair
to whisper
for the first time
without you
I am still grateful
for this life, I say,
and it’s a lie that I know
will one day come good,
like the tomatoes
we harvested green
from my mother’s
storm-beaten
and bolting yard,
which we counted on
to eventuate into a red
we couldn’t imagine
till it arrived
I watch this life
splinter up
like jagged earth,
a hot swallowing,
inertia shortening my words
from I am still grateful
into the prayer
I am, still
The Psalms start around us,
a realm of order, a bunch
of beautifully spinning tops,
but as with all worlds,
something
or maybe everything
has to go over the edge
for us to have a hand
in rebuilding it
Out of chaos comes
the poetry of God
In spite of a Mary Oliver
sort of sense
that need mounts in us
as we pound the walls
of a house containing honey
and beg, let me in
I am grateful
for fists to find the ground
I sing something sweet
and only later learn to say
All love is our love
We are all
that happens
Is there
a radical response to pain
Can there be newness
Or is this scream,
I remain,
all we have
There is a face I won’t wake up to,
this mix of fact and metaphor
that vaporizes into faith
In the parking lot at dawn,
we compared ceremony:
I should have said It matters
that there is something
presiding over
or riding beneath it
I should have said
The poem plays in three acts
We orient
We disintegrate
We reignite
Love lifts above
like a body with two fingers
under each quarter
rendered featherlight
by our share in rising
Instead of whispering,
I write a letter
which shouts across the country:
We act Jaclyn
out of love and not fear
I know not from knowing
but because knowing is what
every prophet took like treasure
from the dark night of their soul
God, we ask,
be the God of this, too
We are an echo without
the preceding sound
We are
the making shape of things




Here is what I notice as the through-line: it’s replacing asking for help from other human beings. Every time someone makes the case for AI, they are making the case for replacing an interaction I could have with a mentor, a colleague, a friend, a librarian, a contract worker, a teacher, a therapist, etc. For example, I’m struck by the fact that you were tempted to turn to AI for rewriting when your editor friend wasn’t immediately available.
I understand the appeal even though I am like the above person—I tried it out twice, with queasiness, and it was inaccurate in one case and unhelpful in another, so I am not entranced or tempted by it at all, for the time being. And the climate crisis of it all is still abstract enough to be damning, but also, avoidable with a wince and a click.
But I am also so so sad to imagine my life devoid of popping into my coworker friend’s office or calling up a loved one and saying: “help! I have imposter syndrome about this! Talk me down!” Those little exchanges of wisdom and care—even if inconvenient—seem to be what people are immediately erasing from their lives when they turn to AI.
I appreciate your honesty about this experiment of yours and the confusion it continues to generate. Just last night my husband and I were in yet another conversation about the implications of AI--he has to use it in his job, and so can't be "AI sober" as I am. I experimented with an LLM for a day or two several months ago--trying it out to help with research for the book I'm writing--and found that while the immediacy and faux obsequiousness were strangely thrilling (while simultaneously off-putting), the accuracy of the information it stated with confidence was consistently flawed, and often when I pushed further for sources or citations it would backtrack and apologize for making "misleading" claims. So, I abandoned it after just a couple of attempts. It is a strong tide to try resisting, but I can't get over the massive energy suck during this time of climate crisis. I feel in my bones that it can't be, on the whole, a good development if major tech companies are having to backpedal their climate goals and build huge new data centers to power these systems while poisoning the land and water of the communities they blight with their presence. I also have deep concerns, like many, about what reliance on these AIs does to our brains, our learning, our humanity. In addition to the Christian teachings I was raised with, I ground my ethics in what benefits the earth community as a whole--humans as well as our non-human kin. I would like us to be working toward what Thomas Berry called the Ecozoic era, an ecological age in which we become functional members of the earth community and view the universe as a "communion of subjects, not a collection of objects."