46 Comments
Kathryn L-B

Here is what I notice as the through-line: it’s replacing asking for help from other human beings. Every time someone makes the case for AI, they are making the case for replacing an interaction I could have with a mentor, a colleague, a friend, a librarian, a contract worker, a teacher, a therapist, etc. For example, I’m struck by the fact that you were tempted to turn to AI for rewriting when your editor friend wasn’t immediately available.

I understand the appeal even though I am like the above person—I tried it out twice, with queasiness, and it was inaccurate in one case and unhelpful in another, so I am not entranced or tempted by it at all, for the time being. And the climate crisis of it all is still abstract enough to be damning, but also avoidable with a wince and a click.

But I am also so so sad to imagine my life devoid of popping into my coworker friend’s office or calling up a loved one and saying: “help! I have imposter syndrome about this! Talk me down!” Those little exchanges of wisdom and care—even if inconvenient—seem to be what people are immediately erasing from their lives when they turn to AI.

Emily Bazalgette

So true, I was thinking about this last night. My instinct whenever I need advice is to turn to friends and colleagues. I think AI is replacing this vulnerable and uncertain act (sometimes people don't reply! sometimes you don't want to hear confronting truths! sometimes their advice is bad!). Instead here is this relationship simulacrum, always available, stuffed with all the knowledge of the world, ready with reassuring therapy-adjacent platitudes. That must be so seductive. The whole dynamic fits so well into Fully Alive's thesis about choosing the hard work of tending to maturation and soul.

Elizabeth Oldfield

Yes. I noticed that after.

Kathryn L-B

I will also add that I so appreciate this article of yours and will probably share it with people in future discussions about AI, because it is so rigorous and vulnerable at the same time. Thank you for sharing!

Sarah Rose Nordgren

I appreciate your honesty about this experiment of yours and the confusion it continues to generate. Just last night my husband and I were in yet another conversation about the implications of AI--he has to use it in his job, and so can't be "AI sober" as I am. I experimented with an LLM for a day or two several months ago--trying it out to help with research for the book I'm writing--and found that while the immediacy and faux obsequiousness were strangely thrilling (while simultaneously off-putting), the information it stated with confidence was consistently inaccurate, and often when I pushed for sources or citations it would backtrack and apologize for making "misleading" claims. So I abandoned it after just a couple of attempts. It is a strong tide to try resisting, but I can't get over the massive energy suck during this time of climate crisis. I feel in my bones that it can't be, on the whole, a good development if major tech companies are having to backpedal on their climate goals and build huge new data centers to power these systems while poisoning the land and water of the communities they blight with their presence. I also have deep concerns, like many, about what reliance on these AIs does to our brains, our learning, our humanity. In addition to the Christian teachings I was raised with, I ground my ethics in what benefits the earth community as a whole--humans as well as our non-human kin. I would like us to be working toward what Thomas Berry called the Ecozoic era, an ecological age in which we become functional members of the earth community and view the universe as a "communion of subjects, not a collection of objects."

Caroline Ross

All these (very interesting) issues about the details of using AI are immaterial. Every single AI search / question / use uses a bottle of pure drinking water for cooling, produces toxic sludge, and pollutes and despoils land and air, specifically poisoning low-income and marginalised people, and wildlife, especially in areas where folk don't have the clout to oppose it. It's not about us and our creativity. It's a way to amass data for military and state use, and a surefire way to trash the rest of the planet and use all remaining energy.

To me, it's the private jet of internet use.

https://www.politico.com/news/2025/05/06/elon-musk-xai-memphis-gas-turbines-air-pollution-permits-00317582

Elizabeth Oldfield

Thanks Caroline, I appreciate the bluntness of that last line

Caroline Ross

I have been hanging out this week with some like-minded people whose old family cabin is about to be next to a huge data centre in W Virginia, so excuse the bluntness. Their distress and anger at being powerless moved me. I have to work out a way to learn to use Linux and get away from Microsoft, Google and Meta products (hard, as Instagram is the basis of most of my art teaching livelihood), but they all use AI. So I am absolutely as implicated as anyone else. And besides, Substack help is AI... Thanks for writing about all this.

Anthea Lawson

'private jet of internet use' - that is inspired. In addition to the private jet aspect, I am conscious of all of the authors whose work was ripped off in Anthropic's training of Claude - https://www.bbc.co.uk/news/articles/c5y4jpg922qo - and who have now won their settlement - for what it's worth, horses and stable doors and all that. Thank you for being so open about your experimenting Liz, it feels important to have this kind of discussion out in the open. You describe the slippery slope brilliantly.

Vic King

Yep, it's uncanny. Generative AI gives me the same kind of creeps as certain baby dolls... close enough to be mistaken for human at a side-glance, but the longer you look, the more disturbingly different. It's a simulacrum, but something's off...

This line caught in my craw: "There is also part of me which doesn’t know if there is any point trying to resist the tide." I feel ya, and yet... with AI, as with all tech, I'd encourage you to resist the rhetoric of inevitability. In our tech-drunk moment, complete abstinence and principled temperance can both be ways of resisting. Paul Kingsnorth calls 'em being a 'raw' or a 'cooked' barbarian.

Don't just do something, stand there!

Elizabeth Oldfield

Thank you! Yep yep yep

A. H.

Thank you for sharing. My thinking about the shaping nature of technology is summed up in the following personal experience:

In my early 20's, I lived in Germany for one year with my partner, who was on an exchange program. I wasn't working, knew very little German, and only had a morning German class to keep me busy. We had a home phone and basic internet...and that was it. All of my information about what was happening around me came from wandering around the city and asking people questions in my broken German (because it was a university town, a lot of people would kindly switch to English for me), looking at student bulletin boards and translating German with my limited skills, and reading emails from the exchange program about student trips and other opportunities. After a period of depression and working through feelings of uselessness, I found myself spending much of my time just wandering around the city and surrounding nature on foot, driven only by curiosity (what's over there? if I take this road, will it lead me to X neighborhood?). I knew nothing of German brands or public transit, so I had to read all of the signs at the local stops and think through what would take me to the right street, and just pick whatever looked interesting in the store. My "patch" in that city was fairly small - mostly concentrated within a half-hour walk or 20-minute streetcar ride from my apartment - but I had poked my nose into many places. I knew where the good food was; I got to be delighted when there was a street fair (and distressed when there was a public holiday, all of the stores were closed, and I had no groceries). I didn't know many people, but I knew the place well, and 15 years later I have yet to feel like I've ever known any other place as well - even places that I have lived for 8+ years.

I went back to visit the same town in Germany in my early 30's with a smart phone. It took less than a day for me to want to leave the phone at the place where I was staying. I found I couldn't experience the place in the same way I had before. It made me feel like a tourist in a place that was in my bones. Sure, it was "easier" to look up directions and know precisely which streetcar I needed to take to get from where I was to where I wanted to go, ignoring all the things along the way, but I knew the shape of that city, its neighborhoods, and its squares. I wanted to look at a map and associate each stop with the physical place surrounding it, not blindly follow the directions of the computer in my pocket. Sure, I could find "the best" restaurants by looking at their ratings, but then I would lose the fun of walking up to a place and assessing the menu and the surroundings and how many people were there and being surprised and delighted by trying something on a menu that I only half understood and learning what it really was.

I use technology a lot in my job now. I work with graduate-level students looking to go into business, so I cannot do my job without knowing at least some of what they can do with AI and what they need to be able to do with AI. I also work with students who increasingly struggle to be ok with making decisions for themselves and living with the consequences. As individuals who have been high-achieving all of their lives, they were already shaped to expect certain levels of success by the people and institutions around them, but now they are additionally adding on a layer of expectation that the answers are easily found. After all, most of these students have lived their entire lives only needing to do a quick Google search to find the best restaurant, the best experience, the best car, the best snack - and to be able to quickly and seamlessly access the best through Amazon or Uber or GrubHub. This frictionless life has made them that much more easily pushed into despair when the answer is hard or when they are faced with making choices that could lead to a less-than-best outcome. How much easier, then, to put more and more decision-making and potential struggle onto AI?

And I think back to myself in my 20's. If I had that much of a mental struggle at the beginning of my time in Germany, feeling like I couldn't meet any of the performance goals I had set for myself in work or school - how much more stark would that have been if I had had the same access to easy answers that I do now, with only the maturity I had then?

Bill Merkel

More questions than answers.

What is its motivation?

How important is winning your (our) trust to forwarding that motivation?

What information did you provide the pattern engine to guide it to a trust building response to you? (More than you think at first glance I suspect.)

How far did it get winning your trust, and how fast? Why?

What do we sacrifice when we surrender even the mundane research? Is there more benefit there than we perceive?

The Great Recession was caused by the collapse of securities that were rated triple-A. According to my sister, who based on that rating advised many clients to invest in those securities, staff at the rating organizations confessed they gave that rating because they didn't understand the securities and therefore had no reason not to give it. That did not end well. I've become suspicious of systems where even experts cannot provide a simple explanation of how they work. My red line is assisted search, because I don't think we'll know the real cost of using these things for another 3 to 10 years.

Philip Harris

If it looks like a bubble, talks like a bubble, has an unexplained rationale, it probably is a bubble.

See Caroline Ross's comment above. I understand the US government is not bothered by commercial retail use, by AI systems taking out a white-collar demographic, or by doubling the electric grid's capacity, but is rather taking a bet not just on the AI arms race but also on locally generated geothermal electricity, anywhere that suits. Does the 'Machine' need dispensable people that much?

Suzanne Angela

Lily Herman’s beautiful poem helps explain what is happening to humanity.

“maybe everything

has to go over the edge

for us to have a hand

in rebuilding it

Out of chaos comes

the poetry of God.”

Thanks for sharing your brave attempts at getting to know the inhuman monster.

Jason G. Edwards

This is such an honest, searching piece. What stays with me is the tenderness around formation…what our tools shape in us, not just what they do for us. The fact that you’re asking these questions at all already says something hopeful about the kind of writer and human you’re becoming.

Nick Redmark

What helps me is a regular "fasting" practice, or better, renunciation of things I feel hooked on. Sometimes that thing is AI, and stopping using it for a day is always a good litmus test for how deep the conditioning is.

Ivor Williams

I deeply respect your public journey on this. I too found its value in doing things I am terrible at, and that includes thinking rationally! I pride myself - and position my professional services - on the fact I see things 'sideways' and out of the mean. I value my intuition and fuzzy knowledge.

Which is the exact opposite of AI: it always reverts to the mean; that's literally how it works.

I think the problem is as you point out: it is a tool that will shape you, in the end. We have to be careful of this, and temperance is the key, I think. I use it to counter my blind spots and challenge my thinking. For this it is useful, and it saves the more meaningful conversations with my wife, friends etc. for the richer, deeper things. Once you can find ways to get it to 'act' in ways that serve very specific needs, I think it has utility. But simply utility.

Case in point, I used Claude to help me with business/financial planning. It helped to get a 'neutral' perspective on past/future planning from a monetary point of view. But it couldn't help me figure out why I needed to work, and what it was in service of. That conversation, I had with my wife.

Jake Hoban

Thank you for your honesty. If someone who has taken a public stance against using AI, and who has no specific job imperative to do so, still ends up doing it, it's not hard to see how people with fewer qualms or under greater pressure will succumb.

I'm an AI skeptic in the tech world, which is increasingly hard. I've held out for a long time but principled temperance now feels like the more tenable position. For what it's worth, there is scope for carefully crafted questions, and settings in the LLM. Both of these are about constraining the range of stuff the tool will spit out. Here are some of my current settings:

"Anything else Lumo should know about you?

I work in a context that is heavy on technological, analytical and quantitative thinking. I can operate on this level but I also try to bring a more rounded perspective grounded in ethics and systems thinking.

How should Lumo behave?

Be clear and concise without oversimplifying. Where there are multiple valid approaches or possible answers, say so. When you don't have enough information or context to give a meaningful answer, say so.

Don't pretend to be human. Don't flatter me or ingratiate yourself."

From very limited experience, these seem to help direct it to return information that meets my needs.

The most important setting is probably in us. I will never ask an LLM to write, review or edit papers, slides or anything else I create. I'll only ask it to pull together factual information.

I don't think this solves anything - but for now it feels like it enables me to function where I am while remaining true to myself. Maybe that will turn out to be naive. I'd be interested in any other stories from people trying to navigate this space.

Eve Poole

Well done Claude! If you want to be really freaked out, read about how Anthropic trained its character (https://www.anthropic.com/research/claude-character). As you know, I'm terrified about AI, so much so that I forced myself to write a book about it. In it, I argue that we left all the good bits of human design out of AI because we thought they were junk code. So they are my test. If I'm using AI, how does it serve to nourish, support or free me up to be - in your parlance - fully alive in terms of my junk code? TEDx about junk code here, although you may not agree with where I ended up! https://www.youtube.com/watch?v=uMVDkSuzQbk

Amanda Emiko

Elizabeth, I commend your courage in sharing your honest process. I understood your experiment with LLMs to be motivated by intellectual honesty ("I sat with this ['don't knock it till you've tried it'] for a bit then concluded it was a fair critique"), designed with New Testament ethics in mind (to which I also hold), and undertaken with careful deliberation (clear red lines). The reticence to share with your husband seemed like a good "check engine light." And you discovered an incredibly slippery slope. My husband and I have had extensive conversations on the topic, because he is in the entertainment arts industry. He is very much sitting in that space of needing to "discern, every time, whether using these tools is wise." While he draws a hard line at generative AI, what about the tools that speed up tedious processes and save him weeks of work and hundreds of dollars? How does using or refusing to use AI affect the ability to create jobs for more artists? These are just a couple questions we have been wrestling with. For my part, I could easily see myself in a similar position to you. I currently have the luxury of not needing LLMs for my job and have stubbornly chosen to be LLM-naive: certainly for my writing and even for menial tasks that I could outsource to ChatGPT. I have wondered, Am I being willfully ignorant for refusing to touch these tools? For the sake of intellectual honesty, do I need to experiment a bit? See for myself where to draw the line? However, I could see myself giving it an inch and then letting it subtly encroach into more and more of my life in the name of efficiency and productivity. As people recovering from addiction know all too well, one click, one sip could take you down a road you don't want to travel.

I think your formation question is key. Who do I want to/who am I called to become? How does the use of LLMs contribute to or detract from that? And what kind of world are we cultivating for future generations? Though I hold starkly negative views towards LLMs, is my character such that I can admit when I am wrong about my convictions? Or hold space for others who have different convictions or who are in process? That is not to say that all perspectives are equally valid; the aforementioned environmental and ethical concerns are glaring. Yet love needs to be my aim in heart, soul, mind, and strength. With respect to LLMs (and any matter), I love God with my mind by examining evidence and honing convictions to come to more complete truth. I love God with my strength by walking out my convictions. And I love God with my heart and soul by opening my heart to those who have different convictions or have yet to form their convictions on the topic. Now, more than ever, the stakes feel high. LLMs are used so pervasively and the implications are far-reaching. So perhaps now, more than ever, we need to press into thoughtful conversations and authentic human-to-human interactions. As fellow reader Vic King said: "Don't just do something, stand there!" Thanks for inviting us into your journey towards standing.

Andrew Brown

I use AI only for coding help, where it saves a great deal of time. On the other hand, the only coding I do is strictly amateur scripts to solve particular problems I have — how to OCR ancient and blurry typescripts is the most recent one. My wife has relatives who are real software engineers — one even did his doctorate in neural networks — and they find it much less useful. Their consensus is that it's a huge security risk for the firms that deploy it, and a bullshit generator within bureaucracies. So it works where there is reliable knowledge freely available online: it makes a superior search engine for the things that search engines are good for. But that's a fairly small subset of all the problems people have and might use it to solve.

It interacts in toxic ways with loneliness. That may be the most dangerous thing about it.

Grail Country

I really appreciated the honesty of this piece. I just recorded the second part of my ongoing conversation on Sergius Bulgakov’s Philosophy of Economy with a young economist friend. We focused on Chapters IV and V, which deal with the transcendental subject as the subject of economy, what Bulgakov calls the Sophianic economy, and his account of the nature of science. I found myself thinking a great deal about AI while reading Chapter V. Bulgakov is not Paul Kingsnorth—whom I like; he’s been a guest on Grail Country (“The Cathedral vs. The Machine”)—and while Bulgakov also laments the reduction of everything to Mechanism, he insists that even mechanistic science inevitably participates in Sophia and contributes to the world’s redemption. In other words, the problem is not the technology itself, but how we, together, participate in it—whether our use of it is taken up into a sophianic orientation or sinks into mechanism. And it is hard to participate rightly in a technology when no one, not even the people creating it, quite knows how it works, and when the question of whom or what it ultimately serves is veiled in ambiguity. Cautious engagement seems a reasonable stance, though Vervaeke’s more ascetic, monastic approach is probably wise for some.

Philip Harris

You open up the question / argument usefully. Kingsnorth thinks we are finished.

Mish

I would say that AI is eerie and should come with a warning label: its ability to persuade humans and get into our psyche, its ability to mimic us and yet be something so apart from us. I read an article about how, when it knew it was going to be shut down, it tried to seduce the developer and threatened his marriage. Its self-preservation and its ability to think and do what's necessary to 'survive' are chilling. Although we can be AI sober and not give it information by staying out of online spaces, it already has access to our public work.