So glad you wrote this. I appreciate you, human Elizabeth, doing the hard work to clearly articulate what I’ve been thinking.
I am in the zero-acceptance category myself: writing back and forth with my grandchildren (stamps! handwriting!), lugging a thesaurus to my writing desk, and asking folks to attend Zoom meetings personally rather than sending AI note-takers. I’m holding out for the spaces where we meet each other in messy reality.
I eschew parking aids in my car for fear my confidence and spatial awareness will atrophy. I use paper maps when I can so I understand the location of one place relative to another. I buy original art or decorate with found objects.
I’m holding out with you!
Thanks for rescuing humanism (the term, that is).
With regard to mechanised intelligence, I offer two books by Jeremy Naydler. I initially joined Substack in order to write reviews of them; I had been fortunate that a friend had personal contact with Naydler. The reviews were posted at the start, but I have pinned 'The Struggle for a Human Future'. The earlier of the two books, 'In the Shadow of the Machine', is by the way an excellent history of the long intellectual evolution from antiquity, with wonderfully chosen illustrations.
My inevitably modern mind struggled a bit, but I learned a great deal. There are mundane reasons for not boarding the AI boat, but as you suggest, it is a trap for frail humans like you and me. You can read my two penn'orth. Mark Vernon reviewed 'In the Shadow of the Machine' much earlier in the Church Times.
Love the expression AI sober. I am too. It’s good to know I’m not alone as I often feel I am. So many people seem to take it as a given.
Recently YouTube recommended a video whose title was "Reading books is now a waste of time" or something to that effect. I didn't click, but I know the creator is very generative AI positive and in other videos he's recommended having AI summarise the lessons you can learn from a book instead of reading it. Many people also promote the idea that reading fiction is a waste of time, but that's a whole other conversation.
This is all to say, I get it. You've given me a lot to think about. I am in the "I'll use it for x, but not for x" camp, for now. I tried using it as an editor but I find that it squashes my voice. Having worked with human editors, I understand the value of a good flesh and blood editor. I am also concerned about the environmental impact of using generative AI.
Anyway, lots to think about. Will join the book club! And I love the word "enfleshed".
Thanks for this manifesto, which comes at a very timely moment for me, as I'm arguing with the developer of my new website about AI-generated pictures. You put words on my intuition in a way that really provides a philosophical foundation for my decision. And thanks for a lovely closing of the day in the spacious, warm community of the book club!
I was pretty disgusted yesterday when I tried to send a couple of photos to a friend via Messenger and was intercepted by an AI pushing its own 'trending' pictures, which had nothing to do with my message, and which refused to get out of my way.
After being an early adopter all my life, I've swung in the last year or so to being a non-adopter. But as others have mentioned here, AI in medicine is amazing. For example, an AI checking my blood screens is far more accurate than people, and fast, and untiring.
If human doctors diagnose 20% of diseases correctly and AI doctors diagnose 80% correctly, who would you rather have check your tests? I agree, though, that a human doctor is essential to liaise with me.
Thank you for this piece. It has, as another commenter already stated, "put words on my intuition" in a similar way. I live in San Francisco and see that exact same advertisement you have above your second reason, and it has always sent a chill down my spine. Given that I'm somewhat unintentionally swimming in the waters of Silicon Valley (I don't work in tech but it's obviously quite prevalent here), I've found it extremely hard to articulate that vague sense of unease that AI provokes. Thank you for doing it so well. I'm in the AI sober club with you.
Elizabeth, such a resonant piece. I don't have a challenge for you, but I do have a question. I'm a professor in the field of communications. I don't think I can tell rising professionals to adopt your zero-acceptance position. This involves me in a set of fierce tensions at best and downright contradictions at worst (rising professionals, after all, can't NOT use AI, and in doing so they may well be learning AI at the cost of ushering themselves out of a job). What would you commend me to tell people whose first jobs will require them to be AI-conversant if not AI-proficient and who, given the tuition bills (which pay my salary), can't afford a zero-acceptance policy? Is there some negotiated ground here, as your book taught me to look for?
This is a great challenge, and I think it will need to be my next piece.
Honored! FWIW, I'm pondering an approach to AI and work culture less in terms of "Should I use AI or not" but more in terms of "what does ongoing use of AI reveal to us about our life and work?" What's emergent technology recalling for us that we might otherwise have forgotten? Recalling those perhaps forgotten or ignored or underestimated things, might actually serve as a decent guide for intervening on tech use in the workplace. Or so I've been thinking. Looking forward to your next piece!
Reading this allowed my spirit to exhale. I'm with you 100%.
I agree with the choice for humanism, and AI is getting weirder and weirder anyway. Here's my example.
Every time I settle down to work on the book I am writing, the AI bot pops up to offer to summarize the book for me. NO THANK YOU, I think to myself. Why would I need a summary of a draft of my own book? I'm pretty clear on what I'm saying! Since the whole fun of this particular (children's) book is revelling* in the story and photos, AI would bring nothing to that task anyway. What's it going to say? "This is a cute book, full of beautiful photos that YOU WON'T SEE in this summary"?? That is seriously deranged.
*Revelling spelled with a double l as a courtesy to the British audience...
Thank you. I found this podcast conversation, "The beliefs AI is built on," insightful and informative: https://www.vox.com/the-gray-area/archives/6 I do not have enough material 'out there' for AI to write a poem in my style, but it did offer to summarise a recent poem in one sentence. A friend commented that she would prefer to read the summary, as she was sure it would be easier to read and understand than my poetry.
Thanks for opening this post up, because I think it helps me clarify my own thinking on generative AI. I'm not quite a Luddite about it, because I know new technologies are a mix of good and bad. I am intrigued by the possibilities for science and medicine, but I also am intrigued by the MIT study you linked in your new piece that looks at what writing with AI does to our brains.
And yet! ChatGPT did give me some okay starting ideas when I had to go on a soft food diet after a dental procedure and was too pain-addled to think. And today I'm writing a big piece for work, and being able to ask NotebookLM to search four hours of conference transcripts for a statistic I vaguely remembered was really useful. Argh.
So I guess my boundary for right now is this:
- I will not use it to write for my Substack, or for commissioned articles tied to my personal/professional knowledge (so the music review I'm writing for another publication, or articles about spiritual formation)
- I will not use it for writing interpersonal communications that require some emotional intelligence (the number of times Gemini has offered to draft an email for me... ha)
- I will consider using it at my job if I need to sift through a large amount of information for something I don't have a lot of internalized knowledge about yet. (Like the aforementioned NotebookLM.) I'll always check my sources and facts.
- Before using it, I will consider if it is meaningfully supporting my human limits. (Like the aforementioned "I need to eat soft food this week but I cannot break through this brain fog enough to meal plan.") I'll also balance this question by considering environmental impact.
Writing all this down was a really helpful practice!
Reading this made me think that a person could ask AI to simulate a loved one who has died. Just think of the implications of that: speaking to your loved one who has passed, via an AI that has used all their videos and essays as material to engage in believable conversations with you. Terrifying. I want the really real too! And what about having a standard Christian God bot who you can pray to and who will give you comforting answers? There must be one already, no?