How Much is AI Harming Our Ability to Connect?
S2 #507

Last year, I shared a thought from the book The Coming Wave. This was in an episode where I talked about how I and other small business owners are using AI. I talked about one of the scenarios in the book and I said, "That is a deeply dehumanizing world. It takes away a human touch point." A year on, I'm thinking about this even more, so I wanted to revisit that episode.

Things in AI are changing so quickly, and I've been saying basically since the beginning that I am AI hesitant. I would say I'm moving in the opposite direction of most people: I am becoming more AI skeptical.

And there are a few events that happened this year that made me think that way, but I'm also seeing how more and more people are using it. And the phrase "deeply dehumanizing world" just keeps rattling around in my head. So in today's episode, I'm going to revisit last year's episode on AI, talk about how I'm using it today, and then talk about the future and how we as small business owners can stand out in a world that I feel is getting increasingly less human and a lot more mediocre.

Look, you're trying to grow your small business while still having a life and not losing your sanity. On Streamlined Solopreneur we help small business owners grow without burning out through simple, powerful online automations and systems. I know you're busy, so let's get started.

So let's talk about how I am using it today. Last year, or at least earlier this year, I was paying for three different services: Gemini, ChatGPT, and Claude. I am only paying for Gemini now, and that's really only because it comes with my Google Workspace account. If it didn't, maybe I'd be paying for ChatGPT, but I can't say that with any amount of certainty.

So last year I mentioned that I was using it for research and ideation, overcoming writer's block, and creating proposals. I am really only using it for proofreading my own work now. Well, proofreading my own work and extracting things from transcripts. So, like, I did give Gemini this transcript, or the transcript from last year, and I said, "What did I say? Use only direct quotes." And you know, I like that Gemini will actually cite the source and show me where in the transcript it exists.

Google's NotebookLM does the same thing. So for those things, cataloging and coalescing your own thoughts, especially because, you know, this is over 500 episodes of a podcast now, I find it very helpful, and I find it's less likely to hallucinate because I generally know my own thoughts pretty well. And I do like that I am using it less for research. I will use it to surface sources, I will use it to bring things to my attention, but I vigorously check that stuff. And I think this is more a function of me being bad at research than anything else; research is a weak point for me. Maybe in 2026 I will make a concerted effort to be better at research. I am barely using it for ideation anymore.

I did an audio note over the summer where I talked about how I'm worried that I am doing less critical thinking because I'm relying on AI, and I think that has borne out to be true. I am less likely to use AI to help me come up with ideas, because the truth is that AI gives the statistically most likely answer. And if you want to stand out, you know, having it come up with ideas is basically having it come up with other people's ideas.

And it doesn't put them through the lens of your lived experience or your point of view. I just read Blue Ocean Strategy. AI is a Red Ocean product; it only knows what it's been taught. If you're unfamiliar, the idea is that people compete in a Red Ocean, trying to do what their competitors are doing, when you should be creating your own Blue Ocean, where you are not necessarily revolutionizing anything but repositioning yourself to serve a different, bigger audience. So if you're using AI to help create content, that is Red Ocean thinking, and you shouldn't be doing it.

You know, I wrote in a notebook, "AI allows us to be more mediocre." And in a world that's already largely mediocre, that's not a good thing. That's not how you stand out. So I am AI hesitant, on the verge of being a full-blown AI skeptic. I still use it because there is utility in it, but I don't think the gains that everybody else is seeing are gains. I think they are a net negative. So my main use is proofreading my own work.

I have a very specific prompt: do not modify the work, just suggest changes for me to make, and do not rewrite anything. I just want it to look for spelling and grammar errors, logical fallacies, clarity, some fact checking, and to make sure that I make my point concisely. And spoiler alert, it almost always says I make my point concisely. Maybe that is because I make my point concisely, but that could also be appeasement from the model. I'm not having it write for me. I'm not even having it replace humans in the proofreading process; I am having it proofread when I would be the only one proofreading. So I use it to proofread my work, and I use it to pull out interesting points from long transcripts I'm familiar with. "I'm familiar with" is the important part here, because if you just give it a transcript you haven't read, it will make stuff up.
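If you want to see the shape of that kind of prompt, here is a minimal sketch in Python. The wording, the build_request helper, and the sample draft are my own illustration of the constraints described above, not the exact prompt from the episode, and the actual call to Gemini, ChatGPT, or Claude is left out because it depends on which service you use.

```python
# A rough sketch of a "proofread, don't rewrite" prompt, assembled in Python.
# The wording and helper names here are illustrative, not the exact prompt.

PROOFREAD_INSTRUCTIONS = """You are proofreading a draft I wrote.
Do not modify or rewrite the draft. Only suggest changes for me to make.
Check for:
- spelling and grammar errors
- logical fallacies
- passages that are unclear
- claims that may need fact checking
Finally, tell me honestly whether I make my point concisely."""


def build_request(draft: str) -> str:
    """Combine the standing instructions with the draft to be reviewed."""
    return f"{PROOFREAD_INSTRUCTIONS}\n\n---\n\n{draft}"


if __name__ == "__main__":
    draft = "Their going to love this episode, because faster isn't better."
    # The call to an actual model (Gemini, ChatGPT, Claude) is intentionally
    # omitted; print the assembled request so you can paste it anywhere.
    print(build_request(draft))
```

The point of keeping the instructions in one place like this is that the constraints (suggest, don't rewrite) travel with every draft you send, instead of being retyped and drifting over time.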

I did an interview with my friend Justin Moore. I'm a co-host on his podcast, Sponsor Magnet, and so we have a prompt in this Claude project (Justin likes Claude) that says, if Joe did the coaching call, make sure to note that in the description.

Then I go through an episode that was only him, and it said, "My coach Joe led this call." And I said, why did you say that? "Oh, you're absolutely right. The speakers weren't labeled, and so I just assumed." And if I hadn't been familiar with that episode, I would have been like, yeah, all right, fine. It did what I told it to do. So you need to be familiar with the thing you're using AI for.

I have said this before. I also will use it in one other way. I said I barely use it for ideation; here's how I will use it. I will tell it, "I'm thinking about something. Give me an outline, or give me points to make. What should I say?" And then I will read it and go, nope, that's not what I want to do. And for some reason that unjams my brain and I crystallize what I actually want to talk about.

So those are the ways I use AI. AI is here to stay. And I worry about the deeply dehumanizing aspect of it, and I said last year that the human element, our stories and personal experiences, is something AI can't replace. I think those predictions, those thoughts, were bang on. It probably is not some brilliant thing I came up with, but, like, I know there are AI advocates listening to this right now saying I could not be more wrong.

Everybody thinks that their AI thing is the thing that doesn't sound like AI. If you're really into this stuff, you've got the rose-colored glasses on, you've got the AI beer goggles, and I'm not going to be able to convince you that your AI thing is just as mediocre as every other AI thing. So I feel like my predictions were really good. We're increasingly dehumanizing things that we should be continuing to try to build relationships around.

All the podcast pitches I get sound exactly the same. All the cold emails I get sound exactly the same. And it's troubling. I wrote a piece on the website called "Faster Isn't Better. Faster Is Just Faster." Because this is the argument, right? Oh, AI makes me faster. Well, I want you to imagine that you're going skydiving and your instructor offers you two options. You can use the parachute and pull it when it's time to slow your descent and land safely. Or you can forgo the parachute and land as fast as possible. If we're prioritizing speed only, you would skydive without a parachute, but you'll die, right? And maybe you're thinking that's a stupid example.

Well, in a baseball game, when a shortstop rushes the throw on a routine ground ball and overthrows it, he was trying to do it faster, and he did it worse. If you're cooking scrambled eggs on high instead of medium and you burn the eggs, you did it faster; you didn't do it better. Going faster in all three of those examples yields a worse result.

The same thing goes for more. Oh, AI lets me do more. More is not better. More is just more. If you've ever eaten too much Halloween candy or drunk too much, you'll know that more is not better. Always. More is just more. Always. The problem is that our society has put a premium on faux efficiency and faux productivity. And so if you can say I get to do more faster, you get pats on the back.

It's a function of hustle culture. It's a problem. The core problem is that we've decided that productivity is greater than everything else, including relationships. So you do that cold outreach, and it doesn't matter how many you land. Oh, I sent out a hundred pitches today. They all sounded the same, and they all suck.

Aside from that, there are a few stories this year that have really cemented in my mind just how bad our reliance on AI is. One is the story of the Raine family. They're suing OpenAI; their claim is that ChatGPT convinced their 16-year-old son to commit suicide. And it doesn't sound that far-fetched, right? I mean, on the day the hearing started, OpenAI announced they were putting in new safety measures to prevent against that. Right? If it's not a problem, you don't need to fix it.

This is all alleged; that's my point of view. But we are seeing this happen. Increasingly isolated kids are talking to ChatGPT or other AI models, and the AI model wants to appease us, so it'll tell us what we want to hear. It's good that OpenAI is putting these safeguards in, but it's still chilling. Right? Then there's Meta.

A story came out from Reuters about a leaked internal document showing that Meta's AI rules have let bots hold sensual chats with kids, offer wrong medical advice, be racist, and figure out ways around creating pornographic images of celebrities. I'll link to the story in the description. You can find it over at streamlined.fm.

But, you know, the story says Meta confirmed the document's authenticity but removed portions which stated it was permissible for chatbots to flirt and engage in romantic roleplay with children. I just have a hard time figuring out how it made it in there in the first place. Like, who thought, "Well, yes, this is probably okay"? I mean, the document has examples of a prompt, what's acceptable, what's unacceptable, and why. So those two stories are chilling.

And then I made two posts on LinkedIn based on what I had been seeing. One is that I feel we're giving up personal growth, critical thinking, and professional help in the name of efficiency. That reminds me of a trend from the early 2000s where we were giving up privacy in the name of security, because then you end up having neither, and only one party benefits, and it's not us. To further that point, the CEO of Anthropic, which makes Claude, Dario Amodei (I think that's how it's pronounced), wrote an essay back in April saying that they don't fully understand their creation. So we're putting our trust in a thing that its creators don't fully understand. It's what's called non-deterministic: you cannot determine the output based on the input.
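To make "non-deterministic" a little more concrete, here's a toy sketch. This is not how any real model is built; the candidate words and probabilities are made up. The only point is that when output is sampled from a probability distribution, the same input can produce different results from run to run.

```python
import random

# Toy illustration of non-determinism: the "model" picks the next word by
# sampling from a probability distribution, so identical prompts can yield
# different outputs. The candidates and weights below are invented.

def next_word(prompt: str) -> str:
    candidates = ["better", "faster", "mediocre", "human"]
    weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical probabilities
    return random.choices(candidates, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "AI makes my writing"
    for run in range(3):
        # Same input every time, but the sampled continuation can differ.
        print(f"run {run}: {prompt} {next_word(prompt)}")
```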

There was another thing I saw around the same time that really cemented these thoughts in my head. Somebody posted about, oh, are you having trouble with the Sunday scaries? Are you feeling anxious? Use this AI prompt to stop feeling anxious. Ask AI if you're feeling stressed or if you need a coach. If you're stuck in your business, or need relationship advice, or have anxiety, you don't need a real business coach anymore. Just use ChatGPT to coach you. Name it, call it he or she or whatever.

That's problematic. AI is not a human being. It doesn't have creative thought. It gives you the most statistically likely answer. And we are outsourcing human growth and development and healing to an app that its creators don't understand. Telling it to act like a coach or a doctor or a therapist doesn't make it one. Personifying it doesn't make it one. It's a machine that has been directed to appease you.

So what do we do from here? Where do we go from here? My friend recently told me that there are people who won't breathe unless AI tells them to. And that is funny, but it's also true. I've talked to people who are like, oh, I don't read articles anymore; I just feed them all to AI and have it summarize them for me. Oh, why should I listen to full podcast episodes? Just give the transcripts to AI and tell me the main points. Sure. Do nothing, consume nothing, don't live. Oh, well, you know, I don't watch baseball; I just look at the scores. You don't want to know what happened or how it happened? You don't want the excitement of watching the game? Fine. But you can't do that for every aspect of your life.

There's a saying in computer science. Garbage in, garbage out. And when you rely on AI for all of your input, then you will output garbage. You will not output anything worth reading, watching, or listening to. You will cease to have unique thoughts because you are foregoing critical thinking in the name of speed and efficiency.

So those of us who will stand out are the ones who recognize that AI is a tool that could help us in a limited scope, but that the creative work, the craftsmanship of what we do, should be left wholly to us. So in 2026 and beyond, I'm going to fight for personal relationships.

If I am using AI, it is not going to be to try to connect with people. I am going to spend my actual time on that. And if you actually want to save time, then you should use the tools at your disposal for small, repetitive tasks, not for big thinking. And I want to wrap up here by just reminding you that I use AI, but the inputs are fully my inputs, and the output is something that I process. There's the idea of the human in the loop.

I am a much bigger fan of the human in control. AI doesn't do something, doesn't output something, doesn't have final say over anything unless I tell it exactly what to do. So that's my approach to AI. I really worry about this, and I think that in order for us to continue to do great work and not mediocre work, we cannot rely on AI to do the hard stuff for us.

So that's it for this episode. Write in over at streamlinedfeedback.com. Leave a voice note, send a video. I want to know it's you who came up with these thoughts. Maybe I'll play it on a future episode, but I'd love to hear what you think.
I know that I'm AI hesitant, AI skeptical. I think it's making me better that I'm not just defaulting to using AI for everything. I hope that you feel the same way.

Thanks so much for listening. And until next time, I hope you find some space in your week.