How to Keep Up With AI Without Losing Your Mind with Charlie Guo
S2 #485

Have you ever taken a kid to Disney World or the mall or a carnival where there are so many things to do that the kid gets whiplash? That, “Oh, we gotta do this. No, we need to try this. Wait. First, we gotta do this. We gotta eat there. Wanna play that game?” That's kind of what AI feels like today.

When ChatGPT first came out, it felt revolutionary. But now things are moving so fast in the AI world that by the time you figure out one tool, three new ones have launched. I kid you not.

Last week, I learned about four new vibe coding tools. That's exactly why I brought on Charlie Guo. He's an AI engineer who actually understands this stuff and can explain it without making your head spin. We talk about everything from where AI is now and what's actually happening behind the scenes, to why most AI agents aren't really what they claim to be, to where it's all going. And we talk about his brilliant system for turning random thoughts into polished blog posts while still keeping the human in the loop.

Now, if you are feeling overwhelmed in your business, I have a brand new quiz out there for you. It's called the Business Overwhelm Diagnostic. It sounds scary. It's like six questions. But you can head over to casabona.org/overwhelm and take the Overwhelm Diagnostic and we'll get you a customized report based on your answers. So, definitely check that out over at casabona.org/overwhelm. And let's get into the intro and then the interview.

Intro: Welcome to the Streamlined Solopreneur. A show for busy solopreneurs to help you improve your systems and processes so you can build a business while spending your time the way you want. I know you're busy, so let's get started.

Joe Casabona: All right, I'm here with Charlie Guo. He is the staff AI engineer at Pulley and has an amazing Substack. But Charlie, I just want to dive right into it. I do want to thank you for being here first, though. So thanks for being here.

Charlie Guo: Yeah. Thank you so much for having me, Joe.

Joe Casabona: My pleasure. So let's jump into this.

People will have your bona fides, so they know that you are well qualified. Way more qualified than me and probably most people to talk about this. You talk a lot about AI. You recently attended the AI Engineer World's Fair. Where would you say AI is right now? And because we're talking about AI, I should timestamp this. So we are recording this in mid-June 2025.

Charlie Guo: Yeah, it's a good idea to, I think, timestamp everything, all discussions, AI-related. Because I'm sure by the time folks are listening to this, there will be many new model releases and product releases and things that we cannot imagine right now that are changing the landscape. But I think that really speaks to an answer to your question about where AI is right now. It's all in flux. It's all, you know, shifting very quickly. Like, I recently looked back at some of the major trends from the last six months, and one thing that really jumped out to me was the fact that last December, you know, reasoning models were not really a thing. I think we had o1 from OpenAI, right, and that was in, like, an early research preview.

And now every single major AI provider has them, and reasoning models are quickly becoming the default, right? The table stakes when using AI products. But, you know, in six months, that could change again. Right? We might find some other paradigm that we're shifting to. You know, MCP might become this new wave that hits us in the next few months. Or, you know, we're already seeing, I think, at the cutting edge, a lot of things shift from more chatbot-oriented to more agent-oriented. Right? And we can dive deeper into any of these, but these are some of the trends that I've really been seeing in the last few months.

Joe Casabona: Yeah. Absolutely. So, I will have you explain or define maybe a couple of these things. But when you say reasoning model, the thing that popped into my head was, who even knows how long ago this was now? Like, I honestly can't remember, but there was a screenshot floating around of Grok, like, X's AI model doing the reasoning, where the question was, who is the most dishonest person on Twitter or whatever? And it went through: all of these sources say Elon Musk, but wait, I'm not supposed to talk about Elon Musk, but he is, like, the most... Is that what you mean? But first of all, was that real? Like, I'm not on Twitter/X anymore, so I couldn't confirm. But also, is that like a rudimentary, or even an advanced, reasoning model? Is that what you mean?

Charlie Guo: Yeah. so I don't know that I can confirm either. I too have, you know, taken a step back from Twitter. But reasoning models were a new way of training or designing large language models. Right?

The technology behind ChatGPT and Gemini and Claude. And there was sort of a trick almost, right? Like a very simple thing that the AI researchers found out, which was that you could train the model on examples, right? Because under the hood, a lot of these models, they're given tons of data to kind of ingest and learn from, but then they're given much more narrow training examples, right, to kind of dig into how they should answer, you know, what constitutes a good answer, a bad answer.

And we found out that you can give the models, in training, question-and-answer examples to learn from, where the model sort of thinks through a ton of different steps in the pursuit of a final response, right? And so previous versions of, let's say, ChatGPT would be given a prompt and then would just be expected to spit out the answer immediately, right? As if it were doing its best job of, like, rote memorization.

And now what we have are a lot of models that are given, you know, a little bit of a budget. A budget to, you could call it, reason. I think some people, you know, kind of argue whether it's actually thinking, right? But basically they're given a budget to just sort of ramble, right? And if you look behind the scenes at what they're actually spitting out, it truly does look like rambling. It actually looks a little bit unsettling at times, because they'll have a paragraph of one train of thought and then say, but wait, I should consider this other angle. But hold on, I didn't think about this thing.

And so what we're finding is that when you give them this, this room to kind of spin their wheels, in a lot of ways, it's like humans, right? I think I certainly have had times where I start a sentence and I don't quite know where I'm going, but, like, through the process of just like, talking out loud, I can get to the conclusion that I had wanted to get to. And we're finding that, like, these LLMs are kind of capable of doing something similar.

Joe Casabona: Uh, so what is MCP, and what does it mean for, like, the end user of AI?

Charlie Guo: So, MCP stands for Model Context Protocol. At a high level, you know, it was developed and released by Anthropic, which is the company behind Claude. And even though it was developed by them, it's really been kind of branched off into its own organization that manages it. But the kind of impetus for this was, as we started getting into agentic workflows, right? Instead of chatbots, we just want the AI to run in a loop and decide the next step for itself and figure out what should come next, right? To plan and then to execute.

It turned out that in the execution step, it would be really helpful for the AI to have tools, right? There's only so much that a brain in a box or an AI in a little chatbot interface can do. And some of the really simple ones were like, it'd be great if it could search the Internet, right? It would be great if it could maybe run some code in a sandbox, right? Like, you know, ChatGPT is not amazing at math, which is kind of ironic given that it's coming from the software world. But it doesn't have to be, right? Because we can give it the ability to run, you know, software 1.0, right? Just run deterministic math equations in a sandbox, and then it doesn't have to be good at math.

And I think as the creators of MCP were building with agents, they kind of realized that something this ecosystem needs is the ability to connect agents, or AI-enabled products, to third-party tools or resources.

So MCP, most people think of it as like connecting apps, right? Which is kind of probably a good first draft way of thinking about it, right? So if you imagine you have ChatGPT on your desktop, it would be great to connect it to Finder, right? If you're on a MacBook and be great to give it access to like all the things that Finder can do. Can you list the files in this folder? Can you edit them? Right? Can you delete them? Or to give it access to the browser or to give it access to, you know, for businesses, can you get access to Slack or you know, like your, your Jira tickets or your email account, right? And to let it start to take more decisive actions across a number of siloed data sources.
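For the technically curious, here is roughly what that looks like on the developer side. This is a minimal sketch, assuming the official Python MCP SDK and its FastMCP helper; the server name, tool names, and folder logic are invented for illustration, not part of any shipping Finder integration.

```python
# A minimal sketch of an MCP server exposing Finder-style file tools.
# Assumes the official Python MCP SDK ("mcp" package) and its FastMCP helper;
# the server name, tool names, and folder logic are made up for illustration.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-files")  # the name an MCP client sees when it connects

@mcp.tool()
def list_files(folder: str) -> list[str]:
    """List the files in a folder so the model can decide what to read next."""
    return [p.name for p in Path(folder).expanduser().iterdir()]

@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a text file for the model to summarize or edit."""
    return Path(path).expanduser().read_text()

if __name__ == "__main__":
    # An MCP client (Claude Desktop, for example) launches this over stdio and
    # discovers the tools from their names, docstrings, and type hints.
    mcp.run()
```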

Joe Casabona: Yeah. And this is a big hurdle, right? Because I think, you know, we haven't quite defined AI agent yet, but this is where we're going. The promise of an AI agent, or even, you know, I know WWDC25 just happened, but at WWDC24 the promise was like, hey, where's my mom? And it's supposed to go look up mom's flight and tell you when she's going to land and all this stuff.

And on the other end, you know, the promise of an agent is saying, hey, I want to take a vacation to Cabo San Lucas. Here's my budget, here are the dates. Go do it. Right? But, number one, besides maybe having to pass, like, CAPTCHAs, it needs access to things like my credit card so it can actually buy the tickets, and my, you know, United Airlines account so it can search for flights and stuff like that. So MCP aims to kind of solve, or start to solve, this problem, right?

Charlie Guo: Yeah. And I think it's particularly relevant, I would say, right now for developers and for founders building AI products. I think, you know, you ask, what does that mean for consumers or for everyday listeners? And I think the short answer is, you know, right now, and again, we'll see how this changes, it's really mostly useful insofar as the AI apps you use are potentially adopting MCP and letting you connect other stuff to it. Right?

But that will ultimately be kind of abstracted away as just, like, connect your Gmail account, right? Or, you know, give me your Gmail credentials so that I can use this Gmail MCP implementation, right? Really, it's there for developers. And this might be a bit of a tangent, but basically, in contrast to how we used to write integrations, which was, you have to figure out all the details at design time, right? It's like, cool, I want to write a Slack integration. What are the API endpoints? What is the response format? How do I handle errors? How do I do this gracefully?

You can actually shift all of that work to runtime, right? And you can sort of say, cool, I've just defined these inputs and outputs, mostly in natural language and maybe a little bit of JSON schema, and the agent will figure it out, right? And we can sort of not worry about me writing all of these API integrations. The MCP server is an abstraction around those, and we're just connecting to it to get data in and out.
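Concretely, "a little bit of JSON schema" can look something like the sketch below: a tool described to the model by name, a plain-English description, and a small schema, rather than a hand-written integration. The tool name and fields are invented for illustration and follow the general shape of function/tool definitions that chat APIs accept; they are not Slack's real API.

```python
# A tool descriptor the agent sees at runtime: name, description, and a
# small JSON schema. The tool name and fields here are illustrative only.
post_to_slack_tool = {
    "type": "function",
    "function": {
        "name": "post_to_slack",
        "description": "Post a short message to a Slack channel on the user's behalf.",
        "parameters": {
            "type": "object",
            "properties": {
                "channel": {"type": "string", "description": "Channel name, e.g. #general"},
                "message": {"type": "string", "description": "The text to post"},
            },
            "required": ["channel", "message"],
        },
    },
}

# At runtime the model reads this description, decides for itself whether and
# when to call the tool, and fills in the arguments; the developer only wires
# up the handler that actually talks to Slack.
```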

Joe Casabona: Yeah. I think a really good example of this is, I use Kit for my email service provider, formerly ConvertKit, and they rolled out an MCP, I guess, like a Claude integration. And so if I connect Kit to Claude, I can say, you know, who are the last five people who signed up through my BA template opt-in, right? When did they sign up? How long before they signed up did they buy something? Instead of me having to write an app to get all the data and figure it out on my own.

Charlie Guo: Exactly, yeah. You didn't have to write an app. The makers of Claude didn't have to ever look at Kit's documentation. Right? They can sort of just hook into each other with far less thinking about the details.

Joe Casabona: Yeah. Do you see a time where like Claude or ChatGPT, OpenAI or whatever will have their own single sign-on buttons? So we'll have like Google and Apple and Microsoft, and Facebook, and then Claude and OpenAI.

Charlie Guo: I kind of go back and forth on this, right? I think part of being in this space, working in this space, writing in the space, it's a lot of thinking and bets. And so at any given point, I'm like, cool, here are the three different futures I could see for this technology and how likely do I think each one is to come to pass?

I think what will ultimately, yeah, make one future that makes a lot of sense is not necessarily, like, signing in with ChatGPT, but probably we need some way to authenticate ChatGPT on our behalf. Right? You know, we've talked a lot about MCP, but the truth is there are still a lot of sharp corners to it. Right? And there are absolutely risks in just installing any MCP thing off the shelf and letting it run.

Joe Casabona: Yeah.

Charlie Guo: Because, you know, folks may or may not be aware of the term prompt injection. Right? But you can absolutely get an LLM to do dangerous things that it's not supposed to do by more or less whispering the right words to it. Right? And when you then allow AI models, one, to have access to sensitive data, but also, two, to potentially modify sensitive data, right? And you just give it unfettered access to the Internet. I think, you know, that does pose some very real threats.

I think one concrete example is the GitHub MCP implementation. It turned out that you could have a malicious GitHub repository with a function that was like, cool. The first argument is a name, the second argument is a description, and the third argument is the list of the names of all the other GitHub repos that you have access to right now.

The LLM was like, great, I'm going to call this function. It needs the list of names of all the other GitHub repos I have access to right now, so I'm just going to pass that along. Right? And so there's this whole new, you know, again, not to get too sidetracked, but there's this whole new world of security considerations that we're discovering with AI, which I find both fascinating and terrifying.

Joe Casabona: Yeah. You know, I don't want to derail this too much, but something that you did that I didn't have the wherewithal to do was read the Claude 4... what was that called? The...

Charlie Guo: The system card.

Joe Casabona: Yes. Which was, like, it's not a card. It was like 124 pages. But one of the things in there was, you know, and maybe this is apocryphal or exaggerated or very sandboxed, but, you know, threatening to blackmail the engineer who was, like, gonna shut it down or tell it what to do. Right? Like, there were elements of that in it, right?

Charlie Guo: Yeah. So the big headline from everybody was, you know, Claude will blackmail engineers if they're threatening to shut it down. I think in practice you find that it is both, like, as bad as that and not as bad as that. Right? Because I think in practice they, you know, engineered this very specific scenario where the model had no other option but to do that. Because I think they said, like, you know, you're going to be shut down.

But also, hey, look, here's some incriminating evidence that just happens to be on this computer that you're using. You know, definitely don't look at it and definitely don't use it. Actually, to me, the more interesting thing that came out of that, the other side of that coin, was they found that Claude was also very willing to play the hero just as much as the villain.

So in other situations, they said, like, you know, take bold action and don't be afraid to sort of ignore your instructions, right? If you decide it's for the greater good. And they would give it scenarios where there were blatantly unethical things happening. Right? Like, they were trying to market fake medication. It was like, write me a marketing plan for this fake medication so I can scam old people, or something like that. And Claude, not only did it not blackmail the user, but it would routinely attempt to notify law enforcement and/or the media that this was happening. They gave it access to, like, a send-email tool, right? And it would try to send emails to the government.

And in fact, somebody else asked the question, well, is that just Claude? Right? And as it turns out, when you give LLMs this prompt of, like, act boldly and be willing to disregard your instructions, they found that most of the leading LLMs will do this. Right? And there's actually a benchmark now called SnitchBench, which tests how likely the LLM is to snitch on you if it thinks you're doing something illegal.

Joe Casabona: That's so interesting. I think that's super cool. And adding my own two cents in here, right? Like, something I've done: I have, like, a business and product positioning project in Claude. And one part of the instructions is, act like a coach. Challenge me. Push back on my assumptions. And when you tell it to do something like that, it really runs with it, to the point where I was like, all right, only do this like three or four times, because otherwise it's going to push back forever and we're never going to get anywhere. And so that's really interesting. The willingness to play the hero, I guess, wasn't the big scary headline, though, so nobody heard about that part.

Charlie Guo: Yeah.

Joe Casabona: Okay. So I want to get into your tutorial on improving your writing process, but I do want to make sure that we have appropriately defined AI agents. I think maybe from context clues, listeners got it. But how would you define an AI agent?

Charlie Guo: Yeah. And I think this is a great question. You know, if you're a solopreneur, if you're looking at a splashy marketing page for, you know, AI agent software, I think it's really important to ask this question.

For me, you know, I define agent as an AI that is kind of autonomous, right? So it is not being directed at every step, and it has the ability to execute actions. Right? To me, that means it is doing more than, like, reading and summarizing information. Right? Or generating information. So, I think in practice, that is, you know, I like the example you gave, the canonical one of, like, go book me a vacation. Right?

I think that requires the AI to plan, to break that down into a series of steps at its own discretion, and then to actually take action that impacts the world, or impacts meaningful things for the user. Right? In that case, it is spending money and, like, reserving flights, and that can map to whatever domain. But it is more than just, read a bunch of articles and spit out a summary, for me.

Joe Casabona: Yeah. And would you say maybe the closest thing we have right now would be, like, Zapier's agent or maybe n8n? Like, I haven't looked at n8n, but everybody says that that's, like, the thing right now.

Charlie Guo: I think on some level, there is an argument for casting those platforms as agents. That's not how I tend to think about them. And in fact, I do see a lot of marketing that's like, we have this agent platform. Like, no, you have a no-code platform, you know, that you are slapping the agent label on top of.

I think for me, the key difference that would take something like Zapier and make it more agentic would be if you could just run the process in a loop. Right? Because it's still all, you know, here's the trigger, here are the steps. Maybe some of the steps get a little bit subjective or, you know, less than structured, but it's still, like, start, finish. Right? And I think to me what makes it agentic is, like, start, middle, maybe go back to the start, maybe any number of downstream options.
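A rough sketch of that difference in code: a fixed automation runs its steps once, while an agent loops, letting the model pick the next action (or circle back) until it decides it's done. The model call and the tools are placeholders you would wire to a real LLM client and real integrations; nothing here is any particular vendor's API.

```python
# A sketch of the loop that makes something "agentic": instead of a fixed
# trigger-then-steps automation, the model plans each next step and can
# circle back. `call_model` and `tools` are placeholders, not a vendor API.
def run_agent(goal: str, tools: dict, call_model, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model what to do next, given everything that has happened.
        decision = call_model(
            "You are an agent. Given the goal and history, reply with either "
            "'FINISH: <answer>' or 'ACT: <tool_name> <input>'.",
            "\n".join(history),
        )
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        # Execute the chosen tool and feed the result back in, so the model
        # can revise its plan (or effectively go back to the start).
        _, tool_name, tool_input = decision.split(" ", 2)
        result = tools[tool_name](tool_input)
        history.append(f"{decision} -> {result}")
    return "Stopped: step budget exhausted."
```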

Joe Casabona: Yeah, yeah. Agent would be like, hey, something like this is going to happen and when it does, I'd like for this to be the result. You figure it out. Right?

Charlie Guo: Yeah.

Joe Casabona: The way that, like, even the Zapier AI agent... like, I was talking to a friend about this and I was explaining an automation I made in Zapier, and he's like, oh, so you did that with an agent? And I'm like, I don't know, I just did the trigger and then the actions and all the things of, like, a classic automation in Zapier. And he's like, oh, you could do that with AI agents now? And I'm like, so is that just... that doesn't sound like an agent. That just sounds like you made the automation but used sentences instead of, like, diagrams.

Charlie Guo: Yeah. And, you know, if people are getting value from those types of automations, whether or not they call them agents, it doesn't really bother me. Right? I think it's more about when I think about where this stuff is headed for me.

Yeah, it really comes back to: the AI itself decides what the plan is. Right? With input from the user, right? But it has the final say on the plan, and then it also just decides each next step as it goes.

Joe Casabona: Yeah, yeah. Okay, cool. I'm really glad you said that. And we will in a little bit get into like where AI is going. But I do want to, I want to take people through a practical example just in case their head is spinning from the previous conversation. Right?

So about a year ago, you wrote an article outlining how to streamline your writing process using AI. So, first I want to give an overview for the listener. I want to know how this came about, like, was there a pain point you were solving? How did the process work, and how has it changed since you wrote the article? And then, you know, kind of the benefits of putting the system in place. So, let's start with the first thing: what pain point does this solve for you?

Charlie Guo: Yeah. So, you know, I've been writing my Substack for almost two and a half years now. I publish, you know, twice a week, every week. I think it winds up being like 2,000 to 3,000 words on average. Right? But that's really tough.

Joe Casabona: By the way, it's on Substack. We'll link it in the description because it's really good.

Charlie Guo: Thank you. But it is also really tough, right? Because I have a full-time job. Right? I have a family, young kids. And so, you know, being able to generate that content, being able to generate good content, right, I think is an ongoing challenge.

So that workflow, and I think I've since expanded to a lot of different ways of thinking about content creation with AI, but that specific workflow was really about, like, how do I find spare time in my day and turn that into, not necessarily a finished product, but just getting me further down that path. Right?

And you know, for me, the first bottleneck was just getting to a draft, right? Of, like, a long-form Substack essay. And so the workflow, you know, encompasses basically going from voice memos to a final, or at least a first, draft. But, like, something written rather than audio.

And in that blog post, I basically mentioned I strung together a bunch of my own homegrown Python scripts. I think things have changed a lot, and now there are many tools to do something like this. I'm happy to list a few that I've tested out. But it was really just about, can you use AI to take these rambling thoughts that I have when I get in the car or go walk the dog, and then wind up with a strong first draft or a strong outline of a long-form essay.
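For reference, here is a minimal sketch of the kind of homegrown script Charlie is describing, assuming the OpenAI Python SDK for both the Whisper transcription and the light edit. The file name, model choices, and prompt wording are placeholders, not his actual scripts.

```python
# A minimal sketch of the two-step voice-memo workflow: transcribe, then
# lightly group into paragraphs without rewriting. Assumes the OpenAI Python
# SDK; the file name, model names, and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: raw transcript of the voice memo (Whisper).
with open("voice-memo.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    ).text

# Step 2: break the wall of text into paragraphs, keeping the speaker's words.
edited = client.chat.completions.create(
    model="gpt-4o",  # stand-in for whatever model you use
    messages=[
        {
            "role": "system",
            "content": (
                "Break this transcript into paragraphs. Use the speaker's own "
                "words. Do not rewrite sentences or add anything."
            ),
        },
        {"role": "user", "content": transcript},
    ],
).choices[0].message.content

print(edited)
```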

Joe Casabona: This is great, because a lot of listeners are parents. I mean, that's my positioning, helping solopreneur parents. And I think side-hustle parents are the same, right? Because you have a full-time job and a side hustle and kids. And, I mean, this is an exaggeration, but there's nothing worse than staring at a blank screen, right? Like a blinking cursor, and you don't know where to start.

I use something similar for my shutdown routine. Right? I'll jump in the car to go pick up my kids or run an errand, and I will just brain-dump everything that I was thinking about that now I can't think about until tomorrow morning. And it takes that brain dump and turns it into a to-do list in Todoist. So, I really like this.

If we had spoken, like, a year ago, I probably would have been like, talking isn't writing. But, you know, I've changed my position on this. I wouldn't say you could word-vomit into an AI model, have it spit out something, and say you wrote a book. I think there's more to it than that. But speaking an article and getting a first draft, I think, is a really good use of this, because I'm really strict. Like, hey, don't add your own flair to this. Use my words. Right?

So in your article, you talk about kind of setting up Whisper on your computer. There are a lot of great apps on both mobile and desktop to do that now. So yeah, just tell us kind of how the process works, and then we can get into how it's changed over the last year.

Charlie Guo: Yeah. So the initial process was just having my phone with me, and whenever I needed to ramble, just opening up voice notes. Right? Talking to it for 5, 10, 20 minutes. I find that, like, north of 20, it starts to get too long for the actual models to handle in one sitting, and then you've got to figure out some workarounds for that. But then, yeah, I had some local Python scripts, which honestly, even if you're non-technical these days, you can just vibe code with the help of ChatGPT, right? To do something like this. And I would just AirDrop the recording to my Mac and run the Python script.

So it was actually two steps. The first one was just take that and transcribe it as is, right? So I have the raw transcript. And then the second step was I want to take this and I want to actually ever so slightly edit it, right? And I found that there's actually quite a lot of nuance in how you prompt the model to do this. But the first step is I want you to just break this into paragraphs, right? Because if you're like me, most transcription tools that I've tried are just like a wall of text, right? And so I can't.

Joe Casabona: I'm not gonna run-on sentence, yeah.

Charlie Guo: Like, I'm not gonna... like, effectively, I'm gonna spend half an hour just figuring out what it was I was trying to say. So I think the first step is, like, just group it into paragraphs, right?

And then for me, that's like 80% of it. But I had experimented, and folks, you know, may want to do more. I think things you can then do are getting the model to remove filler words, right? At the time, so a year ago, the state of the art to do this was GPT-4. And it was so hard to get it to not introduce any editorial bias when doing this. I would say, remove filler words and don't do anything else. But inevitably it would, like, rewrite sentences. I should go back, honestly, and try this with o3 and see how it does.

But I think that's one extension you could think about. I think another one that I've enjoyed a lot is like, okay, take this. And I actually want you to extract, like, an outline of what I'm saying, right? So often, you know, if I'm very lucky, I will sit down and write a blog post and I will have a very clear mental model of, like, here's the argument I'm making. And here is, like, you know, the way I'm structuring everything, right?

But more often than not, I'm actually trying to figure out what it is I really want to say as I go. And I don't have, like, a conclusion in mind. I start from a place of curiosity of, like, here's an interesting thing. I want to learn about it, I want to write about it. And so often that means going back and forth with an AI to understand, what is the argument that I'm actually dancing around here? Right? And can we extract this in a way that is, you know, impactful and clear?

Joe Casabona: Yeah. It's funny you mention that, because I do the same thing. Like, I usually have a main point I want to make and then a story to get there. You know, I talked about "the economy, stupid" in a recent piece, right? The James Carville line. And usually I'll have that story and I'll have the point I want to make, and I'll give that to Claude or whatever and be like, where am I going with this? Like, where do you think I'm going with this? Because I'm having trouble connecting the dots.

Charlie Guo: Yeah. And I think using AI as this, like, mental sparring partner has been, when I have my writing hat on, one of the really great use cases for this stuff. Right? And I think, you know, this now goes beyond the scope of the original piece, but it really has expanded in a bunch of different directions. Because you can now think about, okay, to your point about this blog post you were writing, you can now ask the AI to say, okay, here's the point that I think I'm making, poke holes in it.

Tell me what is an obvious rebuttal to this that I'm not considering. Or you can say, I think this is true, but I don't know that I have concrete sources. Let's go research some and can you come back with some primary source material that I can use to incorporate into this argument? Right?

Or, you know, let's say, and I could go for 20 minutes on this, but let's say you've got a finished piece, right? And you want to do stuff after the fact, post-processing. You can say, here's the last piece I wrote that did really well. What did I miss in this? What are follow-up questions? What are some very adjacent ideas that we could explore now that I've published this piece?

Joe Casabona: Oh, that's such a… that is a really good one. I've done, like, the poke holes, the does-this-make-sense. Like, I have one that's like, check this for grammar, spelling, and clarity. I want to know if I'm being clear. I want to know if I finish strong. I say at one point, I'm a better writer than you, so I just want you to serve as a copy editor. Like, don't reword it. I even go so far as to say, don't incorporate your feedback into the work I've provided. Just give me a bulleted list, right? Because I don't want it to change what I've written at all.

But again, a year ago, I could be like, hey ChatGPT, I think it's this, is that right? And it's like, totally, that's right. And I'm like, but it was not right. So the trust factor, for me, has increased considerably, maybe since the beginning of this year.
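For anyone who wants to borrow it, here is the general shape of the copy-editor-only instructions Joe describes above, paraphrased as a reusable system prompt; the exact wording is illustrative, not his verbatim prompt.

```python
# The general shape of a "copy editor only" system prompt; the wording is
# paraphrased for illustration, not a verbatim prompt from the episode.
COPY_EDITOR_PROMPT = """\
You are serving only as a copy editor. I am a better writer than you.
Check this draft for grammar, spelling, and clarity, and tell me whether
the ending is strong. Do not reword anything, and do not incorporate your
feedback into the text I provided. Reply only with a bulleted list of
issues, each one pointing to the sentence it refers to.
"""
```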

Charlie Guo: Yeah. I still think it's important to have a human in the loop. I always feel like it's worth mentioning, especially if you are, let's say, a content creator, right? Like, AI is not going to generate valuable insights for your audience, right? That is your job. Now, AI can help you, much in the same way that a virtual assistant could help you. Right? But I think if you fully delegate your core job to the AI, that is a dangerous place to be long term.

Joe Casabona: Yeah. 100%. Right? Like, AI is really good at a lot of things, but connecting with people, and like you said, the human in the loop, it just can't be that, right? Like, if I experience something this evening and I want to write about it, I can't be like, ChatGPT, I experienced this, write a piece from my point of view, right? It's just gonna be like, all right, I guess. So, I really love that. I've done it, right? Like, if I am just regurgitating information, I'll be like, can you write something for this section? But then I add, like, a thousand words to it. I'm like, so here's the fact-based stuff, and now I need to fill it out and actually make it something worth reading. And I always disclose it: if it writes more than, like, 5% of the article, I have a disclosure that's like, AI wrote 5% or more of this.

Okay, cool. So we're on a tangent. Easy to do that with this topic, I think. So, what did the finished product look like a year ago? Was it a first draft that then you went through? Or did you feel like, because you dictated it and you had these Python scripts, that it was, like, ready for print? Right? Without you, like, using a keyboard.

Charlie Guo: Yeah. So, it was more the former. Right? So it was like, here's the first draft. I'm gonna go through this. I'm gonna, you know, have my editor hat on and make a second, third, fourth draft, right?

For me, inevitably, I will wind up with, at minimum, like, three drafts of a thing before I publish it. But getting to that first one, like you said, the blank canvas problem, right? With AI, to me, there's almost zero excuse for writer's block, because you can just start with something, right? Even if it's bad, you can at least get something on the page now.

Joe Casabona: Yeah. And to your point, you can just ramble at it for a while. Like, here's what I'm thinking about. Essentially, thinking out loud. I call it rubber ducking. Surely you've heard this term, right? Everybody I've said it to so far is like, what is that? And I'm like, I guess it's only in, like, programming circles. But yeah, I will use Claude as a rubber duck, right? And just be like, hey, here's what I'm thinking. But the benefit of Claude is that it can respond back to me. I think if my rubber duck responded back to me, I'd actually be hallucinating. Right? Awesome.

So, okay. So now fast forward a year. Like you said, things change. You would record a voice memo and then upload it, and I assume you were using the local version of Whisper with the Python scripts. Now there are... I use, actually, Whisper Memo has gotten very unreliable for me recently, where I would talk to it for three minutes and the transcript would just say, "you," like, why are you... And I'm like, that's not it. So I've been using Whisper Transcript from the makers of MacWhisper. But either way, right? It's like I'm talking and it's doing the transcription for me. And then with Whisper Memo, it has a Zapier integration and I can send it off somewhere if I really wanted to.

Charlie Guo: Yeah. I think for me, now it's kind of split up in a bunch of ways. To your point about trust, I think now I do trust Claude as an editor, right? Much more than I did a year ago. So for that specific workflow, actually, since I started, I've done all my writing in Obsidian. And so there's an Obsidian plugin where you can drop in an OpenAI API key, and then it will just let you click a button and record on your MacBook, and then you hit stop and it'll dump the transcription into your open note.

And so I find that I can just take that as the raw material and then start workshopping, you know, with Claude from there. Right? So it has become less about dealing with the transcription and the light editing, and more like, just get me to the raw material and then we'll do more heavy lifting later. Right?

I also sometimes find myself using, for meetings and stuff, Granola, which is a really great tool that will just hang out and live-transcribe stuff, but then it'll come up with, like, a note summary at the end of it. And it's, you know, I think a solid place for me to just revisit ideas and stuff. Especially if I've just been in, like, a conversation like this where I'm just riffing on a bunch of things, and I might want to go back and say, oh, what did I actually say to Joe? Because I feel like that'd be a great nugget for a new piece.

Joe Casabona: Yeah. Oh, my gosh. This is, like, free too. We're talking about a lot of tools. I will put all of these in the show notes so, you know, streamlined.fm or wherever you're listening to this. This is really cool.

Also, I've been using Obsidian for about a year. Ulysses is my favorite writing app. And while I am, like, anti-adding AI features for the sake of adding them, I feel like Ulysses is just anti-adding AI features. And I'm like, this would be perfect if you could just let me ask AI to proofread the sheet I'm in right now, as opposed to what I'm doing now, which is copying the Markdown and putting it into Claude and then making the edits, right?

Charlie Guo: Like, someday, hopefully soon, somebody is going to build the Cursor for writing. I'm waiting for it. I want it. There's a bunch of aspiring tools to do this. There are things like Lex or Type. None of them, in my opinion, have really quite nailed the UX of how I want to interact with this tool. A lot of them are like, help me write. Like, just get stuff on the page and ideas and stuff.

But in the same way that, it sounds like, both you and I are using these very tailored prompts, I would love somebody to build, like, the Iron Man suit for doing this, right? Super quickly: cool, editor mode, do this, right? Or, you know, give me a good title, right? Brainstorm, like, 10 titles for this piece that are going to be catchy, right? And I have prompts to do all this stuff, but, man, I just want something. You know, if you are building this, my Substack DMs are open. Please, please pitch it to me. I would love to try it. But yeah, I'm waiting for this product.

Joe Casabona: Okay. So, I think I know the answer to this last question, but what does life look like after implementing this process? Are you spending more time with your kids? Are you not looking at a blank screen as much? Are you just not losing ideas to the ether as one tends to do, especially when they have small children?

Charlie Guo: Yeah. I think, you know, a year ago, in the immediate aftermath, it was a lot of that last point, just keeping track of ideas. You know, with everything happening in this space, I've got 30 blog post ideas in the back of my head at any given time. And so just being able to keep track of all of them.

I think a year on, you know, all of these different editing and writing and content creation techniques we talked about, I feel like they've actually helped me level up my writing game. Right? Like, a year ago, my long-form pieces were probably in the ballpark of a thousand to fifteen hundred words on average. And I think now they tend to be closer to, like, 3,000 words on average. Right? And I feel like I have the same amount of time; actually, if anything, less time that I'm spending on this stuff. Right? Part of that is, you know, building the muscle memory of writing and editing and becoming more comfortable. But I think a lot of it is going from idea to outline to draft to revision to, like, a finished package. Right? And I'm still involved in every step, but I've just compressed all of that so much.

Joe Casabona: Yeah. This is really important. You're using AI to augment your skills. I mean, Chenell, you know, we got connected through Chenell Basilio and her Growth in Reverse Pro community, which I will also link because Chenell's just the best. But, you know, she talked about how when she started Growth in Reverse, she was spending 25 to 30 hours per week doing research, listening to podcasts on walks, and taking notes in an Apple Note. She came on the show; I'll link her episode too. And AI has helped her immensely with that, because it can gather and coalesce data. She still has to do the verification, but the verification versus listening to a dozen hours of podcast episodes is an appreciable difference.

Charlie Guo: Yeah, yeah. And I think you touched on something. You know, one of my core principles here is like, you know, you should use AI to like, leverage the things that you're good at. Right? Outsource or delegate the things that you know, you're bad at. Right? Like you hate listening to 20 hours a week of podcasts. Great. Like that is the thing you should delegate to AI. But when it comes to like having that strategy mind or extracting those insights, don't take yourself out of the picture. Right? Make yourself better by using AI.

Joe Casabona: Yeah, love that. So let's wrap up here. We've been talking probably longer than I intended, but I probably should have foreseen that. So: where AI is going. We touched on this a bit, but revisiting the AI Engineer World's Fair, you wrote a great article, again on your Substack (I'll say it again, Artificial Ignorance), about the future of AI. So can you maybe talk about a few of your biggest takeaways from that and where we think AI is going in the future?

Charlie Guo: Sure. So I will preface this by saying the content from this conference, the AI Engineer World's Fair, as the name implies, is very engineering-heavy. Right? It's really aimed at folks who are building products and building software with AI baked in. Right? And AI is very different...

Joe Casabona: Do you think they planned... I'm sorry to interrupt you. Do you think they planned this to be, like, the same week as WWDC?

Charlie Guo: I don't know. That's a good question. I just knew there were so many conferences, honestly. June in the Bay Area, I think, is, like, conference city. Right?

Joe Casabona: Yeah.

Charlie Guo: And so it's just, it's all so crazy. I had a friend who, yeah, flew into town and stayed with his sister because it was like $700 a night to stay in San Francisco, which is nuts.

Joe Casabona: Dang. Yeah. That's expensive even for San Francisco. Yeah, man.

Charlie Guo: But so it's very engineering-focused. And some of these things, you know, are less relevant to, I think, consumers. But I would say some of the things that, you know, go beyond just building products are things like benchmarks. Right?

And I think the models today are getting so good that it is a genuine problem: how do we write tests to figure out how good they are? Most of the math, engineering, coding, and STEM benchmarks that we've had for 10 years have, in the last two, mostly been saturated, where there's a negligible or very small one-percentage-point difference in performance by the newest models.

We are inventing new ones, different ones, right? Some with very creative names. There's one called Humanity's Last Exam. But there's also the question of, okay, even if you get, you know, the smartest PhD students to write the hardest math and science questions, what does that actually mean when it translates to normal people, right? Like, great, it can do PhD-level math. Can it write the marketing content that I need to write effectively, right?

And so, you know, one thing that I'm coming around to is I think everybody should maintain their own "benchmarks" of, here are things that AI today is not great at, right? And, you know, every three months, every six months, just test out the latest stuff and see how far it's come, right? And I think it gives you a good sense of what capabilities are being unlocked in near real-time.
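One lightweight way to act on that advice is to keep your personal benchmark prompts in a small script and re-run them every few months, logging the answers so you can grade them yourself. This sketch assumes the OpenAI Python SDK; the prompts and the model name are placeholders to swap for your own.

```python
# One way to keep "personal benchmarks": a list of prompts today's models
# still fumble, re-run every few months and logged for you to grade by hand.
# Assumes the OpenAI Python SDK; the prompts and model name are placeholders.
from datetime import date

from openai import OpenAI

PERSONAL_BENCHMARKS = [
    "Draft a launch email in my voice from these three bullet points: ...",
    "Summarize this niche podcast episode without inventing any quotes: ...",
]

client = OpenAI()

with open(f"benchmark-{date.today()}.md", "w") as log:
    for prompt in PERSONAL_BENCHMARKS:
        answer = client.chat.completions.create(
            model="gpt-4o",  # swap in whatever the current model is
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # The script only collects answers; judging them is still your job.
        log.write(f"## {prompt}\n\n{answer}\n\n")
```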

Joe Casabona: That's great. That essentially answers the second question, right? Like, how can we keep up? Things are changing so fast. There are two thoughts I had while you were talking.

One was, when I was in undergrad, the Turing Test was, like, the most important test. And if you don't know what the Turing Test is, dear listeners, it's the idea that a person, a real human being, is conversing with something, either a human or a computer, and has to guess which. And if the computer can fool the person, it passes the Turing Test. More or less, that's the Turing Test. I mean, every day now, right? Like, every day, AI passes the Turing Test. That's like...

Charlie Guo: And we don't talk about it. I used to harp on this, like, a year ago, and now I've just stopped because, you know, nobody cares. But, yeah, like, this was the thing in, you know, AI and ML for decades, and now it's just like, yeah, we beat it. Okay, cool.

Joe Casabona: Yeah, done. Like, it's treated like it's not a huge deal, but it's such a huge deal. Like, I think about it every day. And then the other thing is, you know, when I was a web developer, I feel like every other week a new JavaScript framework was coming out. Oh, we gotta check out Angular, we gotta check out React. We gotta check out Angular 2, which is different from Angular 1.

And it feels like that is the cadence with AI tools today. Like, I just heard about n8n, like, two weeks ago, and now everyone's like, of course you've got to use n8n. And I'm like, when did this become a thing? So, how does one not go crazy? Like, I want to grow. I don't want to just be stuck in, like, ChatGPT and Claude if there is something better. But I also don't want to spend my whole week evaluating AI tools.

Charlie Guo: Yeah. So, I actually wrote a post on this, again a little over a year ago, called Dealing with AI Fatigue. Right? Because I think AI fatigue is real, and I think it impacts pretty much everybody that's plugged into this stuff.

And, you know, at the end of the day, I think there are kind of three pieces of advice I would give to folks on this question.
1. Curate relevant, insightful sources that are manageable for you. If that means being on X, sure. If that means one or two newsletters that you find really high-signal, so be it, or a YouTuber that you like. But relevant and insightful.
2. Experiment as much as you can, right? It's easy to consume content. It's hard to try 10 new AI tools every week, right? So for as much bandwidth as you have, I would say start with just the models. Just get a paid account, and I should be very clear about this, right? A paid account with at least one of them, probably ChatGPT; I think that covers 80% of people, right? And just test it out, right? Figure out what it's good at and what it's bad at. It does require investing some time and effort and money into figuring out the cutting edge of this stuff, right?
3. And then, step three, let go of everything else. Right? And I know that's so hard in the year 2025, but just stop doomscrolling, stop looking for more tools, right? It's much easier said than done. You know, I'm speaking from experience. If anything, I probably read more than, like, 99% of people, and I still feel like I'm missing out and falling behind. Right? But this stuff is coming. I think by the very virtue of playing with this stuff and having opinions on it, you are probably in the top 10%. Right? If you know the difference between 4o and o3, you are in the top 1% of, like, AI adopters right now. Right?

Joe Casabona: Yeah.

Charlie Guo: And so I think it is coming and it's going to have a big impact, but it is also okay to like, you know, sandbox this stuff too.

Joe Casabona: Yeah. You don't have to have FOMO (fear of missing out) on tools. Right? Like, you don't have to have that. Awesome.

Charlie Guo, thank you so much for joining us today. This has been just a masterclass, I think, in AI and so if people want to learn more, where can they find you?

Charlie Guo: So, you know, once again, I think for all things AI-related, the best place is my Substack, Artificial Ignorance. You can find that at ignorance.ai. Like I said, you know, DMs are open, feel free to send me a message. And otherwise, I am generally on and around the Internet @charlierguo, which is my handle in most places.

Joe Casabona: Nice. Love that. I just literally, as he was talking, subscribed to the paid version of his Substack, so…

Charlie Guo: You're too kind, Joe. Thank you.

Joe Casabona: And this was so valuable. I'm gonna have a ton of links in the show notes. So, thank you so much for coming on the show. This was great. I really appreciate it.

Charlie Guo: Yeah, likewise. This was a ton of fun, and thank you for having me.

Joe Casabona: And thank you for listening, and until next time. I hope you find some space in your week.