Joe Casabona: My coaching client, Laura, recently told me that she saved more money than she spent on my coaching because I helped her simplify and consolidate her tech stack. And now I want to do the same thing for you, too. I have put out a free tool called the Tools Audit. It will help you determine what tools you use, how much you're paying, and where you can consolidate and eliminate to simplify your small business tech stack. You can go to streamlined.fm/tools to get your free tools audit today. That's streamlined.fm/tools.
I feel like 2025 was the year of me screaming into the void that AI is not necessarily the best thing for us. I've started a new yearly theme about digital detox, where I'm talking more about doing things in the analog world. But undeniably, AI has become part of the very fabric of our lives, and I think more people are starting to see the appropriate uses of AI, at least by my definition. Which is why I'm so excited to have our guest on the show today.
Hey, everybody. Welcome to another episode of the Streamlined Solopreneur. I'm here with Christian Ulstrup. You have a pretty storied bio that I will link to for people to read, but you've done a lot of work in the AI space, in the big venture capital startup space, right? Which I think makes you uniquely positioned to talk about this stuff as well as present alternative opinions. I don't necessarily like to have debates on this show, but I do like presenting more than just my opinion on it.
Christian Ulstrup: Yeah. Thanks for having me.
Joe Casabona: My pleasure. What encouraged us to hit record here was you asked me what I find particularly egregious about how people use AI.
Christian Ulstrup: Yeah.
Joe Casabona: And I unabashedly feel like using AI to write or create content, let's say from whole cloth, is bad. And I think we're seeing more people think that. I know a few people who, ChatGPT launched on like December 2, 2022, or something like that, and on December 3, 2022, they had "AI expert" in their bio.
Christian Ulstrup: Yeah.
Joe Casabona: I had a particularly salient debate with my friend Alistair McDermott about at what point AI gets it right enough to say you wrote your book. If I dictate the whole book and I train it on everything I've ever written, then did I write the book? And I say no, because AI doesn't have your entire lived experience, which is, I think, a prerequisite for writing at least a compelling book, or for you being the author of your book. So I'd love to just kind of get your thoughts on that first, before we get into the practical, non-hype uses for AI.
Christian Ulstrup: Yeah, absolutely. I've been working in what I like to call applied AI for about 10 years, and then I was lucky that I had some extra time to tinker with the generative technology even before ChatGPT was out.
Joe Casabona: I want to make this distinction, right? Most people are using AI synonymously with LLMs.
Christian Ulstrup: Yes.
Joe Casabona: Apple probably is really upset because they called their AI machine learning.
Christian Ulstrup: It is funny with the terminology. People are like, oh well, I was doing machine learning and stuff, and at the end of the day, you're using computers, and the scope of what computers can do is expanding. The simplest way I used to try to explain this is, everyone knows Excel; you have general purpose tools for crunching numbers, and now computers are able to, quote, crunch words. That means a lot, and it's changing things, and we'll talk about that. I'm looking forward to digging in, sharing the data from our engagements, the things that work really well, the things that don't. It's always surprising, which is what makes what we do super fun, because every company is different, every workflow is different, every industry is different. And as the technology gets better, you don't really know before you go into it how it's going to work out.
So one thing I wanted to call out on the writing side is that I was lucky in grad school, actually. I somehow was able to get credit for my MBA graduate program by taking two fiction writing workshops. Somehow they signed off on that; they were like, that's fine. And those classes were fantastic, and a lot of stuff stuck from them that's been useful in business, and very much in the context of working with these models, because you are writing, you're using language to drive them. But one of the things in particular the lecturer said was: writing is actually mostly thinking.
It's not typing; typing is not writing. Writing is mostly thinking. I think this is true even beyond the written word, in any kind of creative medium. To your point, you're taking a perspective, you've discovered something new. You're taking your embodied experience as a human, whether it's as a business person or in the context of your community, your family, or whatever else, and then you're going to package that up into something that can be communicated to other folks and ideally made useful, and maybe it's commercially viable in some way as well. And I think one of the challenges that we see broadly with generative AI is that where there used to be a lot of bottlenecks in what are more, I would say, perennial processes, when it comes to value creation and creativity and relating and communication and all these different things, AI actually is a valid substitute for pieces and parts of that sequence of steps to get your message out there, to communicate or to create something.
And it's such a general purpose technology, like electricity, that the first thing people do is take the existing workflows they have, or channels of communication, or norms around collaboration and working together, and try to simply retrofit the technology to that way of working. I think that makes a lot of sense as the first thing to do. I also think oftentimes the results are much more disappointing than people expect, because intuitively people feel like, this technology is amazing, we passed the Turing test, it should be unlocking all this stuff. And I have a lot of faith that it will; it's going to show up in total factor productivity growth, things like that. However, these seemingly obvious initial applications, where you're like, well, I'll just get it to write my blog post, are oftentimes not only not the right way to use it, but almost inverted versus the way you want to use it to produce material of higher quality. So very concretely, in the context of writing, what I have found to work super, super well is to use the models as compressors, and sometimes as thought partners, this other tool that you have in your toolkit to make whatever it is you produce much higher quality.
There are a few ways that you can do this in the context of writing. One of my favorite things to do is, at the end of the week, I will look at all of my Fireflies transcripts of all the client meetings, prospect engagements, everything else I have. I was doing this manually for a while; we actually launched a spinoff product called Powerline that basically does this for you, or makes it easier. We'll take all those transcripts, pipe it through the biggest context window LLM I can, and start asking it questions that I would want somebody who is a relatively smart thought partner, somebody who could hold all that in their memory, to produce information on and then direct questions back at me. So I'll ask things like: what's the most important thing across all of these conversations that nobody's paying attention to? Or: what is the one super precise question that you could pose to me, Christian, such that if I were to go for a walk and think about it and address it for the next 60 minutes, it would be likely to yield some kind of meaningful insight? Really precise prompts and questions like that. And the thing is, versus people going into ChatGPT and being like, well, I'm in this industry, I want you to write a blog post about this, instead you're taking all the raw material, which is the digital exhaust, the fruit of your labor, the stuff that's produced that's real, that's out in the world, that's coming from experiments and interaction and creating value that exists outside of the model, and you're using the model to compress that and distill it down into something more essential that you can wrap your arms around.
That's a great first step toward producing higher quality content, higher quality writing. A lot of the stuff I post on LinkedIn is a function of this process. I'm going to take all this material, I'm going to have it ask me questions, I go record a monologue, I come back, I might distill some of that down into content pieces and parts and put finishing touches on it. But at the end of the day, higher quality content, higher quality thinking, higher quality writing is going to come from having this additional tool that you can use as a compressor and sort of a lens on this new data that you're producing. If you're not producing new data, we can talk about what that means.
Joe Casabona: Yeah.
Christian Ulstrup: Then this technology is, in the limit, not useful. And this requires a radical rethinking of how work happens in order for this, as a compressor and a macroscope and whatever else, to really unlock outsized value.
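For listeners who want to script the weekly review Christian just described rather than copy-pasting by hand, here is a minimal sketch of the compressor pattern in Python. It assumes the week's transcripts are already saved as plain text files and uses Google's google-genai SDK; the directory layout, model name, and exact question wording are illustrative placeholders, not a description of Christian's actual setup.

```python
# A minimal sketch of the weekly "compressor" review: concatenate a week's
# worth of meeting transcripts, then ask a large-context model the reflective
# questions Christian describes. Assumes `pip install google-genai` and a
# GEMINI_API_KEY environment variable; paths and model name are illustrative.
import glob
import os

from google import genai

QUESTIONS = [
    "What's the most important thing across all of these conversations "
    "that nobody's paying attention to?",
    "What is the one super precise question you could pose to me such that, "
    "if I went for a walk and thought about it for the next 60 minutes, it "
    "would be likely to yield some kind of meaningful insight?",
]


def weekly_review(transcript_dir: str) -> None:
    # Brute-force concatenation: large-context models can take the week raw.
    corpus = "\n\n---\n\n".join(
        open(path, encoding="utf-8").read()
        for path in sorted(glob.glob(os.path.join(transcript_dir, "*.txt")))
    )
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    for question in QUESTIONS:
        response = client.models.generate_content(
            # Swap in the current large-context model; Christian mentions
            # Gemini 3 Pro later in the episode.
            model="gemini-2.5-pro",
            contents=f"{corpus}\n\n{question}",
        )
        print(f"Q: {question}\nA: {response.text}\n")


if __name__ == "__main__":
    weekly_review("transcripts/this_week")
```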
Joe Casabona: You said a lot of things here. I want to kind of break them down one by one. First of all, writing is mostly thinking. I love that. I have an article on my site called The First Draft Is Where the Magic Happens, where I talk about how the core importance of writing is the writing itself. People will be like, oh yeah, I just have AI write the first draft and I punch it up. And I'm like, that will inherently lack original thought.
Christian Ulstrup: Yes.
Joe Casabona: The thing I say is, next time you're in a meeting with a bunch of people, be the first person to give your opinion and then see how many people join you and then change your mind and see how many people change their mind with you.
Christian Ulstrup: Yeah.
Joe Casabona: It's like we're just highly influenced beings.
Christian Ulstrup: Yeah.
Joe Casabona: And so AI writes something, and you're like, oh, this is really good, I don't need to make a lot of changes to it. Versus you sit with a blank cursor, or, like what you do, you crunch all of this data throughout the week and then you have a starting point that is completely yours. You just go: there's a common thread in these conversations I had, let me pull on that.
Christian Ulstrup: Yeah, a hundred percent. And if you want to even visualize it, the model is, I think there's some sci-fi writer who referred to it as, like a blurry JPEG of the Internet. It's a compressed representation of all the data that it was trained on, and the weights are actually these mathematical representations of relationships between the data points. The fact that that can be output in language that can be understood is really an almost miraculous thing. It's still mind blowing that this stuff works at all, but that's really what it is. And so anything that's beyond the frontier of what it's seen, if you were to try to visualize this as some kind of object in high dimensional space, if you're venturing far outside of that, if you start by going to the model rather than thinking for yourself or experimenting without using AI at all, it's going to draw you back to the center.
And if you're working in a domain where things are known and you can apply that knowledge to solving a problem in a way that's contextually appropriate, that can be really useful. But if you're really out on some frontier, and that's where the best art, the best innovation, the best kind of zero to one material comes from, I think oftentimes, especially at the very beginning, use of AI can be counterproductive. It's going to pull you back in when you really want to have some time, like you said, with the blank cursor, the blank page, struggle with it yourself, have an unmediated experience with reality wherever you are. And then if you document that, if you record that process, and you take that and use it with the AI in this kind of discursive way to figure out what to do next, or to produce content or whatever else, then it can be incredibly useful. But to your point, you've got to venture out to the frontier first, find something good, and then figure out what to do with it.
Joe Casabona: This reminds me, and this is not apocryphal, this actually happened to me. I was trying to fix somebody's computer, and it's not turning on, I don't know what's happening. And I said, is it plugged in? And they said yes. I was like, all right, great, it's plugged in, I don't need to check that. I'm at it for like a half hour, and I'm like, let me make sure the monitor's plugged in, let me make sure. And then I'm like, you know what, let me just check myself to see if it's actually plugged in.
Christian Ulstrup: Yeah.
Joe Casabona: And it wasn't. I was unable to solve my problem because of an assumption I was predisposed to. And that's kind of like going to AI first: you are predisposing yourself to something that might be preventing you from breaking through and solving the problem. There's this push and pull, right? The Renaissance was a bunch of artists building on techniques they learned from the past, and that's really important, taking our knowledge and trying to solve a problem. But you wouldn't paint over a painting from the 1100s to try to get to a brand new technique for painting.
Christian Ulstrup: I think this is a really interesting question of when and how to use it. Somebody should come up with a good term for this, but I think there is a skill, the skill of intuitively knowing whether you're working in a space where things are known or whether you're actually out on the frontier, because not everybody's out on the frontier all the time, and sometimes you need to be. Even in the course of a day, you can step in and out of the manifold, the footprint of what is known, what's represented in the model. And understanding whether you're in or out, and when to use it, is itself a skill that you have to cultivate through practice. Now, the fundamental challenge with all this, and I want to talk about the retrofitting thing for a second, because then we can talk about some concrete examples of this working and not working, and how to scope experiments or applications of the technology in such a way that you're going to get the upside without taking on the negative consequences, getting smooth brained or whatever, right, because you're using ChatGPT too much. The fundamental challenge is that, in fact, a lot of the work that people are paid to do currently is inherited from 20th century bureaucratic roles. And the more highly specialized you are in a larger organization, the more likely you are to be doing, in some cases, relatively rote reading, writing, retrieval, research, and to a degree, reasoning. Although people quibble about whether the AI is reasoning or not, and we don't have to go into that, but little bits of reasoning where you're processing information using inductive, abductive, or deductive reasoning. People are doing lots of tasks that fall into that category because there was no alternative to processing information in that way. Except in the case of Excel and some quantitative data, humans were always the bottleneck. So even if something was known, not just knowledge within a domain amongst experts, but even common knowledge, you have a lot of roles that exist that are just applying that common knowledge in a particular context.
Much if not most work that's happening in services industries in the US and the developed world is the application of things that are known to a particular context, and that knowledge is in the models. So the things that people are getting paid to do, a large percentage of them, and a growing percentage, should be in some sense delegated to AI. Doing workflow transformation, so that both employers and employees can figure out how to rethink how the work is done, is a very challenging thing. That's what we specialize in. But that's kind of the truth now. I think there's time limited arbitrage on this, where some people will be able to do this for a little bit longer, but eventually AI is going to eat up those tasks. And on the other side of that, what happens is the stuff that we're talking about, where you have to fundamentally rethink how you're creating value, how you're participating in the economy, with it being more about doing new things, not just applying common knowledge in some kind of rote process.
That, I think, is going to be the biggest challenge with retooling writ large, because I don't think it's a small sector or a small number of people who have to go through this transformation. I think it's everyone from new graduates to people who are mid career, and then executives at the highest level rethinking fundamentally the shape of the roles, the departments, and even the economic incentives that we've implemented at our companies that allow people to get work done and for us to grow. People eventually have to get there. But I think a lot of what's happening right now is totally rational: the workslop creation and everything else, the delegation of those tasks to AI, as long as you're continuing to get paid for it. So there's a lot of stuff that is known that's in the models, and there's an infinite amount of stuff that's outside that's worth looking into. But we're at the beginning stage of a transition where people are going to have to shift more of their attention toward the second mode of doing things, and that'll take quite a bit of time.
Joe Casabona: Again, you raised a lot of really good points here. I want to make my next point, but first I want to point out this really great episode of Deep Questions by Cal Newport. It's episode 380, ChatGPT is not alive, right? So you mentioned, is it thinking, is it reasoning? Cal Newport lays out a very strong argument for why it is not, at least LLMs as we know them today. I'll just point people to the episode; I think it's really good. But the last bit you said there reminds me of probably one of my first, I hesitate to call anything I write a think piece, but I think some people would call it that, an article I wrote called ChatGPT Is Exposing Our Broken Education System. It came out of a debate, not really an argument,
that I got into with my friend about how teachers were handling AI and saying, like, you can't use ChatGPT at all in your assignments. And I said, basically, if you're just telling a kid to write a paper on the Battle of Gettysburg that is completely factual, without analysis, what is that doing for the kid besides making them memorize facts that they're going to half remember in 20 years and be like, oh, I think I learned that one time, right? There's no critical thinking in that. Maybe you're learning how to research, but using generative AI in research, and then ideally fact-checking what it comes up with, is research today, right? Just like using Wikipedia as a starting point is how we did research 20 years ago. So more recently I weighed in on a question of, are computer science teachers getting left behind by banning vibe coding in their classes? And my take is basically, sure, you can vibe code if you understand the code that's being written. Understanding what's being generated is very important, and you don't understand it until you actually do it. So I think for an intro level programming class you should ban vibe coding, because the students need to learn and understand. Otherwise in 10 years we're going to have a bunch of programmers and project managers who have no idea how their code works.
Christian Ulstrup: Code works to kind of steal me on the other side of this. Almost nobody is mucking around with assembly language. Most software engineers are not messing with compilers. Like they don't understand the nuances of a lot of that stuff and they don't have to. So like layers of abstraction, as long as they're reliable enough, trustworthy enough, are incredibly useful primitives because they allow you to operate it sort of a higher level and thus create more value and so on and so forth. I think we're in a weird liminal space where this stuff is immediately accessible. The updates are distributed at the speed of light, which is unprecedented as I think in human history as far as like general purpose technologies are concerned. And how to again, this idea of reform, how to rethink how this fits in, seeing as it's not going away into something that is good, whether it's in work or education or whatever else that helps the students and the teachers and employers and employees and the participants, like achieve the sort of overarching goal that the institutions and everything exist to produce.
I just think we're going through this crazy period of experimentation, and eventually good and best practices will emerge. But clearly, to your point, the old structures and this new technology have gone very much disjoint in a lot of ways. And so a kind of useful tool, I think, for thinking about this is actually Aristotelian causality. I'm a fan of Mr. Aristotle. His model of causality was that there are four causes. The modern mind, we think mostly about cause and effect.
It's sort of this one way thing: you do something, you get a result. But he basically called out that there are three other causes that are just as important. And I would argue, because we don't think natively in this way, that in the context of AI they go super unattended. I'll draw this back to the education piece in a second.
Joe Casabona: Trust me, this is great. This is the first time Aristotelian causality has ever been mentioned in over 500 episodes of this show, so I am psyched, as a liberal arts college educated person who had to take philosophy classes.
Christian Ulstrup: Yes, we're occupying a nice little niche. I will say, in general, I think Aristotle and Aquinas are two of probably the most important thinkers that you could go back to if you want to take AI seriously. That's a whole tangent; the way I think about the technology is super influenced by those two. They talk about the primacy of the will or the intellect and all these sorts of things, and what does that mean for model representation, latent desires. There's a lot we could talk about there. But Aristotelian causality: you have what he called efficient causes. That's kind of the way we think of cause and effect, billiard balls hitting each other or whatever, right? Then there are material causes, which is the stuff that something's made of.
And I think of AI very much as a material cause. It's this stuff that's now, like electricity, kind of getting into everything. It's sort of there, and it clearly affects the environment and the context and the people that it goes into. And then the two other ones that I think are most important to think about, very practically, for leaders or teachers or whoever else, would be final causes: what does this overall system exist to do? So in the context of education, what is the point of having somebody at school? It's a big question that we could go into, but I think that question is not being asked enough in a business context.
I think it was Drucker, or maybe it was Milton Friedman: the purpose of a business is to create customers, basically. And so that's an easy final cause to come back to. Okay, are we creating customers? Are we making them 10 times better off, the people that we have currently? That's a question you can always come back to. If not, why not? What would have to be true for that to be the case? That's usually where we start our engagements: really getting to the essence of that and working backward to figure out how to get there 10 times faster. So you figure out what that final cause is. If it's for students to have the skills to enter the workforce, that's one thing. But I think getting answers to that question first is important, because then what you can do is look at what are called the formal causes.
So how is education structured? How is employment structured? What are the economic incentives? How do you know what good looks like? What are the success criteria? What are people allowed to do and not do? When you're talking about AI can't be used, that's a formal cause. When you look at, you're going to have to take a test, and to take the test you give these answers and you get an A, that's a kind of formal cause. So if you start to rethink this and you go, okay, the point is for people to develop critical thinking skills such that they can do new things under conditions of uncertainty, which we should expect to be the case in the economy going forward. Because if AI commoditizes everything that's known, and things that were privileged knowledge become common knowledge faster than ever, then what's scarce at the end of the day? It's risk and uncertainty. And so if people need to surf on that frontier to participate in the economy, then what you should be doing is figuring out how to arm students with the skills that they need to thrive in that context. To skip to the tentative answer around this, it probably ends up looking something more like project based learning. And so there are lots of interesting nuances around
how you track and monitor, how you give feedback, and everything else. But I think whether it's in education or in work, ultimately reforming the structures such that you make it really easy for people to come together and work toward some highly desirable but uncertain goal, that's where everything needs to go. And that is absolutely not how education, or the economy writ large, is structured for most participants at the moment.
Joe Casabona: You nailed my biggest concern about AI on the head. And again, you did it with classic philosophy, which is fantastic. I went to the University of Scranton, so we're big on Aquinas, Socrates, Aristotle, Ignatius.
Christian Ulstrup: Good.
Joe Casabona: The critical thinking is extremely worrying to me. A very casual example: my wife and I were going to the Skubal Trail, and we were thinking about activities to do in case it was too cold to actually go into the creek (it was fine). And she said, oh, I just asked ChatGPT to come up with a scavenger hunt for kids 4 to 8 years old. And I said, couldn't we have thought of that ourselves? That sort of thing seems so benign, but it's outsourcing a very gettable thing for us. Whereas I think a perfect use of ChatGPT was when she planned a clandestine surprise 40th birthday party for me over the course of a year. And it was incredible. I've talked about it on the show before, but it was just incredible. She used ChatGPT to generate a mockup of a cake to give to a baker.
I'll include a link in the show notes again, but it was awesome. It included cigars and whiskey, in fondant, not actual cigars and whiskey in the cake, and it was wonderful. And I was like, oh, this is great. Now you're using the tool to do something you couldn't possibly do yourself, to clearly communicate a craft that was made in the real world.
Christian Ulstrup: Yes.
Joe Casabona: Those two examples are always front and center in my mind, right? Because one is, I don't feel like thinking, so I'm going to outsource it to a machine. And the other is, this is machine-assisted skill.
Christian Ulstrup: A hundred percent. And I think the simplest way I put it is: technology is good so long as it helps you go deeper into reality and not flee from it.
Joe Casabona: Love that.
Christian Ulstrup: So are you, a hundred percent, going deeper into reality? Are you drawing from the common knowledge pool in a prudent way, and then applying it to make contact with reality, get new data, and then you learn and close the loop, and at the end of the day you produce something that's of higher quality?
Joe Casabona: In defense of my wife, I went through this whole thing the summer before, right? I was doing the same things that she was doing, and I was like, am I losing my critical thinking skills? I mean, there's an episode about it. This is so important. And I think you're right. Schools were originally set up to prepare kids for laborious jobs and then factory work, right? The eight hour structure is like, hey, we need to train you to be able to sit for eight hours and do some benign task.
Christian Ulstrup: Yeah.
Joe Casabona: And it's still like that. It needs to change. And maybe AI is the accelerant to actually make it change.
Christian Ulstrup: I think it's the material cause. This has been really fun and abstract, so I'll give you some concrete examples of how we've applied this and helped customers do the same. My firm is myself, and then I have associates. It's structured not like a traditional consulting firm: nobody's full time, everyone's fractional, we're mostly remote and async. We have basically an optional meeting once every two weeks.
Right now it's every week because we're doing this growth sprint or whatever, but that's it. And everyone has the freedom and flexibility to do the work however they want. Now, what makes this work is the way that we've structured the economic engagement: ideally a hundred percent, in reality more like 50 to 80%, of the compensation of one of these associates who's a part of GSD, a part of my firm, is at risk. When we start engagements, we don't say, okay, we're going to do a year long consulting engagement where we transform your company, and that's, you know, $500,000 or $1 million or whatever. We don't do that. We don't charge for discovery at all. We try to figure out as quickly as possible, using a lot of different AI tools, what are the one to three highest leverage workflow transformations within your organization that are going to deliver real ROI and results now, and give somebody who's going to get hands on with the tool a real magic moment, where through that they get fired up, and they learn some general purpose techniques that make it easier for them to evangelize the technology and get other people interested and experimenting.
Right. Because the experimentation is where the good use cases come from. And so the way that we do this is we do the discovery, and then we formalize those prospective engagements as time-bounded, incontrovertibly binary outcome goals. Through that process, to get to the AI point, it's like: okay, I know what the goal is, I know what the constraints are, I know what tools I have available, I know who the people are. How can I use my critical thinking skills and the visceral decision-making intuition that I'm honing? Because there's real money at stake and there's a time limit, it makes me quite uncomfortable. That's a good experience to have, I think, one you don't get in the classroom setting maybe as much as people should. And so every decision I make really counts, because we have limited resources, and it just forces everyone who's doing it in this way, the associates and myself when I do them, to constantly rethink their process. And it's this continuous process of transformation.
Joe Casabona: I do want to stop you here, because I want to ask a question that is not really related to AI, but just kind of about the economics of this, recognizing that the associates are people on your side, right, who are solving the problem. Yes, they have agency, of course, but it sounds insanely risky for them to take on this sort of work relationship. Am I misreading that, or is that accurate?
Christian Ulstrup: I didn't know if this was going to work the first few times I tried it. What I have found in practice is they love it, for a couple of reasons. I'm not going out recruiting people; all these people came to me because they were following me on social media or they knew what I was doing and they were curious about it. They're already kind of like, oh, this is cool, and I want to try new things, right? So I'm not saying this works for everybody, all the time, everywhere, right now. I'm just saying it works in our context. And I wasn't sure it was going to work. We've adjusted the methodology and implemented some good practices: being really rigorous around some of the scoping, being really rigorous around syncing
if it's over the course of multiple weeks or months. There are little things like that that we've learned when we have failed, because that has happened, and they've had to cough up money. And it's like, well, you put 10, 15, 20 hours into this and you lost money and you got it 80% of the way; it feels pretty bad. But in reality, what tends to happen is: that happens, we have a formal debrief, they cough up the cash, I literally send them an invoice and they pay it. And then what we do is, okay, we made a prediction, this is what actually happened, now we're going to explain the gap in between.
And really having that space to take stock of all those insights, you're like, huh, here are all the things that I learned. Then you go, okay, what do we do next? And usually what happens is you go back to the sponsor and you say, we got it 80% of the way there, let's try this again with this date, and maybe change one or two of the success criteria or something like that, at the same rate. And generally they say yes. So what happens is you kind of, quote, fail that first time, and you really feel it, which causes you to learn. But then it's like, I think we can get this over the finish line.
Joe Casabona: It's almost like higher stakes academic research: you're not awarded the grant money until you solve the problem. Like you said, it's not for everybody. It sounded like spec work to me initially, but it's not really that at all. It sounds closer to a research project, where you get paid out for completing the research.
Christian Ulstrup: There's a lot of uncertainty, because you're tying the outcome to something that is at least 50% out of your control. You're like, okay, well, how do I reduce that uncertainty through the actions and methodologies I've developed? And this is a good forcing function, too, to get people to take a really deep interest in the customers and the context and everything as well. Because if you don't have that, you're going to have all this stuff that you don't know you don't know about.
Joe Casabona: We've also learned that after a certain salary point, money is not the thing that causes happiness. You need people in employment to actually care about what they're doing, and it sounds like this makes them care a little bit more.
Christian Ulstrup: I think so. I mean, it raises the stakes. I'll give you just one more example, because it's always good to make this stuff concrete. In the case of this enterprise that does lots of recruiting, the overarching goal was very clear: we want to boost the quality of the hires and make it more efficient, all these different things, good reasons for wanting to do this. And so when we did this the first time, we set this very ambitious goal: there are these two roles that we need to fill by a certain time. It's like, okay, this is pretty aggressive.
Let's figure out what we can do. And so we ended up fundamentally transforming their process. Instead of all these intermediate layers of translation, between the initial discussion with the hiring manager, which wasn't being recorded at all, all the way to getting stuff into the ATS and turning it into a template for the job description, what you can do is just compress that down to a really high quality one hour interview between the person who is responsible for filling the role, the recruiter or whatever, and the hiring manager. Record it. Take that transcript. Use the state of the art AI model, and if there's additional context, add salient snippets from the code base or other material that has more information about the nature of the needs the team has. It does an incredible job of producing job descriptions that are very precise, with a lot of information that, to be honest, was probably illegible to the recruiter, because they can't be experts in everything when it comes to these technical roles in particular. And that happens immediately, versus the many layers it was going through before. It's like, okay, that's big.
And now the question is, you've got the usual places where you post this to attract inbound, and it's very competitive, increasingly saturated; it's hard to get attention. So let's use the deep research models. They're great across Anthropic, Gemini, and ChatGPT, and the deep research agents are all pretty differentiated. A little alpha for folks: if you're going to do a really hard research question, pose it to all three and see what they come up with, because there will be gaps. We asked this question of, okay, given the content of that original interview, if I were looking for exactly the person who would be best equipped to do this, what are three to five super niche digital watering holes that exist out on the Internet, that I almost certainly have never heard of, that we should go into and put this opportunity in front of to see who bites? And it comes back, and it's always like a random Slack group, this WhatsApp group.
Here's some forum you've never heard of. All these different things where it was surprising, and you go look at it and you're like, that makes a lot of sense. Let's go figure out the one to five steps I need to take to gain enough credibility to get into the community, and then you just use AI to get it over the finish line. And so they're able to now target these folks even beyond just the tentpole channels. And then finally, at the end, of course, there's stuff when it comes to application analysis, finding needles in the haystack. We actually open sourced some of the software that we built to help recruiters do this through Greenhouse within just a few key presses, using voice assistants to do interviews to collect more data, things like that. There's other stuff that you can do downstream.
But my point is, all of those techniques, you might be able to come up with them in the abstract; they kind of make sense. But having the forcing function where you're like, we actually have to figure out how to effect this result, what are the highest leverage things that we can actually come up with, test, and then hand off to somebody who's maybe not an AI expert, so they can incorporate it into their workflow and use it to accomplish the marginal result, which in this case is filling another role? That's a hard problem. It was perfectly scoped, and it turbocharged learning and ended up being worth a lot to everybody. And leaving the "how you do it" open ended is just a really nice fit in terms of how the engagements work, given that you have a technology that's expanding in scope.
Joe Casabona: It's the difference between "wouldn't it be cool if" and "I have to make this work." Yeah, it's the difference between an academic problem and a real world problem. My 8 year old, I said, oh, this is such an academic problem, and she's like, what do you mean by that? I had to explain it. And I think there's the thought experiment, and then there's the practicality. We've been talking for a while, and I want to wrap up, but I want to bring it back to what I think is probably the thesis of this conversation. You started with it, and you ended on this really good example of it: data crunching. You used the term digital exhaust.
Christian Ulstrup: Yep.
Joe Casabona: I think these are really practical ways to use it. As we record this, it is before Christmas; this is coming out after. I am in my two week end-of-the-year sprint because I'm taking 17 days off to match my kids' break. I want that whole time off, and I have been having tons of meetings and recordings. Forget about the end of the week; by the end of the day, I barely remember what I talked about at the beginning of the day. But, full disclosure, they are a sponsor of mine right now, Radiant.
I have Radiant running on my computer every time I'm in a call, and this interview too. I can, like you said, take all of that and process it the way I see best. How am I going to make more content from this? What big questions are people asking me in discovery calls, et cetera. So what is a practical way for people to take this? I want to say, it's very clear that you're extremely intelligent, so for some of our non-technical listeners, what can they do to kind of take this digital exhaust and compress it?
Christian Ulstrup: That's a great question. All right, so here's what I would do. If you are not already using a meeting recorder, I definitely would get that installed.
Joe Casabona: Streamlined.fm/radiant.
Christian Ulstrup: Perfect. It was a good practice. I'd say it's a best practice at this point.
Joe Casabona: I'll also just add on to that: for a while I was like, oh, I should disclose that my meeting notes app is going. And people are like, I just assume that now.
Christian Ulstrup: You do have to assume that now. I still think it's a good practice to disclose it. I use another vendor that'll send people a heads up or whatever. But yes, the incentive is too strong, for better or for worse, for that not to be the case. So yeah, everything being instrumented and recorded, that is the norm. I think there will be private spaces and everything, but that is the world we're heading into.
The incentives are too great. Okay, but if you're not doing that, you should be. Even if you haven't done that, I want to give you something very practical that you can do today. You're probably already producing a lot of digital exhaust even if you're not using the meeting recordings. Best case scenario, you have lots of rich conversations, client interactions, whatever else, and you can get all that from the vendor that you mentioned. I do it programmatically with Fireflies, because they have a really robust API, so I use Claude Code and whatever just to pull all this stuff really easily.
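For listeners who want to try the programmatic route Christian mentions, here is a rough sketch of pulling recent transcripts from Fireflies' GraphQL API with plain requests. The endpoint and bearer-token auth come from Fireflies' public docs, but the exact query fields are assumptions to verify against the current API reference before relying on this.

```python
# A sketch of pulling recent transcripts from the Fireflies GraphQL API and
# flattening them into one text blob, ready to paste (or pipe) into a
# large-context model. Field names are assumptions; check Fireflies' docs.
import os

import requests

FIREFLIES_URL = "https://api.fireflies.ai/graphql"

QUERY = """
query RecentTranscripts($limit: Int) {
  transcripts(limit: $limit) {
    title
    date
    sentences { speaker_name text }
  }
}
"""


def fetch_transcripts(limit: int = 25) -> str:
    response = requests.post(
        FIREFLIES_URL,
        json={"query": QUERY, "variables": {"limit": limit}},
        headers={"Authorization": f"Bearer {os.environ['FIREFLIES_API_KEY']}"},
        timeout=60,
    )
    response.raise_for_status()
    chunks = []
    for t in response.json()["data"]["transcripts"]:
        lines = "\n".join(
            f"{s['speaker_name']}: {s['text']}" for s in t["sentences"]
        )
        chunks.append(f"# {t['title']} ({t['date']})\n{lines}")
    return "\n\n---\n\n".join(chunks)


if __name__ == "__main__":
    print(fetch_transcripts()[:2000])
```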
You take as big a sample as possible, and then you want to go to Google AI Studio, is what I would recommend, and choose Gemini 3 Pro. Literally just brute force copy and paste all that text over into it, and then start asking questions. I'll give you a couple of good beginning ones. What's the most important thing that nobody's paying attention to? What's something that I can do in 60 seconds or less, that I definitely haven't committed to, that would improve my situation or somebody else's, that I just don't know I don't know about? That can be good information right there, and it might be enough that it compels you to take some kind of action. But if you want to create content and do writing, very different than just telling ChatGPT to write a blog post, to get the most value, what I would do is prompt it. And this is the million token context window, so put in as much of the recordings and stuff as possible.
Say: I want you to pose to me three extremely precise, very hard-hitting questions, all of which are independent and have different coverage, that, if sufficiently answered, would yield some kind of transformative, genuinely novel, impactful insight, and maybe make reference to the primary source material where appropriate in the context of the question. That's a good prompt. You can put it in the show notes or something like that.
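Since Christian offers to put the prompt in the show notes, here it is as a copy-pasteable string, formatted so it could be appended to a transcript corpus like the one in the earlier sketch:

```python
# Christian's three-questions prompt, lightly formatted as a constant that can
# be appended to a transcript corpus (for example, in the weekly_review sketch
# above) and sent to a large-context model.
THREE_QUESTIONS_PROMPT = (
    "I want you to pose to me three extremely precise, very hard-hitting "
    "questions, all of which are independent and have different coverage, "
    "that, if sufficiently answered, would yield some kind of transformative, "
    "genuinely novel, impactful insight. Make reference to the primary source "
    "material where appropriate in the context of each question."
)
```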
Joe Casabona: Yeah, this will be in the transcript. I make all the transcripts of the show.
Christian Ulstrup: This will be in the transcript, yeah. I'm going to make this super practical, though, so people can get this over the finish line. You do that, and it's going to give you those three questions. Don't look at them; kind of cover your eyes and copy and paste them into Apple Notes if you have an iPhone. And then what I want you to do is put on a jacket if it's cold (it's very cold here), put on your shoes, and carve out at least an hour to go for a walk. Throw in the AirPods, and when you're out there, open the note with the questions, look at just the first one, and read it very carefully. Turn on an audio memo on your phone and just talk, just monologue, trying to grapple with that question for as long as it takes until you feel satisfied.
Like, ah, that's interesting. Mumble everything. Every single thought that goes through your head as you're trying to figure this out, you're asking yourself questions. You're like, I don't know about this. It's a good point. I didn't think about that. Whatever, right? Just keep going to get all that material out of your head and then once you're satisfied, you go to the second question, go to the third one and by the end of it you might find that you have a monologue that's anywhere from 40 minutes to two hours. Sometimes these questions are really good.
You have to work through things. Then bring that back over; it can either be a fresh session or the same session, and you can do this as many times as you want. At that point there are probably one to three things that came out of it that are really, really good insights. And then, whether you use AI or not, you just distill it into something that you can share through the appropriate channel. That's a great way to use this to create really high quality content. One other thing I'll say about this: if you don't have the recorders, the other potential goldmine of digital exhaust would be your email outbox. A really easy way to get all those emails, in the same way that you would the transcripts, just as text that you can copy and paste, is to use a plugin called Mail Meteor for Gmail. You can select all or a sample of emails, whatever filters you set up in Gmail, and export all of that, and that you can use as digital exhaust.
Obviously it's not going to be as high fidelity as what you're going to get from conversations, but it's still most likely enough that you could put this into practice. That would be my recommendation. If you want to go crazy with the multimedia, you have a little bit of text: go create an image with Nano Banana and be creative with some really esoteric artist or style that calls to you; use your intuition or taste around that. I like Richard Scarry, so I just did a fun one where it was me with my VR headset on, walking on this ridiculous walking treadmill I have in my home office. It was on LinkedIn. And yeah, you can use Nano Banana Pro for the image, animate it with Veo, and now you've got a nice little piece of content that's not slop.
It was AI catalyzed in the right ways. But at the end of the day, it's all about writing is mostly thinking, like we said: getting the higher quality thoughts out of your head such that you can share them with the world and keep things moving forward, whatever your project is. So go for quality. What I just outlined is a super straightforward way to do this with the general purpose tools everyone should have access to at this point.
Joe Casabona: Really strong way to end today's episode. Christian Ulstrup, thank you so much for joining us today. If people want to learn more about you, which everyone should, where can they find you?
Christian Ulstrup: It's a great question. So I post a lot of content on LinkedIn. If you go to ChristianUlstrup.com, that'll redirect to my LinkedIn page, and you can follow me, send me a connection request, anything else. And then if you're interested in workflow transformation specifically, which is what we do with the business, all of our offerings, case studies, everything else, that's available through GSD, Get Stuff Done is the name of the firm, at gsdat.work. That's G-S-D-A-T dot W-O-R-K. You can learn about our firm, our services, and what we do. So those would be the best two ways to get in touch.
I hope to hear from folks in the audience.
Joe Casabona: Yeah, I will link all of that in the description below and over at streamlined.fm, as well as all of the episodes we referenced, the four causes, and some of the tools we mentioned as well. Christian, thanks so much for joining us today. I really appreciate it.
Christian Ulstrup: Joe, thank you.
Joe Casabona: And thank you for listening. Until next time, I hope you find some space in your week.