Automate the Mundane, Elevate the Humane

There's a moment in every platform migration where someone has to say the uncomfortable thing out loud: we made the wrong call, and every day we don't fix it costs more than admitting it.
For BumptUp, that moment arrived about a year into a Kotlin Multiplatform build. The app worked. The engineering was sound. But every feature update, every bug fix, every small tweak meant touching what amounted to separate codebases for iOS and Android. For a startup running on grant funding with infrequent budget cycles, we were essentially paying twice for everything. The hemorrhaging was slow enough to rationalize and fast enough to be fatal.
I'd argued for React Native from the start. Lost that debate at the agency level. The engineering team wanted the architectural maturity of KMP, and they had reasonable arguments for it. But when I came on as Fractional CTO and we moved to a new development partner, the math hadn't changed. If anything, it had gotten worse. The question wasn't whether to migrate. It was whether the founder and the primary investor could stomach the sunk cost conversation.
That conversation is never fun. You're essentially saying the money you've already spent is gone, and spending more on the same path won't get it back. The sooner we stop, the less we lose going forward. I had to socialize that without making anyone feel stupid for the original decision. (Nobody made the wrong call on purpose. The information changed. That's allowed.)
The new development agency agreed on React Native. They also wanted to explore something I hadn't fully considered yet: AI-assisted workflows for the migration itself.
The Translation
Here's what I expected. AI would handle some boilerplate translation, engineers would do the real work, and we'd save maybe 20% of the timeline.
Here's what actually happened. The AI-assisted workflow translated the Kotlin Multiplatform codebase to React Native with enough fidelity that the engineering team could focus on the hard domain-specific decisions rather than grinding through one-to-one code conversion. Direct translations of database structures, styling, and reporting systems? Really solid. The agent handled those cleanly.
Where it fell apart was where you'd expect. Database merges between the two ecosystems. Complex integrations where the business logic was subtle enough that an automated tool couldn't infer intent. The development agency's engineers earned their money there, and I was glad to have them. That work would have been well beyond my comfort level as a product-side CTO.
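To give a flavor of what "clean translation" means in practice, here's a hypothetical sketch of the kind of one-to-one model conversion an agent handles well. The names and fields are invented for illustration, not BumptUp's actual schema:

```typescript
// Original shared Kotlin model (illustrative, not the real codebase):
//
//   data class ReportEntry(
//       val id: String,
//       val createdAt: Long,
//       val summary: String? = null
//   )
//
// Translated TypeScript equivalent for the React Native codebase:
interface ReportEntry {
  id: string;
  createdAt: number; // epoch millis, mirroring the Kotlin Long
  summary?: string;  // Kotlin's nullable String? becomes an optional field
}

// A small row mapper of the sort such a translation layer needs:
function reportFromRow(row: Record<string, unknown>): ReportEntry {
  return {
    id: String(row.id),
    createdAt: Number(row.createdAt),
    summary: row.summary == null ? undefined : String(row.summary),
  };
}
```

Mechanical mappings like this are exactly where automated translation shines: the intent is fully visible in the types.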
The final numbers: three months instead of eighteen. About 4% of the original build cost. One-sixth the time, one-twenty-fifth the price. And the output wasn't just equivalent. We actually gained some UX improvements along the way, patterns that were either recommended by the agent or simply easier to implement in the new framework.
That was my first real, viable client exploration of AI. And it changed how I think about every project since.
What Tuesday Actually Looks Like
The BumptUp rewrite was dramatic. The day-to-day reality of AI in my work is less cinematic but arguably more useful.
The biggest shift in my daily work has been standing up a local version of the BumptUp app that lets me prototype features using the actual codebase and React Native vernacular. I make styling changes, test interactions, and share them with the development agency. Not as production code, but as super high-fidelity prototypes that speak the language of the app as it's actually built. No more translating from Figma to engineering handoff. The prototype *is* the handoff.
Beyond that, I've been building tools. One is a design system generator. Not the typical theme generator that spits out color tokens, but a tool that takes a founder or small team's input about their brand, their feel, their product purpose, and generates a well-documented, cohesive design system they can adopt more or less wholesale. Another is a product called Sneaky Stacks that competes with Goodreads. (My wife and I are both professional folks who try to be at least a little public with our ideas, and Goodreads makes it far too easy for people to discover that we both love very weird books. Sneaky Stacks lets you document your reading without the public profile that exposes your taste in bizarre fiction.)
I've also been developing voice and tone systems that let me generate interview scripts, evaluate writing for cohesion across pieces, and maintain consistency at a level that would take hours manually. And I'm working with an AI startup in Austin, helping them stand up marketing sites and internal design systems quickly, figuring out how to let multiple people contribute copy and content within coding environments without being destructive to the output.
The through-line across all of it: AI lets me make more than one of me. I've worked from home for over a decade. It's just me. There are only 24 hours in a day and I'm only going to work eight or ten of them. What AI has done is let me splinter off the replicable parts of myself and assign them tasks. Build an army of junior-level production support to execute on work I know how to do, that I can brief effectively, without having to do it all myself. That frees me to apply domain expertise to the things I'm confident delivering at a high level. Systems thinking. Process strategy. Product design. The heavy lifting gets delegated. The heavy thinking stays with me.
Where to Start (If You Haven't)
If I were advising a design team that hasn't started using AI, I wouldn't tell them to start generating layouts or writing copy or automating their Figma libraries. I'd tell them to use it for critiques.
Designers have a long-standing problem. You can only know as much as you know. Which is why I've always believed in having engineering, product management, producers, and marketers at the table before anyone opens a sketchbook. We needlessly constrain solution sets when we design in isolation.
AI can break that constraint without touching your workflow, your product, your Figma file, or your code. Take your context, your rationale, your output, and ask it to evaluate. Where did you miss something? What haven't you considered? What questions would a skeptical stakeholder ask?
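One way to structure that ask, sketched in TypeScript. The fields and wording here are one possible format, not a prescription:

```typescript
// What you hand the model: the same things you'd hand a human reviewer.
interface CritiqueInput {
  context: string;   // who the product serves, constraints, goals
  rationale: string; // why you made the decisions you made
  output: string;    // a description or export of the design itself
}

// Assemble a critique prompt that plays the skeptical stakeholder.
function buildCritiquePrompt(d: CritiqueInput): string {
  return [
    "You are a skeptical cross-functional stakeholder reviewing a design.",
    `Context: ${d.context}`,
    `Designer's rationale: ${d.rationale}`,
    `The design: ${d.output}`,
    "Where did the designer miss something?",
    "What hasn't been considered?",
    "What questions would engineering, product, and marketing ask?",
  ].join("\n");
}
```

Notice that most of the work is articulating context and rationale, which is the exercise itself.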
Here's the real test. If you can't explain your design decisions effectively to an LLM, you can't explain them to stakeholders or clients either. Using AI as a criticism engine develops your thinking and your ability to defend your work. It's a low-risk, high-value starting point that makes every other AI application easier to adopt later.
What Keeps Me Up at Night
I was skeptical early on. The doom and gloom hit the design community hard, and I sat in it for a while. The reality, as usual, is somewhere in the middle. AI will replace a significant number of tasks within any given role. If you happen to be someone whose job is mostly production without a ton of individual expression, that's a real problem.
But that's not what worries me most.
What worries me is the Dunning-Kruger version of AI adoption. Someone without deep domain expertise generates output using AI, and because the output uses the right terms, references the right concepts, applies patterns that *look* professional... they think it's good. They don't know what they don't know. The quality is lacking, but the ignorance is comfortable.
AI tooling can only represent things that already exist. It can recombine, it can apply, it can translate. But it can't generate net-new thinking. It can't place bets. It can't take the kind of risk that creates something genuinely better than what came before. And if we're only using tools that tell us what already exists, we're only ever creating things that already exist.
That's great for a lot of applications. But it's also how you end up with a world of products that are super homogenous. Remember when WordPress made it really easy to make a website and really hard to make a good one? I think that's where we're headed with AI-generated products if we're not careful. Not replacing great designers or great design thinking, but enabling super-rapid shipping of products that are fundamentally less than they could be.
The Actual Point
If there's one thing I want someone to understand about how I use AI, it's this. I don't use it to do the heavy thinking. I use it to do the heavy lifting.
Cognitively, my job isn't easier. If anything, the decisions I'm making and the depth at which I'm making them are greater than at any point in my career. The difference is I'm making them faster, because I'm not forced to then go document every rationale, build every argument artifact, capture every decision in a deck that exists primarily to prove I made it.
The thinking got harder. The execution got faster. That's the trade. And for someone whose entire career has been about making the foundation solid so the interesting work can happen... that's not a threat. That's the point.