What User Research Taught Me as a Technical Founder (With AI in the Loop)
In my previous post, I shared the story of how I went from a burned-out developer to the founder of Menuvivo. I built an MVP in 15 days, full of excitement.
But then the excitement settled, and a familiar fear crept in. The same fear that killed my previous project, Fubito: What if I’m building something nobody wants?
As a developer and software architect, I used to treat user research as something slightly mystical.
Useful? Sure.
But also: “That’s a UX thing. I should focus on architecture and shipping features.”
This article is a story of how that changed.
Over the last months I:
- learned a structured, lightweight way to run user research (with AI as a copilot),
- applied it first to a learning case (FlowCraft during AI Product Heroes course) and then to my own product Menuvivo,
- and ended up reshaping my product strategy based on real conversations instead of assumptions.
I’m writing this for:
- solo founders and technical founders who avoid user interviews,
- product managers who want a more developer-friendly way to do discovery,
- anyone curious how AI can act as a research assistant and coach, not just a text generator.
At the end, I’ll also share the templates, prompts and reports I actually use so you can adapt them to your own product.
Table of contents:
- Why I Was Avoiding User Research
- The Turning Point: FlowCraft and a Synthetic User
- Bringing the Process to My Own Product: Menuvivo
- Warming Up on a Synthetic User
- Running the Real Interviews
- The Analysis Pipeline: From Transcripts to Insights
- What I Actually Learned About My Users
- How This Changed Menuvivo’s Direction
- Lessons for Technical Founders
- How You Can Try This Next Week
- Resources: Templates, Prompts and Reports
Why I Was Avoiding User Research
I’ve been building software professionally for years. I’m comfortable with complex architectures, distributed systems, and production incidents.
What I was not comfortable with was this:
“Jump on a call with a stranger and ask them about their life, their problems and why your idea might not make any sense.”
Underneath that were a few very human fears:
- Fear of judgement – “What will they think of me and my product?”
- Fear of invalidation – “What if it turns out my idea is bad and I’m wasting my time?” (I had already spent months building Fubito in isolation only to hear crickets. I didn’t want a repeat.)
- Fear of not knowing how – “What if I ask the wrong questions and look unprofessional?”
So my default coping strategy was classic developer behaviour:
“Let me just think harder, design better, and write more code.”
It felt productive. It wasn’t.
The Turning Point: FlowCraft and a Synthetic User
The turning point came during the AI Product Heroes program. Our case study was FlowCraft – a tool for managing product work. As part of the curriculum, we had to run user research using a synthetic user in GPT.
The idea was simple but powerful:
- You design a research guide and interview script.
- You configure AI to act as a realistic, constrained synthetic user.
- You run “practice interviews” with this synthetic user.
- You analyse transcripts, extract insights and refine your questions.
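To make the "constrained synthetic user" idea concrete, here is a minimal sketch of how such a persona can be configured as a system prompt. The persona fields and rules are my own illustrative assumptions, not the actual template from the course:

```python
# Sketch: build a system prompt that turns a chat model into a
# constrained synthetic user for practice interviews.
# Persona fields and rules below are illustrative assumptions,
# not the course's actual template.

def synthetic_user_prompt(name: str, context: str, pains: list[str]) -> str:
    pain_lines = "\n".join(f"- {p}" for p in pains)
    return (
        f"You are {name}, a realistic interview participant.\n"
        f"Context: {context}\n"
        f"Known frustrations:\n{pain_lines}\n"
        "Rules:\n"
        "- Answer only from this persona's experience; invent plausible details.\n"
        "- Keep answers conversational (2-5 sentences), sometimes rambling.\n"
        "- Never mention that you are an AI or break character.\n"
        "- Do not volunteer solutions; talk about your routines and feelings."
    )

prompt = synthetic_user_prompt(
    "Anna",
    "working parent of two, plans meals on Sundays",
    ["throws away forgotten leftovers", "buys duplicates of pantry items"],
)
```

You would paste the result in as the system message before starting a practice interview; the "rules" section is what keeps the model from drifting into helpful-assistant mode.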
Two things happened very quickly:
- I realised how much structure there is behind good interviews (it’s not just “talk to users”).
- Practising with a synthetic user made interviews feel less scary and more like a skill I could learn.
The most important mindset shift I took from the program was this:
An experiment that disproves your hypothesis is still a successful experiment.
It saves you time, money and months of coding in the wrong direction.
Once that clicked, something in my brain relaxed. I no longer had to protect my idea. I had to stress-test it.
Bringing the Process to My Own Product: Menuvivo
As a side hustle, I’m building Menuvivo – a SaaS app that helps households:
- keep track of what food they actually have,
- plan healthy meals based on those ingredients,
- and create shopping lists that reduce chaos and waste.
Before doing any research, I already had:
- a North Star for Menuvivo (what success means for the user and for the business),
- a lot of feature ideas and architectural decisions,
- a mental picture of “the user” – but mostly based on assumptions, friends, and my own habits.
After the FlowCraft case I thought:
“If this research process works for FlowCraft, why not apply the same rigour to Menuvivo?”
So I did.
Step 1: Codifying the Research Process (With AI)
I took my notes and transcripts from the AI Product Heroes sessions and asked AI to help me build a compact research playbook for Menuvivo:
- what are the key steps,
- what to prepare before interviews,
- how to structure the sessions,
- how to analyse the data afterwards.
The result was a Menuvivo User Research Guidebook (PL):
- clear steps from hypothesis to interview to synthesis,
- simple checklists I could follow before each interview,
- interview do’s and don’ts tailored to my context.
Step 2: Turning Assumptions into Explicit Hypotheses
Next, I used AI plus my existing product documentation to extract and structure my assumptions as explicit product hypotheses, for example:
- H1: “Users are willing to regularly take photos of their fridge/pantry if it results in real time and money savings.”
- H3: “An automatic meal plan matched to inventory and preferences increases cooking frequency and reduces the daily ‘what to eat today?’ dilemma.”
- H5: “Users feel the pain of throwing away food and want to counteract it.”
- H7: “Differing preferences and allergies within a family are a frequent case; users expect ‘one plan, minor variants’.”
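If you like keeping things explicit in code, the hypothesis list can be treated as a set of small records with a validation status that later interview findings get mapped onto. A minimal sketch (field names and statuses are my own, not from the Menuvivo docs):

```python
# Sketch: hypotheses as explicit records with a validation status,
# so interview findings can be mapped onto them one by one.
# Field names and status values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    hid: str                  # e.g. "H1"
    statement: str            # the assumption, phrased testably
    status: str = "untested"  # untested | supported | challenged | reframed
    evidence: list[str] = field(default_factory=list)  # supporting quotes

hypotheses = {
    h.hid: h
    for h in [
        Hypothesis("H1", "Users will photograph fridge/pantry if it saves time and money."),
        Hypothesis("H5", "Users feel the pain of throwing away food and want to counteract it."),
    ]
}

# Mapping a finding from an interview onto a hypothesis:
hypotheses["H5"].status = "supported"
hypotheses["H5"].evidence.append("We keep discovering food that expired months ago.")
```

The point is not the tooling; it is that every hypothesis starts as "untested" and only interview evidence is allowed to move it.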
Writing those down immediately made one thing clear:
I wasn’t “building what users need”. I was betting on a set of untested hypotheses.
Step 3: Designing the Interview Script
Using the guidebook and hypotheses, I worked with AI to design a 45-minute interview script that focused on:
- real routines around planning meals,
- how people shop, store and throw away food,
- previous attempts to “solve” this problem (apps, spreadsheets, habits),
- emotional moments: frustration, guilt, satisfaction.
Crucially, the script was solution-agnostic. For most of the interview, I didn’t want to talk about Menuvivo at all.
That alone was a big shift for me as a founder.
Warming Up on a Synthetic User
Before I talked to real people, I ran several interviews with a synthetic user persona in GPT.
This served three purposes:
- Rehearsal – I could get comfortable with the flow of questions.
- Debugging the script – I saw where my questions were leading, confusing or too abstract.
- Building prompts – I experimented with the analysis prompts that I would later use on real transcripts.
This made my first real interviews feel less like jumping into cold water and more like repeating a familiar exercise.
Running the Real Interviews
Then came the real test: talking to actual humans.
I recruited a small group of people who matched my target audience:
- families trying to eat better,
- people frustrated with throwing food away,
- “system builders” who love organising their kitchen and routines.
I ran six in-depth interviews.
Where I Struggled (and How AI Helped)
After each interview, I fed the transcript (using OpenCode) into a dedicated AI prompt (PL) that had three jobs:
- Summarise the interview – goals, routines, pain points, key quotes.
- Map findings to my hypotheses – which ones are confirmed, challenged, or need to be reformulated.
- Critique my performance as an interviewer.
That third part was surprisingly powerful.
AI consistently pointed out patterns like:
- Leading questions – I was subtly suggesting the answer I wanted to hear.
- Double-barrelled questions – two questions in one, confusing to answer.
- Premature pitching – talking about Menuvivo too early, instead of staying with the user’s story.
- Projecting my own habits – “I also do X” in ways that nudged the conversation.
This turned AI into a kind of interview coach (PL).
By the later interviews, the feedback shifted from basic mistakes to more nuanced suggestions:
- where I could go deeper emotionally,
- where I could ask for concrete examples instead of generalities,
- how to probe trade-offs (health vs. convenience vs. budget).
I could see my skill progressing interview by interview.
The Analysis Pipeline: From Transcripts to Insights
From a process perspective, here’s what my analysis pipeline looked like for each interview:
- Recording & transcription – I recorded the call (with consent) and generated an automatic transcript.
- AI-assisted first pass – I used a structured prompt (PL) to get:
  - a one-page summary,
  - extracted key quotes,
  - a first mapping of findings to hypotheses.
- Manual review – I read the transcript myself, highlighted passages, adjusted the mapping.
- Meta-analysis – after a few interviews, I asked AI to help me build a cross-interview summary (PL):
  - recurring patterns,
  - segments that emerged,
  - tensions and contradictions.
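The per-interview first pass is essentially one structured prompt wrapped around the transcript. Here is a minimal sketch of how that prompt gets assembled; the section headings and wording are illustrative assumptions, not my actual (Polish) prompt, and the LLM call itself is left out because any chat-completion API would accept the result as the user message:

```python
# Sketch: assemble the structured first-pass analysis prompt for one
# interview transcript. Headings and wording are illustrative, not
# the actual PL prompt; the LLM call itself is omitted.

ANALYSIS_TEMPLATE = """You are a user-research analyst.
Given the interview transcript below, produce three sections:

1. SUMMARY: goals, routines, pain points, 3-5 key quotes.
2. HYPOTHESIS MAPPING: for each hypothesis, mark it as
   supported / challenged / needs reformulation, with evidence.
3. INTERVIEWER CRITIQUE: leading questions, double-barrelled
   questions, premature pitching, projected habits.

Hypotheses:
{hypotheses}

Transcript:
{transcript}
"""

def build_analysis_prompt(hypotheses: list[str], transcript: str) -> str:
    return ANALYSIS_TEMPLATE.format(
        hypotheses="\n".join(f"- {h}" for h in hypotheses),
        transcript=transcript.strip(),
    )

msg = build_analysis_prompt(
    ["H5: Users feel the pain of throwing away food."],
    "Interviewer: How do you plan meals? ...",
)
```

Keeping the template in one place means every transcript goes through an identical first pass, which is what makes the later cross-interview meta-analysis comparable.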
The key is that AI didn’t replace my judgement. It compressed the mechanical work so I could spend my limited founder time on the thinking:
- “What does this actually mean for Menuvivo?”
- “What should we build first?”
- “Which assumptions should we drop or reframe?”
What I Actually Learned About My Users
Here are some of the most important insights that came out of the research.
1. The Real Job: “Make the Whole Food System in My Home Work”
People don’t wake up thinking:
“I wish I had a better meal planner.”
They think:
- “I don’t know what we have at home.”
- “I keep throwing away food and it feels bad.”
- “I’m tired of deciding what we’re going to eat every day.”
The real job is more like:
“Help me run the food system in my home in a way that feels under control, healthy and not wasteful.”
A meal plan is part of that system – but it’s not the system.
2. The Biggest Pain: Invisible Inventory
The strongest, most emotional pain point was invisible inventory, especially:
- things in the freezer,
- items at the back of the fridge,
- opened containers and leftovers,
- products that get bought again and again because people forgot they already have them.
People told stories like:
- “We keep discovering food that expired months ago.”
- “I bought the same sauce three times because I forgot it was there.”
This shifted my focus from “generate smart meal plans” to “make what you already have visible and usable”.
3. Segments With Strong Motivation and Willingness to Pay
The interviews surfaced a few distinct segments:
- Health-focused individuals and families – tracking macros, specific diets, trying to eat “clean”.
- Households with medical constraints – allergies, diabetes, other conditions that make food decisions more stressful.
- System builders – people who love creating systems and routines around their home.
These groups:
- feel more pain from disorder,
- have more at stake (health, family wellbeing),
- and are more willing to pay for peace of mind and control.
4. Households, Not Individuals
A key realisation: Menuvivo is not really a “single-user app”.
It has to work for households where:
- one person may be the “power user” (planner, organiser),
- others just want to quickly check what’s for dinner or add something to the list,
- kids might occasionally interact with it.
This has big implications for:
- permissions,
- interfaces,
- how “heavy” or “light” different user flows should be.
How This Changed Menuvivo’s Direction
Before the research, my mental model was roughly:
“Menuvivo is an AI-powered meal planner with some inventory features.”
After the research, it became:
“Menuvivo is a home food system assistant that helps you keep your kitchen under control and reduce waste, with meal planning as one of the key tools.”
That change cascaded into concrete decisions.
Strategic Shift
From: focus roadmap on recipes, AI meal plans, and calendar views.
To: prioritise Inventory & Waste Intelligence:
- fast, low-friction ways to capture what you have,
- smart reminders about what should be used soon,
- signals about duplicated items and waste patterns.
Meal planning is still important – but as part of a larger system, not the core product by itself.
Product & Design Implications
Some practical changes I’m working on:
- Designing flows around households, not just single users.
- Offering different “modes”:
  - a power mode for the organiser,
  - light-touch interactions for everyone else.
- Making sure the first value a user experiences is relief and clarity – not a complex setup wizard.
These are all direct consequences of interrogating my hypotheses with real people instead of just refining them on a whiteboard.
Lessons for Technical Founders
If you’re a developer or technical founder, here’s what I’d highlight from this journey.
1. User Research Is a Skill, Not Magic
There is craft and nuance in great research. But the basics are learnable.
You can start with:
- a clear set of hypotheses,
- a simple interview script,
- and genuine curiosity.
You don’t need a PhD in UX to get meaningful insights.
2. AI Can Be Your Research Assistant and Coach
AI won’t do the interviews for you.
But it can:
- help you turn assumptions into explicit hypotheses,
- co-write your research guide and interview scripts,
- crunch transcripts into structured summaries,
- and give you feedback on how you interview.
This dramatically lowers the activation energy for doing research as a busy founder.
3. The Real Risk Is Building in the Dark
The more emotionally attached you are to your idea, the more you need this:
- a few honest conversations with people who should benefit from it,
- structured analysis of what you hear,
- the courage to change your mind.
Every interview that disproves a hypothesis is a gift. It moves you from fantasy to contact with reality.
How You Can Try This Next Week
If you want to apply a similar approach to your own product, here’s a simple plan you can execute in a week.
Day 1: Capture Your Hypotheses
- Write down 5–10 assumptions about your users, their problems and what they value.
- Ask AI to help you structure them into clear hypotheses.
Day 2: Draft Your Interview Script
- Define your target audience for the interviews.
- Draft a 30–45 minute script focusing on:
  - current behaviour,
  - pains,
  - attempts to solve the problem,
  - emotional high/low points.
- Use AI to critique and improve the script.
Day 3: Practise With a Synthetic User
- Configure AI to act as a realistic user persona.
- Run 1–2 practice interviews.
- Pro tip: Use ChatGPT Voice Mode. It feels surprisingly real and helps you practise interrupting, pausing, and handling ‘human’ rambling.
- Refine your questions based on how the conversation flows.
Days 4–5: Talk to Real People
- Recruit 3–5 people who match your target.
- Run the interviews (record with consent).
- After each one, run the transcript through your analysis prompt.
Day 6: Synthesis
- Ask AI to help you produce a cross-interview summary.
- Look for patterns that:
  - confirm your hypotheses,
  - challenge them,
  - introduce new ones.
Day 7: Decide What Changes
Based on your findings, answer:
- “What should we stop assuming?”
- “What should we prioritise now?”
- “What experiments should we run next?”
Even a handful of interviews can transform the way you see your product.
Resources: Templates, Prompts and Reports
In my own process for Menuvivo I built a small toolkit of:
- a User Research Guidebook tailored to a solo founder context,
- structured analysis prompts for interview transcripts,
- templates for hypothesis lists and meta-analysis reports.
I’m sharing these materials, together with additional commentary and examples from real interviews, on my blog:
👉 Full resources, prompts and reports:
If you’re a technical founder, my hope is simple:
- that this makes user research feel less intimidating,
- and that you start using it as a core engineering tool for reducing risk, not as a UX afterthought.
And if you also want to bring more clarity and calm to how you manage food at home, you’re always welcome to check out what I’m building at:
Join the Journey
I’m building Menuvivo in public. If you want to see how these lessons translate into real product features (and inevitable future mistakes), follow me on LinkedIn or check my weekly founder log.