I didn’t start Launchling because I had a burning startup idea I needed to bring to life. Instead, I started it because I wanted to upskill in AI product management fast, meaningfully, and through hands-on learning rather than just theory. I could see that AI was becoming a core part of the modern product toolkit, and that if I was going to lead teams building with LLMs, I needed to walk the walk and not just talk about strategy from the sidelines.
So I set out to build a real product, powered by AI, with real users and real constraints. Not only did I have a lot of fun, but the more I built, the more I realised that Launchling might actually solve a genuine problem for early-stage founders (especially non-technical ones). So I kept going.
Now it’s both:
- 🧪 A learning lab for deepening my AI PM skills
- 🚀 A product I hope will help others start small and build big when they’re ready
Here’s what I’ve learned, where I’m still growing, and how Launchling is helping me do both.
✅ What I’ve Learned Through Building Launchling
1. Prompt Engineering Is UX Design
Every output in Launchling, from startup ideas to tailored plans, is generated by GPT-4. But the success of those outputs depends entirely on how the prompts are framed.
I’ve learned how prompt design directly intersects with:
- User onboarding (what do I need to ask to generate something useful?)
- Tone of voice (how do I sound empowering, not prescriptive?)
- Ethical defaults (how do I steer away from harmful or exploitative ideas?)
- Output structure (how do I ensure consistency and readability?)
This isn’t just fiddling with text. It’s product design inside the model, and I think it might be one of the most powerful, underappreciated levers in AI products (I certainly hadn’t fully understood this before).
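To make that concrete, here’s a minimal sketch of how each of those concerns can map onto a block of a single system prompt. The interface, names, and wording below are illustrative, not Launchling’s actual prompts:

```ts
// Sketch: each prompt-design concern becomes a concrete block of text.
// All names and wording here are illustrative, not Launchling's real prompts.
interface OnboardingAnswers {
  ideaArea: string;    // what the founder wants to explore
  timePerWeek: string; // a constraint that keeps plans realistic
  techComfort: string; // shapes tone and tool suggestions
}

function buildSystemPrompt(a: OnboardingAnswers): string {
  return [
    // Tone of voice: empowering, not prescriptive
    "You are a supportive startup coach. Suggest options; never command.",
    // Ethical defaults: steer away from harmful or exploitative ideas
    "Refuse ideas built on deception, addiction, or exploiting vulnerable groups.",
    // Output structure: consistency and readability
    "Always respond with: 1) a one-line summary, 2) three first steps, 3) one risk to watch.",
    // User onboarding: ground the output in what was actually asked
    `The founder is exploring ${a.ideaArea}, has ${a.timePerWeek} per week, ` +
      `and rates their technical comfort as "${a.techComfort}".`,
  ].join("\n");
}
```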
2. Designing Guardrails Is Product Work
When you’re building for beginners, especially non-technical founders, you need to bake in trust, safety, and clarity.
I’ve built Launchling to:
- Throttle usage
- Log token costs to Airtable for API cost control
- Avoid unethical or risky suggestions through prompt filters
- Provide clear, inclusive language that reduces overwhelm
- Handle privacy and consent with GDPR-aligned opt-ins and deletion flows
AI safety isn’t just an enterprise governance problem. It’s a product design challenge — and one I’ve approached head-on.
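For a flavour of what two of those guardrails look like in practice (the usage throttle and the Airtable token log), here’s a simplified sketch. The table name, field names, window size, and pricing are all assumptions rather than Launchling’s real configuration:

```ts
import Airtable from "airtable";

// Airtable.js reads AIRTABLE_API_KEY from the environment; the base ID
// and "Usage" table below are hypothetical.
const base = new Airtable().base("appXXXXXXXXXXXXXX");

const WINDOW_MS = 60 * 60 * 1000; // one-hour sliding window (assumed)
const MAX_REQUESTS = 5;           // per user, per window (assumed)
const requestLog = new Map<string, number[]>();

// Throttle: allow at most MAX_REQUESTS generations per user per window.
export function allowRequest(userId: string): boolean {
  const now = Date.now();
  const recent = (requestLog.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  requestLog.set(userId, [...recent, now]);
  return recent.length < MAX_REQUESTS;
}

// Cost logging: record token usage and an estimated spend per request.
export async function logTokenCost(
  userId: string,
  promptTokens: number,
  completionTokens: number
): Promise<void> {
  // Illustrative GPT-4 rates; check current pricing before relying on them.
  const costUsd = (promptTokens / 1000) * 0.03 + (completionTokens / 1000) * 0.06;
  await base("Usage").create([
    { fields: { User: userId, PromptTokens: promptTokens, CompletionTokens: completionTokens, CostUSD: costUsd } },
  ]);
}
```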
3. From No-Code to Real Code: Evolving the Stack as I Grew
Launchling didn’t start with a full codebase, and that was by design: I wanted to test some of the no/low-code tools out there, and to spend my time mastering the AI side of things.
I began with a Tally form triggering a Zapier flow, then layered on a Framer site. That setup let me test the concept quickly, but I soon hit hard limits around logic, responsiveness, and state handling.
So I rebuilt the whole product as a React app, hosted on Vercel, using Airtable for structured user data and GPT-4 for generation.
That rebuild taught me how to build and maintain a scalable codebase solo. It’s no longer a prototype; it’s a functioning AI product, and I now own the full stack.
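The core generation path on that stack is simpler than it sounds: a serverless function takes the structured input and calls GPT-4. This is a stripped-down sketch with placeholder prompts and field names, not the production handler:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A Vercel-style route handler: structured founder input in, generated plan out.
export async function POST(req: Request): Promise<Response> {
  const { idea, constraints } = await req.json(); // field names are placeholders
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "You turn raw startup ideas into tiny, tangible first prototypes." },
      { role: "user", content: `Idea: ${idea}\nConstraints: ${constraints}` },
    ],
  });
  return Response.json({ plan: completion.choices[0].message.content });
}
```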
4. Evaluating AI Output with More Than Gut Instinct
Rather than relying on manual review or qualitative feedback alone, I’ve already built a script to evaluate the quality and alignment of Launchling’s outputs against structured user input. I wrote more about this here.
That work has helped me understand:
- How embedding similarity and scoring logic can reveal misalignment
- Where GPT-4’s outputs drift from user intent or framing
- Why even ‘good-enough’ outputs require a clear quality definition
This seems to be one of the most critical but often overlooked areas in LLM product design: how do you know your model is doing what it should?
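For a flavour of how that kind of scoring can work, here’s a minimal sketch: embed the user’s input and the generated plan, then treat cosine similarity as a rough alignment signal. This isn’t my exact script, just the core idea, and the embedding model is an assumption:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const mag = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (mag(a) * mag(b));
}

// Score how closely a generated plan tracks the user's structured input.
export async function alignmentScore(userInput: string, plan: string): Promise<number> {
  const { data } = await client.embeddings.create({
    model: "text-embedding-3-small", // assumed model choice
    input: [userInput, plan],
  });
  return cosine(data[0].embedding, data[1].embedding);
}
```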
🧗 What I’m Still Learning
As much as I’ve learned, there are some big areas I’m deliberately stretching into next.
1. Systematic Evaluation of AI Output
My current evaluation script is a strong start, but I want to go further by:
- Adding confidence scores at runtime based on user-plan alignment
- Exploring token-level coherence metrics and comparative output benchmarking
- Building systems to analyse not just performance but fairness, consistency, and utility across use cases
This is how I’ll evolve from basic output scoring to robust AI product evaluation frameworks.
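To make the first item on that list concrete, a runtime confidence score could act as a gate: score each generated plan against the user’s input, and regenerate when it falls below a threshold. This is a sketch of the idea only; the threshold and retry count are placeholders that would need calibrating against plans I’ve judged by hand:

```ts
const CONFIDENCE_THRESHOLD = 0.75; // placeholder; needs calibration

// Keep the highest-confidence attempt; stop early once the score clears
// the threshold. `generate` and `score` are passed in (e.g. the GPT-4
// call and an embedding-based scorer like the one sketched earlier).
export async function generateWithConfidence(
  userInput: string,
  generate: (input: string) => Promise<string>,
  score: (input: string, plan: string) => Promise<number>,
  maxAttempts = 2
): Promise<{ plan: string; confidence: number }> {
  let best = { plan: "", confidence: -1 };
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const plan = await generate(userInput);
    const confidence = await score(userInput, plan);
    if (confidence > best.confidence) best = { plan, confidence };
    if (confidence >= CONFIDENCE_THRESHOLD) break;
  }
  return best;
}
```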
2. Agentic Workflows and Multi-Step Reasoning
Launchling currently uses single-shot generation. But real-world users often need more:
- Step-by-step refinement or simplification
- Dynamic follow-up based on feedback
- Self-critiquing flows (e.g. “Is this plan realistic?” → revise)
These are agentic patterns I want to experiment with inside Launchling, though it’s important I adopt them because they’re useful, not because they’re trendy. Some of these experiments may never reach production, and that’s fine: the goal is to learn which agentic patterns genuinely improve utility and clarity for my users.
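As a starting point, a self-critiquing flow could be as small as the sketch below: generate, ask the model to critique its own plan, and revise once if needed. The prompts, model choice, and single revision pass are all illustrative assumptions:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Small helper: one system+user exchange with GPT-4.
async function ask(system: string, user: string): Promise<string> {
  const r = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ],
  });
  return r.choices[0].message.content ?? "";
}

// Generate → critique ("Is this plan realistic?") → revise once if needed.
export async function critiqueAndRevise(idea: string): Promise<string> {
  const plan = await ask("You turn ideas into tiny, realistic first steps.", idea);
  const critique = await ask(
    "You are a sceptical reviewer. Is this plan realistic for a solo, non-technical founder? List concrete problems, or reply exactly OK.",
    plan
  );
  if (critique.trim() === "OK") return plan;
  return ask(
    "Revise the plan to address the critique. Keep it small and actionable.",
    `Plan:\n${plan}\n\nCritique:\n${critique}`
  );
}
```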
3. Retrieval-Augmented Generation (RAG)
All Launchling outputs are currently based on structured user input, not external sources. I want to experiment with:
- A startup case study library for inspiration
- Contextual retrieval from real tool guides or founder FAQs
- A vector store that tailors advice based on embedding similarity
This will make outputs more grounded, credible, and helpful, and it’s a great chance to deepen my understanding of RAG pipelines in practice.
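A first version of the retrieval step might look like the sketch below: embed the founder’s input, rank a small case-study library by similarity, and prepend the top matches to the prompt as grounding context. The in-memory library is a stand-in for a real vector store, and the model name is an assumption:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// A tiny in-memory "library"; in practice this would be a vector store.
interface CaseStudy {
  title: string;
  summary: string;
  embedding: number[]; // precomputed embedding of the summary
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const mag = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (mag(a) * mag(b));
}

// Return the k most similar case studies, formatted as prompt context.
export async function retrieveContext(
  userInput: string,
  library: CaseStudy[],
  k = 3
): Promise<string> {
  const { data } = await client.embeddings.create({
    model: "text-embedding-3-small", // assumed model choice
    input: userInput,
  });
  const query = data[0].embedding;
  return library
    .map((c) => ({ c, score: cosine(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(({ c }) => `${c.title}: ${c.summary}`)
    .join("\n");
}
```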
4. AI UX Research
So far I’ve focused on building good defaults and clear flows, but I want to explore more structured AI-specific UX research, such as:
- Expectation mapping for different founder personas
- Mental model analysis of what users think the AI “understands”
- Failure case interviews to improve plan resilience and utility
This will sharpen my ability to diagnose AI product gaps and align model behaviour with user expectations.
5. AI Governance, Risk, and Regulation
I’ve already implemented product-level safeguards, but I want to deepen my fluency in:
- Upcoming regulation (EU AI Act, UK frameworks)
- Trade-offs around explainability, safety, and autonomy
- Responsible AI design patterns beyond just guardrails
This is essential for leadership in this space and something I’m exploring through self-study.
🛠️ Why Launchling Still Serves My Learning Goals
One of the most powerful things about this project is that it’s still useful, even as I grow.
I don’t need a new learning environment; I just need to increase the complexity, sharpen the metrics, and deepen the stack. Some of the next experiments I’m planning:
- ✅ Confidence scoring using embeddings
- 🤖 Step-by-step plan refinement agents
- 🔍 Tool database + retrieval for plan grounding
- 🧭 User archetype detection to tailor onboarding
- 🛡️ “Build responsibly” nudges and checklists
Every feature is both a product bet and a learning opportunity.
💡 Final Thought
If you want to learn AI product management, you don’t need a course, a title, or a team. You need a real user problem, a willingness to figure things out as you go, and a system that lets you learn in public.
That’s what Launchling has been for me: a way to test my thinking, sharpen my skills, and grow in a space that’s evolving fast.
It started as a deliberate learning project, and it might still become a viable product; either way, it’s been a really effective way to upskill.
🪂 Try it now
Turn your idea into a tiny, tangible prototype — with a little help from Launchling.
👉 Try Launchling – no signup, no jargon, just a gentle push forward.