How Early-Stage Founders Move Fast Without Making Too Many Bad Assumptions
Lessons from building FindUrMeds
We Need Assumptions
Assumptions are unavoidable when building something new. You can’t isolate every single variable, and while ‘first principles’ sounds nice, if I questioned why I wear pants and not a Scottish kilt, I would never leave my apartment.
That said: the wrong assumption can send you months down the wrong rabbit hole, chasing the wrong GTM strategy, the wrong technical approach, and so on.
Here is what I’ve learned about making assumptions while building FindUrMeds.com.
Label, Label, Label
Labeling assumptions puts them in the open. It allows us to operate without perfect information while knowing what we don’t know. The worst thing is an implicit assumption, possibly inaccurate, quietly driving the conversation.
For any stream of work (engineering, GTM, etc.), label the key assumptions and, specifically, what it would take to test each one. Some assumptions can’t be tested, and that’s fine.
Example:
I launched the first versions of FindUrMeds.com using Bubble, a no-code tool.
My assumption was that working in Bubble would let me ship faster, and that because I hadn’t written HTML/JavaScript/React in some time, it was the right choice.
Working in Bubble was a horrible choice: it produced a clunky UI that hurt my conversion, and shipping updates was extremely slow.
What I should have done:
Written down the assumption that Bubble would be quicker to build in
Written out the minimum I needed to test that assumption: rewriting a section of the app
Understood the downside risk of making the wrong decision: weeks of wasted time on a shitty development stack
Sources of Assumptions
There are four sources of assumptions I’ll break down: yourself, conversations with an LLM, deeper research (YouTube, books, Substack, podcasts), and conversations with an applied expert.
Yourself
This is your best source, when it’s current. If you’ve actually done the thing before, in the same context, recently, trust yourself. But there are a couple of biases here:
Sample size: Let’s say you have a theory on how to close a sale, or on the viability of a GTM channel. Are you basing it on a strong sample size, or on something that happened once?
Is your knowledge current? Tech is changing rapidly. Your knowledge of, say, how best to work with a platform algorithm may no longer apply.
LLM
Here’s where it gets interesting. LLMs are incredibly useful for mapping out best practices and cutting through initial confusion. I use them constantly. But they have two major problems:
Confidence/conservatism bias: They will tell you things with absolute certainty that are wrong, and you won’t know until you are deep into implementation. Conversely, I’ve had LLMs overestimate the cost of building a feature by 10x.
Missing application-layer knowledge: LLMs are trained on the internet. But the best practitioner knowledge? The specific tactics that actually work? Those aren’t freely available online. They’re in private Slack channels, paid communities, and the heads of practitioners.
I think the best use of LLMs for learning is to map the broad parameters of a field and its best practices, while staying cautious about taking implementation details or estimates at face value.
Long-Form, Less Clickbait-Driven Content
Podcasts = Substack > YouTube
Long-form content is a step up from LLMs, and when well vetted it partially solves the applicability problem. Some problems/context:
There is a lot of shitty content out there, primarily on YouTube, and then Substack.
YouTube is riddled with clickbait. Substack and podcasts aren’t discovered via a thumbnail flashing by, so they tend to be less polarizing and more applicable. Substack tends to be pretty good, especially paid content or content with a large readership.
What I like to do:
Go through each platform, filter content by recency, and avoid content that is super early (little readership or track record).
Then vet with an LLM.
So let’s say I’m vetting out how to use Facebook Ads for my product:
I would input into an LLM: “How should I launch my first ads for FindUrMeds? What metrics should I use to evaluate their success? What mistakes should I avoid?”
Get the answers from the LLM
Then go through each source, paste it into the LLM, and ask it to: “Modify what you told me about ABC (in this case, my questions) based on this source.”
This should give you a decent starting point.
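If you want to make that loop repeatable, here is a minimal sketch of it in code. This is an illustration under assumptions, not a prescription: it assumes the OpenAI Python SDK, and the model name, question, and source text are placeholders you’d swap for your own.

```python
# Minimal sketch of the "answer, then revise against each source" loop.
# Assumes the OpenAI Python SDK; model name, question, and sources are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "How should I launch my first ads for FindUrMeds? "
    "What metrics should I use to evaluate their success? "
    "What mistakes should I avoid?"
)

# Text you collected from vetted podcasts/Substack/YouTube sources (placeholders).
sources = [
    "Transcript or article text from source 1...",
    "Transcript or article text from source 2...",
]

# Step 1: get the LLM's baseline answer.
messages = [{"role": "user", "content": QUESTION}]
baseline = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": baseline.choices[0].message.content})

# Step 2: revise the answer against each source, one at a time.
for source in sources:
    messages.append({
        "role": "user",
        "content": f"Modify what you told me above based on this source:\n\n{source}",
    })
    revised = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": revised.choices[0].message.content})

# The last assistant message is your source-adjusted starting point.
print(messages[-1]["content"])
```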
Experts Who’ve Actually Applied It in Your Vertical
Not just any expert. Not a “partnership expert” or a “growth hacking guru.” I’m talking about a founder who has successfully done partnerships in the specific vertical you’re targeting (for me, healthcare).
This is gold because they know the nuances. They know the messy implementation details. They know what actually works versus what sounds good in theory.
The problem? These people are hard to find and harder to access. But not impossible. Go to LinkedIn and find first-degree connections who fall into this category, or at a minimum, find second-degree connections and get introduced.
Other Bad Assumptions I’ve Made That Will Make You Feel Better
FindUrMeds desktop was important to design for → turns out 90% of users are on mobile
Users excited on a call → will sign a contract
Coding is important → it really isn’t. It’s only directly useful when it helps demonstrate you can solve a problem.
What I Do Now at FindUrMeds
Every week, I ask myself:
What are my top 2-3 assumptions right now?
How am I choosing to use them? (Am I betting the company on this, or is it just a small experiment?)
How hard would it be to prove or disprove them?
Can I test this more simply?
It isn’t about achieving perfect knowledge. It’s about knowing which assumptions you’re making and how much risk you’re taking on each one.
Step 1: Label Your Top 2-3 Assumptions
Write them down. What assumptions are underpinning your current strategy?
For example, right now at FindUrMeds:
Initially, I made the (errant) assumption that concierge doctors would be very hard to sell to cold, so I pursued warm introductions only.
Once I realized that was an assumption, I started building a cold outreach pipeline for concierge doctors, because warm introductions alone are an overly rigid and indirect strategy.
Step 2: Rate Their Source
Where did each assumption come from?
Your own experience?
An LLM?
YouTube clickbait?
An actual expert in your vertical?
Be honest. “I just assumed this” is a valid answer, but it should scare you, as it does me!
Step 3: Ask “How Quickly Can I Test This?”
Some assumptions are hard to test (you can’t validate a complex technical build without building it). But many are surprisingly easy:
Easy to test:
Customer interest (just ask people)
Pricing sensitivity (show them a number)
Messaging effectiveness (run a simple landing page)
Don’t do what I did: Spend 25 hours building a prototype when you could validate with mockups and conversations.
The faster you can disprove an assumption, the faster you should test it.
The Balance: First Principles vs. Bias for Action
The reality of early-stage startups:
You CAN’T research everything
If you never act, nobody gives a fuck about your thing except you
But if you’re hyper like me (thanks, ADHD), you’ll chase every shiny idea without questioning whether it makes sense
The sweet spot: Bias for action + strategic assumption testing.
Act fast. But label your assumptions. Know which ones are critical. Test them as quickly as you can.
The Real Lesson
Bad assumptions don’t come from being stupid. They come from overconfidence in your sources.
Bubble seemed good → I couldn’t iterate
Desktop-first seemed obvious → users were on mobile
Excited prospects seemed like deals → they did not sign
The most dangerous assumptions are the ones that feel obviously true.
So question them. Label them. Test them quickly.
But don’t let perfectionism stop you from acting. Because in the end, a tested assumption is better than a perfect plan that never ships.
What assumptions are you making right now that might be wrong? Hit reply and let me know—I’d love to hear what you’re working on.