
How to Pick Your 0 to 1 Bets (Without Wasting a Year on Cool Ideas That Go Nowhere)

An interview with the digital mind of Stanley Tang, Cofounder of DoorDash and Head of DoorDash Labs

Hi! Welcome to another issue of Force Multipliers, your weekly briefing from Regina Gerbeaux, where Silicon Valley's behind-the-scenes operators get battle-tested frameworks for their toughest challenges, from putting out chaotic fires to managing strong personalities.

It's almost the new year, which means your inbox is about to flood with "Big Bets for 2026" decks and "Innovation Roadmap" presentations.

I see this pattern constantly with the companies I work with. You hit product-market fit with one product. Revenue is growing. The team is humming. And then someone asks the question that changes everything: "What's next?"

For post-PMF companies, it's about building a moat. You need more products that work, not just one.

For pre-PMF companies, it's even more critical because you're still trying to find that first thing that actually resonates.

Either way, the question is the same: which bets should you actually make?

Most companies get this spectacularly wrong. They either:

  • Throw resources at every shiny idea ("Let's build an AI feature because everyone else is!")

  • Spend 18 months perfecting something nobody wants

  • Kill promising experiments too early because they didn't show results in Q1

The 80-20 rule applies here, hard. 80% of your time should go to what's already moving the needle. The other 20%? That's your experimentation budget.

But here's the thing: because that 20% is so small, you need to be ruthless about what actually gets a shot.

I interviewed Stanley Tang*, Cofounder of DoorDash and Head of DoorDash Labs, about exactly this question.

How does a company like DoorDash (which nailed PMF years ago) decide which 0 to 1 bets to make?

His answer changed how I think about this entirely. Read the playbook below.

*Note: I spoke to Stanley's Delphi, his digital mind trained on all of his interviews, podcast appearances, and writings. This playbook is based on responses synthesized from prior public interviews and talks, not a personal interview. I highly recommend speaking with Stanley’s digital mind - it’s a great experience!

By the way - was this newsletter forwarded to you? Are you here for the first time? If so, remember to subscribe below…

The Operator's Playbook on Picking 0 to 1 Bets

First, Stanley runs Labs like an internal VC with a factory floor. Not the "let's throw money at innovation theater" kind of internal VC. The real kind, where you prove things work before you scale them.

Here's the framework he actually uses:

Phase 0: Future Back, Then Get Narrow 🏹 

Start with the end state. Write one paragraph describing the world you're building toward. For DoorDash, that's the future of local delivery.

Then break it into concrete use cases.

👉 Ask yourself: What specific job needs to get done? Not "we need to innovate in delivery." That's useless. Get specific.

For DoorDash, this looked like:

  • Sidewalk robots for sub-half-mile dense zones

  • Drones for harder-to-reach areas or longer distances

  • Larger autonomous vehicles for 3-5 mile trips

  • Kitchen and warehouse automation

Notice what they're NOT doing? Forcing one solution across everything. They're matching form factors to jobs to be done.

Pick one narrow job in one environment to start. Not ten. One.

Phase 1: Prove It Manually, Then Milestone Your Way Up 💡 

This is where most companies screw up. They want to build the perfect version before they know if anyone cares.

DoorDash does the opposite.

Stanley says: Run the tiniest version with people and spreadsheets before you write code.

DashPass (their subscription product) started as a manual email and Stripe test. No engineering team. No fancy infrastructure. Just: "does this thing work?"

Only after they proved people wanted it did they add engineers. Not before!

Define graduation gates up front:

  • What metrics need to hit before this gets more resources?

  • What does "working" actually look like?

  • When do we kill this?

Stanley calls this "milestones over opinions." You're not guessing which ideas are good. You're proving them, step by step.
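To make "milestones over opinions" concrete, here's a minimal sketch of what a written-down graduation gate could look like. This is my illustration, not DoorDash's actual process: the metric names, thresholds, and kill date below are hypothetical placeholders, assuming a DashPass-style manual test.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class GraduationGate:
    """One pre-committed bar an experiment must clear before it earns more resources."""
    metric: str                   # what you measure, e.g. "weekly repeat order rate"
    target: float                 # the threshold agreed on BEFORE launch
    actual: float | None = None   # filled in as real data comes in

    def passed(self) -> bool:
        return self.actual is not None and self.actual >= self.target


# Hypothetical gates for a manual, DashPass-style test -- illustrative numbers only.
gates = [
    GraduationGate("weekly repeat order rate", target=0.30),
    GraduationGate("gross margin per subscriber ($)", target=1.00),
]
KILL_DATE = date(2026, 3, 31)     # decided up front: no signal by this date, we sunset it


def decision(today: date) -> str:
    """Milestones over opinions: the next step is read off the gates, not debated."""
    if all(g.passed() for g in gates):
        return "graduate: add engineers"
    if today >= KILL_DATE:
        return "kill: recycle the team to the next bet"
    return "keep running the manual version"
```

The exact numbers matter less than the fact that they're written down before you spend engineering time, so the decision is mechanical instead of political.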

What Actually Qualifies as a Real Milestone 🚦 

Not everything that "feels promising" deserves more investment.

Stanley's team looks for three things:

  1. The marketplace triangle lifts: selection, quality, or price improves in a way that shows up as growth, retention, and unit economics that make sense.

    If your experiment isn't moving one of these, it's not working. Full stop.

  2. Operational viability: It works on a messy Tuesday night when restaurants are slammed, not just in your carefully controlled demo environment.

    Stanley said one ‘manual’ workaround scaled to roughly 8,000 human order placers at peak before they automated pieces of it. Why? Because it solved a real problem in the real world, even though it "didn't scale" in theory.

  3. Strategic leverage: There's a clear path to better throughput, reliability, lower costs, or a unique data advantage if this scales.

The Two-Phase Pilot Template

Here's the actual process DoorDash Labs uses to move a 0 to 1 idea from napkin to staffed team:

Phase A: Feasibility Pilot 🚀 

Scope: One use case, one zone, tight hours.

Success criteria:

  1. Customer value: On-time rate and repeat rate beat the control group in that zone

  2. Ops proof: Exception handling works, you have a staffing model, and you've written an SOP that a new team could follow tomorrow

  3. Unit model: Variable cost per task is within X% of target, and you understand what drives variance

Operate manually first. Instrument everything. Write the kill switch and exit rules up front.

(DoorDash learned this one the hard way. Define your failure conditions before you launch, not after.)

Phase B: Scalability Pilot 📈 

Scope: 3-5 diverse zones with increasing complexity.

Success criteria:

  1. Replication: Same metrics hit within tolerance across different zones

  2. Automation delta: Identify the top 3 manual steps to automate next with clear ROI, then prove one automation actually moves a core metric

  3. Build vs. partner: Decide which components you own versus integrate (varies by form factor)

❗️ Only graduate to a standing team if both Phase A and Phase B meet targets. 

Otherwise, sunset it cleanly and recycle the team to the next bet.
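If it helps to see the whole funnel in one place, here's a minimal sketch of the two-phase gate written down as code. The zone count, metrics, targets, and tolerance are placeholders I made up for illustration; substitute whatever you committed to up front.

```python
# Placeholders only -- substitute the metrics, targets, and tolerance you committed to up front.
PHASE_A_TARGETS = {"on_time_rate": 0.92, "repeat_rate": 0.25, "cost_per_task": 6.50}
COST_TOLERANCE = 0.10   # the "within X% of target" band for variable cost per task
MIN_ZONES = 3           # Phase B wants replication across several diverse zones


def phase_a_passes(zone: dict[str, float]) -> bool:
    """Feasibility: one zone beats the value targets and lands near the cost target."""
    return (
        zone["on_time_rate"] >= PHASE_A_TARGETS["on_time_rate"]
        and zone["repeat_rate"] >= PHASE_A_TARGETS["repeat_rate"]
        and abs(zone["cost_per_task"] - PHASE_A_TARGETS["cost_per_task"])
        <= COST_TOLERANCE * PHASE_A_TARGETS["cost_per_task"]
    )


def phase_b_passes(zones: dict[str, dict[str, float]]) -> bool:
    """Scalability: the same bar replicates across every zone, not just the easy one."""
    return len(zones) >= MIN_ZONES and all(phase_a_passes(z) for z in zones.values())


def graduate(pilot_zone: dict[str, float], scale_zones: dict[str, dict[str, float]]) -> str:
    if phase_a_passes(pilot_zone) and phase_b_passes(scale_zones):
        return "graduate: fund a standing team"
    return "sunset cleanly and recycle the team to the next bet"
```

The code isn't the point. The point is that the pass/fail logic exists in writing before the pilot launches, so the bet can't argue its way into surviving on vibes.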

What Gets a Green Light 💚 

Stanley's team only funds bets that have:

  • A clear, narrow use case with measurable lift to the marketplace triangle (selection, quality, price → growth, retention, unit economics)

  • Evidence it can work operationally in the real world, even if the first version is hacked together, plus a path to automate later

  • Strategic leverage for DoorDash if it scales (better fulfillment reliability, cost structure, or unique data)

What Doesn't  

  • Cool tech searching for a use case

  • Bets that only work in a slide deck, not on a Tuesday night when restaurants are slammed

  • Anything that can't survive rain, traffic, and dinner rush

The Guardrails You Need

Stanley's team enforces these rules religiously:

  • No "tech in search of a problem" (even if it's trendy)

  • Default to time-to-learning over time-to-perfection (ship scrappy, learn fast)

  • Budget follows learning velocity, not hype (more resources only come after milestones hit)

  • If it can't survive real-world conditions, it's not ready (no matter how good the demo looks)

One Thing I (Regina) Would Add: Keep Labs Separate

Stanley's framework is bulletproof. In addition to his notes, I want to share with you what I learned from Matt Mochary when I was his Chief of Staff:

If your company has already hit PMF, treat Labs like its own company on its own island.

One multi-billion-dollar company Matt coached set up its Labs with:

  • Its own P&L

  • Its own team

  • Direct reporting to the CEO

Why? Because if you embed experimental work inside your core business, it moves at the speed of your bureaucracy. Which is to say: not fast.

Labs needs to iterate quickly, kill experiments quickly, and operate without the gravitational pull of your existing processes.

Matt always said Labs should roll up to the CEO because that's where the "I wonder if this would work?" creative ideas come from. And honestly, the CEO is the only person who can protect Labs from getting crushed by quarterly earnings pressure.

The new year is coming. Your team is going to pitch you a dozen "innovative" ideas.

Some will be brilliant. Most will waste a year of your life.

The question isn't whether to place 0 to 1 bets. The question is: which ones deserve your 20%?

Future back. Start small. Prove it manually. Milestone your way up.

And for the love of everything, write your kill criteria before you launch.

What's the one 0 to 1 bet you're considering for 2026? And more importantly: how will you know if it's actually working?

Until next time,

P.S. What did you think of this framework? Hit reply and let me know which part resonated most with you. I read every response.

And if you’re reading this - you're already ahead.

Because you know where to find the stuff that’s actually good. Like my templates and resources, and this newsletter.

Was this newsletter forwarded to you? Are you here for the first time? If so, remember to subscribe below…

Want more operational content?

Check out Coaching Founder for over a dozen free, downloadable Notion templates to use at your company, and tons of write-ups on how to level up your execs, your teams, and yourself.

About Regina Gerbeaux

Regina Gerbeaux was the first Chief of Staff to an executive coach who worked with Silicon Valley’s most successful entrepreneurs, including Brian Armstrong (Coinbase), Naval Ravikant (AngelList), Sam Altman (OpenAI / Y Combinator), and Alexandr Wang (Scale).

Shortly after her role as Chief of Staff, then COO, she opened her own coaching practice, Coaching Founder, and has worked with outrageously talented operators on teams like Delphi AI, dYdX, Astronomer, Fanatics Live, and many more companies backed by funds like Sequoia and Andreessen Horowitz.

Her open-sourced write-ups on Operational Excellence and how to run a scaling company can be found here and her templates can be found here.

She lives in the Pacific Northwest with her partner, daughter, and dog, and can be found frequenting 6:00AM Orangetheory classes or hiking trails nearby.
