
Why the best digital builders think like scientists

2026-04-14 · 5 min read

I spent seven years in a lab before I ever touched a product roadmap. Most of that time was spent on materials science — nanoscale printing, electron microscopy, the kind of work where you run an experiment, it fails, and you learn more from the failure than you would have from success. I didn't know it then, but that training shaped every digital product decision I've made since.

Here's what I mean. In a lab, you don't start with the answer. You start with a hypothesis — a specific, falsifiable claim about how something works. You design an experiment to test it. You control variables. You measure. You look at the data and accept what it tells you, even when it contradicts what you wanted to hear. This sounds obvious. In practice, almost nobody in digital product development actually does it.

What most teams do instead is start with a solution. Someone senior says "we need a platform." A vendor gets hired. Features get specced. Timelines get drawn. Six months later, you have a thing that technically works and nobody uses. The problem was never properly defined because nobody treated the problem as something that needed evidence. They treated it as something that needed a decision.

The scientific method is not complicated. Observe. Hypothesise. Test. Measure. Revise. But it requires a specific kind of discipline — you have to be willing to be wrong. Not philosophically willing, like you'd say in a meeting. Actually willing, as in you designed the test so that it could prove you wrong, and you'll change direction if it does.
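Here's roughly what that discipline looks like if you force yourself to write it down. A minimal sketch in Python, with every name and threshold invented for illustration: the point is that the bar for success exists before the data does, so the result can genuinely contradict you.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A falsifiable claim, pre-registered before any data arrives."""
    claim: str
    metric: str
    threshold: float  # decided up front, not after seeing the results

    def evaluate(self, measured: float) -> bool:
        """True if the data supports the claim, False if it refutes it."""
        return measured >= self.threshold

# Hypothetical example: commit to the bar before the test runs.
h = Hypothesis(
    claim="Clinicians will finish a two-minute structured-text module",
    metric="completion_rate",
    threshold=0.40,
)

measured_completion = 0.23  # placeholder for real usage data
if not h.evaluate(measured_completion):
    print(f"Refuted: {h.claim} "
          f"({h.metric}={measured_completion} < {h.threshold})")
```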

I've shipped products where the first three hypotheses were wrong. A healthcare education platform where we assumed clinicians wanted video content — they didn't, they wanted structured text they could scan in two minutes between patients. We only found that out because we tested it. Not with a survey. With a prototype and usage data. The survey said video. The data said text. We shipped text. Engagement tripled.
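For the curious, the check behind a claim like that looks something like this. The numbers and the function below are hypothetical, a stand-in for whatever stats tooling you actually use, but the shape is the same: compare what two groups did, not what they said.

```python
from math import erf, sqrt

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int):
    """Two-sided z-test: is the gap between two observed rates real?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented numbers for illustration: video vs structured-text prototypes.
z, p = two_proportion_z(success_a=48, n_a=400,   # video: 12% came back
                        success_b=144, n_b=400)  # text: 36% came back
print(f"z={z:.2f}, p={p:.4f}")  # tiny p: the behavioural gap isn't noise
```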

That's the thing about surveys and stakeholder interviews — they tell you what people believe they want, which is often a reflection of what they've seen before. Data from actual behaviour tells you what they do. A scientist learns this early. A product team often learns it late, or not at all.

There's a second piece that's harder to articulate. In science, you develop an instinct for controlling variables. When an experiment fails, you don't change five things and try again. You change one thing. You isolate. This sounds tedious, and it is, but it's also the only way to know what actually caused the outcome.

In product work, I watch teams change the design, the copy, the user flow, and the pricing model all at once, then look at the metrics and claim the redesign worked. Maybe it did. Maybe one change drove all the improvement and the other three made it worse. You'll never know, because you didn't isolate.
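If you want the discipline enforced mechanically rather than by willpower, assign users to exactly one experiment at a time. A sketch, with hypothetical names: deterministic bucketing on a single factor, everything else held fixed, so one change maps to one measurable difference.

```python
import hashlib

def variant(user_id: str, experiment: str,
            arms: tuple = ("control", "treatment")) -> str:
    """Deterministically assign a user to one arm of ONE experiment.

    Copy, flow, and pricing all stay fixed, so any metric shift can
    be attributed to this single change.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Hypothetical usage: test only the headline copy, hold all else constant.
print(variant("user-1842", "headline-copy-v2"))
```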

The best digital builders I've worked with — and there aren't many — think this way without being taught it. They run small experiments before big bets. They instrument everything so they can measure what changed. They treat a launch as the beginning of learning, not the end of building. They're not scientists by training, but they've internalised the method.
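As an aside, the instrumentation doesn't have to be heavy. A sketch of the minimum viable version, with invented event names, standing in for whatever analytics pipeline you actually run:

```python
import json
import time

def track(event: str, **properties) -> None:
    """Append one analytics event as a JSON line (hypothetical sketch)."""
    record = {"event": event, "ts": time.time(), **properties}
    with open("events.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Instrument the moments you'll want to ask questions about later.
track("module_opened", user_id="u-123", module="wound-care", variant="text")
track("module_completed", user_id="u-123", module="wound-care", seconds=94)
```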

The rest rely on pattern-matching from previous roles. "We did this at my last company and it worked." Maybe it did. Different company, different users, different market. The conditions changed. Your prior result is an anecdote, not evidence.

I'm not arguing that every product manager needs a PhD. I'm arguing that the core operating system of science — hypothesis, test, measure, revise — is the most underused tool in digital. Not because people don't know about it. Because it's slower than guessing, and most organisations reward speed over accuracy.

The irony is that guessing is actually slower. You ship fast, learn nothing, rebuild in six months. The team that tested first ships later but ships once. I've seen this pattern in every domain I've worked in — research, organisational builds, digital platforms, commercial operations. The teams that treat their assumptions as hypotheses outperform the ones that treat them as plans.

If you're building something new — a platform, a capability, a function that didn't exist before — you're running an experiment whether you admit it or not. The question is whether you designed it to learn from, or just to survive.