Users own the present. You own the future.

A few years ago I sat in a research session at Moonfare. Private equity is a premium product. Our clients are C-level executives, founders, people who have spent decades being the person in the room with the answer. The client across the table that day was one of them.

I asked him about a part of the platform. Within a minute he was telling me, in precise detail, exactly what we should build next. He had a roadmap. He had the rationale. He had the feature list.

He was wrong.

Not because he was stupid. He was one of the sharpest people I’d spoken to that month. He was wrong because he’d been asked the wrong question, and his instinct, trained by a lifetime of being the person who brings the answer, was to give me one.

The smarter your users, the more convincing their wrong answers.

The want and the need

Jared Spool has a line I think about often.

A user says they want ice cream. What they want is ice cream. What they need is to cool down. Their body wants sugar. It’s hot. There’s a memory somewhere in there, a summer ritual, something cold in their hand.

The want closes off options. The need opens them.

Take “I want ice cream” at face value and you sell them ice cream. Understand the need and you can sell them a popsicle, a cold drink, air conditioning, a swim in the sea. The want is one of many solutions to the need. The need is the territory.

[Photo: strawberry ice cream cone, by ian dooley on Unsplash]

Most user research stops at the want. You see it in how teams write jobs-to-be-done. The format is usually something like "when I [situation], I want to [action], so I can [outcome]." Fine in theory. In practice I've seen PMs write things like "when I open the app, I want to be reminded to use my credit card, so I can earn cashback." That isn't a job. It's a feature the PM wanted to build, written in user-voice. The actual need is "save money on what I spend." Cashback might be one answer to that. So might spending alerts, better rates, a tool that cancels unused subscriptions. Notifications are one answer in a room full of answers, and most of them are probably better.

Research that stops at the want produces faster horses. Research that finds the need opens product space.

It gets worse at the top

In consumer markets the hard part of research is getting people to talk. In premium and B2B it’s the opposite. The hard part is getting them to stop talking about solutions.

The Moonfare client wasn’t an outlier. I think a lot about why this happens. Part of the answer, I think, is that the people we were interviewing had been trained, explicitly, to produce answers. Many of them came out of consulting or finance. At Bain, where I spent time earlier in my career, the core discipline is what’s called the answer-first approach, or the A1. You lead with the answer. Then you work backwards. Build the hypothesis, then gather evidence that confirms or refutes it. It’s a useful way to run a six-week engagement where a client is paying a lot of money for a clear recommendation.

It’s a disastrous way to sit in a research session as a user.

An executive trained that way walks in and the instinct takes over. They feel the absence of an answer as pressure. They want to be useful. They want to look smart. They give you the A1, and it’s precise and articulate because producing precise, articulate answers is what they are paid to do. You leave the session with notes full of what sounds like signal.

It isn’t. It’s a very confident faster horse.

And here’s the part most writing on research misses: interviews don’t fail the same way for everyone. With a regular user, ambiguity actually helps you. When they say “I dunno, maybe?” the fuzziness is information. It tells you you’re asking the wrong question. The executive doesn’t give you that. You have to know to discount the clarity.

Metrics fail the same way

Teams that think analytics protects them from bad user research are making the same mistake in a different register.

At Moonfare we tracked logins. More logins looks good on a dashboard. Looks like engagement. But private equity is a 5-to-10-year product. For most of that time nothing is supposed to happen. The fund is working. The client doesn’t need to log in. Getting them back on the platform because logins are the number you track doesn’t mean anything, unless the login happens at the right moment.

The right moment isn’t a platform question. It’s a life question. When does this person have cashflow? When’s bonus season? What does their portfolio look like right now, and is there a product we offer that fits the gap? The real need isn’t “log in more.” It’s “be present when a decision is being made.” Five well-timed touchpoints in a year beat fifty random ones.

Same disease, different uniform. The surface metric is a faster horse. Data can be interpreted any way you need it to be, which makes it dangerous in proportion to how much you trust it.

What research is actually for

Here’s the division of labour I’ve come to believe in.

Users own the present. They’re the only people who know what their day actually looks like. What they do when the thing breaks. Where they’ve already spent money trying to fix it. That’s their expertise. It isn’t transferable.

You own the future. The synthesis, the pattern, the product that doesn’t exist yet. The leap.

How deeply you observe scales with how specific your question is. At the shape-of-life level you’re learning the territory. I’m working on a bank for SMEs now. Before I know what to build I need to understand what running a small business actually feels like. How much it matters that a client pays on time. What a late invoice does to someone’s week. Where fear lives in the business. Cashflow, a single customer who’s 30% of revenue, a tax bill they haven’t put money aside for. That isn’t a feature conversation. It’s context.

Once I know I need to do something about invoicing, I zoom in. What do they do today? How do they chase payment? What tools have they tried? What did they pay for that didn’t work, and what did it cost them when nothing worked? Now I’m at the behaviour level. Still not asking what to build. I’m watching and listening carefully enough that when I do make the call, the call is grounded in something real.

The leap is yours. But it has to be a leap from somewhere.

The learnability trap

Evaluative research has its own version of the same problem.

New designs test badly. Usually not because they’re bad. Because they’re unfamiliar, and unfamiliarity reads as friction in a first session. Snapchat’s navigation was almost unusable the first time most people saw it. A week later it was muscle memory. Most large redesigns test worse than the thing they’re replacing for as long as it takes users to stop noticing the change, which is usually not long.

[Image: Snapchat navigation screenshot, from newsroom.snap.com]

A team that only trusts first-session feedback will never ship anything that requires learning. Which is most things worth shipping.

The honest version is harder. Test for whether people figure it out on their own. Whether they remember next time. Whether the friction fades. Whether the new thing does something the old thing couldn’t, once the unfamiliarity is gone. Not whether they liked it the first time they saw it.

Research is intake, not verdict

The failure mode underneath all of this is using research as a way to avoid deciding.

Research is intake. You take it in. You synthesise. Then someone has to make the call and own it.

I like continuous discovery. Frequent behavioural touchpoints beat set-piece studies for most questions. The part I’m careful about is the way the product-trio idea, popularised by Teresa Torres, gets practiced in the wild. The theory is good. Three disciplines feeling the user’s pain together should produce better decisions than one person working alone. In practice I’ve watched it produce three biases averaged into a consensus nobody owns. Someone has to own the interpretation. It can be a researcher, a founder, a PM. But it’s one person’s job, and it comes with the accountability for the call that follows.

The alternative is research-as-stalling. Enough interviews, enough data, enough frameworks that nobody ever has to say “I think this is the answer, and I’m willing to be wrong about it.” That produces products that are carefully informed and quietly mediocre.

The client at the beginning of this piece was trying to help. He wasn’t wrong because he was a bad user. He was wrong because he was being asked the wrong question. And so, in a different way, was I, when I walked in looking for an answer instead of context.

Users own the present. The rest is yours.