Learn · Pillar guide

Strategy research workflow: how deadlines, not frameworks, shape the brief

Most strategy research guides teach frameworks — MECE, Porter, BCG, Ansoff. Frameworks are scaffolding. The actual workflow is decided by the deadline, the reversibility of the underlying decision, and the audience the brief has to survive. Get those right and the framework choice is mostly downstream.

Updated Apr 2026
Cluster: Workflows

The original-content test for this topic

Most strategy research guides teach frameworks — MECE, Porter’s Five Forces, BCG Matrix, Ansoff Matrix, McKinsey 7-S — and call that the workflow. The frameworks are real and useful. They are also not the workflow. A strategy team can know every framework in the consulting handbook and still ship briefs that fail in the investment committee (IC). McKinsey’s own Global Survey on strategic decision-making found that process mattered more than analysis by a factor of six in predicting whether a decision met expectations. The framework lives inside the analysis half. The process — deadline, reversibility, audience, citation discipline — is the half that decides whether the work survives.

The honest framing is different: strategy research workflow is deadline-driven craft, not framework ritual. Frameworks are scaffolding. The workflow is the operational chain that turns a partner’s question into a deck the partner will defend in two weeks of follow-up conversations, with one Friday to assemble it. Get the deadline, the reversibility lens, and the citation discipline right, and the framework choice is mostly downstream.

The reference data this page anchors to: McKinsey Global Survey on decision-making (process > analysis by 6×); McKinsey research finding that ~2/3 of strategic decisions met or exceeded expectations and ~20% of revenue-growth decision outcomes were never measured; Bezos’s 2015 and 2016 Amazon shareholder letters on one-way and two-way doors; Kahneman & Klein (2009) on conditions for trustworthy intuition (reliable feedback environment); Barbara Minto’s Pyramid Principle (1987) for brief structure; and the older consulting framework canon (Porter 1980, BCG 1970s, Ansoff 1965).

A page that teaches frameworks without teaching the workflow they sit inside is teaching the apprentice’s toolkit without the craft. This page treats the craft as the actual subject.

Why strategy research is deadline-driven, not truth-driven

The most common mistake in strategy research workflow design is to treat it like academic research with a tighter timeline. It is not. Strategy research is a different category of work, with a different success criterion, and the workflow follows from that.

Academic research is truth-driven. A dissertation is finished when the answer is defensible by peer review, however long that takes. The five-year doctoral project exists because the question being investigated does not yield to faster work. Strategy research has the opposite property: the deadline is fixed first, and the workflow is whatever fits inside it. A category brief due Friday is due Friday whether the analyst feels confident or not.

This single difference cascades through every workflow decision. Sources are picked for retrievability under deadline, not for definitive coverage. Depth is calibrated to “what survives partner challenge,” not “what survives peer review.” Synthesis is shaped by the deck the partner will read in twenty minutes, not by the chapter a committee will spend weeks evaluating. The craft of strategy work is to do real research inside that constraint without pretending the constraint is academic.

The teams that ship the best strategy briefs are the ones that internalize this and design the workflow around it. The teams that struggle are usually treating the deadline as an inconvenience to be mourned rather than the defining property of the work.

The feedback-loop problem nobody on a strategy team talks about

Strategy decisions take 18 to 36 months to play out. A junior analyst writes a category brief in March; the recommendation gets acted on in April; the consequence — the company entered the category, the founder did or did not raise the next round, the product line did or did not gain share — becomes visible somewhere between the next year and the year after that. By the time the feedback arrives, the analyst has written forty more briefs.

This is the feedback-loop problem and it is structurally different from academic research. A doctoral student gets feedback in months: papers get reviewed, advisors flag flaws, methods get challenged. A strategy analyst gets feedback in years, and the feedback is almost always confounded by other factors — market conditions, execution, leadership changes — that make it impossible to attribute the outcome to the analysis.

The implication is that individual analysts cannot really learn from their own track record at the speed they would need to. Calibration has to come from somewhere else: institutional patterns (the firm has seen many briefs play out), case-based learning (post-mortems on past decisions, including the ones that lost), and apprenticeship (working with senior people who have lived through several full feedback cycles).

A strategy research workflow that ignores the feedback-loop problem produces analysts who think they are good because nobody has told them otherwise yet. The workflows that produce calibration build in deliberate review of past briefs against actual outcomes — including, especially, the ones that worked for reasons other than the analysis.

The McKinsey Global Survey on strategic decision-making found that ~20% of revenue-growth decisions and ~16% of cost-savings decisions had outcomes that were never measured at all. The implication is harsher than it sounds: a large share of strategy decisions are ones nobody learns from, because nobody finishes the loop, and even the measured ones arrive too confounded to attribute to the analysis. Kahneman & Klein’s 2009 paper on the conditions for trustworthy intuition stated this directly: intuition can be trusted only in environments with reliable, repeated feedback. Strategy work, structurally, is not such an environment for the individual analyst. Calibration has to come from somewhere else, or it does not come at all.

What separates framework from craft in strategy work

A consulting framework — MECE, Porter, BCG Matrix, Ansoff, McKinsey 7-S — is a structured way to think about a class of problems. Frameworks are useful and most strategy briefs use one or two. They are not, however, the workflow itself.

The difference between framework and craft shows up in three places. First, framework choice: a junior analyst applies the most familiar framework to whatever problem is in front of them; a senior consultant picks the framework that fits the specific question, and modifies it when the question does not quite fit. Second, framework depth: a junior fills in every cell of the framework even when most cells are noise; a senior leaves the cells that do not matter empty and spends the time on the two cells that do. Third, framework abandonment: a junior keeps the framework even when the analysis has already exceeded what the framework can express; a senior drops it the moment it becomes a constraint instead of a tool.

This is what people mean when they say strategy work is a craft. The framework is the apprentice’s toolkit. The craft is knowing which tool to use, when to modify it, when to abandon it, and how to communicate the decision to a partner who is going to ask “why this and not that?” The craft cannot be reduced to a framework, which is why the most useful consulting books — Barbara Minto’s Pyramid Principle, Ethan Rasiel’s McKinsey Way, the early McKinsey Quarterly archives — read more like cooking notes than like operations manuals.

Reversibility: the lens that decides how much research is enough

Strategic decisions vary in how reversible they are. Some are one-way doors: enter a new geography, acquire a competitor, sunset a product line. Once committed, the cost of reversing is high — sometimes prohibitive. Others are two-way doors: adjust pricing on a pilot SKU, run a marketing test, change the GTM motion for a quarter. The cost of reversing is low; the cost of being slow to test is higher than the cost of being slightly wrong.

This distinction comes from Jeff Bezos’s 2015–2016 Amazon shareholder letters, where he argued that the standard for one-way doors should be much higher than the standard for two-way doors. The principle is older — most experienced operators arrive at it independently — but the framing is useful for strategy research workflow design.

The implication: not every strategy question deserves the same workflow. A category brief informing a one-way door should run deeper, gather more sources, and survive more scrutiny. A brief informing a two-way door should be tight enough to make the decision faster than the alternative — better to pilot and learn than to research and delay.

The workflows that get this wrong run every brief at the same depth. The brief informing the one-way door is therefore under-researched (because the average depth is set by the volume of two-way decisions), and the brief informing the two-way door is over-researched (because it inherited the depth budget of the one-way work). Calibrating depth to reversibility is one of the highest-leverage workflow decisions a strategy team makes.

Decision type: One-way door (irreversible)
Examples: Enter new geography · Acquire competitor · Sunset product line · Hire C-level · Long-term lease
Cost of being wrong: High to prohibitive
Workflow depth: Deep — multiple weeks if needed
Verification standard: Every quantitative claim cited; partner challenge anticipated; counterargument written

Decision type: Two-way door (reversible)
Examples: Pilot SKU pricing · A/B test GTM motion · Run marketing test · Trial vendor · Test new sales script
Cost of being wrong: Low; easy to reverse
Workflow depth: Tight — days, not weeks
Verification standard: Source-cited but lower verification bar; better to pilot than research

Decision type: Hybrid (partially reversible)
Examples: Rebrand · Restructure team · Change CRM
Cost of being wrong: Medium — reversible at a real cost
Workflow depth: Mid — calibrated to switching cost
Verification standard: Citation discipline plus an explicit “what would change the answer in 6 months?”

A useful falsifiable test: write down whether the decision the brief informs is one-way or two-way before starting research. If you cannot tell, stop and ask the decision-maker — because the workflow you should run depends on their answer, not on yours.
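If the calibration is worth making explicit, it is small enough to write down. Below is a minimal sketch in Python of one way a team might encode the table above; DoorType, ResearchPlan, and plan_for are hypothetical names, and the day counts are illustrative defaults, not a standard.

```python
from dataclasses import dataclass
from enum import Enum

class DoorType(Enum):
    ONE_WAY = "one-way"   # irreversible: new geography, acquisition, product sunset
    TWO_WAY = "two-way"   # reversible: pricing pilot, marketing test, trial vendor
    HYBRID = "hybrid"     # reversible at real cost: rebrand, team restructure, CRM change

@dataclass
class ResearchPlan:
    depth_days: int                 # rough time budget for the research phase
    verification_bar: str           # what every quantitative claim must satisfy
    requires_counterargument: bool  # must the brief answer "why not the opposite?"

def plan_for(door: DoorType) -> ResearchPlan:
    """Calibrate research depth to the reversibility of the underlying decision."""
    if door is DoorType.ONE_WAY:
        return ResearchPlan(10, "every claim cited; partner challenge anticipated", True)
    if door is DoorType.TWO_WAY:
        return ResearchPlan(2, "source-cited; piloting beats further research", False)
    return ResearchPlan(5, "cited, plus 'what would change the answer in six months?'", True)
```

The point of writing it down is not automation; it is that the depth budget gets decided when the decision type is named, not when Friday arrives.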

The Tuesday-to-Friday rhythm and why most teams break it

Strategy teams converge on a Tuesday-to-Friday rhythm without anyone explicitly deciding on it. Tuesday is research — getting the lay of the land. Wednesday is synthesis — finding the shape of the take. Thursday is internal review — getting the rough edges off. Friday is delivery. Most teams that work this way produce one good brief a week and feel busy doing it.

The rhythm is more useful than it looks. Tuesday’s research time is bounded — by the next morning, the analyst has to have something to synthesize, which forces source selection to be tighter than it would be with unlimited time. Wednesday’s synthesis time is bounded too, which forces the take to converge before all the questions are answered. Thursday is the only flexible day, which is exactly where flexibility is most valuable: pressure-testing the take with someone who was not in the analysis.

Teams break the rhythm in two ways. The first is to extend research into Wednesday, on the theory that “the take will get better with more sources.” It usually does not; the take gets denser without getting clearer. The second is to skip Thursday’s review on the theory that “the deadline is tight.” This is the more dangerous failure: the brief ships without the pressure-test that would have caught the weak claim, and the partner finds it instead.

A useful workflow rule: Thursday’s review is the floor, not the ceiling. If something has to give, Tuesday’s research depth gives first, not Thursday’s review.

The five questions every strategy brief has to answer

Strategic briefs vary by topic — category landscape, competitive teardown, market sizing, M&A target screen — but the questions that survive partner challenge are remarkably consistent. The brief that holds up answers, explicitly:

  1. What is the question, in one sentence the partner would write? If the analyst’s framing differs from the partner’s framing, the brief will fail in the meeting regardless of how good the analysis is.
  2. What is the answer, in one sentence? Strategy briefs are not detective novels. The answer goes early — Minto’s Pyramid Principle is right about this — and the rest of the brief defends it.
  3. What are the two or three pieces of evidence that most support the answer? The full source list is in the appendix; the body of the brief shows the partner the evidence that decides.
  4. What is the strongest argument against the answer, and why does the analyst still hold it? This is the question partners actually ask. Briefs that anticipate it survive; briefs that are blindsided by it do not.
  5. What would change the answer? A brief that says “we recommend X” without saying “we would recommend Y if we observed Z” is taking a position without inviting the conversation that decides whether the position holds.

These five questions are useful as a workflow constraint, not just a writing template. A brief whose research could not answer all five at the level of detail the partner will demand is not a brief; it is a draft. The Thursday review pass is, at minimum, an audit against these five questions.
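Run as a literal checklist, the Thursday audit is short. Here is a minimal sketch, assuming a team wanted to make the pass mechanical; FIVE_QUESTIONS and audit_brief are hypothetical names, and the wording simply restates the five questions above.

```python
# The five questions as a literal Thursday-review checklist (hypothetical helper).
FIVE_QUESTIONS = (
    "Question stated in one sentence the partner would write",
    "Answer stated in one sentence, early in the brief",
    "Two or three decisive pieces of evidence in the body, not only the appendix",
    "Strongest counterargument stated, with the reason the answer still holds",
    "Explicit statement of what observation would change the answer",
)

def audit_brief(passes: dict[str, bool]) -> list[str]:
    """Return the questions the draft cannot yet answer; empty list means brief, not draft."""
    return [q for q in FIVE_QUESTIONS if not passes.get(q, False)]

# Example: a draft that answers everything except the counterargument question.
gaps = audit_brief({q: True for q in FIVE_QUESTIONS} | {FIVE_QUESTIONS[3]: False})
# gaps == ["Strongest counterargument stated, with the reason the answer still holds"]
```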

Sources that survive partner challenge

The discipline that separates a defendable strategy brief from a chat-thread summary is citation. Every claim in the brief — pricing, customer count, regulatory change, competitive move — must trace to a current public source. The audit is non-negotiable: if a partner can ask “where did this number come from?” and the analyst cannot answer in five seconds, the brief loses credibility regardless of whether the underlying claim is correct.

The sources that survive: pricing pages and product surfaces (verbatim, dated), regulatory filings (10-K, 10-Q, S-1), recent earnings calls (transcripts, not summaries), customer reviews on independent platforms, hiring patterns from public job postings, and any PDF the analyst has loaded from a firm subscription. The sources that do not survive: AI summaries without underlying citations, third-party blog posts of unknown provenance, Reddit threads cited without verification, and any number whose origin the tool cannot show on demand.

The mechanic that makes this work is paragraph-level citation that survives the deck export. The analyst writes; the citation attaches; the deck preserves it as a clickable footnote that opens to the source the analyst saw. For a deeper treatment of how this works in practice, see AI competitive intelligence and cited AI research reports.
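In data terms the mechanic is simple. Below is a minimal sketch of what a paragraph-level citation record could look like, under the assumption of the simplest possible shape; Citation, Claim, and survives_challenge are hypothetical names, not any tool's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class Citation:
    quote: str         # verbatim text as it appeared on the source page
    url: str           # current public source the partner can click through to
    retrieved: date    # the date the analyst saw it; pricing pages change
    source_type: str   # "pricing page", "10-K", "earnings call transcript", ...

@dataclass
class Claim:
    text: str                     # the sentence in the brief, e.g. a pricing figure
    citations: list[Citation] = field(default_factory=list)

    def survives_challenge(self) -> bool:
        """The five-second test: at least one dated, clickable, verbatim source behind the claim."""
        return any(c.url and c.quote for c in self.citations)
```

The property the partner challenge depends on is that the quote, the URL, and the retrieval date travel together through the deck export, so the claim can be traced on screen in seconds.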

Synthesis is where craft replaces framework

Synthesis is the part of strategy research that frameworks help least with. A framework can structure the inputs — what is the competitive landscape, what is the regulatory environment, what is the customer segmentation — but the take is the analyst’s view of what the inputs collectively imply. The framework does not produce the take; the analyst does, by reading across the inputs.

This is also where AI tools fail most visibly. AI can produce a fluent paragraph from any set of inputs. It cannot tell whether the paragraph is the right paragraph — whether it captures the strategic implication the partner needs, or whether it papers over a tension between sources that the analyst should have surfaced. The fluent-but-wrong synthesis is the most dangerous failure mode of AI-assisted strategy work.

The craft move is to treat AI synthesis as scaffolding the analyst reacts against. The first draft surfaces what the AI thinks the take is; the analyst reads it, notices what is missing, what is over-claimed, and what the AI failed to see. The second draft is the analyst’s actual take, written from scratch with the AI draft as a foil. This pattern — generate, react, rewrite — produces better work than either pure-AI or pure-human drafting because it forces the analyst to articulate why their take differs from the obvious one.

Common mistakes in strategy research workflow

Three patterns recur across briefs that did not survive partner review.

Optimizing for the wrong audience. The brief is written to impress an internal reviewer (the senior associate, the case team lead) rather than to inform the actual decision-maker (the partner, the executive sponsor). The two audiences want different things — the reviewer wants thoroughness, the decision-maker wants the answer. Briefs that confuse them get praised in the case team meeting and fall flat in the IC.

Ignoring the reversibility of the underlying decision. Running the same deep workflow on a two-way door wastes a week the team could have spent piloting. Running a shallow workflow on a one-way door produces a brief that looks adequate until the decision turns out to have been irreversible and under-supported.

Treating frameworks as the deliverable. A brief whose conclusion is “we applied the BCG matrix and here are the four quadrants” has not yet produced a take. The framework structures the analysis; the conclusion is what the analysis implies, not the analysis itself.

A note from building Innogath

We made a deliberate product choice in Innogath: there is no “research forever” mode. Every project carries a soft deadline that the workspace nudges toward, and the synthesis surface appears earlier than users expect. This came from a strategy user in early access who said: “the AI tool that researches forever is the tool that misses Friday.” The shape of the workspace reflects that the deadline is the work, not an inconvenience to be mourned.

Where Innogath fits

Innogath is built around the deadline-driven, citation-disciplined view of strategy research this guide describes. The branching tree opens one branch per competitor, segment, or buying motion, with citations attached to the public surface, hiring signal, and pricing. The synthesis branch reads across the others. The deck export preserves citations as clickable footnotes — every number opens to the source the analyst saw, which is the property the partner challenge depends on.

For the persona-level walkthrough — what the workflow looks like end to end for a strategy analyst — see the market research use case. For adjacent methodology, see the deep research guide, the sibling academic research workflow pillar, and the AI competitive intelligence sub-cluster.

References

Decision process research: McKinsey & Company, How companies make good decisions: McKinsey Global Survey Results. Source for the “process matters more than analysis by a factor of six” finding and the unmeasured-outcome rates (~20% revenue-growth, ~16% cost-savings).

Reversibility framing: Bezos, J., 2015 and 2016 Letters to Amazon Shareholders (one-way doors / two-way doors). Earlier formulations of the same idea exist in operations research and decision theory; the 2015–2016 framing is the modern reference.

Feedback loops and intuition: Kahneman, D. & Klein, G. (2009), “Conditions for intuitive expertise: A failure to disagree,” American Psychologist, 64(6). Establishes that trustworthy intuition requires repeated experience in environments with reliable feedback — a condition strategy research structurally does not satisfy for the individual analyst. Klein, G. (1998), Sources of Power: How People Make Decisions, MIT Press, for the naturalistic decision-making perspective. Kahneman, D. (2011), Thinking, Fast and Slow, for the high-validity / low-validity environment distinction.

Brief structure and consulting frameworks: Minto, B. (1987), The Pyramid Principle, for the answer-first brief structure. Rasiel, E. (1999), The McKinsey Way, for MECE and the apprenticeship view of consulting craft. Porter, M. (1980), Competitive Strategy, for the Five Forces framework. Henderson, B., BCG Perspectives series (1970s) for the BCG Matrix. Ansoff, I. (1965), Corporate Strategy, for the Ansoff Matrix.


FAQ

Questions this guide should settle

What is a strategy research workflow?

A strategy research workflow is the operational chain that turns a strategic question into a defendable brief — usually a deck, memo, or category landscape. It differs from academic workflow in that the deadline is fixed and the deliverable has to survive a partner challenge, not a peer review. The workflow is judged on whether the brief can be defended with the time available, not on whether the answer is provably correct.

How long does strategy research take?

A focused category brief typically runs three to five working days from question to deck. Larger engagements — full strategic plans, market sizing for a new entrant, multi-year category bets — run two to six weeks. The tight rhythm matters because partners and executives expect strategy research to fit inside their decision cadence; research that takes longer than the decision needs is research that arrives after the decision has already been made.

What is the difference between strategy research and academic research?

The deadline. Academic research is truth-driven: the project is done when the answer is defensible, however long that takes. Strategy research is deadline-driven: the project is done by Friday, and the standard is whether the brief survives a partner challenge with the evidence assembled by Friday. This single difference changes almost every workflow decision — sources, depth, citation discipline, synthesis approach.

What frameworks should a strategy research workflow use?

The frameworks taught in consulting handbooks — MECE, Porter's Five Forces, BCG Matrix, Ansoff Matrix, McKinsey 7-S — are scaffolding for thinking. They are not the workflow itself. A senior consultant uses one or two frameworks per brief, often modified, and spends most of their time on the parts of the brief the framework does not cover. Junior consultants over-apply frameworks; the craft is in knowing when not to use one.

How do you avoid hallucinated numbers in strategy research?

Every number in the brief has to trace back to a current, public source the partner can click through to. The test is whether the analyst can produce the source page on screen during the meeting when challenged. Numbers that come from "industry reports" without specific sources, or from AI summaries without underlying citations, are the failure mode. The discipline is verbatim citation at the paragraph level, preserved through the deck export.

What is the role of AI in a strategy research workflow?

AI compresses the assembly half of the workflow — pulling competitor surfaces, comparing pricing pages, deduplicating analyst reports, drafting the comparison matrix. It does not compress the synthesis half, which is where the analyst's judgment about what matters strategically lives. Workflows that apply AI uniformly produce briefs that look slick but cannot be defended; workflows that apply AI to assembly and reserve human time for synthesis are the ones that ship faster without losing rigor.

How do strategy teams know when a brief is good enough?

The brief is good enough when every claim in it can be defended with a citable source, the recommendation follows from the evidence assembled, and the partner can trace any number to its origin in under five seconds. This is a deliberately operational test, because "good enough" in strategy work is not "as good as possible" but "defensible by Friday." Teams that confuse the two end up shipping later than the decision needed and over-investing in briefs that the decision-maker has already moved past.

Turn the guide into a research workspace.

Bring one serious topic into Innogath and let the first report become a cited map, branch tree, and writing surface.