Playing to Win for AI Strategy: The 5-Box Cascade Leadership Teams Actually Ship
Apply Roger Martin's Playing to Win cascade to AI strategy. The five choices that separate AI leaders from organisations stuck in pilot purgatory.
Why Most AI Strategies Fail Before They Start
Every serious AI strategy deck eventually arrives at the same graveyard: a slide titled 'Strategic Priorities' populated with three to five aspirational verbs — accelerate, optimise, transform — followed by a roadmap that is really just a list of tools the organisation is already piloting. The deck gets applauded, the budget gets approved, and eighteen months later the executive team is asking why the return on investment never materialised.
The problem is almost never the technology. It is the absence of genuine strategic logic. Organisations conflate AI strategy with AI planning. Planning answers 'what will we do?' Strategy answers the harder question: 'what will we not do, and on what logic do we bet that our chosen moves will produce durable advantage?' Without that logic, AI investment disperses into hundreds of micro-projects, each locally sensible, collectively incoherent.
Roger Martin's Playing to Win framework, developed with A.G. Lafley at Procter & Gamble and later formalised in the 2013 book of the same name, is the most rigorous tool for building that logic at the corporate level. It forces five cascading choices — winning aspiration, where to play, how to win, required capabilities, and enabling management systems — that must be mutually reinforcing. Each choice both constrains and is validated by the others. The cascade is not a checklist; it is an interlocking system where a weak link collapses the whole.
Applying this cascade to AI adoption is not a rhetorical exercise. It produces genuinely different strategic decisions from those generated by the standard AI maturity model or technology roadmap approach. This article maps each of the five boxes onto the specific choices AI leadership teams face in 2025, and explains what it takes to move from a cascade on paper to a cascade that actually ships.
Box One — Winning Aspiration: What Does AI Victory Look Like for This Organisation?
The winning aspiration is not a mission statement and it is not a financial target. It is a specific definition of what winning means for a defined set of customers or stakeholders, and it must be ambitious enough to require genuine trade-offs. For AI strategy, this means naming the concrete advantage the organisation intends to hold in three to five years that would be difficult for a well-resourced competitor to replicate quickly.
This is where most leadership teams make their first strategic error: they define the aspiration at the technology layer rather than the outcomes layer. 'We will be an AI-first organisation' is not a winning aspiration. It describes an input, not a competitive position. A winning aspiration sounds more like: 'We will compress the time from customer signal to product response to under 72 hours, making us the fastest-adapting firm in our category' or 'We will reduce the fully-loaded cost of compliance operations by 40 percent while improving audit defensibility, freeing capital to accelerate underwriting.' Both statements imply where AI will be deployed, how it must perform, and what a competitor would have to match.
The aspiration must also be honest about the nature of the advantage being sought. There are fundamentally three types of AI-enabled advantage available to most enterprises: speed advantage (faster decisions, faster products), cost structure advantage (lower marginal cost per transaction or per knowledge worker output), and insight advantage (proprietary data loops that improve predictions over time). These are not mutually exclusive but they require very different capability investments. Trying to pursue all three simultaneously is the organisational equivalent of having no aspiration at all.
Leadership teams that invest time here — usually two to three focused working sessions with the right cross-functional voices in the room — almost always report that the exercise surfaces genuine disagreement about what the organisation is actually trying to achieve with AI. That disagreement, once visible, is the most valuable strategic input available.
Box Two — Where to Play: Choosing the Arenas That Reward Your AI Bets
Where to play defines the specific arenas in which the organisation will compete using AI-enabled capabilities. In classic strategy terms, this means customer segments, geographies, product categories, and value chain positions. In AI strategy terms, it requires an additional dimension: which data environments, workflow categories, and decision types are you competing to own?
The where-to-play choice is the most consequential constraint in the cascade because it determines which capability investments are necessary and which are distractions. An organisation that declares it will win in AI-augmented customer service faces entirely different capability requirements than one that declares it will win in AI-driven supply chain optimisation — even if both are using broadly similar model infrastructure. The mistake of treating AI as a horizontal capability that improves everything simultaneously is a direct consequence of never making a clear where-to-play decision.
For enterprise AI strategy, the where-to-play choice should evaluate arenas on three criteria. First, data advantage: does the organisation hold proprietary data in this arena that would make models trained on it genuinely hard to replicate? Second, workflow criticality: are the decisions made in this arena consequential enough that modest AI-enabled improvement translates into material business outcomes? Third, regulatory tractability: given the EU AI Act's risk-tiered architecture, is this an arena where the organisation can deploy at the pace required without being throttled by compliance overhead it has not yet built?
That third criterion deserves particular weight in 2025. Arenas involving high-risk AI applications as defined under the EU AI Act — including employment decisions, credit scoring, and certain safety-critical systems — carry conformity assessment, fundamental rights impact assessment, and post-market monitoring obligations that add meaningful time and cost to deployment. Organisations that have not mapped their intended arenas against the Act's risk classification framework before finalising their where-to-play choices will consistently find that their deployment timelines are optimistic.
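The three-criteria evaluation above can be made concrete as a simple scoring exercise. The sketch below is illustrative only: the arenas, scores, and equal weighting are assumptions a leadership team would replace with its own judgement, not a prescribed methodology.

```python
from dataclasses import dataclass

# Hypothetical arena-evaluation sketch. The candidate arenas and the
# 1-5 scores below are illustrative assumptions, not recommendations.

@dataclass
class Arena:
    name: str
    data_advantage: int          # 1-5: strength of proprietary data here
    workflow_criticality: int    # 1-5: business impact of decisions made here
    regulatory_tractability: int  # 1-5: 5 = minimal EU AI Act overhead

    def score(self) -> float:
        # Equal weighting is a starting assumption; many teams will decide
        # that data advantage deserves more weight in their context.
        return (self.data_advantage
                + self.workflow_criticality
                + self.regulatory_tractability) / 3

arenas = [
    Arena("AI-augmented customer service", 4, 3, 4),
    Arena("Credit scoring", 5, 5, 2),      # high-risk under the EU AI Act
    Arena("Supply chain optimisation", 3, 4, 5),
]

for arena in sorted(arenas, key=Arena.score, reverse=True):
    print(f"{arena.name}: {arena.score():.2f}")
```

The value of even a toy model like this is that it forces the disagreement into the open: two executives who score the same arena differently on regulatory tractability are disagreeing about something specific and resolvable.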
Box Three — How to Win: The AI Capability Logic That Makes Your Position Defensible
How to win is the box where genuine strategic differentiation either appears or does not. It answers: given our chosen arenas, what specific source of advantage will make our AI position durable rather than temporary? This is not a technology choice. It is a logic about why customers or the market will prefer our AI-enabled offering over alternatives, and why that preference will compound rather than erode.
In AI strategy, there are a small number of genuinely defensible how-to-win positions. The most powerful is the proprietary feedback loop: an AI system deployed in a high-frequency decision environment accumulates labelled outcomes that continuously improve model performance, creating a widening gap between the incumbent and any new entrant. This is the logic behind financial fraud detection leaders, recommendation engines, and the strongest clinical decision support tools. It requires not just a good initial model but an intentional data flywheel design built into the product or process from the start.
A second defensible position is workflow integration depth. An AI capability that is deeply embedded in the operational workflows of an enterprise — where extracting it would require retraining hundreds of staff and re-engineering adjacent processes — has a switching cost moat that a technically superior competitor will struggle to overcome. This is the strategic logic that enterprise software incumbents are racing to establish by integrating AI natively into existing platforms rather than selling standalone AI tools.
A third position, increasingly relevant for regulated industries, is compliance-as-differentiation. An organisation that can demonstrably operate AI at scale with full EU AI Act conformity — complete technical documentation, auditable human oversight protocols under Article 26, systematic post-market monitoring — can serve regulated customers and public sector buyers that competitors without that infrastructure simply cannot. This is not a soft advantage; it is a hard market access gate that compounds as the Act's enforcement mechanisms mature.
The how-to-win statement should be specific enough that a competitor reading it would understand exactly what they would need to match.
Box Four — Required Capabilities: Building What the Cascade Demands
Required capabilities are the activities and competencies the organisation must have in place to execute the where-to-play and how-to-win choices. This is where most AI strategies encounter their most painful reality check. The gap between the capabilities implied by an ambitious AI strategy and the capabilities an organisation actually possesses is almost always larger than the leadership team estimated at the outset.
For AI strategy specifically, required capabilities cluster into four domains. The first is data capability: the ability to acquire, clean, label, govern, and continuously improve the training and evaluation data that AI systems depend on. This is not a one-time investment; it is an ongoing operational discipline that requires dedicated staffing and tooling. Organisations that treat data capability as a project rather than a function consistently discover that their AI systems degrade in production rather than improve.
The second domain is model lifecycle management: the organisational ability to evaluate, deploy, monitor, retrain, and retire AI models in a governed and auditable way. This is the operational backbone that the EU AI Act's post-market monitoring obligations under Article 72 will eventually put under scrutiny, and that Article 73's serious incident reporting obligations require to be functioning before an incident occurs rather than after. Without model lifecycle management as a genuine capability, AI deployment is a series of one-time events rather than a compounding programme.
The third domain is AI literacy at decision-maker level. Not every executive needs to understand gradient descent, but every executive whose decisions are informed or accelerated by AI systems needs to understand the limits of those systems — their confidence thresholds, their known failure modes, their data currency. Organisations that deploy AI into decision workflows without investing in this literacy consistently find that human oversight exists on paper but not in practice, which is both an operational risk and an Article 26 compliance risk.
The fourth domain is governance infrastructure: the policies, registers, assessment workflows, and monitoring systems that make AI deployment auditable and correctable. Platforms that systematise this — mapping deployed systems, tracking human oversight protocols, generating conformity documentation, and synthesising post-market signals — compress the time and cost of building this capability significantly. But the capability itself must be owned by the organisation, not outsourced to a tool.
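At its core, the governance infrastructure described above rests on a register of deployed systems that can be queried for gaps. The sketch below is a minimal illustration of that idea; the field names and simplified risk tiers are assumptions loosely modelled on the EU AI Act's categories, not a compliance implementation.

```python
from dataclasses import dataclass
from enum import Enum

# Minimal AI system register sketch. Risk tiers are simplified
# assumptions loosely following the EU AI Act's broad categories.

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class RegisteredSystem:
    name: str
    owner: str
    risk_tier: RiskTier
    human_oversight_documented: bool = False
    post_market_monitoring: bool = False

def compliance_gaps(register: list[RegisteredSystem]) -> list[str]:
    """Flag high-risk systems missing the controls the register tracks."""
    gaps = []
    for system in register:
        if system.risk_tier is RiskTier.HIGH:
            if not system.human_oversight_documented:
                gaps.append(f"{system.name}: no documented human oversight")
            if not system.post_market_monitoring:
                gaps.append(f"{system.name}: no post-market monitoring")
    return gaps
```

Even a register this simple makes the fourth-domain point concrete: the organisation, not a vendor, owns the answer to "which of our high-risk systems currently lack documented oversight?"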
Box Five — Management Systems: The Organisational Plumbing That Makes Strategy Real
Management systems are the structures, processes, metrics, and incentive designs that reinforce the four choices above and make them self-sustaining rather than dependent on individual champions. This is the box that separates organisations that successfully institutionalise AI strategy from those that lose momentum when the original sponsors move on or budget cycles tighten.
For AI strategy, the most critical management system is the AI performance measurement framework — a suite of metrics that tracks not just AI output quality but the actual business outcomes the strategy was designed to produce. This means connecting model performance metrics (accuracy, drift, latency) to operational metrics (decision cycle time, error rate, cost per transaction) to strategic metrics (margin improvement, market share gain, customer retention). Without that vertical connection, AI teams optimise for model metrics while the business wonders why the strategy is not delivering.
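One way to make that vertical connection tangible is to represent the metric hierarchy explicitly, so every model-level metric can be traced to the strategic metric it feeds. The sketch below is a deliberately simple illustration; the metric names and target values are hypothetical.

```python
# Sketch of the vertical metric linkage described above: each strategic
# metric traces down through operational metrics to model-level drivers.
# All metric names and target values here are illustrative assumptions.

metric_tree = {
    "margin_improvement": {            # strategic metric
        "cost_per_transaction": {      # operational metric
            "model_latency_ms": 120,   # model-level drivers and targets
            "model_accuracy": 0.97,
        },
        "error_rate": {
            "model_drift_score": 0.05,
        },
    },
}

def model_metrics_for(strategic_metric: str, tree: dict) -> dict:
    """Collect every model-level metric that ultimately feeds a strategic one."""
    drivers = {}
    for operational_drivers in tree.get(strategic_metric, {}).values():
        drivers.update(operational_drivers)
    return drivers
```

The design point is the traceability itself: if a model metric cannot be placed in this tree, the team should ask why it is being optimised at all.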
The second critical management system is the governance cadence: a regular rhythm of review at which the AI portfolio is assessed against both performance and compliance standards. This is not a quarterly IT review; it is a strategic review that includes the AI lead, the compliance function, and at least one business unit leader for each material deployment. The EU AI Act's post-market monitoring requirements under Article 72 effectively mandate a version of this cadence for high-risk systems. Organisations that build it proactively rather than reactively find that it becomes a genuine strategic asset rather than a compliance burden.
Third is the incentive and accountability structure. If AI adoption is measured in the business units purely by speed of deployment and in the compliance function purely by absence of incidents, the inevitable result is a tension that produces either reckless deployment or paralytic caution. Effective management systems align incentives around a shared definition of success that includes both business outcomes and governance quality. This alignment is not automatic; it requires deliberate design at the leadership level and visible commitment from the C-suite.
Fronterio's Strategy Canvas was built precisely for this fifth box: a persistent, living representation of the cascade that connects strategic choices to deployed AI systems, maps each system's governance status, and surfaces the gaps between the strategy leadership has declared and the operational reality on the ground. It does not replace the strategic thinking, but it makes the strategy's implications continuously visible.
Making the Cascade Mutually Reinforcing — and Knowing When to Revisit It
The power of the Playing to Win cascade is not any individual box but the mutual reinforcement between all five. A winning aspiration that cannot be served by your chosen arenas is fantasy. A where-to-play choice that does not connect to a defensible how-to-win is a market presence, not a strategy. Capabilities that do not map to the how-to-win are waste. Management systems that do not measure the capabilities required by the cascade are governance theatre. The test of a well-constructed cascade is simple: can you trace a logical line from every significant AI investment back to the winning aspiration through each of the five choices? If you cannot, either the investment is misaligned or the cascade is incomplete.
This internal consistency test should be conducted at the outset of strategy development and repeated at meaningful intervals — typically annually for the full cascade and quarterly for the management systems and capability assessments. The AI landscape is moving fast enough that arenas that were low-priority in 2023 may be strategically critical in 2025, and capabilities that required expensive custom development may now be available as commodity infrastructure. The cascade is not a document that gets approved and filed; it is a living strategic hypothesis that should be updated as evidence accumulates.
Organisations in regulated industries face an additional prompt to revisit the cascade: regulatory change. The EU AI Act's obligations are being phased in through 2027, and each phase shift changes the cost and complexity of certain where-to-play choices and potentially the viability of certain how-to-win positions. Leadership teams that treat the Act purely as a compliance programme miss the strategic signal embedded in it: that demonstrable AI governance quality is becoming a genuine differentiator in regulated markets, and that organisations which build that capability early will hold an advantage that late movers will struggle to close.
The cascade is ultimately an act of intellectual honesty — a leadership team's willingness to make explicit choices, accept the constraints those choices impose, and resource the capabilities those constraints require. In AI strategy as in any strategy, the organisations that win are almost never the ones that tried to do everything. They are the ones that chose well and then executed relentlessly on that choice.
From Cascade to Execution: The First 90 Days
A complete Playing to Win cascade for AI strategy is typically the output of four to six weeks of structured leadership work: initial framing, arena analysis, capability assessment, cascade drafting, internal consistency review, and a final commitment session at which trade-offs are explicitly named and accepted. The process is not long by strategic planning standards, but it requires genuine leadership time and intellectual honesty that many organisations struggle to protect against the daily pressure of operational demands.
The first 90 days after the cascade is finalised should focus on three things. First, a gap analysis between the capabilities required by the cascade and the capabilities currently in place — not a theoretical gap analysis but a concrete inventory of what exists, what is being built, and what must be acquired or hired. Second, a portfolio review that maps every current AI initiative against the cascade's where-to-play choices and terminates or deprioritises projects that do not serve the declared arenas. This is politically difficult but strategically necessary; a cascade that does not result in stopping things is not a cascade, it is a decoration. Third, the establishment of the management system cadence — the first governance review scheduled, the metrics framework defined, the reporting structure confirmed.
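The second step — mapping every initiative against the declared arenas — can be sketched as a simple partition of the portfolio. The initiative and arena names below are hypothetical, chosen only to illustrate the mechanic.

```python
# Illustrative sketch of the portfolio review step: partition the current
# AI portfolio by whether each initiative serves a declared arena.
# Arena and initiative names are hypothetical examples.

declared_arenas = {"customer-service", "compliance-ops"}

portfolio = [
    {"name": "Chat deflection model", "arena": "customer-service"},
    {"name": "Contract clause extraction", "arena": "compliance-ops"},
    {"name": "Marketing image generator", "arena": "brand-content"},
]

keep = [p for p in portfolio if p["arena"] in declared_arenas]
stop = [p for p in portfolio if p["arena"] not in declared_arenas]

# 'stop' now holds the politically difficult list: projects that serve no
# declared arena and are candidates for termination or deprioritisation.
```

The mechanic is trivial; the discipline is not. What matters is that every initiative is forced through the filter, with no grandfathered exceptions.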
Organisations that use a platform like Fronterio to systematise the governance and monitoring dimensions of the cascade find that the discipline of the platform itself reinforces strategic focus. When every AI system in the portfolio must be registered, risk-classified, assessed for EU AI Act conformity, and connected to documented human oversight protocols, it becomes immediately visible which projects were speculative bets that never connected to strategy and which are genuine capability investments. That visibility is not just a compliance benefit; it is a strategic clarification tool.
The organisations that will lead their industries in AI capability over the next five years are not necessarily the ones deploying the most models today. They are the ones making the clearest strategic choices about where AI will produce durable advantage for them specifically, building the capabilities those choices require, and governing their AI portfolios with the rigour that compounding returns demand. Playing to win, not playing to play.
Frequently asked questions
What is the Playing to Win framework and how does it apply to AI strategy?
Playing to Win is a strategy framework developed by Roger Martin and A.G. Lafley that structures strategy as five cascading choices: winning aspiration, where to play, how to win, required capabilities, and management systems. Applied to AI strategy, it forces leadership teams to make explicit, mutually reinforcing choices about which AI-enabled competitive positions they are building, rather than defaulting to broad AI adoption programmes that distribute investment without strategic logic.
Why do most enterprise AI strategies fail to deliver competitive advantage?
Most enterprise AI strategies conflate planning with strategy. They produce lists of tools, projects, and maturity targets without ever defining a specific competitive position AI is meant to create or the trade-offs required to pursue it. Without a clear where-to-play and how-to-win choice, AI investment disperses across hundreds of locally reasonable projects that collectively produce no durable advantage. The Playing to Win cascade addresses this by forcing explicit, interconnected choices that constrain each other.
What are the five boxes in an AI Playing to Win cascade?
The five boxes are: winning aspiration (the specific competitive position AI will create), where to play (the arenas — workflows, segments, decision types — where AI will be deployed), how to win (the logic that makes the position defensible, such as a data flywheel or compliance differentiation), required capabilities (data, model lifecycle management, AI literacy, governance infrastructure), and management systems (metrics, governance cadences, and incentive structures that sustain execution).
How does the EU AI Act affect where-to-play choices in AI strategy?
The EU AI Act's risk-tiered classification system means that different arenas carry very different compliance overhead. High-risk applications — including employment decisions, credit scoring, and certain safety-critical systems — require conformity assessments, fundamental rights impact assessments under Article 27, post-market monitoring, and documented human oversight under Article 26. Organisations that do not map intended arenas against this risk framework before finalising strategy consistently find their deployment timelines are optimistic and their governance costs are underestimated.
What is a defensible how-to-win position in AI strategy?
Defensible AI positions typically fall into three types: proprietary feedback loops (where deployed AI accumulates labelled outcomes that continuously improve model performance, creating a widening gap with competitors), workflow integration depth (where AI is embedded deeply enough in operations that switching costs become a moat), and compliance differentiation (where demonstrable EU AI Act conformity enables access to regulated markets and public sector buyers that competitors without that infrastructure cannot serve).
How often should a leadership team revisit its AI strategy cascade?
The full cascade should be reviewed annually at minimum, with quarterly reviews of the capability and management systems boxes. Significant external triggers — a new phase of EU AI Act implementation, a major shift in foundation model capabilities, or a competitor move that changes the landscape of a chosen arena — should prompt an immediate partial review. The cascade is a living strategic hypothesis, not a document that gets filed after approval.
What AI governance capabilities are required to execute an AI strategy cascade?
Required governance capabilities include a systematic AI system register, risk classification processes aligned to EU AI Act categories, conformity documentation workflows, human oversight protocols that satisfy Article 26 obligations, post-market monitoring systems that surface performance degradation and incidents, and an Article 73 serious incident reporting workflow. These are not optional compliance additions; they are the management system infrastructure that makes a strategy cascade operationally real and auditable.
What is the difference between an AI roadmap and an AI strategy cascade?
An AI roadmap is a sequenced plan of initiatives and deployments. An AI strategy cascade is a set of mutually reinforcing choices about competitive position. A roadmap answers 'what will we build and when?' A cascade answers 'where will AI make us distinctively better than alternatives, and why will that advantage compound?' Most organisations have roadmaps and call them strategies. Organisations that have genuine cascades find that their roadmaps become much more focused and their resource allocation decisions become significantly clearer.
Ready to get started?
Fronterio helps you implement everything discussed in this article — with built-in tools, automation, and guidance.