Sometimes today’s lessons can be learned from the past.
Early in the history of the Trader Joe’s retail brand, founder Joe Coulombe faced a fundamental dilemma. Trader Joe’s was trying to establish itself as a differentiated alternative to both convenience stores and generic big-box retailers. The selection and assortment of items its stores carried needed to tell a story about what the brand stood for, while aligning with the company’s operational needs. As the story goes, Joe devised a principle called “The Four Tests”: four simple sniff checks that every product in the inventory had to pass. Each item had to offer high value per cubic inch, a rapid rate of consumption, ease of handling, and a reason to exist distinct from that of its competitors. These pragmatic rules helped shape Trader Joe’s enduring identity as a high-turnover, high-loyalty retail brand.
Businesses today face a similar challenge with AI.
The C-suite demands rapid implementation of AI in the workforce and AI-enabled operations transformation. The vendor landscape offers seemingly limitless options: copilots, AI chatbots, solutions, and agents in all their forms, each promising transformation. Use case ideas abound, especially those claiming “productivity” benefits that seem immediately achievable but often prove elusive in execution. At the same time, the potential of AI extends well beyond these superficial victories, but it is much less clear where the real value lies or which problems merit attention.
Every organization on its AI journey ultimately faces the same questions: Where should we focus our scarce bandwidth, and which problems are really worth solving with AI?
The five tests
I first heard about Joe Coulombe’s Four Tests on a recent episode of the excellent podcast “Acquired,” hosted by Ben Gilbert and David Rosenthal. It got me thinking: could businesses adopt a similar discipline for AI? Forrester already offers guidance on granular use case prioritization, but what if we had a simple heuristic (a sniff test, so to speak) for executives to cut through the noise and focus on the areas of opportunity that matter most?
I offer the following five principles as basic filters for leaders deciding where to apply AI:
- Does the opportunity offer high business value? Prioritize AI initiatives that directly advance strategic priorities or solve important business challenges: that is, where AI enables tangible results such as cost reduction, productivity gains, new sources of revenue, or an improved customer experience. These are opportunities that offer a clear conversion of a marginal token, hour, or integration into business impact.
- Can we learn from this quickly? In other words, does the opportunity offer a high turn rate? Prefer applications or workflows with fast feedback loops, where results can be observed quickly and iteratively, allowing the organization to adapt and scale what works. Prioritize processes with frequent cycles and visible outcomes, so that lessons can be captured rapidly and applied across the organization. A corollary: ensure by design that every AI “solution,” whether created by citizen developers or engineering teams or introduced via a vendor offering, is instrumented with success labels, costs, and failure modes to enable continuous evaluation.
- Do we have the right data for this? Focus on opportunities where high-quality, accessible, and well-governed data is available. Ensure that data meets compliance, security, and ethics standards, and avoid initiatives that rely on fragmented, poor-quality, or uncontrolled data sources. Citizen builds work well when data sits in trusted repositories with clean schemas, while engineered products should leverage curated domains with clear ownership and versioned semantics. As a corollary, ensure that each use case can be securely managed and governed. Ship only what you can operate within a well-defined control envelope, where governance, risk management, and accountability are integrated at every step, enabling confidence in results and resilience in the face of failure.
- Does this build on or give us a defensible advantage? Select opportunities where proprietary data and context, differentiated processes, specialized knowledge, or domain-specific insight can be combined with AI to create defensible differentiation. Avoid generic applications that competitors can easily replicate or that will be commoditized over time. The most valuable AI use cases blend general models with your unique data, processes, and expertise. This does not preclude using commodity AI for commodity work (automating functions like payroll, by all means, if it is effective), but do not confuse operational efficiency with strategic advantage or market differentiation. Focus your build efforts on opportunities that make your beer taste better.
- Does this make the next opportunity easier? The mental model for AI is twofold: emphasize creating reusable “skills” that also become long-term cognitive “product” assets for your business. Prefer use cases that create such assets (frameworks or agentic skills that can be applied beyond the initial deployment) or that create building blocks that transform the marginal cost or value of the next use case. This approach creates a flywheel that raises the organization’s AI maturity and lowers the barriers to future innovation (rather than ending up with a wasteland of abandoned use cases that seemed like a good idea at the time).
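The instrumentation corollary in the second test above can be sketched in code. The snippet below is a minimal illustration, not a prescribed schema: the record fields and the `invoice-triage` use case name are illustrative assumptions, and the hypothetical AI call being instrumented is shown only as a comment.

```python
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class EvalRecord:
    """One evaluation record per AI-solution invocation."""
    use_case: str
    latency_s: float
    cost_usd: float              # e.g., token spend for the call
    success: bool                # success label (human or automated)
    failure_mode: Optional[str]  # e.g., "hallucination", "timeout", or None

records: list[dict] = []

def log_record(record: EvalRecord) -> None:
    # In production this would write to a metrics store, not an in-memory list.
    records.append(asdict(record))

start = time.monotonic()
# output = call_model(prompt)  # hypothetical AI call being instrumented
log_record(EvalRecord(
    use_case="invoice-triage",          # illustrative use case name
    latency_s=time.monotonic() - start,
    cost_usd=0.0042,                    # illustrative cost figure
    success=True,
    failure_mode=None,
))
```

The point of the sketch is that every invocation, regardless of who built the solution, emits the same structured record, so success rates, costs, and failure modes can be compared across the portfolio.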
These five principles form a compact decision-making discipline for enterprise AI. High value density ensures that every effort pays for its complexity. A high turn rate accelerates learning and drives adoption. Data in hand anchors feasibility, while operability at scale ensures trust and compliance. A proprietary edge secures long-term differentiation. Applied together, these principles focus the enterprise AI portfolio on use cases that deliver impact and build a strong foundation for AI-driven transformation at scale.
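One way to operationalize the discipline above is as a simple checklist filter. This is a sketch under one stated assumption: it treats each of the five tests as a hard gate, whereas in practice leaders may weight or stage the tests. All names (`Candidate`, the example chatbot) are illustrative.

```python
from dataclasses import dataclass, fields

@dataclass
class Candidate:
    """One candidate AI use case, scored against the five tests."""
    high_business_value: bool        # test 1: high value density
    fast_learning_loop: bool         # test 2: high turn rate
    data_ready_and_governable: bool  # test 3: data in hand, operable at scale
    defensible_advantage: bool       # test 4: proprietary edge
    compounds_future_value: bool     # test 5: makes the next opportunity easier

def passes_five_tests(c: Candidate) -> bool:
    # Gate model: a candidate advances only if every test passes.
    return all(getattr(c, f.name) for f in fields(c))

# A generic chatbot: valuable and fast to trial, but built on fragmented
# data and easily replicated, so it fails the filter.
chatbot = Candidate(True, True, False, False, True)
```

Even this crude version makes the trade-off visible: an idea that looks attractive on one or two tests still fails the portfolio filter if it lacks data readiness or defensibility.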