On Assessing the Value of a Project
A simple framework for picking the most impactful work — with practical tips for estimating chance of success, effect size, and applicable scenarios.
Less than 20% of our work leads to more than 80% of the impact we make. The single biggest determinant of the value we create is how well we choose the most impactful work to invest in. Here I introduce a simple framework I learned a few years back, which my team uses to choose research projects, along with some practical tips for applying it.
The expected value of a project can be approximated by three factors:
- the chance of success,
- the expected effect size for each applicable scenario if the project succeeds,
- the expected number of applicable scenarios.
Putting it together:

EV(P) ≈ Pr(P) × ES(P) × AS(P)
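As a quick sketch of how the framework ranks candidates, here is a minimal Python snippet. The project names and all numbers are hypothetical, chosen only to illustrate the product of the three factors:

```python
# A minimal sketch of the framework: rank hypothetical projects by
# expected value = chance of success * effect size * applicable scenarios.
# All project names and numbers are illustrative assumptions.

def expected_value(pr_success: float, effect_size: float, num_scenarios: float) -> float:
    """Approximate a project's expected value as the product of the three factors."""
    return pr_success * effect_size * num_scenarios

projects = {
    "niche-optimization": expected_value(0.7, 100_000, 2),   # likely to succeed, big effect, few scenarios
    "general-platform":   expected_value(0.2, 30_000, 40),   # risky, smaller effect, broad reach
}

for name, ev in sorted(projects.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected value ~ ${ev:,.0f}")
```

Even this toy comparison shows why the factors must be estimated together: the riskier, broader project can still dominate on expected value.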
Estimating chance of success — Pr(P)
Determining a priori whether a new project will succeed is challenging because (1) we can’t always see all the “unknown unknowns,” and (2) we can’t predict the outcome of the “known unknowns.” Because any single unknown can sink a project, the risk compounds as the scope encompasses more of them. Successful prediction therefore comes down to identifying the unknowns. A few ways to do it:
Strive to be informed. In research, this means a literature survey. Finding existing work on related problems maps out what others have found — both previously known and unknown unknowns — and offers insight into how others approached them and what outcomes they reached. Free academic resources combined with powerful AI tools leave very little room for the excuse of not being informed.
Reduce complexity. Reducing the amount of unknowns improves both the estimate and the chance of success. This is often achieved by clearly articulating the key assumptions, testing those assumptions as early as possible, and removing aspects of the project formulation that aren’t essential to the project idea.
Practice intentionally. Better estimates come with practice, like any other skill. The key to effective practice is a truthful feedback loop. Such a loop can be distorted by a key information asymmetry: we know more after a project than we did before it. When we try to learn from experience, hindsight bias makes “what I knew back then” blurry — everything appears more predictable than it really was. Overcome this with three steps:
- Write down the reasoning and supporting data points with the guess in the project planning phase.
- Make the guess a numerical probability, not a vague qualifier (likely, probably, not impossible).
- Seal the documented guess and examine it after obtaining the outcome.
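The three steps above can be sketched as a small record-and-score loop. The `SealedGuess` type and the use of the Brier score (mean squared error between the predicted probability and the 0/1 outcome; lower is better) are my assumptions for illustration, not something the framework prescribes:

```python
# A sketch of the sealed-guess feedback loop: record each prediction with its
# reasoning at planning time, fill in the outcome later, then score calibration.
# The Brier score is one simple, truthful way to close the loop.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SealedGuess:
    project: str
    probability: float              # a numerical guess, e.g. 0.6, not "likely"
    reasoning: str                  # written down in the planning phase
    outcome: Optional[bool] = None  # sealed; filled in only after the project ends

def brier_score(guesses: List[SealedGuess]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes (lower is better)."""
    resolved = [g for g in guesses if g.outcome is not None]
    return sum((g.probability - float(g.outcome)) ** 2 for g in resolved) / len(resolved)
```

Reviewing the written reasoning alongside the score counters hindsight bias: the document, not memory, says what was actually known at planning time.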
Deriving effect size — ES(P)
While it’s difficult to estimate how much utility a research project will produce, given incomplete information, it’s usually possible to estimate an equally informative measure: the upper bound of the expected improvement. Start by assuming the project works perfectly and delivers the intended improvement on the target dimension. At the same time, identify other factors that affect the same dimension but that the project will not change. The invariance of these factors puts a ceiling on the improvement even with perfect execution. When assessing multiple projects with different target improvements, convert the effects to a common metric (e.g., a dollar amount).
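One concrete way to compute such a ceiling, assuming the target metric decomposes into a fraction the project touches and an invariant remainder, is Amdahl’s law. The fractions below are illustrative assumptions:

```python
# A sketch of the ceiling argument using Amdahl's law as the model: if the
# project improves only a component responsible for a fraction f of the target
# metric, the invariant remainder caps the overall gain. Numbers are illustrative.

def overall_speedup(f: float, component_speedup: float) -> float:
    """Overall improvement when a fraction f of the metric is improved by component_speedup."""
    return 1.0 / ((1.0 - f) + f / component_speedup)

# Even a perfect project (its component's cost driven to zero) is capped:
f = 0.3                        # component is 30% of, say, total latency
ceiling = 1.0 / (1.0 - f)      # limit as component_speedup -> infinity
print(f"upper bound: {ceiling:.2f}x")
```

The ceiling of roughly 1.43x holds no matter how well the project executes, which is exactly the kind of number that makes cross-project comparison honest.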
Understanding applicable scenarios — AS(P)
Typically, a project is applicable to a scenario where all of its assumptions hold. Fewer assumptions make the project more widely applicable, but often reduce the chance of success (handling more general cases adds complexity) and the effect size (since case-specific improvements are not pursued). When scoping a new project, intentionally control this tradeoff.
When starting in a new direction, it’s usually a good idea to start with more assumptions and target more specific scenarios. A successful niche project not only yields value — it also produces insights and information that enable better tradeoffs in more general follow-up projects.