PXL prioritization
Updated 2026-04-19
PXL (developed by CXL) scores test ideas on evidence, impact, and ease with more structure than ICE, which makes scores harder to manipulate.
Evidence
What data supports this test? (user research, analytics, prior tests, expert opinion)
Impact
Above/below fold? Key page? Revenue-adjacent?
Ease
How much engineering and design time?
Output
Similar to ICE but with explicit weights for evidence quality.
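A minimal sketch of the idea above: a weighted score where evidence sources carry explicit weights instead of a single gut-feel number. The criteria names and weight values here are illustrative assumptions, not CXL's official checklist.

```python
# Hypothetical evidence weights -- tune these to your own program,
# they are NOT CXL's published values.
EVIDENCE_WEIGHTS = {
    "user_research": 3,
    "prior_test": 3,
    "analytics": 2,
    "expert_opinion": 1,
}

def pxl_score(evidence_sources, impact, ease):
    """Weighted sum: evidence quality plus impact and ease (each rated 0-5)."""
    evidence = sum(EVIDENCE_WEIGHTS.get(src, 0) for src in evidence_sources)
    return evidence + impact + ease

# An idea backed by analytics and a prior test, high impact, moderate ease:
print(pxl_score(["analytics", "prior_test"], impact=4, ease=3))  # 12
```

Because each evidence source contributes a fixed weight, two people scoring the same idea with the same cited evidence get the same number, which is the manipulation resistance ICE lacks.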
What to do with this
- Start with ICE on early CRO programs; graduate to PXL once you have more evidence sources to weight
- Match the framework to team maturity: PXL's rigor is wasted on a team with no heatmaps or surveys
- Document which evidence is cited per experiment; the PXL checklist forces explicit backing for each score
- Re-weight PXL criteria quarterly; the evidence sources that best predict your test outcomes shift over time
- Use PXL to filter out "feels important" tests that lack real evidence; the checklist surfaces where data is missing
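The last point, filtering evidence-free ideas before they reach scoring, can be sketched as a simple gate. The backlog items and field names below are hypothetical examples.

```python
# Hypothetical idea backlog; "evidence" lists the cited sources per idea.
ideas = [
    {"name": "shorten checkout form", "evidence": ["analytics", "prior_test"]},
    {"name": "redesign homepage hero", "evidence": []},  # "feels important"
]

# PXL-style gate: an idea with no cited evidence never reaches scoring.
backed, unbacked = [], []
for idea in ideas:
    (backed if idea["evidence"] else unbacked).append(idea["name"])

print("score these:", backed)        # ['shorten checkout form']
print("need data first:", unbacked)  # ['redesign homepage hero']
```

The unbacked bucket is not a reject pile; it is a to-do list for research, since the missing evidence is exactly what the checklist surfaced.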