Imagine an iBuyer holds 6 active listings within a single square mile in Chandler, Arizona. All 6 are competing for the same pool of buyers — the same people scrolling Zillow on the same Saturday afternoon. Every additional listing doesn't just add supply; it steals demand from the others, stretches time-on-market, and triggers the slow price decay that kills iBuyer margins.
This is price cannibalization. Unlike the Winner's Curse (buying too high), cannibalization is about selling too low — a problem entirely of your own making.
The Mechanism
The logic is simple: buyers arrive at a rate $\lambda$ (buyers per month per square mile) determined by the local market. With $n$ active listings competing for those buyers, each home's expected time-to-sale stretches proportionally, $T(n) \approx n/\lambda$. And price decays exponentially with time-on-market at rate $\delta$ — empirically 1–2% per month.

Combining the buyer arrival rate $\lambda$ and price decay rate $\delta$, the expected sale price with $n$ nearby listings is:

$$\mathbb{E}[P \mid n] = P_{\text{FMV}} \cdot e^{-\delta n / \lambda}$$

The cannibalization discount factor:

$$D(n) = e^{-\delta n / \lambda}$$
This function is bounded between 0 and 1, monotonically decreasing (more inventory always hurts), and convex (diminishing marginal damage). All of these properties are formally verified in Lean 4.
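All three properties can also be checked numerically in a few lines. A minimal Python sketch of the exponential discount — the parameter values (`decay=0.015`, `intensity=12.0`) are illustrative, not from the text:

```python
import math

def cannibalization_discount(n: float, decay: float, intensity: float) -> float:
    """Discount factor D(n) = exp(-decay * n / intensity).

    n         -- number of competing nearby listings
    decay     -- price decay rate per month (empirically 0.01-0.02)
    intensity -- buyer arrival rate (buyers / month / mi^2)
    """
    return math.exp(-decay * n / intensity)

# Check the three properties from the text for n = 0..5.
d = [cannibalization_discount(n, decay=0.015, intensity=12.0) for n in range(6)]
assert all(0 < x <= 1 for x in d)                    # bounded in (0, 1]
assert all(a > b for a, b in zip(d, d[1:]))          # monotonically decreasing
steps = [b - a for a, b in zip(d, d[1:])]
assert all(t > s for s, t in zip(steps, steps[1:]))  # convex: each marginal hit shrinks
```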
How Bad Is the 5th Listing?
For a dense/slow suburb (low buyer intensity $\lambda$ in buyers/month/mi², monthly decay $\delta$ at the high end of the empirical range):
| Nearby listings | Cumulative discount | Marginal impact |
|---|---|---|
| 0 | 0.00% | — |
| 1 | 0.127% | −0.127% |
| 2 | 0.255% | −0.127% |
| 3 | 0.381% | −0.127% |
| 4 | 0.508% | −0.127% |
| 5 ← | 0.634% | −0.127% |
On a $400K home, the 5th nearby listing costs $508 in marginal cannibalization — before factoring in holding costs, price reductions, or the ripple effect on the other 4 listings.
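The $508 figure is just the marginal rate applied to the list price — a quick arithmetic check, with the 0.127% rate read straight off the table:

```python
price = 400_000
marginal = 0.00127              # 0.127% marginal impact per additional nearby listing
print(round(price * marginal))  # 508
```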
Not All Neighbors Are Equal
The integer-count model above treats all nearby homes the same. In reality, a competitor 0.1 miles away cannibalizes far more than one at 0.9 miles. The full model uses an Epanechnikov kernel to weight each nearby listing by proximity, producing a continuous density measure instead of a raw count.
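A sketch of that weighting, assuming the standard Epanechnikov form $K(u) = \tfrac{3}{4}(1 - u^2)$ on $|u| \le 1$ with the 1-mile radius as bandwidth (function names are illustrative):

```python
def epanechnikov(u: float) -> float:
    """Epanechnikov kernel: 0.75 * (1 - u^2), zero outside |u| < 1."""
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def weighted_density(distances_mi: list[float], radius_mi: float = 1.0) -> float:
    """Kernel-weighted count of nearby listings: a competitor at 0.1 mi
    contributes ~0.74, one at 0.9 mi only ~0.14."""
    return sum(epanechnikov(d / radius_mi) for d in distances_mi)

print(weighted_density([0.1, 0.9]))  # ≈ 0.885 — far less than a raw count of 2
```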
But density alone still misses something: clustering. Two holdings 2 miles apart might each have low local density, but together they signal a concentrating portfolio. The algorithm uses Local Moran's I (a spatial autocorrelation statistic) to detect these clusters and apply an extra discount. If your inventory is spatially clustered (High-High pattern), the penalty increases. If a holding is isolated, no extra penalty.
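The cluster test can be sketched with the textbook Local Moran's I statistic. The High-High penalty rule and `sensitivity` parameter below are illustrative assumptions — the text only says clustered holdings get an extra discount:

```python
def local_morans_i(values: list[float], weights: list[list[float]]) -> list[float]:
    """Local Moran's I per site: positive when a site and its weighted
    neighbors deviate from the mean in the same direction."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    m2 = sum(zi * zi for zi in z) / n
    if m2 == 0:                       # all sites identical: no spatial structure
        return [0.0] * n
    return [z[i] / m2 * sum(weights[i][j] * z[j] for j in range(n) if j != i)
            for i in range(n)]

def cluster_penalty(local_i: float, z_i: float, sensitivity: float = 0.5) -> float:
    """Extra discount only for High-High sites (above-average density
    surrounded by above-average density); isolated holdings pay nothing."""
    return sensitivity * local_i if (local_i > 0 and z_i > 0) else 0.0
```

With densities `[5, 5, 1, 1]` and adjacency weights linking each pair, the high-high site gets a positive I and a penalty, while the low-low pair pays nothing despite its equally positive I — the quadrant check is what distinguishes them.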
The Full Algorithm
The ComputeMaxBid pipeline runs in effectively constant time per query:
- Query a KD-tree for all inventory within 1 mile
- Compute kernel-weighted density
- Apply base cannibalization discount
- Compute Moran's I and apply cluster adjustment
- Apply service margin to get final max bid
- Circuit breaker: if 8+ listings within 1 mile, max bid = 0. Hard stop, don't buy.
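The steps above can be put together in a short sketch, under the same exponential-discount assumption, with the upstream KD-tree query abstracted away and every default parameter illustrative:

```python
import math

def compute_max_bid(fmv: float,
                    neighbor_distances_mi: list[float],
                    density: float,
                    cluster_adjustment: float,
                    *,
                    decay: float = 0.015,      # monthly price decay (illustrative)
                    intensity: float = 12.0,   # buyers / month / mi^2 (illustrative)
                    margin: float = 0.08,      # service margin (illustrative)
                    hard_stop: int = 8) -> float:
    """Max bid for one home, given its pre-queried neighborhood state.

    neighbor_distances_mi -- distances of active inventory within 1 mile
    density               -- kernel-weighted density of that inventory
    cluster_adjustment    -- extra discount in [0, 1) from Local Moran's I
    """
    # Circuit breaker: 8+ listings within a mile means do not buy at all.
    if len(neighbor_distances_mi) >= hard_stop:
        return 0.0
    # Base cannibalization discount on the kernel-weighted density.
    discount = math.exp(-decay * density / intensity)
    # Cluster adjustment, then service margin.
    return fmv * discount * (1.0 - cluster_adjustment) * (1.0 - margin)
```

By construction the bid stays in [0, FMV]: every factor multiplying `fmv` lies in [0, 1], which is exactly the kind of invariant the Lean proofs pin down.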
Calibration
| Parameter | How to calibrate | Typical range |
|---|---|---|
| Cannibalization radius | Hedonic regression on DOM vs. nearby inventory | 0.5–2.0 mi |
| Price decay rate | Regression of sale price on DOM | 0.5–3% / mo |
| Buyer intensity | MLS showing data / days-to-pending | 5–50 / mi²·mo |
| Cluster sensitivity | Cross-validated on historical P&L | 0.1–1.0 |
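For the price-decay row, the regression is simple enough to sketch end-to-end: an OLS fit of log sale price on months-on-market (in practice you would add hedonic controls so the slope is not confounded by home quality). The synthetic data below is illustrative:

```python
import math

def fit_decay_rate(dom_months: list[float], sale_prices: list[float]) -> float:
    """Estimate monthly decay delta via OLS on the model log P = a - delta * t."""
    n = len(dom_months)
    logs = [math.log(p) for p in sale_prices]
    mx = sum(dom_months) / n
    my = sum(logs) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(dom_months, logs))
    var = sum((x - mx) ** 2 for x in dom_months)
    return -cov / var    # OLS slope is -delta

# Synthetic sanity check: prices decaying at exactly 1.5%/month.
doms = [0.0, 1.0, 2.0, 3.0, 4.0]
prices = [400_000 * math.exp(-0.015 * t) for t in doms]
print(round(fit_decay_rate(doms, prices), 4))  # 0.015
```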
Seven properties are machine-verified in Lean 4: the discount factor is always positive, never exceeds 1, is monotonically decreasing in inventory, and is convex in inventory; the bid never exceeds FMV; the bid is never negative; and the marginal impact of an additional listing is always negative.
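For flavor, here is how the first two of those properties might be stated and proved in Lean 4 with Mathlib — an illustrative reconstruction assuming the exponential discount form, not the article's actual development:

```lean
import Mathlib

/-- The discount factor `exp (-(δ * n / lam))` is always strictly positive. -/
theorem discount_pos (δ lam n : ℝ) : 0 < Real.exp (-(δ * n / lam)) :=
  Real.exp_pos _

/-- With nonnegative decay and inventory and positive buyer intensity,
the discount never exceeds 1. -/
theorem discount_le_one (δ lam n : ℝ)
    (hδ : 0 ≤ δ) (hlam : 0 < lam) (hn : 0 ≤ n) :
    Real.exp (-(δ * n / lam)) ≤ 1 := by
  rw [Real.exp_le_one_iff]
  have h : 0 ≤ δ * n / lam := div_nonneg (mul_nonneg hδ hn) hlam.le
  linarith
```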
Cannibalization is a portfolio-level problem disguised as a per-home pricing problem. The algorithm prices each home individually, but the circuit breaker and parameter calibration are where portfolio-level risk management actually lives.