AI × Real Estate · April 2025

You're Competing Against Yourself — And Losing

6 listings in one square mile. Same buyers. Each one making the others cheaper. Here's the algorithm that stops iBuyers from cannibalizing their own portfolio.

Imagine an iBuyer holds 6 active listings within a single square mile in Chandler, Arizona. All 6 are competing for the same pool of buyers — the same people scrolling Zillow on the same Saturday afternoon. Every additional listing doesn't just add supply; it steals demand from the others, stretches time-on-market, and triggers the slow price decay that kills iBuyer margins.

This is price cannibalization. Unlike the Winner's Curse (buying too high), cannibalization is about selling too low — a problem entirely of your own making.

The Mechanism

The logic is simple: buyers arrive at a rate λ determined by the local market (buyers per month per square mile). With n active listings competing for those buyers, each home's time-to-sale stretches proportionally. And price decays with time-on-market at a rate δ, empirically 1–2% per month.

Combining buyer arrival rate λ and price decay rate δ, the expected sale price with n nearby listings is:

    E[P(n)] = P₀ · exp(−δ · n / λ)

where P₀ is the expected sale price with no nearby competition.

The cannibalization discount factor:

    φ(n) = exp(−δ · n / λ)

so that E[P(n)] = P₀ · φ(n).

This function is bounded between 0 and 1, monotonically decreasing (more inventory always hurts), and convex (diminishing marginal damage). All of these properties are formally verified in Lean 4.
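Those three properties are easy to probe numerically as well. A minimal sketch (this is an illustrative check, not the Lean development; the λ and δ values are placeholders):

```python
import math

def phi(n, lam=10.0, delta=0.0127):
    """Cannibalization discount factor: expected-price multiplier with n
    nearby listings. lam (buyers/month/mi^2) and delta (monthly price
    decay) are illustrative, not calibrated values."""
    return math.exp(-delta * n / lam)

values = [phi(n) for n in range(9)]

# Bounded in (0, 1]
assert all(0 < v <= 1 for v in values)
# Monotonically decreasing: more inventory always hurts
assert all(a > b for a, b in zip(values, values[1:]))
# Convex: marginal damage shrinks as n grows
drops = [a - b for a, b in zip(values, values[1:])]
assert all(d1 > d2 for d1, d2 in zip(drops, drops[1:]))
print("all properties hold")
```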

How Bad Is the 5th Listing?

For a dense/slow suburb (buyer arrival λ in buyers/month/mi² and monthly decay δ, with δ/λ ≈ 0.00127):

Nearby listings | Cumulative discount | Marginal impact
0               | 0.00%               | –
1               | 0.127%              | −0.127%
2               | 0.255%              | −0.127%
3               | 0.381%              | −0.127%
4               | 0.508%              | −0.127%
5 ⭠             | 0.634%              | −0.127%

On a $400K home, the 5th nearby listing costs $508 in marginal cannibalization — before factoring in holding costs, price reductions, or the ripple effect on the other 4 listings.
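The table and the dollar figure can be reproduced from the discount factor. The absolute λ and δ below are hypothetical, chosen only so that δ/λ ≈ 0.00127 matches the figures above:

```python
import math

LAM, DELTA = 10.0, 0.0127   # hypothetical values; only delta/lam matters here

def cumulative_discount(n):
    """Fraction of expected value lost to n nearby listings."""
    return 1.0 - math.exp(-DELTA * n / LAM)

for n in range(6):
    cum = cumulative_discount(n)
    marginal = (cumulative_discount(n) - cumulative_discount(n - 1)) if n else 0.0
    print(f"{n}: cumulative {cum:.3%}, marginal {marginal:.3%}")

# Dollar cost of the 5th listing on a $400K home (~$505 exactly;
# the $508 in the text uses the rounded 0.127% marginal figure)
cost_5th = 400_000 * (cumulative_discount(5) - cumulative_discount(4))
```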

Not All Neighbors Are Equal

The integer-count model above treats all nearby homes the same. In reality, a competitor 0.1 miles away cannibalizes far more than one at 0.9 miles. The full model uses an Epanechnikov kernel to weight each nearby listing by proximity, producing a continuous density measure instead of a raw count.
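A minimal sketch of that kernel weighting (the function names and the 1-mile bandwidth normalization are assumptions for illustration):

```python
def epanechnikov(u):
    """Epanechnikov kernel: peaks at 0.75 when u = 0, hits zero at |u| = 1."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def weighted_density(distances_mi, radius_mi=1.0):
    """Kernel-weighted count of nearby listings: a competitor at 0.1 mi
    contributes far more weight than one at 0.9 mi."""
    return sum(epanechnikov(d / radius_mi) for d in distances_mi)

# A listing 0.1 mi away weighs 0.7425; one 0.9 mi away weighs only 0.1425
print(weighted_density([0.1, 0.9]))
```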

But density alone still misses something: clustering. Two holdings 2 miles apart might each have low local density, but together they signal a concentrating portfolio. The algorithm uses Local Moran's I (a spatial autocorrelation statistic) to detect these clusters and apply an extra discount. If your inventory is spatially clustered (High-High pattern), the penalty increases. If a holding is isolated, no extra penalty.
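A sketch of Local Moran's I with a plain adjacency weights matrix (this follows the standard definition; the example data and the binary weights are illustrative, not the production setup):

```python
def local_morans_i(x, w):
    """Local Moran's I for each observation.
    x: attribute values (e.g. local inventory density).
    w: spatial weights, w[i][j] >= 0, w[i][i] = 0.
    Positive I_i means observation i sits in a cluster of similar
    values (High-High or Low-Low); the algorithm penalizes High-High."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]
    m2 = sum(zi * zi for zi in z) / n
    return [z[i] / m2 * sum(w[i][j] * z[j] for j in range(n))
            for i in range(n)]

# Two high-density holdings adjacent to each other, two low-density ones
x = [10, 9, 1, 2]
w = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
scores = local_morans_i(x, w)
```

Both pairs come out with positive I (they are clusters of similar values); only the High-High pair, flagged by its high underlying density, would draw the extra discount.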

The Full Algorithm

The ComputeMaxBid pipeline runs in near-constant time per query (the KD-tree lookup is logarithmic in portfolio size):

  1. Query a KD-tree for all inventory within 1 mile
  2. Compute kernel-weighted density
  3. Apply base cannibalization discount
  4. Compute Moran's I and apply cluster adjustment
  5. Apply service margin to get final max bid
  6. Circuit breaker: if 8+ listings within 1 mile, max bid = 0. Hard stop, don't buy.

Calibration

Parameter | How to calibrate | Typical range
Cannibalization radius | Hedonic regression on DOM vs. nearby inventory | 0.5–2.0 mi
Price decay rate | Regression of sale price on DOM | 0.5–3% / mo
Buyer intensity | MLS showing data / days-to-pending | 5–50 / mi²·mo
Cluster sensitivity | Cross-validated on historical P&L | 0.1–1.0
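For instance, the price decay rate falls out of an OLS regression of log sale price on days-on-market. A minimal sketch on synthetic data (a real calibration would add hedonic controls for home characteristics):

```python
import math, random

random.seed(0)
# Synthetic sales: price decays ~1.27%/month plus multiplicative noise
true_delta = 0.0127
sales = [(dom, 400_000 * math.exp(-true_delta * dom / 30.0)
               * math.exp(random.gauss(0, 0.01)))
         for dom in range(0, 181, 5)]

# OLS slope of log(price) on DOM in days, converted to monthly decay
xs = [d for d, _ in sales]
ys = [math.log(p) for _, p in sales]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
delta_hat = -slope * 30.0   # estimated monthly decay rate
```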

Seven properties are machine-verified in Lean 4: the discount factor is always positive, always at most 1, always decreasing with inventory, and convex in inventory; the bid never exceeds FMV, the bid is never negative, and the marginal impact is always negative.

Cannibalization is a portfolio-level problem disguised as a per-home pricing problem. The algorithm is per-home, but the circuit breaker and parameter calibration are where portfolio-level risk management actually lives.