Courts and competition agencies increasingly treat “non‑price” attributes—product quality, privacy practices, and advertising burden (ad load)—as potential sources of consumer harm in digital markets. This article explains how those harms are framed, the kinds of evidence and metrics decisionmakers use, and practical approaches litigants and enforcement agencies apply to quantify and compare non‑price effects.
1. Legal framework and theories of harm
Antitrust analysis typically centers on price and output, but U.S. merger and monopolization guidance recognizes that harm can be “manifested in non‑price terms and conditions” (e.g., product quality or privacy). Theories of harm in digital cases commonly include:
- Quality degradation: reduced functionality, slower innovation, worse user experience, or increased friction that consumers cannot avoid.
- Privacy harms: greater collection, sharing, or retention of personal data that reduces users’ welfare or increases risk of misuse.
- Ad load harms: increased volume, intrusiveness, or targeting of advertising that degrades usefulness or enjoyment of a service.
2. How courts and agencies translate harms into actionable questions
Decisionmakers break non‑price concerns into measurable subquestions they can test against evidence:
- Do consumers view the attribute (privacy, ad experience, quality) as a meaningful parameter of competition?
- Would a market change (merger, conduct) likely make that attribute materially worse for a substantial set of users?
- Can any negative change be plausibly linked to reduced competition rather than benign business choices or regulation?
- What is the magnitude—how many users and how severe—of the expected harm, and how does that compare to offsetting benefits (innovation, lower monetary price)?
3. Common empirical approaches and metrics
Because non‑price harms are multidimensional, courts and economists rely on a mix of qualitative and quantitative evidence:
- User surveys and stated‑preference studies — Measure how much users value privacy or lower ad load relative to other attributes; can estimate willingness‑to‑accept (WTA) or willingness‑to‑pay (WTP) tradeoffs. Caveat: survey design and sampling bias matter.
- Revealed‑preference evidence — Observe user behavior (switching, retention, time‑on‑site) after changes in quality, privacy policy updates, or ad experiments. This shows real tradeoffs rather than hypothetical ones.
- Field and lab experiments — A/B tests of ad load, personalization intensity, or privacy settings can identify causal effects on engagement, conversion, or churn.
- Market‑level metrics — Aggregate indicators such as user engagement (DAU/MAU), session length, churn rates, app ratings, and third‑party measurements of page load times or ad impressions per session.
- Technical audits — Code reviews, network traces, and privacy assessments to document data flows, collection scope, retention practices, and third‑party sharing—used to substantiate privacy‑related claims of increased data capture.
- Ad inventory analysis — Counts of ads per page/session, measures of ad viewability and intrusiveness (e.g., video autoplay, interstitials), and tracking pixel prevalence to quantify ad load and tracking intensity.
- Econometric welfare estimation — Structural demand models or difference‑in‑differences estimates translate behavioral changes into consumer welfare impacts (e.g., quality‑adjusted utility loss) for comparison with price effects.
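To make the difference‑in‑differences approach above concrete, the sketch below computes the simplest two‑group, two‑period DiD estimate of an ad‑load change's effect on engagement. All figures are invented for illustration; a real analysis would use regression with controls and test the parallel‑trends assumption.

```python
# Hypothetical difference-in-differences sketch: average daily session length
# (minutes) for users of a service that raised its ad load ("treated") versus
# a comparable service that did not ("control"). Numbers are illustrative,
# not drawn from any actual case.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Return the DiD estimate: (change in treated) - (change in control)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Illustrative group averages of session length before and after the change.
effect = diff_in_diff(
    treated_pre=42.0, treated_post=36.5,   # engagement fell after the ad-load increase
    control_pre=40.0, control_post=39.0,   # mild secular decline in the control group
)
print(f"Estimated effect on session length: {effect:+.1f} min/day")  # → -4.5
```

A negative estimate is consistent with the ad‑load change degrading quality, but it is only causal under the parallel‑trends assumption, which respondents will probe (see section 6).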
4. Practical evidentiary patterns from recent cases and guidance
Regulators and courts tend to treat non‑price claims cautiously but seriously when supported by multiple evidence streams:
- Regulatory guidance (e.g., U.S. merger guidelines, OECD notes) accepts privacy and quality as competitive parameters if consumers consider them significant.
- Agencies look for objective benchmarks—measurable changes in engagement, documented increases in data collection, or experiments showing clickthrough or abandonment changes—rather than rhetoric alone.
- Mergers have rarely been blocked solely on privacy grounds, but privacy evidence has shaped remedies or conditioned approvals when there is clear risk of worse privacy practices post‑transaction.
5. Building a persuasive non‑price case: a checklist
- Demonstrate consumer valuation: present surveys or behavioral evidence that users care about the attribute.
- Show causation: use experiments, before‑and‑after comparisons, or credible counterfactuals to link the challenged conduct to the attribute change.
- Quantify scope and magnitude: estimate affected user numbers and the utility loss (or monetized equivalent) per user where feasible.
- Corroborate with technical proof: privacy audits, ad‑count logs, and product telemetry that make the alleged change concrete.
- Compare benefits and offsets: model any efficiencies, innovation gains, or price effects to weigh net consumer welfare impact.
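The last two checklist items reduce to simple welfare arithmetic once per‑user valuations are in hand. The sketch below aggregates an assumed survey‑based harm estimate against assumed offsetting benefits; every input is a hypothetical placeholder, not data from any proceeding.

```python
# Hypothetical net-welfare comparison: monetized non-price harm weighed
# against offsetting benefits (efficiencies, price effects). All inputs are
# assumptions for illustration, e.g. a WTP figure from a stated-preference survey.

def net_consumer_welfare(users, harm_per_user, offset_per_user):
    """Monthly aggregate: offsetting benefit minus monetized harm, per user, scaled up."""
    return users * (offset_per_user - harm_per_user)

# Assumptions: 10M affected users; survey-based WTP of $1.50/month to avoid the
# increased data collection; $0.40/month estimated value of price/quality offsets.
net = net_consumer_welfare(users=10_000_000, harm_per_user=1.50, offset_per_user=0.40)
print(f"Net monthly consumer welfare change: ${net:,.0f}")
```

A negative result indicates the monetized harm exceeds the modeled offsets; the credibility of the conclusion rests entirely on the per‑user valuation and the count of affected users, which is why the checklist pairs quantification with corroborating technical proof.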
6. Limitations and common defenses
Respondents commonly argue that non‑price claims are speculative, reflect heterogeneous preferences, result from efficient integration, or are better addressed through privacy regulation than through competition law. Courts therefore demand careful measurement, transparent assumptions in welfare calculations, and robust causal inference.
7. Conclusion
Non‑price harms—quality declines, increased data collection, and heavier ad loads—are legitimate components of antitrust analysis in digital markets, but they require multidisciplinary evidence: economic models, user behavior, technical audits, and experiments. Successful claims combine measures of consumer valuation, demonstrable causal links to the challenged conduct, and reasoned welfare comparisons against offsets.
Sources
- The intersection between competition and data privacy – Background note (OECD, 2024)