How to regulate social platforms when companies have incentives to measure less

One of the harder problems in platform governance is that good measurement is not always privately rewarded. If internal research can trigger lawsuits, bad press, regulatory scrutiny, or political backlash, then companies may rationally decide to narrow what they study, how broadly they define risk, or how formally they record what they find.

That does not mean every company will stop looking. It means regulators should stop assuming that voluntary self-study will reliably produce the evidence needed for accountability. A workable system has to recognize that firms may have incentives to remain strategically uncertain.

Why the incentive matters

In many other industries, companies are expected to test products, monitor safety, and document failures. But social platforms occupy an unusually fraught environment: the same internal memo or research deck can become evidence in court, fuel a political narrative, or be treated as proof that the company knowingly allowed harm to continue. Once firms understand that dynamic, the lesson is obvious. Measuring more can create more exposure.

That creates a distorted equilibrium. The public wants evidence. Policymakers want transparency. Researchers want access. But the company facing the immediate cost may conclude that looser internal processes, fewer written findings, or narrower questions are simply safer. Regulation that ignores this incentive can accidentally reward the least curious firms.

What regulation should do instead

The first step is to move away from a system that relies mainly on leaks, whistleblowers, and discovery in litigation. That is a brittle model. It produces selective visibility, encourages defensive behavior, and turns internal research into a reputational hazard rather than a compliance expectation.

A better approach is to require structured risk assessment as a normal part of operating a large platform. If companies above a certain scale must regularly evaluate specific categories of risk, using clear standards and retaining records for regulators, then measurement becomes less optional and less dependent on executive mood.

Those obligations also need to be specific enough to prevent gamesmanship. A vague duty to “consider harms” is easy to satisfy on paper. A requirement to assess defined risks, explain methodology, document mitigations, and revisit conclusions after major product changes is harder to evade.
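To picture the difference, it can help to imagine the assessment as a structured record rather than a narrative memo. The sketch below is purely illustrative: the field names (risk_category, methodology, mitigations, reassess_after_major_change) are hypothetical and not drawn from any existing statute or standard, but they show how a specific obligation can be checked for completeness in a way a vague duty to "consider harms" cannot.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: hypothetical fields for a structured risk
# assessment record. The point is that each required element either
# exists or it does not, which is what makes the obligation auditable.

@dataclass
class RiskAssessment:
    risk_category: str                       # a defined category named by the regulation
    methodology: str                         # how the risk was measured
    findings: str                            # what the assessment concluded
    mitigations: list[str] = field(default_factory=list)        # documented steps taken
    assessed_on: date = field(default_factory=date.today)       # when the record was made
    reassess_after_major_change: bool = True                    # revisit after major product changes

def is_complete(a: RiskAssessment) -> bool:
    """A regulator-style completeness check: every required element must
    actually be filled in, not merely gestured at."""
    return all([a.risk_category, a.methodology, a.findings, a.mitigations])

# Example: an assessment with no documented mitigations fails the check.
draft = RiskAssessment("defined_risk_category", "internal survey plus usage data", "elevated risk", [])
print(is_complete(draft))  # False: mitigations are missing, so the record is incomplete
```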

Independent oversight matters

Internal research alone is not enough. When the company controls the question, the data, and the framing, it can shape what counts as a problem. That is why independent auditing and secure researcher access matter. Regulators should be able to inspect systems, review underlying methods, and test whether a platform’s own story matches operational reality.

This does not require making all sensitive data public. In many cases, the right model is controlled disclosure: vetted regulators or qualified independent auditors get access under confidentiality protections, while the public receives aggregate findings and clear explanations of what was examined. That protects users and trade secrets without leaving oversight entirely in the company’s hands.

Safe harbors can reduce the incentive to hide

If policymakers want more honest internal measurement, they should think carefully about safe harbors. A company should not get immunity simply because it studied a problem. But a firm that finds a risk and takes reasonable, documented steps to reduce it is in a very different position from one that avoids the question altogether.

That suggests a useful principle: regulation should punish concealment, obstruction, or persistent failure to act, not the mere existence of internal knowledge. Otherwise the law can end up telling firms that ignorance is safer than investigation.

Well-designed safe harbors could encourage earlier disclosure to regulators, more consistent internal testing, and faster mitigation efforts. The point is not to excuse harmful products. It is to avoid creating a legal environment in which the least documented platform looks, on paper, like the least harmful one.

Enforcement should focus on systems, not just scandals

Platform oversight often becomes reactive. A leaked document appears, a scandal breaks, and enforcement follows the news cycle. That approach is politically understandable but institutionally weak. It rewards episodic outrage more than stable supervision.

Regulators would be better served by examining whether companies have credible internal controls: who is responsible for risk review, what triggers escalation, whether major product changes require harm assessment, how conflicting business incentives are handled, and whether mitigation commitments are actually audited afterward.
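To make the contrast with headline-driven enforcement concrete, here is a minimal sketch of how those questions could be walked through as a checklist. The control names are hypothetical and simply restate the paragraph above in checkable form; they do not come from any real regulatory framework.

```python
# Hypothetical systems-based oversight checklist. Each control either can
# or cannot be demonstrated, independent of whatever is in the news cycle.

CONTROL_QUESTIONS = {
    "named_owner_for_risk_review":      "Is a specific role accountable for risk review?",
    "defined_escalation_triggers":      "Are the conditions that force escalation written down?",
    "harm_assessment_on_major_changes": "Do major product changes require a harm assessment?",
    "conflict_of_interest_handling":    "Is there a process for when business incentives conflict with findings?",
    "mitigation_commitments_audited":   "Are past mitigation commitments audited afterward?",
}

def review(answers: dict[str, bool]) -> list[str]:
    """Return the controls a platform cannot demonstrate, i.e. the gaps a
    supervisor would follow up on."""
    return [name for name in CONTROL_QUESTIONS if not answers.get(name, False)]

# Example: a platform that assesses major changes but never audits mitigations.
gaps = review({
    "named_owner_for_risk_review": True,
    "harm_assessment_on_major_changes": True,
})
print(gaps)  # the remaining controls show where supervision should focus
```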

That kind of systems-based oversight is less dramatic than a headline-driven investigation, but it is more likely to produce durable improvements. It also reduces the chance that compliance depends on one embarrassing leak.

The real policy challenge

The central challenge is not just forcing platforms to know more. It is making sure that knowing more does not become a self-defeating liability. If the only firms that generate evidence of harm are the ones willing to study themselves seriously, then public debate will keep confusing visibility with guilt.

Good regulation should correct that imbalance. It should make risk measurement routine, independent review credible, disclosure structured, and concealment costly. Most of all, it should avoid teaching companies that the safest answer to a hard social question is not to ask it too clearly in the first place.
