Zuckerberg’s unsealed email raises an uncomfortable question: should platforms study their harms less?

One of the stranger incentives in modern tech policy is the research paradox: the companies that do the most internal work to measure harms can end up looking like the worst actors, simply because they have the most data — and because that data can leak, be subpoenaed, or be unsealed in court.

That paradox is at the center of a newly unsealed internal Meta email, reported by The Verge, in which Mark Zuckerberg suggests the company consider changing its approach to “research and analytics around social issues” after media coverage of internal findings (notably around teen wellbeing on Instagram) blew up in 2021.

This isn’t just an inside-baseball story about PR management. It’s a window into how social platforms think about accountability — and how the threat of litigation and leaks can shape what gets measured, what gets published, and what never gets asked.

Below is a practical explainer of what the email said, why it matters, and what a healthier incentive structure might look like.

What the unsealed email actually revealed

According to the reporting, Zuckerberg wrote to senior executives on September 15, 2021 — a day after a Wall Street Journal story based on internal documents (later tied to whistleblower Frances Haugen) highlighted Meta’s own research about teen girls and Instagram.

The key point wasn’t “Meta did research.” Many large platforms have research teams. The striking part is that the CEO explicitly connected doing proactive social-issues research with creating liabilities and reputational risks when findings become public.

The email’s framing is essentially:

  • We study sensitive issues (teen safety, mental health, child exploitation, misinformation, etc.).
  • When our findings leak or are reported out, the public narrative can turn into: “You knew, and you didn’t fix it.”
  • Some peer companies appear to do less proactive research — and therefore create fewer documents that can be used against them.

That’s an uncomfortable but real governance issue: if “measure and document the problem” increases the cost of operating, there’s a built-in incentive to measure less.

The research paradox: when transparency becomes a competitive disadvantage

In a world where platforms are scrutinized by regulators, journalists, and courts, you can imagine two broad strategies:

  1. Study harms deeply and build internal dashboards, experiments, and postmortems.
  2. Study harms minimally, focus on narrow compliance requirements, and avoid producing “bad-sounding” documents.

If the downside of strategy (1) is that it creates discoverable material — emails, slide decks, experiment readouts — then a rational corporate actor may drift toward strategy (2), even if strategy (1) is better for users.

That is not a defense of doing less research. It’s an explanation of the incentive.

The policy challenge is to design a system where the “do the responsible thing” approach (studying harms and acting on findings) does not become self-punishing.

Why Meta’s email is surfacing now: lawsuits and discovery

The email was unsealed after being collected in discovery by the New Mexico Attorney General’s office in a case alleging Meta deceptively positioned Facebook and Instagram as safe for teens while being aware of harmful design choices.

That case sits alongside a broader wave of litigation and legislative pressure focused on child safety, youth mental health, and product liability theories for social platforms. Regardless of how any single case turns out, the process matters: discovery turns internal debate into public evidence.

That has two second-order effects:

  • It shapes future internal writing. Executives become cautious not only about what they do, but about how they describe it.
  • It shapes future research. If a study is likely to generate politically explosive charts, someone will ask whether it’s worth doing at all.

“Apple doesn’t seem to study any of this stuff”: what’s the argument here?

The email reportedly draws a comparison to Apple, suggesting Apple “doesn’t seem to study” these issues in the same way and therefore avoids a lot of the criticism.

Even if that comparison is incomplete (Apple does publish security and privacy material, and it faces intense scrutiny in other domains), the underlying point is about product category and risk surface:

  • Social platforms host massive volumes of user-generated content, including abusive content.
  • Messaging products (especially end-to-end encrypted ones) structurally limit what the provider can inspect.
  • Device platforms can push responsibility downward (“this is what users do on their devices”) while social feeds sit closer to editorial-like amplification.

So the “why do they get less heat?” question has a nontrivial technical component.

The child safety lens: reporting volume can look like guilt

One line of argument in the reporting involves child sexual abuse material (CSAM): Meta reports a large volume of it to the National Center for Missing and Exploited Children (NCMEC), and that high reporting volume can be read as evidence that “there’s more abuse on Meta,” even when part of the reason is simply more detection and more reporting.

NCMEC’s own public data helps illustrate the complexity of that interpretation. For example, NCMEC notes that in 2024 it received 20.5 million reports (covering 29.2 million incidents once adjusted), and it describes changes like report “bundling” that can reduce raw report counts without implying less underlying abuse.

Counts alone are a blunt tool. What matters is rates, detection coverage, and downstream outcomes:

  • How quickly are accounts taken down?
  • Are perpetrators identified and referred to law enforcement?
  • How often are minors proactively protected (e.g., restricting contact features, limiting adult reach)?
  • How do false positives and false negatives trade off?

When the public debate focuses only on “who has the biggest number,” companies can be pushed toward under-reporting or under-measuring.
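
To make the counts-versus-rates point concrete, here is a minimal sketch in Python. Every figure is invented for illustration; nothing here comes from NCMEC or any platform. The idea is that reported volume only reflects what a platform detects, so once you adjust for detection coverage, the platform filing more reports can plausibly have less underlying abuse.

```python
# Hypothetical illustration: raw report counts vs. coverage-adjusted rates.
# Every number below is invented; none comes from NCMEC or any platform.

platforms = {
    # name: (reports_filed, monthly_active_users, assumed_detection_coverage)
    "Platform A": (1_000_000, 500_000_000, 0.80),  # invests heavily in detection
    "Platform B": (200_000, 400_000_000, 0.10),    # detects very little
}

for name, (reports, users, coverage) in platforms.items():
    # Naive comparison: who files the biggest number of reports?
    raw_rate = reports / users
    # Reports only capture the slice the platform detects, so a crude
    # estimate of underlying incidents is reports / detection coverage.
    estimated_rate = (reports / coverage) / users
    print(f"{name}: raw reports per user = {raw_rate:.4f}, "
          f"coverage-adjusted rate = {estimated_rate:.4f}")

# Platform A files five times more reports, yet its coverage-adjusted rate
# (0.0025) is half of Platform B's (0.0050): more reporting, less abuse.
```

Detection coverage is itself hard to estimate, which is an argument for audited methodology, not for abandoning the adjustment.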

Why doing less research would be bad — even for Meta

If you take the email’s concern at face value, the “solution” of studying less is seductive: fewer studies, fewer slides, fewer subpoenas.

But it’s also self-defeating.

1) You can’t improve what you don’t measure

Many platform safety problems are systems problems — created by ranking, recommendations, contact mechanics, and abuse-adjacent growth loops. Those aren’t fixed with a single policy statement. They require measurement.

Without internal research and analytics, the company’s “safety posture” becomes:

  • reactive (respond to scandals)
  • anecdotal (trust what the loudest complaints say)
  • non-auditable (no baselines, no evaluation; see the sketch below)
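
To illustrate what a baseline means in practice, here is a minimal sketch, in Python with simulated labels, of a generic prevalence-style metric: estimate the fraction of sampled content views that violate a policy, with a confidence interval, and track it over time. This is a standard sampling approach, not Meta’s actual pipeline.

```python
# Minimal sketch of a prevalence baseline: the fraction of sampled content
# views that violate a policy, with a normal-approximation confidence interval.
# Labels are simulated; a real pipeline would use trained human reviewers.
import math
import random

def prevalence_estimate(labels: list[bool], z: float = 1.96) -> tuple[float, float]:
    """Point estimate and half-width of a ~95% CI for the violating-view rate."""
    n = len(labels)
    p = sum(labels) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Simulate labeling 10,000 randomly sampled views, where 0.4% violate.
random.seed(0)
sampled_labels = [random.random() < 0.004 for _ in range(10_000)]

p, hw = prevalence_estimate(sampled_labels)
print(f"Estimated prevalence: {p:.4%} +/- {hw:.4%}")
# Tracked weekly, this baseline is what turns "the mitigation helped" from
# an anecdote into an auditable claim.
```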

2) Regulators will demand evidence anyway

Even if a company tries to avoid sensitive research, regulators can still require transparency reports, risk assessments, and auditability. In other words: if you don’t generate the evidence voluntarily, someone else may force you to generate it — and now you have to do it under pressure.

3) You lose the ability to distinguish tradeoffs from negligence

A major theme in the email is that not all recommendations are reasonable to implement because everything has tradeoffs.

That’s true. But the only credible way to argue “we considered X and chose Y because…” is to show your work. Otherwise, it can look like hand-waving.

Research is what turns “trust us” into “here’s the model, the experiment, the measured outcome, and the decision memo.”

The deeper issue: litigation turns internal candor into a liability

A healthy organization wants candor: “This feature might be harmful,” “This cohort is at risk,” “This metric looks bad,” “We need to change ranking.”

But litigation and leak dynamics can punish candor in two ways:

  • Selection effect: executives stop putting sensitive thoughts in writing.
  • Cultural effect: teams avoid questions that might produce “bad” answers.

Both effects make the platform worse.

And this isn’t unique to Meta — it’s a general problem for any company operating at the intersection of consumer tech and public safety.

What would better incentives look like?

If society wants platforms to measure and reduce harms, it needs to make that path survivable.

A few practical ideas that show up repeatedly in policy circles:

1) Safe harbors for good-faith internal safety research

Imagine a framework where companies get limited protection when they conduct documented, good-faith research into harms and take meaningful steps based on findings. Aviation is the usual analogy: voluntary incident-reporting programs there offer limited protection from enforcement precisely to encourage pilots and crews to disclose mistakes.

This doesn’t mean immunity for wrongdoing. It means reducing the incentive to stay ignorant on purpose.

2) Standardized, audited reporting (so comparisons are fair)

If every platform reports safety metrics using different definitions, raw numbers become weaponized.

Standard definitions, third-party audits, and clearer denominators (rates per user, rates per message, rates per view) would make “we reported more” less of a PR trap.
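
Here is a toy sketch, with invented figures and a hypothetical SafetyReport structure, of why the denominator has to be agreed on in advance: the same two hypothetical platforms rank differently depending on whether you normalize the same violation count per user, per message, or per view.

```python
# Hypothetical illustration of why denominators must be standardized.
# All figures are invented; real transparency reports would need audited inputs.
from dataclasses import dataclass

@dataclass
class SafetyReport:
    violations_actioned: int   # items actioned under an agreed definition
    monthly_users: int
    messages_sent: int
    content_views: int

    def rates(self) -> dict:
        """The same count expressed under three agreed denominators."""
        return {
            "per_1k_users": 1_000 * self.violations_actioned / self.monthly_users,
            "per_1M_messages": 1_000_000 * self.violations_actioned / self.messages_sent,
            "per_1M_views": 1_000_000 * self.violations_actioned / self.content_views,
        }

# A feed-centric platform vs. a messaging-centric one.
feed_app = SafetyReport(violations_actioned=90_000,
                        monthly_users=300_000_000,
                        messages_sent=2_000_000_000,
                        content_views=90_000_000_000)
msg_app = SafetyReport(violations_actioned=60_000,
                       monthly_users=100_000_000,
                       messages_sent=50_000_000_000,
                       content_views=5_000_000_000)

for name, report in [("feed_app", feed_app), ("msg_app", msg_app)]:
    print(name, report.rates())

# The ranking flips with the denominator: msg_app looks better per message
# but worse per user and per view, so raw counts alone settle nothing.
```

Without fixed definitions, each side can simply pick the denominator that flatters it, which is exactly what audits and standard metrics are meant to prevent.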

3) Separation between safety research and product growth incentives

When safety research sits inside the same chain of command as growth targets, it can become politically inconvenient.

Structural separation — even if not full independence — can help ensure safety questions keep getting asked.

4) Better public literacy about what metrics mean

The public conversation often treats internal research like a confession.

Sometimes it is. But sometimes it’s the opposite: a sign the company is looking.

A more mature public literacy would ask:

  • Was the harm measured responsibly?
  • Were the findings shared with appropriate oversight?
  • What mitigations were tested?
  • What changed as a result?

What to watch next

The email is one artifact. The broader story is the tension between three forces:

  • Transparency: we want to know what platforms know.
  • Accountability: we want consequences when harms are ignored.
  • Learning systems: we need platforms to keep measuring and improving.

When those forces are misaligned, the equilibrium outcome can be perverse: less measurement, less candor, and slower improvement — even while public anger increases.

The best version of the internet is not one where platforms hide their own research. It’s one where internal research is routine, audited, and used to drive product changes — and where the legal and political system can distinguish between “we studied the harm and improved” and “we studied the harm and deliberately did nothing.”

Bottom line

The unsealed Zuckerberg email matters less as a “gotcha” and more as a clue about incentives.

If doing serious internal safety and social-issues research reliably turns into reputational and legal exposure, companies will do less of it — and the public will get less visibility into real risks.

The policy goal shouldn’t be to shame platforms for having research. It should be to demand measurable improvements and create incentives that make responsible measurement the default, not the exception.

