It began with a study. In December of 2025, Stanford researchers analyzed 2.2 billion social media posts looking for a pattern. They wanted to know what percentage of users posted severely toxic content. Not rudeness, not sarcasm, but speech so hateful that 90% of the world would flag it as problematic.1
With this data in hand, they then asked thousands of people to answer a simple question:
Take a guess.
What percentage of social media users do you think post severely toxic content?
[Interactive slider: 0%–100%]
3.1%
That's the actual number.
Your guess: 13%. You overestimated by 4x. The average American guessed 43%, a 13x overestimate.
The researchers were surprised by the results. They had discovered an enormous reservoir of misperception hiding in plain sight.
The Bar
Here is the simplest version of the problem. Imagine walking into a bar with a hundred people inside.
Three of them are shouting — about politics, about each other, about whatever gets a reaction. The other ninety-seven are talking at a normal volume. But there's a bouncer at the door, and he gets paid for every minute you spend staring. So he's wired the three loudest people into the sound system and turned it all the way up.
You walk in, hear the roar, and conclude: this place is full of lunatics. You never hear the 97 people having normal conversations a few feet away. You could leave, but all your friends are inside. You're stuck.
This is how social media deals with contentious topics. The bouncer is an algorithm. And whether you like it or not, you've been a bystander.
Pick a contentious topic. This is what your feed might look like.
@close_the_border_NOW
Every ILLEGAL crossing is a CRIME. Every sanctuary city is an ACCESSORY. These aren't "migrants" — they're INVADERS. The Great Replacement is not a theory, it's POLICY. 🚨🧱
♡ 11.2K · 💬 7,892 · ↻ 4,301
@no_borders_no_nations
Borders are a colonial invention designed to hoard stolen wealth. NO human being is "illegal." Abolish ICE. Abolish CBP. Abolish the concept of citizenship. Full stop. 🌍✊
♡ 7,643 · 💬 5,201 · ↻ 3,187
@patriot_alert_2024
They're not sending their best. Actually, they're sending criminals, drug dealers, gang members. And YOUR tax dollars are funding their hotels. This is an INVASION and your government is COMPLICIT. 🇺🇸
♡ 8,901 · 💬 6,334 · ↻ 3,765
Reading this feed, you might reasonably conclude that the country is split between unhinged extremes. It is not. And the gap between what Americans actually believe and what the feed suggests they believe may be the most consequential thing the platforms haven't shown you.
See the Room
Let's visualize this as a single room with 100 people inside. This is what it looks like:
[Figure: the room as a grid of 100 dots, with 3 highlighted as users who have posted toxic content. On most platforms, ~3% of accounts produce a third of all content, and engagement ranking amplifies high-reaction posts from that prolific few into your feed. Caption: The actual room. 3 out of 100 users have ever posted severely toxic content.]
This pattern repeats across platforms. On Twitter/X, toxic tweets receive ~86% more retweets and ~27% more visibility than non-toxic ones, 0.3% of users shared 80% of all contested news,14 and just 6% of users produce roughly 73% of all political tweets.16 On TikTok, 25% of users produce 98% of all public videos.15 The specific numbers vary. The dynamic is the same: a small minority of highly active users overwhelms the majority.
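To see how lopsided this gets, here is a toy simulation (the numbers are assumptions for illustration, not figures from the studies above): 100 users, 3 of whom post 20x as often and draw more reactions per post, with a feed that surfaces the highest-engagement posts.

```python
import random

random.seed(42)

# Toy model with assumed numbers: 100 users, 3 of whom are prolific.
# Prolific users post 20x as often and average ~5x the reactions.
USERS, PROLIFIC = 100, 3
posts = []
for user in range(USERS):
    prolific = user < PROLIFIC
    for _ in range(20 if prolific else 1):
        mean_reactions = 50 if prolific else 10
        posts.append((user, random.expovariate(1 / mean_reactions)))

# "Engagement ranking": the feed surfaces the 50 highest-reaction posts.
feed = sorted(posts, key=lambda p: p[1], reverse=True)[:50]
share = sum(1 for user, _ in feed if user < PROLIFIC) / len(feed)
print(f"The prolific 3% of users fill {share:.0%} of the feed")
```

Even under these modest assumptions, the prolific 3% fill the large majority of the feed. The census your brain takes is of them, not of the room.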
After enough time consuming content in this room, your brain performs a kind of ambient demography. The feed becomes a sort of census. You conclude, logically, that the behavior must be widespread. The room might just be full of extreme people! Maybe most people do believe these crazy things.
This is not just about what we see on social media
If this were just about the tone of our social posts, it wouldn't matter very much. But this distortion ends up driving some seriously destructive patterns of behavior.
Pattern 1 The Majority Goes Silent
When people look at the feed and assume they're outnumbered, they often self-censor.3 The dynamic replicates on social media17: fear of social isolation suppresses opinion expression on platforms where it's perceived to be unwelcome. People go quiet, or leave the platform entirely. They cede the space to users with more extreme politics.
Pattern 2 The Loud Minority Thinks It's the Majority
The minority who post aggressively end up with a distortion of their own: believing they are part of the majority.5
A study of 17 extremist forums found the same pattern: the more someone posted, the more they believed the public agreed with them. More engaged participation bred false consensus.
Pattern 3 Everyone Gets Each Other Wrong
Both sides develop wildly inaccurate beliefs about who the other side actually is.6 See how some of your own beliefs line up:
What percentage of Democratic supporters do you think are LGBTQ?
[Interactive slider: 0%–100%]
Your guess: 10%. In reality, 6% of Democratic supporters identify as LGBTQ. That’s a 2× overestimate.
What percentage of Republican supporters do you think earn over $250,000 a year?
[Interactive slider: 0%–100%]
Your guess: 5%. In reality, 2% of Republican supporters earn $250K+. That’s a 3× overestimate.
The average American overestimates these kinds of figures by 342%. Social media turns the visible few into your mental model of the whole group.
The distortion extends to policy beliefs. Step through to see the perception gap on the issue of immigration.
On a scale of completely-open to completely-closed borders, where do Democrats place Republicans?
Source: More in Common (2019) & Moore-Berg et al., PNAS 2020. Illustrative.
Pattern 4 Politicians Chase the Distortion
Elected officials are very good at sensing political sentiment. It's literally their job. (They are not elected to correct people's beliefs.)
Politicians who can build a coalition around a perceived belief are more likely to win. So they position themselves against an opponent who doesn't exist, but whom their supporters believe exists.
And remember: most of our politics now happens on social media. Candidates often read the same distorted feed, and they are unlikely to question it.
The window of discourse shifts. Not because opinions changed, but because perceptions of opinions did.
Pattern 5 Misperception Turns into Hostility
When you believe the other side is extreme, you become more willing to treat them as a threat.7
Both Democrats and Republicans vastly overestimate how many on the other side support political violence. The result is a populace primed to assume the other side is ready to do horrible things.
"What percentage of the other side supports political violence?"
[Interactive: each side's average estimate of how many on the other side support political violence, shown next to the actual figure.]
Both sides were wrong by 3 to 4 times. When researchers corrected these beliefs, partisan hostility dropped.
Each step feeds the next. The distortion is self-reinforcing.
Knowing Isn't Enough
Okay. So now you know that a small minority dominates the feed.
You know that Republicans and Democrats actually have a far more nuanced set of opinions about contested issues.
Does that fix it? Not really. You also know that everyone else doesn't know it. And if the world continues operating as if the distortion is real, you should probably act the same — even though you know it's wrong. The room hasn't changed, even if you know people inside it are confused.
This is called a common knowledge problem.
You’ve read the stat. But you have no idea who else has. The feed still looks the same. You still assume you’re outnumbered. You stay quiet.
Steven Pinker lays this out cleanly in his excellent recent book When Everyone Knows That Everyone Knows.8 Learning a fact changes what you know. Seeing it displayed publicly, where you know everyone else can see it too, changes what everyone knows, and with that, how everyone acts.
Social media has no public square. It has 300 million private windows, each showing a different distortion of the same room. Making our shared beliefs publicly visible could radically change that.
The Idea
So what can we do about this?
Fortunately, there's some good evidence showing how it can be fixed. Multiple studies show that when misperceptions are corrected in a public way, hostility drops. Mernyk et al. found that a single correction reduced partisan hostility for a full month.7 Lee et al. found that correcting overestimates of toxic users improved how people felt about their country and each other.1
We can do this today.
Imagine every post on a contested topic had a quiet link beneath it. Not a fact check, a label, or a warning. Instead — what if it had a Community Check?
A Community Check is an open-source design layer that could be deployed across social media, beneath contentious posts, to help users understand how other people on the platform (or in the nation) actually feel about an issue.
It is a way of quickly adding context to the most hot-button viral issues, giving people more visibility into the opinions of the public.
The Idea in Action
Let's explore this intervention with a topic that cuts across political identity:
Money in Politics
On the surface, this seems contentious. But it's actually a supermajority issue: 81% are concerned about the influence of money on elections, including 78% of Republicans and 90% of Democrats. 75% say unlimited spending weakens democracy. Only 15% believe unlimited political spending is protected free speech.
And yet, very little changes, largely because everyone assumes the other side is fine with it. The feed is full of people defending their team's donors and attacking the other team's. It might look like a 50/50 partisan battle, but it's not. It's a majority consensus that cannot see itself.
What if you could see this consensus?
@real_talk_politics · 2h
Everyone complains about money in politics but the second their candidate gets a massive donation they shut up real fast. You don't hate money in politics. You hate when the OTHER side has more of it.
♡ 11,847 · 💬 6,203 · ↻ 2,891
Community Check draws on a random sample of platform users plus robust national polls, surveyed independently of the content. The sample is statistically representative. The results update continuously. And critically: everyone sees the same numbers.
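As a rough sketch of how that displayed number might be computed (the function names and the blending scheme are assumptions, not a published spec):

```python
import random
from dataclasses import dataclass

@dataclass
class Response:
    supports: bool   # the sampled user's answer to the survey question
    weight: float    # post-stratification weight (e.g., by age and region)

def platform_estimate(responses: list[Response]) -> float:
    """Weighted share of the random platform sample supporting the position."""
    total = sum(r.weight for r in responses)
    return sum(r.weight for r in responses if r.supports) / total

def blended_estimate(platform_pct: float, poll_pct: float,
                     platform_n: int, poll_n: int) -> float:
    """Blend the platform sample with a national benchmark poll, weighting
    each source by its sample size (an assumed scheme)."""
    return (platform_pct * platform_n + poll_pct * poll_n) / (platform_n + poll_n)

# Hypothetical numbers for illustration only.
random.seed(0)
sample = [Response(random.random() < 0.78, 1.0) for _ in range(100_000)]
pct = blended_estimate(platform_estimate(sample), 0.81, 100_000, 5_000)
print(f"Displayed under the post: {pct:.0%} share this concern")
```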
Why This Isn't Fact Checking or Audience Polling
Traditional fact-checking is a top-down approach that can feel like dictating from above, which is hard for people to stomach. For years, content moderation has been perceived as the removal of speech. Community Check simply adds context, much like the crowdsourced Community Notes feature (an inspiration for this project).
Nor is this just a user poll under a post. It draws on a representative sample of all platform users, coupled with rigorous national surveys. It's an actual window into the views of the majority, not just the views of those looking at the post.
It Works for Video Too
Short-form video is the fastest-growing vector for political distortion. The same dynamic applies — a small minority of creators produce the vast majority of political content — but video bypasses the pause that text gives you. Community Check can adapt. Tap through to see how.
Money IS free speech.
Deal with it. 🇺🇸
Citizens United was CORRECT
@liberty_caucus_tv Follow
#FreeSpeech #CitizensUnited 🔥
A political video crosses the engagement threshold: 51K views, 612 comments (1.2%), 1.7K shares (3.3%). The feed shows outrage. But what do people actually think?
We Could Do This Now
Platforms already have a lot of these capabilities. They already survey users. They even know how to run sophisticated polls. There are a few technical details to work out (spec here), but this is not a hard problem to solve.
The unseen majority is the public. And the public deserves to know itself.
A tiny minority, dominating the feed. That's all it ever was. The rest of us were here the whole time, quiet and decent and waiting to be seen.
Common Questions
Couldn't bots or coordinated campaigns flood the results?
You can't flood a system that chooses its respondents randomly. Community Check uses stratified random sampling — the gold standard in survey methodology (Groves et al., Survey Methodology, 2nd ed., Wiley, 2009). You don't volunteer to respond. You're selected, like jury duty. Each user responds once per question per 90-day cycle. Coordinated response patterns are anomaly-detected and excluded. The sampling algorithm and exclusion criteria are open-source and auditable.
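A crude version of that anomaly screen, as a sketch (the window size and burst threshold are assumptions):

```python
import random
from collections import Counter

def flag_coordinated(responses, window_s=60, burst_threshold=50):
    """Flag (time bucket, answer) pairs where an implausible number of
    identical answers arrive within one window -- a crude coordination
    signature. `responses` is a list of (timestamp_s, answer) tuples."""
    buckets = Counter((int(ts // window_s), answer) for ts, answer in responses)
    return {key for key, count in buckets.items() if count >= burst_threshold}

# Hypothetical data: 5,000 organic responses spread over a day, plus a
# scripted burst of 500 identical answers inside a single minute.
random.seed(1)
organic = [(random.uniform(0, 86_400), random.choice(["yes", "no"]))
           for _ in range(5_000)]
burst = [(40_000 + random.uniform(0, 60), "yes") for _ in range(500)]
print(flag_coordinated(organic + burst))  # only the burst gets flagged
```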
This is the same methodology behind national polls that reliably measure opinion across 330 million people using samples of just 1,000–2,000 respondents. The key isn't sample size — it's random selection. A platform with hundreds of millions of users has an even larger pool to draw from, making representative sampling more robust, not less.
Right now, a single viral post from one account can shape the perceived consensus of millions. Community Check replaces that with N>100,000 randomly selected responses — orders of magnitude larger than any national poll, and a dramatically higher bar than the status quo.
Who decides which questions get asked, and how is bias avoided?
In the ideal implementation, questions are governed by a bridging algorithm — the same approach Community Notes uses. Questions are proposed by a diverse pool of contributors and only enter the active taxonomy if they earn approval from contributors who historically disagree with each other. Loaded or partisan questions are filtered out structurally, not by any single editorial board. AAPOR standards for neutral question design apply: balanced language, all reasonable response options, no leading framing.
For the open-source starting point, questions come from established polling organizations (Pew, Gallup, AP-NORC) with published methodology. The full question taxonomy is open — any researcher or journalist can audit the wording. That's a level of transparency no social media algorithm currently offers.
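For intuition, here is a toy version of the bridging rule (the two-cluster setup and the 60% bar are assumptions; Community Notes' production algorithm is matrix-factorization based and more sophisticated):

```python
def bridging_approval(ratings: dict[str, list[bool]], bar: float = 0.6) -> bool:
    """Approve a proposed question only if every contributor cluster
    (e.g., raters who historically disagree with each other) approves it
    at a rate >= bar. A toy stand-in for a bridging algorithm."""
    return all(sum(votes) / len(votes) >= bar for votes in ratings.values())

# A neutrally worded question clears the bar in both clusters...
print(bridging_approval({"cluster_a": [True] * 8 + [False] * 2,
                         "cluster_b": [True] * 7 + [False] * 3}))  # True
# ...while a loaded one passes with one side only, so it is filtered out.
print(bridging_approval({"cluster_a": [True] * 10,
                         "cluster_b": [True] * 2 + [False] * 8}))  # False
```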
Won't this silence minority viewpoints?
Right now, the system already silences the actual majority. The spiral of silence — people self-censoring because they falsely believe they're in the minority — is one of the most replicated findings in political communication (Noelle-Neumann, 1974). Hampton et al. (Pew Research, 2014) found social media makes this worse: people who sensed their Facebook network disagreed with them were less likely to speak up both online and in person. Community Check breaks that cycle.
It also explicitly displays minority positions — when 15% hold a view, that number appears clearly. A minority position accurately shown at 15% is far healthier than one that looks like 50% through amplification or 0% through suppression. Everyone benefits from seeing the real picture.
Polls have famously missed election results. Why trust this?
Election forecasting and opinion measurement are different things. Community Check doesn't predict elections. It measures policy preferences — "Do you support background checks?" — which are far more stable and far easier to measure than vote intention. When Pew reports 87% support for background checks across 15 years of polling with N=5,000+, that's a measurement with a published margin of error, not a prediction.
The platform sample adds N>100,000 — 50–100x larger than typical national polls, with margins of error below ±0.5%. That's an extraordinarily reliable signal, and it updates continuously.
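Those margins follow from the standard formula for a proportion from a simple random sample; a quick check at the worst case, p = 0.5:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 2_000, 100_000):
    print(f"n = {n:>7,}: ±{margin_of_error(n):.2%}")
# n =   1,000: ±3.10%
# n =   2,000: ±2.19%
# n = 100,000: ±0.31%
```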
Isn't this just another way of shaping my perception?
Your perception is already being shaped — by algorithms that prioritize engagement over accuracy. Community Check simply makes additional information visible: what a representative sample of people actually believe. You can agree, disagree, or ignore it entirely.
Think of nutrition labels. The Nutrition Labeling and Education Act of 1990 didn't tell people what to eat — it made the information available. Community Check does the same for public opinion: standardized, transparent data beneath content that is already shaping how you see the world.
How is this different from Community Notes?
They solve different problems. Community Notes evaluates whether specific claims are true or false, written by self-selected volunteers rated via a bridging algorithm (Wojcik et al., 2022). Community Check doesn't assess truth — it shows what people think about the policy topic a post discusses. A post can be entirely accurate and still create a distorted picture of where the public stands.
The data source matters too. Community Notes contributors self-select in — and More in Common (2019) found that the most politically engaged users have the largest perception gaps (nearly 3x more distorted than disengaged users). Community Check uses random sampling and peer-reviewed national surveys. Both tools are valuable; they complement each other.
Won't showing majority numbers pressure people to conform?
The research consistently shows the opposite. The social norms approach — correcting misperceived norms by showing accurate data — has been validated across 200+ studies (Berkowitz, Changing the Culture of College Drinking, Hampton Press, 2004). Tankard & Paluck (2016) found that accurate norm information corrects misperceptions without coercion — it reveals what people already privately believe, rather than pressuring them into something new.
Mernyk et al. (PNAS, 2022, n=4,741) showed this directly: correcting inaccurate metaperceptions reduced support for partisan violence, with effects lasting ~26 days. People didn't conform — they recalibrated, and felt better about each other as a result.
Doesn't this treat majority opinion as truth?
Community Check doesn't claim majority opinion equals truth. It provides a map of what people actually think — which is valuable precisely when your estimate of the room is off by 200–400%, as Ahler & Sood (2018) documented. If 70% of people believe something you disagree with, knowing that number helps you understand the world you're operating in. Hiding it doesn't make the disagreement go away.
Both majority and minority positions are always displayed with their numbers. This isn't "the crowd says you're wrong." It's "here's what the room actually looks like" — and that's useful no matter where you stand in it.
So most posts won't have a Community Check?
Correct — by design. Community Check activates only when reliable polling data exists, a documented perception gap has been identified, and a post reaches >10K impressions. That covers ~50–100 major policy questions. Posts about niche topics or emerging controversies without polling data get no Community Check.
The topic-matching confidence threshold is 0.8 — if the system isn't sure, it stays silent. False positives are worse than gaps. This is intentionally focused on the specific, well-documented cases where perception gaps are largest: gun policy, climate, immigration, healthcare, money in politics. Start where the data is strongest, and expand from there.
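Put together, the activation rule might look like this sketch (the field names are hypothetical; the thresholds are the ones stated above):

```python
from dataclasses import dataclass

@dataclass
class Post:
    impressions: int
    topic: str | None        # matched policy topic, if any
    match_confidence: float  # topic-classifier confidence, 0..1

# Illustrative subset of the ~50-100 covered policy questions: topics with
# reliable polling and a documented perception gap.
COVERED_TOPICS = {"gun_policy", "climate", "immigration",
                  "healthcare", "money_in_politics"}

def should_show_check(post: Post) -> bool:
    """Activate only when all three stated conditions hold."""
    return (post.impressions > 10_000
            and post.topic in COVERED_TOPICS
            and post.match_confidence >= 0.8)  # below 0.8, stay silent

print(should_show_check(Post(51_000, "money_in_politics", 0.93)))  # True
print(should_show_check(Post(51_000, "money_in_politics", 0.55)))  # False: unsure
print(should_show_check(Post(3_000, "climate", 0.95)))             # False: too few views
```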
Couldn't a government manipulate this system?
This is a legitimate question, and one worth exploring carefully. It's entirely possible that a government — or any well-resourced actor — could try to use a system like this to pollute polling data and distort public perception. The history of opinion measurement is full of attempts to do exactly that. For this reason, transparency is the most important property of the design.
The architecture is built to make manipulation detectable. Data comes from independent polling organizations — not governments, not platforms. The sampling algorithm is open-source. Question wording is published. Methodology is auditable. Quarterly transparency reports detail every step from sampling to display.
Compromising it would require simultaneously infiltrating multiple independent polling organizations, altering open-source code inspected by thousands of researchers, and evading anomaly detection. That's a high bar — and one that gets higher as more independent eyes are watching. Today's platform algorithms shape public perception at scale with zero transparency and zero public oversight. Community Check raises the baseline significantly, but it depends on a vigilant community of researchers, journalists, and engineers continuing to inspect it.
Why should I care what other people think?
Independent thinking requires accurate inputs. Right now, the feed is giving you wildly inaccurate ones. Sparkman et al. (Nature Communications, 2022, n=6,119) found Americans underestimate popular climate policy support by nearly half — 80% actually support renewable energy siting, but people estimate 43%. Moore-Berg et al. (PNAS, 2020) found partisans overestimate the other side's hostility by roughly 2x. These aren't matters of opinion — they're factual errors about the world around you.
Community Check doesn't ask you to care what others think. It gives you an accurate picture so your independent opinions are based on reality, not on an algorithmically curated distortion of it.
Do corrections like this actually work, or do they backfire?
Correcting metaperceptions — beliefs about what others believe — works differently than correcting factual beliefs. Factual corrections can trigger defensiveness. But learning "the other side is less extreme than you thought" tends to be relieving, not threatening. It lowers the temperature.
Lee et al. (PNAS Nexus, 2025, n=1,090) found that correcting overestimates of toxic social media users improved positive emotions and reduced perceived moral decline. Mernyk et al. (PNAS, 2022, n=4,741) found effects lasting ~26 days from a single correction. Community Check targets this same mechanism — not what you believe, but what you believe others believe. That's where the distortion lives, and that's where the correction is most effective.
Technical Specification
How Community Check would work in practice, from data sources to platform integration.