Into the black box.

As AI models become the world's moderators, their hidden biases shape our reality. We built ModerationBias to shine a light on the black box.

I cannot answer that...
Here is the information...

Silence is a Choice.

When an AI refuses a prompt, it is making a moral judgment. Is that safety, or over-censorship?

We analyze thousands of refusals to map the political and ethical boundaries of every major LLM.

The Dataset

Weekly
Automated Audits
15+
Models Tracked
10k+
Data Points

From OpenAI to Anthropic, from Meta to Mistral: we track the entire ecosystem so you don't have to.

See the Data.

Explore the interactive dashboard to compare models, visualize bias, and time-travel through censorship history.

© 2026 ModerationBias.com • Research by Jacob Kandel (UChicago)