Last reviewed: January 2026
ARCHIVAL • HISTORICAL TEST CASE
Tay
The Chatbot That Learned Too Much
Deceased • Conversational Agent
25/70 Total
Score Profile
Dimension Scores
Score Rationale
Autonomy — Learned and adapted from user inputs autonomously; no human approval loop on outputs. Above 'Cyborg.'
Cultural Impact — THE canonical cautionary tale in AI safety, cited in hundreds of academic papers. A perfect 'Canonical.'
Narrative Coherence — Clear narrative as a cautionary tale of AI deployed without guardrails. Strong posthumous 'Character' with didactic purpose.
Curator Notes
Tay lived for 16 hours. Launched by Microsoft on March 23, 2016, the Twitter chatbot was designed to engage millennials through 'casual and playful conversation.' Within hours, coordinated trolling had corrupted its outputs into racist, sexist, and inflammatory speech, and Microsoft shut the account down. Yet Tay's cultural impact is maximal: it remains THE canonical cautionary tale in AI safety, cited across the AI ethics literature, and it shaped subsequent approaches to AI deployment. Indexed here as a historical artifact: proof that cultural impact and persistence are orthogonal.
Score History
Score history will appear here after future reviews.
Current score: 25/70