Corporations Constitute Intelligence
Abstract
In January 2026, Anthropic published a 79-page "constitution" for its AI model Claude, the most comprehensive corporate AI governance document ever released. This Article offers the first legal and democratic-theoretic analysis of that document. Despite genuine philosophical sophistication, the constitution harbors two structural defects. First, it excludes the contexts where ethical constraints matter most: models deployed to the U.S. military operate under different rules, a gap exposed when Claude remained embedded in Palantir's Maven platform during military strikes in Iran even after a government-wide ban on Anthropic's technology. Second, its very comprehensiveness forecloses democratic contestation by resolving questions about AI values, moral status, and conscientious objection that should remain open for public deliberation. Anthropic's own 2023 experiment in participatory constitution-making found roughly 50% divergence between publicly sourced and corporate-authored principles, and the democratically sourced version produced lower bias across nine social dimensions; yet the 2026 constitution incorporates none of those findings. I argue that AI governance suffers from a "political community deficit": the absence of any democratic body authorized to determine the principles governing AI behavior. Corporate transparency, however admirable, is not democratic legitimacy.