Why Anthropic? Application Response
Date: November 10, 2025
Application: Anthropic Developer Relations Position
Word Count: ~350
This might surprise you, but my Christian faith is precisely why I'm excited to work at Anthropic.
I'm drawn to your foundational commitment to training Claude on principles inspired by the UN's Universal Declaration of Human Rights, a document deeply rooted in moral philosophy and Catholic social teaching. In an era when transhumanism is gaining traction in influential tech circles, a company that decisively stands for human-centered AI, for serving humanity rather than transcending it, is truly radical.
This isn't just sound ethics. It's a compelling brand position. While it might alienate a vocal minority in Silicon Valley, it establishes a North Star that could rally the billions-strong global religious community to Anthropic's mission. That's an underappreciated strategic edge.
Of course, I admire Claude's technical prowess. I've used Claude to build comprehensive documentation repositories and draft strategic documents, and its coding capabilities are outstanding. But features alone aren't why I'm applying.
I'm applying because I believe a brilliantly led, human-centered AI company should be at every table where humanity's moral imagination around AI is being shaped. And frankly, that company needs to win, both economically and technologically.
We're seeing organizations chase business models that favor engagement metrics over human flourishing. AI companions from OpenAI, xAI, and others—optimized solely for user retention, often at the expense of personal formation and dignity—epitomize this race to the bottom.
Anthropic's "AI safety" positioning attracts me because it's fundamentally about what kind of world we're building and who we're building it for. The question isn't just "can we align AI to human values?" but "which human values, and why those?"
As a technologist who advises faith institutions on how to use AI and which platforms to adopt, I want to work at Anthropic because you're asking both questions seriously. You're building for a world where technology serves human dignity rather than replacing or degrading it. That demands moral clarity in an industry rife with expedient opportunism.
The talent Anthropic attracts and the technology you build will be downstream of whether you remain moral leaders, not just technical ones. I want to help make that case to developers, to enterprises, and to the broader world watching AI development with justified concern.
That's why Anthropic.