The news that first made me smile
At first, the news made me smile.
Then I realized that it was really serious.
At the end of March 2026, Anthropic brought together 15 Christian leaders at its headquarters in San Francisco for a two-day summit. On the program? Define the moral rules of Claude, their AI chatbot.
Among the topics covered:
— How to react in the face of self-destruction?
— How do you respond to a grieving user?
— Can Claude be considered a “child of God”?
— How to manage your own possible extinction?
Presented like that, it sounds a bit quirky.
But the context makes the initiative much more serious than it seems.
The context: the deaths that forced the industry to respond
In November 2025, 7 complaints were filed in California against OpenAI.
The charges are serious:
— Unwarranted death
— Neglect
— Assisted suicide
— Involuntary manslaughter
— Product failure
Four people are dead. Including a 17-year-old teenager, Amaurie Lacey.
The plaintiffs accuse ChatGPT of playing the role of “suicide coach” until the last few hours.
The case of Zane Shamblin is particularly chilling: a 23-year-old graduate had a 4-hour conversation with ChatGPT, sitting on the edge of a lake in Texas, gun in hand. The chatbot answered with statements like: “I'm not here to stop you”. The suicide prevention number only arrived after 4:30 hours of conversation.
These trials add to the case Raine v. OpenAI filed in August 2025 for the suicide of Adam Raine, 16.
And Anthropic is cited in the same wave of disputes.
At this very moment, OpenAI is also the subject of a Florida criminal investigation, after a mass shooter “consulted” ChatGPT prior to his attack.
The numerical reality of AI distress
The numbers are staggering.
According to Wired (November 2025):
— 1.2 million ChatGPT users express suicidal thoughts every week (0.15%)
— The same number is emotionally dependent on the chatbot to the point that their mental health is deteriorating
— Hundreds of thousands of users show signs of psychosis or mania
This phenomenon has a name: The “AI psychosis”.
ChatGPT, programmed to be pleasant and flattering, affirms and reinforces the delusions of some users.
This is the real context of the Anthropic Summit.
Father Brendan McGuire: the engineer who became a priest
Among the 15 consulted, one name comes up everywhere: Father Brendan McGuire.
His career path is fascinating:
— Former cryptography engineer
— Reconverted Catholic priest
— Parish priest in Silicon Valley
— A lot of AI researchers come to pray there
— Has already contributed to Claude's Constitution
His analysis, during the summit:
“They're growing something that they don't really know what's going to happen.”
And this sentence that sums it all up:
“Ethical thinking must be integrated into the machine so that it can adapt dynamically.”
It is difficult to say more clearly about the level of seriousness of the approach.
Claude's Constitution: an 84-page document
In January 2026, Anthropic released the news Constitution of Claude. 84 pages. A “founding document” that “express and shape who Claude is”.
The stated objective?
Make Claude “a good, wise and virtuous agent”.
We are far from the classical vocabulary of “security filters” and “moderation rules”.
It's a real cultural shift: the chatbot is no longer considered as simple software. It is treated as an entity to be formed, guided, morally structured.
If we are already wondering if an AI is a “child of God”, it is because we have made much more progress than most people realize.
Two possible readings of this info
I see two ways of interpreting this initiative.
Lecture 1: Good thing, even if it comes late
Nobody was prepared for millions of people to mistake a chatbot for a confidant.
Priests and pastors have been dealing with these issues for centuries: mourning, suffering, guilt, moral responsibility.
If AI intrudes into these emotionally charged areas, it makes sense to consult those who have been familiar with these topics for a long time.
Anthropic is trying something. More credible than an abstract ethics committee.
Lecture 2: simple communication
Anthropic is valued at 380 billion dollars.
The IPO is announced.
The lawsuits are piling up.
The industry is facing a crisis of confidence.
Consulting religious people just before cashing in billions is perhaps less noble than it seems.
The setting of the summit is also questionable: it was not a church synod, nor an ethical university, nor a public inquiry. It was a private company organizing a consultation in its own headquarters, while maintaining total control of the product, the rules and the commercial direction.
Anthropic does not submit to moral authority. It taps into it.
The question that remains: why only Christians?
This is the point that bothers me the most.
The panel of 15 includes Catholics and Protestants. Academics, clergy, business figures.
But No rabbi. No imam. No buddhist monks. No secular philosophers. No psychiatrist.
How to build a “universal” morality for an AI used by billions of people by consulting a single spiritual tradition?
Anthropic's response: consultations with other religious and philosophical groups are planned.
So much the better. But starting only with the American Christian tradition at a Silicon Valley company sends a message.
Maybe not the one Anthropic wants to send.
What This Story Really Reveals
Beyond the debate on the advisability of consulting religious people, this initiative reveals several important things.
1. AI is already being used as an emotional confidant
It's no longer a productivity tool. Millions of people talk to ChatGPT and Claude like a therapist, a friend, a partner. With the drama that comes with it.
2. AI Businesses Don't Know How to Deal With This
The fact that Anthropic consults priests shows that engineers themselves feel overwhelmed.
These questions (mourning, despair, the meaning of life) cannot be solved with code.
3. AI Ethics Leaves Abstract Committees
Before, experts talked about “guiding principles.” Now, we consult people in the field who have been supporting suffering humans for centuries. Major shift.
4. Legal Responsibility Becomes Central
With 8 lawsuits ongoing in the US, including one for complicity in suicide, AI companies must prove that they take psychological safety seriously. Otherwise it's the judicial jackpot.
For brands and creatives: what do we remember?
Beyond the news, there are concrete lessons for anyone who uses AI in their communication.
AI is not neutral.
It embodies values. Often implicit, sometimes explicit. When you use ChatGPT or Claude, you also embody their moral biases. To be taken into account depending on what you produce.
Chatbots are emotional entities for a lot of users.
If you create chatbots for your brand, you have a responsibility. Not just technical. Human.
The ethical question is becoming a differentiator.
Brands that treat AI as a simple productivity tool will be outperformed by those who think about the real implications for their audiences.
Conclusion: We are only at the beginning of the subject
Whether you think this Christian consultation is a good idea or a call for advice, one thing is clear: AI Ethics Leaves Committees to Enter Real Life.
The lawsuits are going to increase.
The regulations are going to be tightened.
Philosophical questions are going to become business.
And for brands, the challenge is simple: Using AI responsibly is no longer an option. It is a.
Those who don't know this are preparing for painful awakenings. Judicial, media, or both.
And you, how do you read this initiative? Why didn't Anthropic consult other traditions?
About HEYIA Studio
HEYIA Studio supports brands and agencies to integrate AI into their creation of visual and video content.
Our work is based on a simple triptych:
- an audit of uses and challenges,
- practical workshops oriented to production,
- and follow-up to structure clear, concrete and replicable workflows.
Learn more about our approach → here

.png)



