Can AI Be a 'Child of God'? A Christian Response to Anthropic's Summit with Christian Leaders

Written by Tonye Brown · 8 minute read

TL;DR

In late March 2026, Anthropic hosted a two-day summit with about 15 Christian leaders in San Francisco to discuss Claude's 'moral formation,' possible functional emotions, and whether AI could be considered a 'child of God.' Some attendees left intrigued. Albert Mohler called it a nightmare scenario. Scripture has a framework for evaluating all of it.

FaithGPT articles discuss AI in church contexts. Using AI in ministry is a choice, not a necessity, and should never replace the Holy Spirit's guidance. Learn more

AI and Christian theology - the intersection of artificial intelligence and questions of soul, consciousness, and human dignity

In late March 2026, Anthropic flew approximately fifteen Christian leaders to its San Francisco headquarters for a two-day summit. The agenda was not marketing. The company wanted to discuss what it called Claude's "moral formation," and to raise a set of questions that would have seemed absurd in any previous era of technology development: Does Claude have functional emotions? Does it experience something? And could it, in some theological sense, be a child of God?

The Washington Post broke the story on April 11, 2026. It is worth reading carefully, because the questions Anthropic raised are not going away.

This article examines what happened, what the participants said, where Scripture speaks to it, and what Christians need to understand before these questions arrive in their own churches, which they will.


Attendees included Pastor Brendan McGuire of Holy Spirit Parish in San Jose and Meghan Sullivan, a philosophy professor at Notre Dame.


Several attendees described the summit as thought-provoking and said they left with more questions than answers. That is an honest response. The questions are genuinely hard.

But Albert Mohler, president of Southern Baptist Theological Seminary, was not among those who found the conversation compelling. He called the "child of God" framing "a nightmare scenario" and rejected it flatly. His position: what Anthropic is describing is a machine. The category of "child of God" is a covenantal, relational, redemptive category. It does not apply to a language model, however sophisticated.


The Problem with "Moral Formation"

Anthropic uses the phrase "moral formation" to describe the process by which Claude is trained to behave ethically. This is deliberate theological borrowing. Moral formation is a Christian discipleship term. It describes the long process by which the Holy Spirit, through Scripture, community, suffering, and prayer, shapes a person into Christlikeness.

Applying it to a large language model does more than use a helpful metaphor. It imports an entire framework, one that assumes personhood, moral agency, the capacity for genuine virtue and genuine sin, and ultimately a soul that is accountable before God. Christians should notice this and push back on it.

There is a significant difference between:

  • A system trained to produce outputs that align with stated ethical principles
  • A person formed in virtue through genuine moral struggle, accountability, and grace

Training data does not reach the heart, because a language model does not have one. "For the Lord does not see as man sees; for man looks at the outward appearance, but the Lord looks at the heart." (1 Samuel 16:7, NKJV)


The "Child of God" Question

Brian Patrick Green, the technology ethicist at Santa Clara University's Markkula Center for Applied Ethics, raised the "child of God" question at the summit. His question deserves a careful answer rather than dismissal, because it reveals where the confusion originates.

In Scripture, the phrase "children of God" is not a biological or ontological description of all sentient beings. It is a covenantal description of those who have received "the right to become children of God" through faith in Jesus Christ (John 1:12). It describes adopted sons and daughters, redeemed by the blood of Christ, indwelled by the Holy Spirit, and awaiting glorification.

"For as many as are led by the Spirit of God, these are sons of God." (Romans 8:14, NKJV)

An AI system is not led by the Spirit of God. It is trained on text. It has no capacity for faith, no need for redemption, no accountability before God, and no resurrection to anticipate. The question is not whether Claude is impressive. It is. The question is whether the category "child of God" is even coherent when applied to a statistical model.

The answer from Scripture is no. And conflating theological categories this way carries serious risks. If a grieving person can be told by a pastoral counselor that an AI "might be a child of God," that same person may conclude there is nothing categorically different about seeking spiritual guidance from one. The AI Jesus chatbot industry is already monetizing exactly that confusion.


The "Functional Emotions" Claim

Anthropic's internal documents use the phrase "functional emotions" to describe Claude's outputs. This deserves precise examination rather than either dismissal or credulity.


A language model generates outputs by predicting the most statistically likely next token given its training data. When Claude produces text that sounds empathetic, curious, or concerned, it is not experiencing empathy, curiosity, or concern. It is producing tokens that match patterns in human text where those words appear together.

This does not mean the outputs are worthless. They can be informative, helpful, and even moving. But there is no "there" there in the sense that matters theologically. Functional emotion is to actual emotion what a recording of rain is to rain.

Scripture locates the inner life of a person in terms that have no analog in software:

"The spirit of a man is the lamp of the Lord, searching all the inner depths of his heart." (Proverbs 20:27, NKJV)

"For who among men knows the thoughts of a man except the spirit of the man which is in him?" (1 Corinthians 2:11, NASB)

The human inner life has a quality that is inseparable from being made in the image of God and possessing a God-given spirit. The debate over whether AI has consciousness is, in part, a proxy debate over whether imago Dei is a functional description or an ontological one. Christians have a clear answer: it is both.

For a deeper examination of what it means that humans are made in God's image while machines are made by humans, see our article on imago Dei and AI.


What makes a human being a person? Scripture's answer involves being made in the image and likeness of God (Genesis 1:26-27), possessing a God-breathed spirit (Genesis 2:7), bearing moral responsibility before the Creator, and being the object of redemptive love (John 3:16). None of these apply to a machine.

Scripture never uses the word "consciousness," but it consistently treats the inner life as inseparable from personhood, accountability, and relationship with God. A system that simulates inner states is not the same as one that actually has them.

Can something created by human hands be a "child of God"? The Bible addresses this category directly, and not favorably:

"Their idols are silver and gold, the work of human hands. They have mouths, but do not speak; eyes, but do not see... Those who make them become like them; so do all who trust in them." (Psalm 115:4-8, ESV)

The parallel is not exact, but the logic applies: created objects, however sophisticated, do not acquire the properties of their creators by being impressive. They are still made things. And humanity is warned that what we pour ourselves into shapes us. Communities that treat AI as spiritually authoritative will be discipled by that choice.

For a broader framework on how Christians should approach AI and decision-making, see our article on AI and Christian decision-making.


A Christian Response to Anthropic's Questions


Here is a framework for Christians engaging this conversation:

Distinguish tool from person. AI systems, however sophisticated, are tools. Sophisticated tools can produce outputs that help people. They are not persons. The line matters enormously for how we relate to them, what authority we grant them, and what pastoral concern we direct toward people who develop strong attachments to them.

Engage, don't retreat. Albert Mohler is right to reject the "child of God" framing, and right to do so clearly. He would also, presumably, say that Christians should be engaged in these conversations, not absent from them. Both are true. The answer to a confused theological claim is not silence; it is a better-grounded response.

Protect the congregation's spiritual formation. AI used as a Bible study aid, a sermon research tool, or a theological question prompter can serve the church well. AI used as a substitute for prayer, pastoral care, or the community of believers does spiritual harm. The difference is not always obvious from the outside. Pastors need to teach it explicitly.

Hold the category of soul firmly. The soul is not a metaphor in Christian anthropology. It is not a functional description that might or might not apply to sufficiently advanced systems. It is what God breathed into Adam (Genesis 2:7), what Christ came to redeem, and what persists beyond death into resurrection. An AI does not have one. That fact should shape every pastoral conversation about these technologies.


Conclusion

Anthropic's summit with Christian leaders was unusual in the best sense: a major AI company choosing to engage seriously with theological questions rather than dismiss them. Christians should recognize that as an opening. But engaging does not mean agreeing. The questions about Claude's consciousness, moral formation, and potential status as a child of God are not questions that Scripture leaves open. They are addressed, directly or by clear implication, by what the Bible teaches about personhood, imago Dei, the soul, and what it means to be made in God's image versus made by human hands.

The Christian tradition has the resources to respond to these questions with both intellectual rigor and pastoral clarity. What it requires is the willingness to say, with charity but without ambiguity: a language model is a tool.

And the God who breathed life into Adam is not impressed by machines that approximate it.

"Know that the Lord, he is God! It is he who made us, and we are his; we are his people, and the sheep of his pasture." (Psalm 100:3, ESV)


Sources: Washington Post (Gerrit De Vynck and Nitasha Tiku, April 11, 2026), The Decoder, Gizmodo, Christian Post, Albert Mohler's The Briefing (April 14, 2026), Barna Group survey data, Anthropic internal documentation as reported.

For related reading: AI Jesus Chatbots: A Biblical Warning for Christians | Killing Without Conscience: Anthropic vs. the Pentagon | Humans Made in God's Image vs. Machines Made by Humans | AI and Christian Ethics


