Delegating Death
Artificial Intelligence, Moral Accountability, and the Anglican Theological Tradition
Introduction: A Dispute With Theological Implications
In late February 2026, the United States Department of Defense and the artificial intelligence company Anthropic arrived at an impasse that has been widely reported as a corporate or political dispute. At its core, it is a theological one. Anthropic declined to grant the Pentagon unconditional access to its Claude AI model, seeking to prohibit its use in fully autonomous weapons (systems capable of identifying and engaging targets without human authorization) and in mass surveillance of American citizens.1 The Department of Defense refused these restrictions and responded by designating Anthropic a ‘supply chain risk,’ effectively blacklisting the company from defense contracting. Anthropic subsequently filed legal challenges on First Amendment and due process grounds.
Behind this bureaucratic confrontation lies something far older and far more consequential: the steady technological erosion of the individually borne moral weight that has always attended the taking of human life. This essay argues that the trajectory from the rifled musket to the targeting algorithm represents not merely a series of tactical innovations but a coherent theological catastrophe: the progressive displacement of human moral agency from the act of killing, culminating in the prospect of death administered by autonomous artificial intelligence with no human conscience anywhere in the chain of causation.
The Maven Smart System, the AI targeting platform at the center of much of this controversy, is used across the Army, Air Force, Space Force, Navy, Marine Corps, and Central Command. It incorporates Anthropic’s Claude model, using it to generate targeting recommendations, simulate battlefield scenarios, and produce assessments that may directly inform strike decisions.2
The argument proceeds in five movements. First, I examine the theological anthropology of homicide, the claim, rooted in the Anglican and broader Christian tradition, that the taking of human life bears a moral weight that cannot be offloaded. Second, I trace the first great dissociation: the industrialization of warfare in the American Civil War. Third, I examine the catastrophic acceleration of this dissociation in the First and Second World Wars. Fourth, I engage the Anglican theologian Charles Raven, whose prophetic critique of mechanized warfare finds startling contemporary application in the age of artificial intelligence. Finally, I offer a theological response grounded in Anglican and Episcopal moral theology.
Part I: The Moral Architecture of Killing
The foundational Christian claim about killing is not primarily political or jurisprudential; it is ontological. Humans are created imago Dei (in the image of God), and this endowment is not accidental to who we are; it is part of what we are. To kill another human being is, therefore, not merely to stop a living organism from functioning but to destroy someone made in God’s very image and likeness.
This does not mean that the Christian tradition has taken an absolute prohibitionist stance on the taking of life. The just war tradition, developed through Augustine of Hippo, systematized by Thomas Aquinas, and transmitted through Anglican moral theology by figures including Richard Hooker, Jeremy Taylor, and Paul Ramsey, has always maintained that lethal force can be morally justified under circumscribed conditions.3 Yet my tradition is emphatic on one point: the moral weight of lethal action cannot be dissolved or transferred. The soldier who kills under lawful orders is not morally absolved. The commander who orders an engagement bears the deaths that result. Moral responsibility is not zero-sum; it is distributed, and the distribution does not relieve anyone in the chain.
The Hebrew Bible’s first theological reflection on homicide is shockingly intimate. After Cain murders Abel, God does not declare punishment from afar. God inquires, ‘Where is your brother Abel?’ (Gen. 4:9).4 The question is not asked because God does not know. It is asked because moral accountability is not merely a divine ledger entry; it is a confrontation, a face-to-face reckoning that requires the one who has acted to name what has been done. Cain’s attempt to deflect, ‘Am I my brother’s keeper?’, is not answered directly; it is overwhelmed by the counter-claim of Abel’s blood crying from the ground. The taking of life cannot be bureaucratized or depersonalized. It leaves a mark on the world that speaks.
The Anglican moral tradition has understood this with particular clarity in the context of warfare. The just war criteria (legitimate authority, just cause, right intention, last resort, proportionality, and the protection of noncombatants) are not merely regulatory conditions. They are the moral infrastructure by which the terrible burden of lethal force is kept tethered to human conscience and accountability. Each criterion places a human being in the position of having to make a judgment, bear a responsibility, and answer for a decision. Remove the human decision-maker, and the tradition does not merely become difficult to apply; it becomes theologically incoherent. There is no one left to ask, ‘Where is your brother?’
Part II: The First Dissociation — The Civil War and the Industrialization of Killing
The American Civil War represents a theological watershed in the history of warfare, though it is rarely framed in these terms. It was the first major conflict in which the Industrial Revolution was applied systematically to the business of killing, and the consequences were visible in casualty statistics that still stagger the imagination. Recent scholarship estimates the war’s death toll at between 620,000 and 750,000 military deaths, with some assessments reaching considerably higher when civilian mortality from disease, displacement, and direct violence is included.5
What produced this carnage was not primarily tactical incompetence, though that played a role in the early years. It was the collision of Napoleonic tactics with industrial-age weapons. The rifled musket extended the effective range of infantry fire from approximately fifty yards to three hundred yards or more, meaning that attacking troops crossed a killing ground three to six times longer than anything their tactical formations had been designed for.6 The soldier firing artillery from several hundred yards could not see the face of the man he was killing. He was operating a machine against a mathematical position.
This is the first theological rupture. The artilleryman’s moral experience of killing is categorically different from that of the infantryman. The infantryman who fixes a bayonet and charges must look into the eyes of another human being and perform an act of deliberate destruction. The moral weight is immediate and personal. The artilleryman fires at a position, at coordinates, at a tactical objective. The human being on the receiving end is, in his experiential world, an abstraction.
This is not to say that artillerists felt no moral weight; many clearly did, and the literature of the Civil War is full of accounts of soldiers wrestling with the violence they had inflicted and witnessed. But the structure of the moral experience was beginning to change. The act of killing was being refracted through layers of technological mediation that created what we might call, borrowing from Charles Raven, a ‘specialization’ of the moral agent: a person who performs one isolated technical function and who is structurally insulated from the full human meaning of what that function accomplishes.
Part III: Total War and the Collapse of Distinction, 1914–1945
If the Civil War represented the first great dissociation, the World Wars completed it. The First World War resulted in approximately seventeen to twenty million deaths, of which roughly seven to ten million were civilian, a proportion representing a qualitative shift in the nature of warfare.7 Poison gas, industrially produced and mechanically delivered, killed without discrimination and without the killer ever approaching within sight of the victim. Artillery fire became so dense and destructive that entire landscapes, and the people on them, were obliterated. New technologies and doctrines of aerial warfare made it possible, for the first time, to target civilians at great distances from any combat zone.
In the Second World War each of these trends intensified until, for the first time, civilian deaths outnumbered military deaths as a share of total casualties. Between seventy and eighty-five million people were killed, of whom fifty-five to sixty million were civilians.8 Allied and Axis strategic bombing rendered the combatant/noncombatant distinction moot in practice, if not in principle. The atomic bombings of Hiroshima and Nagasaki were the logical endpoint: maximum lethality delivered at maximum distance, with maximum abstraction from the human beings who were killed.
By 1945, the structure of killing in warfare had been transformed almost beyond recognition from its pre-industrial form. A bomber crew at thirty thousand feet had no more direct experiential connection to the people below than a chess player has to the pieces removed from the board. The moral weight of killing, the weight that Augustine and Aquinas had insisted must be borne by the one who kills, had not disappeared, but it had become progressively more difficult to locate, to assign, and to reckon with. This is in no way to diminish the moral injury and trauma with which soldiers return home.
Part IV: Charles Raven’s Prophetic Critique — The Sin of Specialization
Writing during and immediately after the Second World War, the Anglican theologian, biologist, and pacifist Charles Raven identified the catastrophe not primarily as a political failure, though it was that, but as the consequence of a deep civilizational rupture: the sundering of scientific knowledge from moral and spiritual accountability. In Science, Religion, and the Future (1943), Raven argued that scientists had become what he memorably called ‘organized in gangs,’ specialists who solved technical problems of warfare with great precision and no moral compass, because the very structure of their specialization had severed their technical expertise from ethical reflection.9
Raven’s argument was not anti-science. He was, famously, a distinguished biologist as well as a theologian, and his life’s work was the integration of evolutionary science with Christian theology.10 His argument was, rather, that science and religion had committed the same sin, the sin of fragmentation, of allowing their respective domains to become so specialized and self-referential that they lost the capacity to speak to each other and, in losing that capacity, lost the capacity to speak to the whole human being.
The result, in his analysis, was a civilization that had split its mind from its heart: technically capable of producing weapons of mass destruction and morally incapable of providing reasons not to use them. Scientists who placed their talents at the service of state-sponsored violence had not merely made a political error; they had committed a theological one. They had treated human beings as objects in a technical calculation, a precisely calibrated form of what the tradition names as sin.
Raven’s four theological pillars (divine immanence, the continuity of natural and supernatural, the organicism of life, and the sacramental character of the world) all converged on a single moral claim: that the world, and every human being in it, participates in and discloses the divine life. To treat a human being as a target coordinate, a data point in a targeting algorithm, or collateral damage in a statistical model of acceptable risk is not merely a moral failure. It is an act of desecration, a violation of the sacramental order in which every human face is, in some sense, the face of God.
“The disaster of the World Wars was the natural result of a civilization that had divorced knowledge from wisdom and efficiency from conscience.” —Charles Raven, Science, Religion, and the Future (1943)
C.S. Lewis, Raven’s Anglican contemporary, approached the same problem from a different angle, noting in his wartime writings the particular moral danger of modern warfare’s tendency to treat enemy populations as abstractions, as demographic categories rather than as persons bearing irreducible dignity.11
Part V: The Algorithmic Horizon — Artificial Intelligence and the Removal of Moral Agency
The dispute between the Pentagon and Anthropic brings Raven’s diagnosis into the twenty-first century with a precision that is almost uncanny. The Department of Defense has allocated at least $75 billion to AI-driven programs since 2016, a figure that excludes classified programs and those where AI’s role is unclear.12 The scale of this enterprise, and the speed of its development, represents an acceleration that dwarfs even the military-industrial mobilization of the World Wars.
The accuracy questions alone should give pause. Military assessments reportedly found that Maven’s algorithms could correctly identify a tank in favorable conditions approximately sixty percent of the time, a figure that fell to thirty percent in adverse weather.13 Foundation models generate analysis that can be false or misleading while sounding authoritative, a combination that is particularly dangerous in high-stakes, time-pressured military environments. Investigations of targeting practices in recent conflicts have found precisely this dynamic: AI-recommended targets approved without sufficient independent corroboration because analysts were under structural pressure to process them rapidly.14
But the accuracy problem, serious as it is, is not the deepest theological concern. The deeper issue is structural. The Palantir AIP system, which integrates Claude, can receive surveillance data, identify potential threats, generate multiple courses of action, assign weapons systems to targets, and present a complete battle plan for human review, all within seconds.15 The human being at the end of this process reviews and approves. But what does such approval mean? When a commander confirms a course of action generated by an algorithm under conditions of operational pressure and information overload, is that genuinely human moral decision-making, or is it the ritualized performance of decision-making that has already occurred in the machine?
Anthropic raised precisely this concern when it sought to prohibit Claude’s use in fully autonomous weapons. The Pentagon refused.16 What this refusal reveals is not merely a policy preference but an implicit anthropology, an understanding of what human beings are and what human moral agency means. The Pentagon’s position, functionally if not explicitly, is that human moral agency can be reduced to a node in a larger automated system; that ‘human in the loop’ can mean something indistinguishable from rubber-stamping; and that the accountability question does not require a human being who actually decides.
This is precisely the conclusion that the Christian theological tradition, and the Anglican tradition in particular, cannot accept. The just war tradition is not a set of constraints that happen to require human decision-making as an implementation detail. It requires human decision-making because moral accountability is constitutively human. Machines do not repent. Algorithms cannot be held accountable before God. Systems cannot be asked, ‘Where is your brother?’
Raven’s ‘sin of specialization’ reaches its logical terminus in autonomous AI warfare: a system so specialized that no human being anywhere in the chain can be said to have made the decision to kill. The scientist designs the algorithm. The contractor integrates the model. The military official approves the system’s deployment. The commander activates the operation. The AI selects and engages the target. Everyone has performed a specialized technical function. No one has killed. And so no one bears responsibility for the killing.
This is not the mitigation of the moral weight of homicide. It is the systematic evacuation of moral weight from the act of killing, through the progressive insertion of technological intermediaries until the human face that required accountability is no longer visible to anyone in the chain.
Part VI: An Anglican Theological Response
What, then, does the Anglican/Episcopal theological tradition require of us in this moment? First, it requires that we name what is happening accurately. The automation of killing is not a technological development that happens to have ethical implications. It is an anthropological claim, a claim about what human beings are, what moral agency means, and what accountability requires. The Church has both the authority and the obligation to contest that claim wherever it is made, including in the corridors of the Pentagon and the boardrooms of defense contractors.
The baptismal covenant, which stands at the center of Episcopal liturgical and moral theology, commits us to ‘respect the dignity of every human being.’17 This commitment is not qualified by national identity, combatant status, or the distance at which a weapon is deployed. It is the unconditional recognition of the imago Dei in every human face, the same recognition that Raven expressed in his sacramental theology of nature. When an algorithm generates a targeting recommendation without being able to see or respond to the humanity of the person targeted, it does not merely potentially violate this commitment; it is structurally incapable of fulfilling it.
Second, the tradition requires that we engage the just war criteria with full seriousness and resist any interpretation that would permit the evisceration of the human moral agency they presuppose. Raven’s contention that modern warfare had made just war theory obsolete, Paul Ramsey’s insistence on the moral immunity of noncombatants, Oliver O’Donovan’s analysis of the public and political character of legitimate armed force: all of these resources converge on the same demand. The decision to use lethal force must remain in human hands, must be made by a human being who will bear its weight, and must be answerable before both human and divine judgment.18
Third, and following Raven directly, the Church must challenge the fragmentation of knowledge that permits AI to be deployed in warfare without integrated moral accountability. Raven called for a ‘New Reformation’ that would reintegrate scientific and moral intelligence.19 In 2026, we need something analogous: a sustained public theological engagement with the ethics of artificial intelligence that refuses to permit these questions to be settled by defense contractors and military planners alone. The Church must insist on a seat at the table, not to bless or to veto, but to represent the one constituency that the algorithms cannot represent: the human beings who will die.
The Episcopal Church and the broader Anglican Communion are not without resources for this task. Our tradition has consistently insisted on the integrity of creation, the dignity of the human person, the public accountability of power, and the moral limits of the state. We have a theology of peace that is more than mere opposition to conflict: it is a positive vision of the world as God intends it, shot through with the conviction that every human life participates in and reflects the divine life. That theology must now be brought to bear, with all the intellectual rigor and pastoral urgency we can muster, on the question of autonomous lethal force.
Conclusion: The God Who Asks
Humans will continue to make war, to kill indiscriminately, and to attempt to outpace their enemies in technology; it has ever been so. Nevertheless, there is a question that runs through the entire Christian tradition on the taking of human life. It is not the question of legality or military necessity or proportional response. It is the question that God asked after the first recorded act of homicide: ‘Where is your brother Abel?’
The history of warfare technology since the Industrial Revolution can be read as a sustained collective effort to escape that question, to place enough distance, enough machinery, enough algorithmic intermediation between the killer and the killed that no one anywhere in the chain must answer it. The Civil War artilleryman could not see the face of the man his shell killed. The bomber crew could not hear the voices of the people below them in the burning city. And now, in the age of autonomous AI targeting systems, we contemplate a future in which no human being in the entire chain of decision, deployment, and execution has in any meaningful sense chosen to kill the person who dies.
The Church cannot bless this trajectory. It cannot bless it because the question ‘Where is your brother?’ is not a bureaucratic inconvenience to be solved by better institutional design. It is the fundamental moral question, the question that names what killing is, what accountability means, and what it costs to be a moral agent in a world where every human being bears the image of God.
Charles Raven saw, in the mechanized slaughter of the World Wars, the consequence of a civilization that had split its mind from its heart. The age of artificial intelligence represents the potential completion of that split, the moment when the mind, in the form of algorithmic intelligence, severs itself entirely from the heart that might feel the weight of what the mind has done.
The Anglican tradition, at its best, has always insisted that these cannot be separated, that knowledge and wisdom belong together, that capability and conscience must walk in step, that the God who is Creator Spirit indwells the natural world and meets us in the face of every human being, including the face of the enemy. It is the tradition’s task, in this moment, to say so clearly, publicly, and without equivocation: we will not delegate death to machines, because the God who asks ‘Where is your brother?’ is asking us.
Notes
1. Amos Toh and Emile Ayoub, “The Military’s Use of AI, Explained,” The Intercept, March 12, 2026. See also Caroline Haskins, “What We Know About How US Military Officials Are Using AI Chatbots,” WIRED, March 2026.
2. Haskins, “What We Know About How US Military Officials Are Using AI Chatbots.” The Maven Smart System is managed by the National Geospatial Intelligence Agency and used across the Army, Air Force, Space Force, Navy, Marine Corps, and US Central Command.
3. Augustine of Hippo, Contra Faustum, 22.74; Thomas Aquinas, Summa Theologiae IIaIIae, q. 40; Paul Ramsey, The Just War: Force and Political Responsibility (New York: Charles Scribner’s Sons, 1968).
4. Genesis 4:9–10. See Walter Brueggemann, Genesis, Interpretation: A Bible Commentary for Teaching and Preaching (Atlanta: John Knox, 1982), 55–61.
5. J. David Hacker, “A Census-Based Count of the Civil War Dead,” Civil War History 57, no. 4 (2011): 307–48. Hacker’s revised estimate substantially raised earlier figures, suggesting totals of 650,000–850,000.
6. Paddy Griffith, Battle Tactics of the Civil War (New Haven: Yale University Press, 1989); Earl J. Hess, The Rifle Musket in Civil War Combat: Reality and Myth (Lawrence: University Press of Kansas, 2008).
7. John Keegan, The First World War (London: Hutchinson, 1998), 3–7. Civilian death estimates for the First World War are contested; figures of seven to ten million include deaths attributable to disease, famine, and displacement caused by the conflict.
8. Gerhard Weinberg, A World at Arms: A Global History of World War II, 2nd ed. (Cambridge: Cambridge University Press, 2005), 894. The proportion of civilian deaths exceeding military deaths in the Second World War is widely recognized; precise figures vary by methodology.
9. Charles Raven, Science, Religion, and the Future (Cambridge: Cambridge University Press, 1943), 47–52. Raven’s phrase “organized in gangs” describes the specialization of scientific labor in service of military objectives.
10. Raven, Science, Religion, and the Future, 12. For Raven’s broader theological project, see John Polkinghorne, “Charles Raven: Prophet of Dialogue Between Science and Theology,” Zygon 34 (1999): 615–25.
11. C.S. Lewis, “Why I Am Not a Pacifist” (1940), in The Weight of Glory and Other Addresses (New York: HarperCollins, 2001); “The Humanitarian Theory of Punishment,” in God in the Dock (Grand Rapids: Eerdmans, 1970).
12. Toh and Ayoub, “The Military’s Use of AI, Explained.” The $75 billion figure is drawn from a Brennan Center report cited therein and excludes classified programs and those where AI use is not fully documented.
13. Ibid. The tank identification accuracy figures — approximately 60% in favorable conditions and 30% in adverse weather — are from 2024 military assessments of Maven’s performance.
14. Ibid. On the structural pressure on analysts to approve AI-recommended targets rapidly, the authors cite media investigations of Israeli Defense Forces targeting practices in Gaza.
15. Haskins, “What We Know About How US Military Officials Are Using AI Chatbots.” The Palantir AIP demo described by Haskins illustrates the full battle planning sequence — from surveillance alert to troops receiving orders — accomplished within seconds via an AI assistant.
16. Toh and Ayoub, “The Military’s Use of AI, Explained.” The Pentagon’s designation of Anthropic as a “supply chain risk” followed the company’s refusal to grant unconditional access to Claude.
17. The Book of Common Prayer (New York: Church Publishing, 1979), 305. The baptismal covenant question reads: “Will you strive for justice and peace among all people, and respect the dignity of every human being?”
18. Paul Ramsey, The Just War, 153–57; Oliver O’Donovan, The Just War Revisited (Cambridge: Cambridge University Press, 2003), 11–30.
19. Raven, Science, Religion, and the Future, 89. Raven’s call for a “New Reformation” integrating scientific and moral intelligence appears in his conclusion to the 1943 lectures.



As if to prove the point of my essay…
https://artificialbureaucracy.substack.com/p/kill-chain