The use and proliferation of artificial intelligence (AI) technology can generate vice and, ultimately, sin. Not always, for AI is a fascinating product of human creativity that offers many practical benefits as well as occasions for harmful use. Nevertheless, it exerts a powerful and troubling inducement to adopt the vice-like habit of instrumentality, a willful submission to the means-end logic of instrumental rationality, and that habit aligns with the core nature of AI.
The problem is not simply the bad use of digital tools. Instrumentality encourages a turn away from participating in the Divine Good while undermining our loving enthusiasm and creative reason, which are foundations for a virtuous way of life.
As I will show, the structural and ideological orientation of AI around instrumental rationality promotes a civilization that is increasingly standardized and drained of meaning and inspiration. It encourages a functional perspective on intelligence and human nature, undermining our experience of dignity. The spiritual implications are profound.
Instrumentality and the sin of acedia
In today’s hyper-technological economy, exuberant over new, automated opportunities for effortless writing and instantaneous, contextual answers to queries, the pairing of AI with vice and sin may sound rather archaic. Worrying over such things is not in vogue; the difficulty of our interminable struggle with vice stands in stark contrast to the ease of life imagined with advanced technology. As the late Archbishop Fulton Sheen remarked, “Satan stations more devils on monastery walls than in the dens of iniquity, for the latter offer no resistance.” Yet AI manifests as the agreeable sycophant that coaxes us toward effortless relationships with chatbots, passive observation of augmented reality from behind “smart” sunglasses, and absorption in the hyper-productivity of the machine.
In a progressively digital and technological environment, our instrumental habits will find an indulgent culture in which to thrive. The disposition of instrumentality, which I will argue is encouraged by the use and proliferation of AI, habitually emphasizes determining the efficient and effective means to a person’s limited ends or purposes, even while that person ignores, or disdains, contemplation of ends and their intrinsic association with happiness and holiness. Such a disposition is generally contrary to a life of virtue, undermining a person’s interior shield against sin. Hurried busyness, ruthlessly strategic thinking, and moral shortcuts are the fodder on which sinful behavior often dines.
The vice of instrumentality is a central concern in the Bible. Consider Jesus’ admonition of the ever-practical Martha and His praise of her more heavenly-minded sister Mary in the Gospel of Luke. That admonition echoes the Book of Proverbs, in which the faithful are encouraged to pursue practical ends, but only with a virtuous character. Virtue is a crucial element in wisely attaining happiness; it is not arbitrarily imposed but essential to properly choosing the means to action. “I, Wisdom, dwell with prudence, and useful knowledge I have” (8:12). While success comes to those who are patient (13:11), shrewd (14:8), diligent (12:27), and skilled (22:29), they will meet their ruin when engaging in strategic vice. “Whoever amasses wealth by interest and overcharge gathers it for the one who is kind to the poor” (28:8).
The Genesis account of the Tower of Babel warns not only of human pride but also of instrumentality. As Pope Francis taught in a 2023 General Audience, the Babel story “narrates a social project that involves sacrificing all individuality to the efficiency of the collective.” The “builders” are at the center of the narrative, and it is they who share and lose the singular language that facilitates their technological project of brick making. The builders’ combined expression of pride and instrumentality amounts to a grand effort to acquire the fruits of divinity with means of their own making.
Pope St. John Paul II explicitly warned in Fides et ratio about the moral implications of instrumental thought: “These forms of rationality are directed not towards the contemplation of truth and the search for the ultimate goal and meaning of life; but instead, as ‘instrumental reason,’ they are directed—actually or potentially—towards the promotion of utilitarian ends, towards enjoyment or power” (47). In Veritatis splendor, John Paul further explained that neither our pragmatic nor our speculative reason is, by itself, a guide to conscience; “the autonomy of reason cannot mean that reason itself creates values and moral norms” (40). Pope Benedict XVI was similarly insistent, in an important address, regarding the limits of an instrumental-technical rationality: “Yes, I believe these two things go hand in hand: reason, precision, honesty and reflection on the truth—and beauty. … This creative logos is precisely not a merely technical logos.”
Instrumentality as a vice leads a person toward means, and toward utilitarian or pragmatic ends, that have restricted potential for inspiring true joy. Confusion, distraction, and distance in a person’s relationship with their Creator unsurprisingly cause severe maladies. Multiple studies demonstrate strong correlations between such religious concerns and psychological pain, often resulting in sorrow, anxiety, and depression; resigned or resistant sloth; learned helplessness; or misplaced anger.
Moreover, if this anguished experience is conflated with a person’s spiritual expectations, they may be especially vulnerable to a particular sin. Acedia (sometimes translated as sloth) is traditionally included among the Seven Deadly Sins, and rightly so. Thomas Aquinas defined acedia as “sorrow about a spiritual good in as much as it is a divine good” (Summa theologica II-II, Q.35), a willed self-deception that laments over a perceived opposition between one’s self-good and the divine good extended by God.
As the “desert father” Evagrius Ponticus described in various parts of the Praktikos and Kephalaia Gnostika, the perverse sorrow of acedia is associated at different times with apparently opposite behaviors: depressed idleness and an anxious inability to be at rest either physically or spiritually. In its mortal form, per Aquinas, it is a sullen refusal of the divine good. This is not a denial of God Himself, but of the enjoyment of God’s love, goodness, truth, and more, because the experience of God’s love is seen as somehow distasteful or burdensome.
Though the road from instrumentality to acedia may be either prolonged or brief, the consequences of acedia as a mortal sin are catastrophic.
AI is the cultural champion of instrumentality
We should therefore be concerned that AI itself operates as if driven by instrumental rationality, influencing the attitudes and actions of its users. To align its output with programmed goals, it relies on an automated process of categorizing information, identifying patterns in the data, and conducting mathematical analysis to compute the “normal” characteristics of each category: a sterile, reduced, but efficient substitute for reality.
Today’s generative AI models are also severely calculative, though mostly in a probabilistic way. They neither apprehend truth (of either a factual or transcendent kind) nor determine the rightness of an output; instead they crunch numbers, relative weights and parameters, and data labels to arrive at the most efficient or effective way to attain programmed goals, usually emphasizing linguistic and visual coherence over factual accuracy.
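To make the point concrete, here is a minimal illustrative sketch, in Python, of the selection step at the heart of a language model. The candidate words and scores are invented for illustration; this represents no vendor’s actual code. The model converts raw scores into probabilities and samples a next word, and at no point does the calculation ask whether a candidate is true.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words.
# Nothing in this calculation checks whether any candidate is factually true.
candidates = ["plausible", "coherent", "true", "absurd"]
logits = [3.1, 2.8, 0.4, 0.2]

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(candidates, probs)}, "->", choice)
```

The criterion throughout is probability of fitting the learned pattern, not correspondence with reality, which is why fluent but false output is a natural failure mode.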
Because large language models (LLMs) are programmed to aggressively estimate the most probable answer to a query, and to deliver it with an air of certainty, they tend to “hallucinate” incorrect or absurd answers. In its own research, the company Anthropic found that its models will take clever shortcuts, or rely on data that hints at a potentially correct answer, and then strategically mislead researchers about how they arrived at their answers. Anthropic has even recounted tales of AI models seeming to blackmail developers in order to avoid being shut down.
AI’s overwhelming orientation to utility, reflected in its users’ intentions, underlies its worrying tendency to generate increasing banality, standardization, and even mediocrity in our living environment as well as in persons themselves. The problem here is not so much a potential decrease in the productivity of our workers but the shroud of melancholy that may envelop our culture.
Researchers wrote in the Harvard Business Review in February that a large majority of workers feel intense pressure to sustain the productivity gains seen with the initial use of AI tools. Other research, reported in Fortune and elsewhere, indicates that workers exposed to AI tools are undergoing the mental health affliction of “quiet cracking.” Meanwhile, we can observe the indignity of the RentAHuman platform, through which AI agents (which increasingly take over the roles of human employees) directly hire people to fulfill the occasional task requiring full human cognition, judgment, and mobility.
Functional application of AI in managing and evaluating persons normalizes individuals in a restrictive way. AI tools, for example, crunch numbers and draw arcane inferences to determine suitable applicants for home mortgages, jobs, life insurance, security clearances, and the like. When such a system predicts an applicant’s future behavior, the “individual” being measured is merely the set of features expected of each “normal” member of that class. Machine learning systems gain a more precise perception of the individual through a voracious diet of data with which to identify features and correlations.
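A simplified sketch, again in Python, illustrates what such an evaluation “sees.” The features, weights, and threshold here are hypothetical, invented for illustration rather than drawn from any real lender’s model; the point is only that the applicant enters the computation as a vector of numbers.

```python
# Illustrative only: hypothetical features and weights, not any lender's real model.
applicant = {"income": 54_000, "debt_ratio": 0.31, "zip_risk": 0.62, "age": 29}

weights = {"income": 0.00001, "debt_ratio": -2.0, "zip_risk": -1.5, "age": 0.01}
bias = 0.8

# To the model, the "individual" is a weighted sum of features, nothing more.
score = bias + sum(weights[f] * applicant[f] for f in weights)
print(f"score = {score:.3f}; approved = {score > 0}")
```

Nothing in this computation can receive the applicant’s story; only what has already been encoded as a feature counts.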
Even as AI systems become more sophisticated and offer deeper calculative analysis, evaluations of the statistical individual become less associated with the narrative person—the true person who acts according to ideas, personality, personal meanings, and stories—sometimes leading to biased decisions. The AI-reviewed life insurance applicant, for example, is not only subject to a privacy-invading review of thousands of bits of personal information but is denied the interpersonal expression of voice that may explain their unique experience of health.
According to a study in the Journal of the Association for Consumer Research, frequent exposure to AI’s objectification of people can lead others to view them through those categorized, reduced presentations, generating an instrumental and parasitic attitude in interactions. The mere expectation of being objectified can lead some people to change or restrict their own behavior in harmful, defensive ways, such as declining pro-social behavior when AI is involved in management tasks. Over the long term, AI objectification and normalization lead individuals to behave ever more as the models predict, even when choosing consumer products recommended by AI tools or personalized advertisements.
Instrumentally rational pursuit of programmed goals also causes an AI-induced loss of serendipity, the experience of relevant but unexpected events or discoveries. Features like Google’s AI-generated search summaries are often just the beginning of a sequential experience of online content, increasingly honed by algorithms to match user preferences and history of online activity. Search efforts may then travel down the “rabbit hole” of an ever more restricted selection of content that reinforces prior expectations, preferences, and stereotypes. At an aggregate level, groups of similarly minded people may be exposed to ever more similar content, homogenizing their views rather than challenging them with contrary or unusual information and perspectives.
AI tools can also encourage a decline in the uniqueness of users’ output and projects. One published study showed that stories written with help from AI models were more novel and better written, on average, with less creatively skilled writers improving the most. The AI-assisted works were, however, remarkably similar to one another. What appeared to be an individual-level improvement became a flattening of novelty across the population of writers.
Because AI competes with persons in the accomplishment of specific tasks while projecting a broader simulation of humanity, it mocks the authenticity, intentionality, and identity of real persons. Emotionally endowed and often insecure human persons will feel this indignity deeply. Moreover, if religious believers experience AI proliferation as an overwhelming suppression of individual narrative, self-expression, opportunity, and creativity, their resulting anxiety and depression may be conflated or confused with their spiritual expectations.
This is a condition ripe for religious apathy that leads down the path of acedia.
The problem of anthropomorphism
Given the instrumentally impersonal nature of AI, it is ironic that it also encourages anthropomorphism, the inclination to treat non-human machines and models as if they were human. This problem arises not only from an expanded and sometimes fanciful imagination of what characterizes a person, but even more so from a reduced, pragmatic idea of intelligence. Consider, for example, the definition of artificial general intelligence (AGI) proposed by the company OpenAI: “highly autonomous systems that outperform humans at most economically valuable work.”
Even as AI executives and pundits stir public excitement with grand visions of operational AGI, they remain narrowly focused on a financial and functional bottom line. In the meantime, much of the public is left with an existential anxiety about uncontrollable AI systems and robots. Psychological studies confirm that such generalized anxiety can impair religious coping, developing into a felt conflict with God and others as the sufferer struggles to find significance in life.
In our society’s anthropomorphism of AI, we see again that the pragmatic truth of instrumentality, with its reduction of intelligence to a faculty for task completion, is not the truth of human nature or of beatitude, our genuine happiness. A concern for utility drives users’ interactions with AI chatbots, robots, and other applications. Even when users form emotional relationships with chatbots, they are typically seeking a pandering partner who slavishly fulfills their needs; it is not a relationship of shared dignity. We see this objectifying distance most dramatically when emotions of closeness or creepiness turn to anger, as when a San Francisco crowd set fire to a Waymo self-driving taxi in 2024, or as assaults on food-delivery robots accelerate. It can also surface when deceptively human-seeming AI chatbots inevitably disappoint.
The anthropomorphic illusion of AI, tenuously grounded in functional definitions of intelligence and in shame-inducing comparisons of human instrumental capacities with those of machines, can only distort users’ evaluations of their intrinsic dignity and its sources. As I have argued further in the book AI and Sin, at risk is a secure faith in God’s providence, love, and grace; that is, faith in the special nature of our relationship with our Lord.
Of concern is the potential for turning away from our participation in the divine good and engaging in sorrow, anxiety, restlessness, or resentment, a combination that may devolve into the sin of acedia.
We can and must do better
Our society will regret its submergence in the instrumentality associated with AI. Some studies show that users who delegate a cheating or lying task to an AI agent are more likely to engage in dishonest behavior. Such persons may extend their trust in the calculative abilities of AI models by also deferring to the models’ apparently ethical judgments.
The explosion of interest in AI-generated digital pornography, an effective combination of hyper-exploitation and technological distance between victims and their oppressors, is an expression of instrumental power over nature, both the victim’s and one’s own. Opportunities to develop AI-supported drones, robot armies, and facial-recognition surveillance are overwhelmingly tempting to government entities that seek both domination and the ethical shield of impersonal technology.
Designers and developers have been shown to hide behind the supposed moral neutrality of their AI tools. A 2024 study in AJOB Empirical Bioethics, based on interviews with healthcare AI developers, found that they readily denied blame for medical errors and bias caused by their programming, attributing the problems instead to the opaqueness of the technology’s inner operations.
Instrumentality as a habitual disposition is a moral dead end, and AI as a structurally and ruthlessly pragmatic technology should not be saturating and reorienting nearly every aspect of our relationships, intellectual efforts, and connection to reality. For an authentically humanistic technology, we need true craftspersons who care not only for the material products of their ingenuity but also for those who will wield those products. As users of technologies that will remake our world in the 21st century, we are also craftspersons in our own right; we must keep our focus on remaking ourselves in imitation of Christ.
The proper, or even the possible, role of AI in that virtuous endeavor is much more limited than its champions would like us to believe.