“Humanism” and Ethics in the Age of AI: Why the Relentless Negativity?
The strangest thing about self-anointed “humanists” is how little faith they have in actual humans to respond to—and benefit from—technological change. I have spent many years analyzing this phenomenon, both in the various academic fields that make up Science and Technology Studies (STS) and in popular writing about technology more generally.
As I noted in a 2020 book chapter on “Humanism, Ethics, and Responsible Innovation,” plenty of STS scholars and tech critics run around calling themselves humanists these days, but what binds most of them together is the (often unstated) notion that technology and progress are somehow at odds with humanity and human flourishing. There is a profound negativity bias at work in most technology writing—especially among ethical philosophers—that implies humans are either essentially powerless in the face of technological change or fundamentally incapable of coping with it. After the requisite amount of hand-wringing and preachy pessimistic predictions, the critics inevitably declare that SOMETHING MUST BE DONE, and all roads typically lead back to some application of the Precautionary Principle to counter the worries at hand. [I wrote a book about that a decade ago.]
Basically, the self-declared humanists want to stop the clock on all sorts of progress until all their worries and recommended correctives are heeded. From their perspective, a world full of tool-builders and new technological innovations is just a world full of trouble for humanity, and risk-taking is just too risky, even though trial-and-error experimentation is what propels human progress. These critics also lack any serious theory of human adaptation and resilience, even though actual human beings have proven again and again that they can use their tools (technologies!) to improve their lot in this world in both material and emotional ways.
This relentless negativity about technological change remains highly prevalent among ethical philosophers and the pundit class, and the trend has been supercharged in the age of AI. Everywhere you look today you find philosophers and pundits demanding “Human-centered” AI, “Human-focused” AI, “Human-first” AI, “Pro-human” AI, and, of course, reminding us that we must all play for “Team Human.” My first thought whenever I read such things is: who could possibly be against any of this? But my second thought is: what the hell do they even mean by these terms? Again, what they implicitly mean to say is: slow progress until we philosopher-kings let you know whether it is safe to move forward. Which is usually to say, never.
Sadly, all too often there are built-in incentives within various STS fields, especially philosophy, to lean in the negative direction, even though the entire course of human civilization suggests that such relentless negativity toward technological change is unwarranted. “AI scholarship is now basically doom porn, with scholars trying to one-up each other’s tales of a looming techno-apocalypse,” I noted in a 2023 essay on this topic. “The AI academic community today has become a close-minded mono-culture. If you dare suggest AI can do any good, you’re practically chased out of the room or accused of being an unthoughtful oaf.”
An important new article by Peter Königs, “The Negativity Crisis of AI Ethics,” discusses this phenomenon in detail. “Positive perspectives on artificial intelligence are few and far between” in the field of AI ethics, notes Königs, a professor in the Department of Philosophy and Political Science at TU Dortmund University. “Artificial intelligence holds enormous potential for humanity. Yet, the myriad of ways in which it can make the world a better and more just place receive short shrift by AI ethicists,” he argues.
Why is that? Königs says that the “rising tide of panic” can be explained by the conjunction of three factors, or institutional norms, within the field. He labels these factors Subject-Matter, Positive Impact, and Incentives, and summarizes them as follows:
Subject-Matter, then, means that AI ethicists are mostly in the business of commenting on new AI technologies, rather than solving existing philosophical puzzles. Positive Impact means that AI ethicists are more or less barred from doing so [i.e., from making a positive impact] by saying positive things about AI. The third institutional factor – Incentives – simply refers to the fact that academics have to publish in order to have a career. Not writing papers about AI is not an option for AI ethicists.
He unpacks each of these explanations in detail and offers a compelling case for why we should basically never expect AI ethicists to give anything more than casual lip service to the benefits of advanced algorithmic technologies or computational capabilities. “It is likely that, as AI ethics continues to boom, the tide of panic will continue to rise, largely independently of how ethically problematic AI really is,” Königs argues. He even floats a “nuclear option” for the field, which, he says, “would be to deliberately shrink the field of AI ethics, for its own sake.” He concludes that “the field has grown somewhat out of proportion. The amount of funding allocated to AI ethics is not only questionable given opportunity costs and diminishing marginal returns. It might also actively undermine the reliability of the epistemic system by creating a mismatch between the size of the AI ethics community and the quantity and severity of problems to be researched.”
These are brave words coming from someone looking to make a career in philosophy and political science! I applaud Professor Königs for having the courage to take this stand, and I wish more people in the field would join him. Alas, taken together, the three factors Königs identifies make it very hard for ethicists to break with the pack and offer an alternative narrative about technology and how humans generally benefit from it. As Königs notes, you would be unlikely to have a long career in the field if you chose to tell a positive story about AI, or about technological change more generally:
AI ethicists must point out ethical issues with AI, or else they risk obsolescence. As a result, their portrayal of AI’s ethical implications is problematically skewed towards the negative in two ways: it is one-sided (positive aspects are rarely discussed), and it is negatively biased (negative aspects are exaggerated). We should therefore be skeptical of the predominantly negative narrative put forth by the AI ethics community and think about ways to reform the field.
This is a bleak but realistic assessment. When I rise to speak at academic conferences filled with STS folks, I’m greeted with angry stares, gloomy questions, and a lot of finger-wagging. But I usually give these folks some real hell right back, because if you are going to casually throw around the term “humanism” while (a) never bothering to define what you mean by it and/or (b) implicitly suggesting that technological advancement is antithetical to actual human flourishing, then it is going to be very hard for me to take you seriously.
In recent years, a few other scholars have bravely stepped forward to address these same themes and challenge the prevailing pessimistic narrative around AI ethics. Some of my favorites include Orly Lobel’s “The Law of AI for Good” and a series of essays by Konstantine Arkoudas (most notably “On AI Ethics and Regulation,” “AI Risks vs the Risks of Plain Bad Code,” and “The Precautionary Principle”).
But my very favorite essay on this subject came from Andrew McAfee of MIT back in 2015. In a wonderfully in-your-face essay for the Financial Times entitled “Who Are the Humanists, and Why Do They Dislike Technology So Much?” McAfee had very clearly grown tired of hearing the self-anointed humanists preach about the evils of innovation and pretend that they spoke for all of society while doing so. What makes it all the more insulting is the way they use banalities and flowery, undefined phrases to try to shut down debate altogether. McAfee describes this approach as follows:
The third sense of “humanist” is by far the most problematic. It’s close to: “Because I am for the people I should be free from having to support my contentions with anything more than rhetoric.” Or, more simply: “You can trust what I say, because I am on the side of people instead of the cold, hard machines.” Well, no. We should evaluate what you say based on the quality and quantity of evidence you’ve marshaled, and on the rigour with which you have analysed and presented it. If this sounds like an argument in favour of the scientific method, that’s because it’s exactly what it is.
Exactly right. As I noted when writing about this in my last book:
it is not enough for critics to insist that technological innovation is “anti-human” or “de-humanizing” and to use such rhetorical ploys as the basis for rejecting any particular innovation. Critics bear some burden of proof regarding the harms that they allege, and they must be willing to acknowledge that there are trade-offs associated not only with new technologies, but also with the remedies they propose to any alleged downsides.
If you get the sense I’m a little angry about all this, you’re right. I am tired of people getting on their moral high horse and describing themselves as humanists when they fail to actually believe in humans and their remarkable capabilities! Properly understood, technology and humanism are complements, not opposites. Technological innovation is fundamentally about improving our humanity by bettering our own lives as well as the lives of those around us, and even those far removed from us. Benjamin Franklin once noted that “man is a tool-making animal” by his very nature, because our tools are an extension of our minds and of our desire to improve our lives and the world. All technology is the product of human design and action, and there are few things more “humanist” than creating tools to solve the pressing problems that real humans face every day. Thus, one can simultaneously believe in “the centrality of humankind to the universe” and in the notion that technological innovation is central to humankind’s ability to improve our existence.
It is time to take back humanism from the humanists.
Additional Reading:
“Humanism, Ethics, and Responsible Innovation,” in Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments (2020)
“Humans and Technological Change: Fragility or Resilience?” (2023)
“What If Everything You’ve Heard about AI Policy is Wrong?” (2023)
“Coping with Technological Change,” in Challenges in Classical Liberalism: Debating the Policies of Today Versus Tomorrow (2023)
“Defending Technological Dynamism & the Freedom to Innovate in the Age of AI” (2025)
“Muddling Through: How We Learn to Cope with Technological Change” (2014)
“Failing Better: What We Learn by Confronting Risk and Uncertainty” (2016)
“What Does It Mean to ‘Have a Conversation’ about a New Technology?” (2013)
“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle” (2013)