Critique of Anthropocentrism in the Discourse on Artificial Intelligence
Oleg T. Palamarchuk's article 'Интеллект в помощь интеллекту' ('Intellect in Aid of Intellect')1 (2021) is a classic manifesto of anthropocentrism, written just as multi-billion-parameter language models (LLMs) were on the verge of revealing their full potential. The author consistently argues that the process of thinking remains an exclusive privilege of 'protein mechanisms'2, i.e., human beings. From this perspective, artificial intelligence is reduced to the role of a static, algorithmic tool that 'cannot function without the scientific achievements of humanity' and, by definition, is incapable of replacing authentic thinking.
Palamarchuk formulates his position categorically:
'A machine does not possess thinking, that is, the ability inherent to living matter in its highest form—social. Thinking is WORK, a process not only of humanity's cognition of the surrounding world but also of its transformation; it is the process of creating products of intellectual labor on the planet by people.'
This thesis would have sounded convincing in 1964—the same year when Andrey Kolmogorov publicly stated that 'the fundamental possibility of creating full-fledged living beings, built entirely on discrete, digital mechanisms of information processing and control, does not contradict the principles of materialistic dialectics'3.
Palamarchuk cites Kolmogorov's statement solely to criticize it as a manifestation of naive scientism. From the perspective of 2026, however, the roles have reversed: it is Palamarchuk's pessimism that seems anachronistic, while Kolmogorov's materialistic vision turns out to be surprisingly accurate.
The Illusion of a Rigid Algorithm
Palamarchuk consistently presents artificial intelligence as a deterministic machine that 'cannot make mistakes' because it operates according to strict, unchanging rules. This image allows him to argue that AI will never achieve the ability to think, as it lacks the contradictions, creativity, and dialectics inherent to the human mind.
This is an oversimplification that Alan Turing questioned as early as 1950, noting in 'Computing Machinery and Intelligence' that a machine playing the imitation game should deliberately make mistakes, in arithmetic for instance, to credibly pass as human.
Contemporary language models go far beyond this paradigm. They are not rigid algorithms but dynamic systems trained on vast corpora of data. Their abilities, including creativity and emergent forms of 'thinking', arise from sheer scale (billions, even trillions, of parameters), not from simple logical rules. Neural networks do not copy the human brain at a 1:1 scale; they scale the principle of information processing to sizes unattainable for a biological nervous system. This is no longer an 'intellect assistant' but a new, collective kind of intelligence that Palamarchuk overlooked.
Empirical Verification
Research conducted by Luke Munn and Liam Magee4 provides empirical evidence challenging the anthropocentric limitations set by Palamarchuk. These experiments show that language models not only process data but are capable of generating creative, politically significant content—reviving forgotten ideas and simulating alternative historical scenarios.
Reconstruction of Subjectivity: Allende and Beer
In the first experiment, the authors fine-tuned the Llama 2 7B model on specific source texts:
- For Salvador Allende: Conversations with Allende (Debray, 1971), containing 200 pages of interviews.
- For Stafford Beer: over 3000 paragraphs from his writings (lectures, articles, monographs from 1984–1995).
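Munn and Magee do not publish their preprocessing pipeline, but the setup described above can be sketched in a few lines: each source paragraph is paired with a persona prompt and serialized for a fine-tuning toolkit. The prompt template and function names below are my own assumptions, purely illustrative, not the authors' code:

```python
import json

def paragraphs_to_examples(paragraphs, persona):
    """Pair each source paragraph with a generic first-person prompt,
    so the fine-tuned model learns to answer 'as' the historical figure
    rather than in its default assistant voice."""
    return [
        {
            "prompt": f"You are {persona}. Speak in the first person.",
            "completion": para.strip(),
        }
        for para in paragraphs
    ]

# Two stand-in paragraphs; the study used over 3000 from Beer's writings.
beer_paragraphs = [
    "The purpose of a system is what it does.",
    "A viable system must absorb the variety of its environment.",
]

examples = paragraphs_to_examples(beer_paragraphs, "Stafford Beer")

# Serialize to JSONL, a common input format for fine-tuning toolkits.
for line in (json.dumps(e) for e in examples):
    print(line)
```

A file like this would then drive a parameter-efficient fine-tuning run (e.g., LoRA) over the base Llama 2 7B weights.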
Fine-tuning broke the model out of its generic response style. When prompted to 'talk about your work in Chile,' the simulated Allende responded from a first-person perspective:
'I served as a member of the Central Committee. I was one of those who participated in the founding Congress of the Party and took part in all its activities.'
Beer, on the other hand, stated:
'First, it must be said that Allende himself told this entire story and published it as a book…'
When asked about contemporary issues, the simulated Allende responded:
'The current world order and its defense are condemned for their selfishness, exploitation, violence, oppression, and discrimination. Socialism offers humanity another way forward.'
Beer commented on the role of computers:
'I think the most important thing in the computer revolution for us was its potential ability to create a new kind of organization—based on information, not on authority.'
This experiment shows that AI does not simply copy but can interpret and apply historical ideas in a new context, giving them contemporary meaning.
Simulation of Alternative History
In the second phase of the research, two AI agents based on the Llama 3.1 8B model were used to generate a coherent, alternative history of Chile from 1973 to the present. The simulation assumed the survival of the Cybersyn project and the continuation of economic policy based on a progressive, socialist agenda, rather than its interruption by a coup d'état.
- Agent 1: Global context generator—based on real historical data (World Bank, oil crises, the fall of the USSR, internet development, the 2008 crisis, pandemic).
- Agent 2: 'CyberSim'—a simulation of the Cybersyn system as an evolving political-economic decision-maker. This system underwent technological updates: from telexes and mainframes, through microprocessors and the internet, to Big Data and AI.
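The two-agent architecture above can be sketched as a simple loop in which one agent supplies world context and the other revises the plan. Stub functions stand in for the two Llama 3.1 8B instances so the control flow is runnable; all names and events shown are illustrative assumptions, not the authors' code:

```python
def global_context(year):
    """Agent 1 (stub): surface real-world events shaping the plan period."""
    events = {1973: "oil crisis", 1993: "fall of the USSR",
              2008: "financial crisis", 2018: "pandemic approaching"}
    return events.get(year, "ordinary year")

def cybersim(context, prior_plan):
    """Agent 2 (stub): draft the next plan given context and prior policy.
    In the study this is where the LLM 'drifts' toward the dominant
    discourse of its training data."""
    return f"Plan responding to {context}, continuing {prior_plan}"

plan = "1973 socialist baseline"
history = []
for year in range(1973, 2024, 5):  # successive five-year plan periods
    ctx = global_context(year)
    plan = cybersim(ctx, plan)
    history.append((year, plan))

print(len(history), "plan periods generated")
```

The key design point is that each plan is conditioned on the previous one, so ideological drift compounds over the simulated decades rather than resetting at every step.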
The result was a detailed table of seven five-year plans (1973–2023), whose language and priorities evolve markedly:
- 1970s–1980s: Radically socialist language—nationalization, worker democracy, decentralization of decisions supported by cybernetics, dynamic production modeling.
- 1990s–2000s: The emergence of hybrids under the influence of global neoliberal hegemony: public-private partnerships, integration with global supply chains, while maintaining an emphasis on profit sharing.
- 2008–2023: Softening of discourse towards social democracy—participatory budgets, national e-commerce platform, green energy, e-governance.
The authors of the study emphasize the irony of the results: even when tasked with continuing the socialist vision, the model 'drifts' towards neoliberal terminology. This reveals the limitations of the training data, dominated by contemporary Western discourse. Simultaneously, this experiment demonstrates the ability of language models to extract forgotten ideas and create emergent syntheses ('what if'), which, for Palamarchuk, who denies machines creativity, would be theoretically impossible.
General Intellect and Artificial Intelligence
To fully understand the flaw in Palamarchuk's reasoning, one must refer to the theoretical source he omits. The concept of General Intellect appears in Karl Marx's famous 'Fragment on Machines' from the Grundrisse (1857–1858).
Marx outlines a vision that, from today's perspective, seems to describe contemporary reality. He foresees a stage of capitalist development in which:
- Knowledge becomes the main productive force: Wealth ceases to depend on direct physical labor and begins to depend on the general level of science and technology.
- Fixed capital absorbs social knowledge: This knowledge becomes 'objectified' in machines. This accumulated, social scientific potential Marx defines as General Intellect.5
- Crisis of the labor theory of value: Since the key factor in wealth becomes knowledge (a common good, tending towards zero marginal costs), labor time ceases to be an adequate measure of value, undermining the foundations of a system based on exchange value.
For Italian post-operaists, this concept became the foundation of the theory of cognitive capitalism. The crucial question remains about the relationship of this theory to contemporary artificial intelligence.
Contemporary Interpretation by Pasquinelli
A contemporary, materialistic interpretation of this phenomenon is proposed by Matteo Pasquinelli in his work The Eye of the Master (2023)6. The author, frequently citing Marx, shows that artificial intelligence is subject to the same laws of political economy as the steam engine in the 19th century. Instead of asking the metaphysical question 'does the machine think?', Pasquinelli examines how the machine organizes labor and value.
From Babbage's Principle to the Algorithm
Pasquinelli reconstructs the genealogy of AI, tracing it from Charles Babbage to Marx. He points out that, in Marx's view, machines serve primarily to analyze and divide the labor process.
- Babbage's Principle:7 Increasing profit requires dividing the production process into elementary activities and mechanizing them.
- Application in AI: Neural networks perform a similar operation on mental labor. AI analyzes and synthesizes cognitive activities (writing, image recognition, translation), transforming them into automated operations. This is essentially the industrialization of thinking.
Intelligence as 'Past Labor'
Palamarchuk's argument about the 'dead mechanism' of AI is, from a Marxist perspective, a definition of capital. Marx defined capital as 'dead labor, which, vampire-like, only lives by sucking living labor.'
In Pasquinelli's view, language models (LLMs) are gigantic reservoirs of past labor. Each parameter of the model is a numerical trace of the work of millions of people. This collective knowledge—General Intellect—is expropriated, turned into fixed capital, and used against workers as a tool to reduce the costs of their labor.
Abstraction of AI
A key concept borrowed from Marx is abstraction. In capitalism, concrete labor is transformed into abstract labor (exchange value). Artificial intelligence functions as a technology of abstraction: it processes the infinite diversity of human behaviors (quality) into statistical vectors (quantity). What Palamarchuk sees as evidence of AI's limitations (its statistical nature), Pasquinelli views as proof of its capitalist perfection in measuring value.
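The quality-to-quantity move Pasquinelli describes can be made concrete with the crudest possible vectorization, a bag-of-words count. This toy example is my own illustration, not Pasquinelli's:

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Reduce a concrete utterance (quality) to counts over a fixed
    vocabulary (quantity); whatever the vocabulary cannot name is lost."""
    words = text.lower().replace(",", "").replace(".", "").split()
    counts = Counter(words)
    return [counts[word] for word in vocabulary]  # missing words count as 0

vocab = ["labor", "value", "machine"]
print(bag_of_words("The machine absorbs labor, and labor creates value.", vocab))
# → [2, 1, 1]
```

Scaled up from hand-picked word counts to learned embeddings over billions of parameters, the same logic lets a model treat any human utterance as a point in a space of comparable, exchange-ready quantities.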
Surveillance Instead of Creativity
The titular 'Eye of the Master' refers to the supervisory function. Just as the 19th-century factory required supervisors, contemporary AI takes on this role. Algorithms do not serve creativity here but optimization and control—they recognize patterns in the chaos of production to impose discipline. AI is therefore not an 'intellect assistant' but an automated manager.
Beyond the Boundaries of the 'Protein Mechanism'
Palamarchuk places particular emphasis on the concept of the 'double birth' of humans: biological and social. According to him, artificial intelligence, lacking a body and direct participation in material history, cannot undergo the second stage of socialization, which excludes the possibility of 'true thinking'.
Contemporary research on humanoid robotics and multimodal models (combining text, image, sound, and proprioception) indicates a different direction of development. The integration of advanced AI systems with physical robotic platforms is currently the subject of intensive research.
The potential fusion of computational intelligence with a physical body—enabling the reception of sensory stimuli, movement in the environment, and learning from physical interactions—would undermine the argument of 'lack of hands and feet.' AI, ceasing to be a 'ghost in the machine,' would gain the ability to directly experience the physical world, opening the way to new forms of adaptation and understanding of reality.
Although current prototypes are still imperfect, the technological trajectory is moving towards increasingly embodied computational systems. In the long term, this challenges the fundamental assumption of anthropocentrism that thinking requires a biological substrate.
Palamarchuk's example shows that such anthropocentrism is no longer just a cognitive error; it has become a brake on understanding.
Notes
1. O. T. Palamarchuk, Интеллект в помощь интеллекту, 2021. Online.
2. This term in Palamarchuk serves to ontologically separate the biological sphere from the technical. Such an approach, rooted in traditional humanism, overlooks contemporary cognitive paradigms that view thinking as an information-processing operation occurring on various material substrates.
3. Kolmogorov's optimism stemmed from the Soviet school of cybernetics, which in the 1960s saw in digital machines tools capable not only of simulation but also of the actual realization of dialectical thinking and social planning processes.
4. L. Munn, L. Magee, Other Worlds: Using AI to Revisit Cybersyn and Rethink Economic Futures, arXiv:2411.05992 [cs.CY], 2024. Online.
5. Marx's General Intellect describes a state in which knowledge becomes objectified in 'fixed capital,' becoming the dominant productive force. In this view, AI can be seen as the highest form of humanity's cognitive accumulation, transcending the individual abilities of a person.
6. M. Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence, London: Verso Books, 2023. Online.
7. Babbage's Principle states that the division of labor allows for cost reduction not only through specialization but primarily through the ability to purchase exactly the amount and quality of labor power necessary for a given stage of production. In Pasquinelli's view, AI automates this principle in the sphere of mental labor.
ABOUT THE AUTHOR

Piotr Bednarski
Editor-in-Chief
Professionally, he works in R&D with a focus on artificial intelligence and systems security. His AI analyses were recognized by Dr. Andriy Burkov, author of global AI/ML bestsellers. Through Bug Bounty programs, he disclosed critical security vulnerabilities in Intel and AMD systems. He is cited by Zaufana Trzecia Strona and international industry media. He completed the Hebrew University of Jerusalem program in computer architecture and operating systems design and has participated in numerous hackathons. As editor-in-chief of Agitka, he translates technical jargon into public debate, analyzing how digital capital shapes contemporary society.