1. What is to be done? Beyond the “solutions which are already there”

Bernard Stiegler’s final works have telling titles: Automatic Society, Volume 1: The Future of Work (2015), The Age of Disruption: Technology and Madness in Computational Capitalism (2016), Qu’appelle-t-on panser ? 1. L’immense régression; 2. La leçon de Greta Thunberg [What is called caring? 1. A massive regression; 2. A lesson from Greta Thunberg] (2018 [vol. 1], 2020 [vol. 2]), and Bifurcate: ‘There is No Alternative’ (2020)[1]. In these works, Stiegler elaborates his new approach to political economy. Whereas in the middle stage of his writings the philosopher’s project dealt with the problems of consumer capitalism and offered an alternative to the outdated production-consumption model[2], his most recent works address the new challenges posed by computational capitalism, which uses automated calculation to exercise algorithmic control over all aspects of life. The last project Stiegler initiated, which calls for a reconstruction of the theoretical foundations of computer science and, consequently, for overcoming what he calls “computational ideology”[3], is part of this broad attempt to develop an agenda for a new political economy suited to the technological challenges of the twenty-first century.

In his projects for a new political economy and a new theory of computer science in the context of computational capitalism, one will not find ideas for saving the world, even if saving is what they are about – saving as taking care of what needs to be taken care of, as nurturing what needs to be nurtured. Almost everybody has ideas, more or less interesting. The genuine contribution of Stiegler’s reflection as philosopher and activist consists in new concepts, which are desperately missing in our era of Musks, Thiels, Gateses and “solutions which are already there.” Such concepts can also be part of an answer to Lenin’s question “What Is to Be Done?”, even when the thought to which they lead departs from the beaten track of what is called leftist philosophy. It is therefore important to explore these concepts and to contextualize them by referring to Stiegler’s own recent works and to the experience of the algorithmized everyday life to which these works directly relate.

2. Machines, instruments, and concepts for struggle

To Stiegler, philosophy as a mode of thinking is above all about producing concepts. One of the singular features of his thought is conceptual invention—in this, he remains a successor of the previous generation of French philosophers described as “poststructuralists”[4]. This conceptual inventiveness has, as he explained in a long, posthumously published conversation with Mehdi Belhaj Kacem[5], a specific function: it is linked to the ability to put concepts into social circulation, and therefore also to implement them in machines. This is because the functioning of society always involves the operation of some kind of machinery, which today means, above all, information technology and computing machines. The complex conceptual apparatus which Stiegler designed and chiselled, in the conviction that concepts are instruments of struggle for the future, was built precisely for such implementation. The Introduction to La Pharmacologie du Front national [Pharmacology of the National Front] reads as follows:

This work is an instrument. It has been designed to be an instrument—and to be used for struggle. Like any instrument, it should be the object of practice and, like any instrument, it should instruct those who practice it: the instrument seeks to provide a set of instructions about an aspect of the world that its practitioners have in common and, above all, make together through their practices[6].

This sentence could in fact open each of Stiegler’s subsequent instrument-books, in which he develops his political economy project. These instrument-books aim at transforming the existing mode of operation of machines: redesigning them so as to harness the disintegrating power every technology entails—along with all of its emancipatory potential—and thus to reduce our technological vulnerability, as individuals and as societies.

This kind of technological transformation of the present technical system, as urgently needed as the energy transformation[7], can only be the result of a struggle. The concept of a “technical system” was introduced by Bertrand Gille in his monumental Histoire des techniques. Gille defines a technical system as a coherent and historically variable whole made up of interdependent structures. By offering this kind of approach, Gille wanted to show that the logic of “technical progress” eludes the simplistic picture of technologies evolving in geographical isolation and in a merely chronological manner[8]. The problem with the contemporary technical system created by the convergence of computing machines is that its mode of functioning and organisation has been entirely subordinated to the imperatives of the market. Digital machines can therefore serve a model of the economy that in fact leads to its own demise, and to ours with it.

The problem, therefore, does not lie in the tendency of machines to specialize and totalize—the tendency which Jacques Ellul aptly described in Le système technicien [The Technological System][9]. The real issue is their subordination to the supposedly universal “laws of economics” and “the total market”[10]. It is precisely this subjugation, rather than machines themselves, which can be described as dehumanising, if we stick to Ellul’s rhetoric. It amounts to an outright war against the ideal of social justice which, as the jurist Alain Supiot reminds us, referring to the Declaration of Philadelphia adopted in 1944 by the International Labour Organization, was supposed to be “one of the foundations of the international legal order”.[11]

Stiegler’s own position in this respect is surprisingly similar, though expressed more bluntly: since the conservative no-alternative revolution of the era of Margaret Thatcher and Ronald Reagan (which after 1989 spread to Poland through Leszek Balcerowicz’s reforms), we have been in an economic war. The current libertarian tendencies within digital capitalism are a yet more radical version of this ultra-liberal revolution of the TINA era[12]. They not only confirm that the marriage of capitalism and democracy has long since broken down, with Californian aristocrats of uberisation such as Peter Thiel publicly demanding an official divorce on the pretext of an “escape from politics in all its forms”[13]. They also seek, to a much greater extent than Fordist capitalism with its “scientific organisation of labour”, to subordinate all finalities related to every aspect of individual and collective life to the criterion of efficiency—henceforth considered self-sufficient and completely subject to calculations that would replace politics and solve our problems far more efficiently than we do ourselves. The technocratic “appetite for soft totalitarianism”, which is set to intensify in the coming years and take on various forms of surveillance, is rooted precisely in this attempt to reject the political and replace deliberation with calculation. It is a constant striving to achieve a state of what Supiot calls “governance by numbers”[14], to the exclusion of the rule of law, i.e. of law as an element of culture existing within defined territorial boundaries.

3. Psycho-physiological shift: from a way of organizing nature to a way of organizing the brain

The increasingly powerful machines that now make us to a far greater extent than we make them are computational in nature—from measuring, statistical, logistical, monitoring and scientometric devices to dating apps or those used for so-called self-quantification. Their convergence, proceeding at unprecedented speed, means that they are no longer mere tools. They form a planetary technospheric system that we inhabit. This system has a fundamental impact on what we know and what we don’t know, as well as on the appearance and functioning of our “existential territories”, which, as Félix Guattari aptly observed, “may already have been deterritorialized to the extreme”[15], and which today take the form of an increasingly closed and hostile planetary digital infrastructure. We are so entangled in and dependent on this system that, before we can change it, we need to understand how it works, who benefits from it, and by how much. Only then can we know what we actually should change. As David Djaïz vividly puts it,

The iPhone X I buy at the Apple Store in Paris was designed in California; its OLED screen was made by South Korea’s Samsung, its NAND memory by Japan’s Toshiba, parts of the facial recognition camera by France’s STMicroelectronics; the cobalt needed for the battery was mined in Congolese mines, and all the parts were assembled in China; the finished product was shipped by a Danish container ship to the port of Rotterdam, to be transported to the Paris shop by Polish trucks. By buying an iPhone, and thus exercising my consumer freedom, I am participating in a globalised network of values[16].

This example, concealing the everyday reality of the consumer victims of the attention economy, but also the day-to-day reality of the children employed in the cobalt mines and of the workers in the transshipment ports, demonstrates the state of affairs in “technological globalisation”.

It is because of this state of affairs that the question of computer science and of NBIC technologies (the convergence of nanotechnology, biotechnology, information technology and cognitive science), in their increasingly problematic relationship with computational capitalism, has to be the starting point for the debate on the design of a new political economy for the age of computational technologies—from digital platforms that do not merely extract our data but also impact our cognitive abilities[17], through ever more daring neurotechnological manipulations (the project to directly connect the brain to a digital machine) and, more recently, morphotechnological manipulations (synaptic chips, which would be more effective than microprocessors in simulating brain activity and would allow better implementation of this connection), to the digital infrastructure of financial capitalism.

It is not enough to criticize this increasingly discussed state of affairs, especially if doing so involves no more than yet another critique of capitalism. What we also need to do is to politicize computer science, along with the related contemporary technosciences (neurotechnology, robotics, artificial intelligence, artificial life and so on) and digital design in its broadest sense[18]. What I mean by politicization is opening a debate about the relationship between the epistemological assumptions and theoretical claims that underlie computer science and the reality of computational capitalism. The aim of such politicization is to find a way out of the disorder by correlating the design of modern computational machines with finalities other than primitively understood efficiency, and by freeing the technologies themselves from an equally primitively understood functionality. Perhaps in today’s era of “digital objects”[19], we can better understand the political overtone of the somewhat surprising statement of Gilbert Simondon, who wrote, back in 1957, that philosophical thought regarding the modes of existence of technical objects “must fulfill a duty … analogous to the one it fulfilled for the abolition of slavery and the affirmation of the value of the human person”[20].

At this point, a relevant question to ask, and to demand political answers to, especially in the areas of public health and education, is this: what and whom is the engineered infrastructure of technological globalization that has created the disorder to serve, and who is to bear which costs (psychological, social, and environmental)? It is worth remembering the lesson Simondon taught us when he pointed out, in a critical dialogue with Marx, that the goal must not be, or at least not exclusively, the seizure of the means of production and the abolition of property, purportedly at the root of the worker’s alienation. The point is that the disorder of our thoroughly technologically interconnected society is just the modern and far more complex form of alienation, which is not only economic and social (dimensions that remain important and should not be underestimated) but also psychophysiological: “the machine no longer prolongs the corporeal schema, neither for workers, nor for those who possess the machines”[21]. Today, computing technology no longer merely “challenges the energies of nature”, “enframing” nature as the Heideggerian Gestell[22]. From now on, even the brain, as the material layer of our mental activities, is subject to algorithmic and automated “enframing”. The contemporary digital disorder is therefore generated largely by a “psychophysiological shift”. With this change, computational capitalism, in addition to still being “a way of organizing nature”,[23] also becomes a way of organizing the digital brain.

4. Science and education in the age of disruption

In the context of this psychophysiological disorder, we need a politics of technology just as much as we need political ecology. Bruno Latour says: “Not sure about the laws of Nature? Well, let’s vote on them!”[24]. Perfect. The thing is, however, that a similar vote should decide on the laws of computing technologies, of which we are equally unsure—especially in terms of the psychosocial repercussions of their use, and in view of the principle of non-mastery that characterizes the convergence of NBIC and the industrial research involving them (artificial intelligence, artificial life, robotics and genetics). It was the engineer and philosopher Jean-Pierre Dupuy who pointed to this non-mastery, back in 2004:

… How do we explain that science has become such a “risky” activity that, according to some leading scientists, it is now the primary survival threat to humanity? Some philosophers answer this question by saying that Descartes’ dream of “making oneself the master and possessor of nature” has gone awry; it would be urgent to return to the “mastery of mastery”. Those who say so have not understood anything. They do not see that the technology which appears on the horizon, by “convergence” of all the disciplines, aims precisely at non-mastery. The engineer of tomorrow will not be a wizard’s apprentice by negligence or incompetence, but by purpose. He will take complex structures or organisms and try to find out what they are capable of by studying their functional properties – an ascending, bottom-up approach. He will be a seeker and experimenter at least as much as an implementer. His success will be measured more by the degree to which his own creations surprise him than by the consistency of what he learns with a list of predetermined tasks[25].

Still, the question remains whether this engineer, surprised by the calculation results and uncertain about their translation into the organization of life, does not remain first and foremost a proletarian: someone who no longer knows where these results come from, because they are unable to reassimilate the skills that have been exteriorized into machines whose computational power far exceeds the biological capacity of the intellect. Is this search, considered “scientific” but increasingly problematic in terms of the responsibility of science, not conducted in the absence of thought (scientific and technical, but also philosophical and social), thus depriving the engineer of their ingenium: the talent that allows them to achieve professional and personal fulfilment?

Today, when some scientists decide to be missionaries for the Green Transformation and speak out as the “voice of science”, we are still missing the—undoubtedly difficult—discussion about the threat posed by the new activities which, while still considered scientific, have been made possible by computational technology. In the era of nanotechnology, that mythological demiurgic force which strikes at the organization of the smallest particles of matter, it hardly seems reasonable to claim that science coupled with computational technology serves humanity. In this context, it is worth recalling Jacques Lacan’s insightful remark that the driving force of science is actually far more inhuman than human. Science “was unleashed, as it were, only by giving up anthropomorphism, even that of … the force whose impetus was felt at the heart of human action”. It is this science, Lacan observes, that is at the root of “the disjunctions constituted by our technology [techniques]”[26]—from logical disjunctions to all sorts of divisions, established for the purpose of better understanding, within something that is originally a whole. It is a truly superhuman challenge, one that can safely be seen as a test of the humanity of that part of humankind which inhabits the Western world (now known as the Global North), to make this cognition neither posthuman nor transhuman, but non-inhuman: that is, to subordinate the unpredictable efficiency of computational technologies to ends other than this efficiency itself, regarded as an intrinsic value, and thus to handle more carefully the unpredictability that is henceforth a primary feature of the technosphere.

The problem with scientific activity based on the computationalist paradigm, nowadays being transferred to computing technologies, is that this activity produces a state of affairs which cannot be translated into a new state of law, in the double sense of the word law: as ius (i.e., referring to legal norms to which commercialized computing technologies are not subject today) and as theoria (i.e., referring to the sets of theorems and hypotheses that make up the various areas of science, today called into question by managers involved in so-called data science who proclaim the primacy of computational models over theoretical models, of correlation over causation[27]).

This is the anatomy of the disorder that Stiegler referred to as disruption. The term precisely defines what is currently happening to all of us: we are facing a technological disruption that impacts us and, according to the etymology of the Latin disrumpo, shatters us or breaks us into pieces.

A brief tracing of the term’s semantic transmutation shows that this impact is a deliberate action with strong scholarly legitimation. As Wolfgang Streeck notes, while disruption, as a term with unambiguously negative connotations, originally referred to an unforeseen and violent rupture and signified a disaster for those it struck, from the 1990s onwards it acquired a positive hue. From then on, disruption became the name for a technological innovation which, now recognised as a value in its own right, makes a difference by means of a devastating attack on the firms operating in the markets, thus radically transforming the markets themselves[28].

The influential theory of “disruptive innovation” was introduced in the mid-1990s by Clayton Christensen, professor of management at Harvard Business School, the same school from which “financialization theorists” are also recruited, as Charles Ferguson exquisitely shows in his documentary Inside Job[29]. Christensen theorizes that any innovation ultimately leads to disruption. Disruptive innovation is therefore the kind of innovation that effects the disappearance of a currently operating technology (often to everyone’s satisfaction)[30]. In relation to marketing, which, as Deleuze noted at the same time, has become “the instrument of social control” that “produces the arrogant breed who are our masters”[31], disruption was also theorized by Jean-Marie Dru, president of the global marketing agency TBWA. His theory was based on the same conviction: living in disruption is all about the reign of discontinuity and the constant overturning of existing practices and conventions[32].

The term disruption has become embedded in Silicon Valley startup jargon. A disruptive innovation could thus be Uber, which is destroying local transportation networks, or Airbnb, damaging the fabric of cities and disturbing their everyday life. The protagonists of the Silicon Valley series aired by HBO present the results of their work at the TechCrunch Disrupt event. The ideology of disruption is also spread by Singularity University from Silicon Valley, which also operates a branch in Warsaw. This ideology seeks to completely digitize education, citing the otherwise valid argument that traditional educational institutions are based on 19th-century assumptions and must therefore be disrupted[33]. Disruption followers speak of a new DNA of the education system, based on the assumption that learning and development must be life-long and on a holistic approach to educating people and fostering the so-called digital mindset (orientation and openness to technology), along with a new way of thinking about the world[34]. This man-made “DNA”, wrapped in a lulling narrative of how the ongoing “acceleration” is all about the people, whom “technology experts” are supposed to make “resilient”[35] to change, in fact amounts to following the same old neoliberal ideology with its leading political imperative “Adapt!”, whose origins have been insightfully traced by Barbara Stiegler[36]. The only difference is that whereas neoliberal theorists attempted to derive this necessity of adaptation from the theory of evolution transferred to the plane of socio-economic relations, today’s proponents of disruption add startup technological jargon to the language of biology to cover the theoretical poverty of their discourse.

All Bernard Stiegler’s theoretical and activist efforts are aimed, roughly speaking, at confronting this disruptive force—not necessarily to slow it down or stop it, but at least not to advocate even more acceleration, as leftist accelerationists would.[37] The “acceleration” is destructive in that it “short-circuits” all the processes and dimensions which contribute to human civilization. The point, then, is rather to transform the momentum of the disorder and its concomitant automation into a time gain which could be used for making the process of automation itself the object of thought, and thus for deliberating on our future. Stiegler’s reasoning is this: care-ful thinking about the process of automation means using it for the de-automating action of thinking, as well as for cultivating knowledge in all its forms (the knowledge of how to live, how to do and how to conceptualise) that automation—provided we do not succumb to its rhythms at the urging of “technological experts”—is likely to make possible. As Stiegler argued in an interview, “we have to turn what makes Uber possible into a new possibility, in a completely different tone”[38], so as to open up a new epoch of the technical culture that technological disruption has been irrevocably destroying. Indeed, the difference between technics and technology lies in the fact that while the former is empirical, relating to the practical skills it makes it possible to acquire, the latter is entirely subordinated to computational logic, which even in principle seeks technological autonomy as automatism.
In this case, the challenge is not to foster “digital skills”, which could at best help people operate an automated information system whose workings they do not understand, but to invest in the forms of knowledge that can be revalued through the time saved by automation. In other words, the aim is to induce a kind of positive disruption. This is what, broadly speaking, Stiegler’s project of a new political economy is intended to serve.

5. The non-triviality of technics

In Philosophising by Accident, Stiegler thus characterizes his philosophical project:

I am interested in grounding again and entirely the question of technics. Entirely, since I consider the philosophical question is the question of technics. I do not write a philosophy of technics like those who write on the philosophy of art, moral philosophy or political philosophy, as regions of knowledge belonging to a broader and more general philosophical knowledge. Technics for me is not a regional object, but a philosophical object in itself. I raise the question of technics as the philosophical question, and from this point of view I am in a hyperphilosophy. I attempt to elaborate once again, and in full, the philosophical question, and therefore to revisit in the most general way the founding concepts of philosophical thought in their entirety, but always on the basis of the technical question. I consider this project an uncovering of a forgotten origin of all these questions[39].

Stiegler’s “hyperphilosophy” took shape in the three volumes of Technics and Time – 1: The Fault of Epimetheus, 2: Disorientation and 3: Cinematic Time and the Question of Malaise – originally published in 1994, 1996 and 2001 respectively. Through a highly eclectic and heterodox reinterpretation of the European philosophical heritage from the Greeks to Immanuel Kant, Martin Heidegger, and Jacques Derrida, passing through André Leroi-Gourhan’s paleoanthropology, Gilbert Simondon’s philosophy of individuation, the critical theory of the Frankfurt School, and phenomenology, Stiegler argues that technics, being the “pursuit of life by means other than life”[40], “not only is non-trivial, but constitutes the condition of possibility of all that is not trivial.”[41]

Recognizing the non-triviality of technology requires a reconstruction of the Western philosophical tradition (metaphysics) because—this is the opening thesis of Stiegler’s philosophical project—that tradition was constituted by separating knowledge (epistēmē) from the artificial organs (devices, artefacts) which enable its production (technē). At the same time, all forms of knowledge (theoretical, practical and technical) require some (artificial) supports; they do not exist without technologies which – as prostheses for our biologically finite memory – not only enable the infinite intergenerational transmission of knowledge, but condition its possibility and determine its future.

What this means is that transformations of the artificial supports inevitably entail transformations of knowledge itself. Among more recent philosophers, this was aptly described by Jean-François Lyotard in his “report on knowledge” published in 1979 in response to what was then called the informatization of societies, when knowledge began to be identified with information, and the unit of information was the bit: “The nature of knowledge cannot survive unchanged within this context of general [technological] transformation.”[42] Today, in the face of societies’ algorithmization[43], we actually lack metaphors to illustrate the amount of information that feeds the technosphere. Over the last two centuries, we have produced more of it than had been created since the invention of the printing press.

The pace of the transformation to which Lyotard was referring is so rapid as to invalidate existing knowledge, rendering it ever less effective. The volume of data produced in 2018 alone amounted to 33 zettabytes, or 33,000,000,000,000,000,000,000 bytes[44]. To illustrate the magnitude of this hyper-acceleration, it is worth noting that this figure represented 90 percent of the information available on the Web at the time, all of it produced in just the previous two years.[45] According to IBM analysts, the amount of data will rise to 175 zettabytes by 2025[46] and reach a projected 2,142 zettabytes ten years later. To be sure, the amount of data produced became incomprehensible to the overwhelming majority of the more than 4 billion web users long before the zetta- prefix arrived. Nevertheless, the consequences of this modern hybris, the proliferation of which the Greeks equated with an excessive focus on the advancement of technology[47] and which has, in a sense, killed the web itself[48], are certainly palpable and have an overwhelming effect on what we can make sense of and see on the screens that capture our attention.
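To make these orders of magnitude concrete, here is a back-of-the-envelope calculation, a minimal sketch in Python that uses only the estimates cited above (33 ZB in 2018, 175 ZB in 2025, 2,142 ZB in 2035) and derives the annual growth rate they imply:

```python
# Back-of-the-envelope check of the data-volume estimates cited above.
# The volume figures are the source's own; only the growth rates are derived here.

ZETTA = 10 ** 21  # one zettabyte = 10^21 bytes

estimates_zb = {2018: 33, 2025: 175, 2035: 2142}  # volumes in zettabytes

def cagr(v_start: float, v_end: float, years: int) -> float:
    """Compound annual growth rate implied by two volume estimates."""
    return (v_end / v_start) ** (1 / years) - 1

print(f"2018 volume in bytes: {estimates_zb[2018] * ZETTA:.2e}")
print(f"2018-2025 implied growth: {cagr(33, 175, 7):.1%} per year")
print(f"2025-2035 implied growth: {cagr(175, 2142, 10):.1%} per year")
```

Run as written, the sketch shows that the cited projections imply a sustained growth of roughly 27–29 percent per year, that is, a doubling of the global data volume approximately every three years.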

If today we may speak—as a certain Italian philosopher would have it—of a permanent state of exception, it would be the state of infodemic (a WHO term[49]) resulting from this redoubled informational hyperacceleration, which clearly shows that the explosion of information does not result in an explosion of knowledge. Rather, the opposite seems to be happening, in a process the sociologist Gérald Bronner has termed a “cognitive apocalypse”:[50] information flooding has totally deregulated the “cognitive market”. As a result, that market has become a field of rivalry between absolutely all kinds of ideas, displaying the myriad faces of madness that can afflict screen-bound minds. The challenge is to overcome this apocalypse, because the way we react to it determines our chances of escaping what, Bronner argues, should for the moment be called the threat to civilization.

The challenge of facing that peril is essentially technological. As we take it up, it may be useful to update the project of Simondon, who, aiming at the rehabilitation of technical culture and the integration of technical reality into general culture, wrote: “In order to restore to culture the truly general character it has lost, one must be capable of reintroducing an awareness of the nature of machines, of their mutual relations and of their relations with man, and of the values implied in these relations.”[51] It was in the name of these values that Simondon postulated that “an axiomatic of technology”[52] should be taught “in the same way the foundations of literary culture are taught”, and that “[t]he initiation to technics must be placed on the same level as scientific education”[53].

Stiegler’s position relative to and within philosophy in the face of this challenge is clear: the question of technics as a primary philosophical question is fundamentally political, and a philosophy that does not take this question seriously, that is, does not situate it in the context of the consequences of the so-called digital revolution, has no future. Thus, while in La technique et le temps Stiegler pursued his “hyperphilosophy” with a view to making technics a noble and, in this sense, non-trivial object of philosophical deliberation, his works written from the first decade of the twenty-first century up to the most recent ones develop and practice this hyperphilosophy with a single, albeit multifaceted, goal: to develop a new political economy. “Philosophy that cannot be translated into the categories of political economy is a phony,” declared Stiegler in his informal-sounding conversation with Mehdi Belhaj Kacem[54].

6. Kant and cinema

In his philosophical struggle, Stiegler conceives of philosophy in at least two interlocking ways: 1) as a rational field of knowledge whose possibility of existence is contingently shaped and changes with the transformations of the technical system, and 2) as a new critique of political economy, whose assumptions Stiegler wants to reformulate by starting from the question concerning technology and politicizing this question in a different way than Karl Marx did, albeit in a somewhat Marxian spirit, critically updated and re-read outside the big conceptual machines of Marx’s philosophy[55]. He writes: “I am convinced that it is for want of an economic and political proposal capable of projecting beyond the Anthropocene that barbarian behaviour multiplies”[56].

The realization of such a project with the use of philosophical concepts turned into instruments of struggle, however, first requires recalling and updating the function of reason. Reason, Stiegler states, referring to his detailed analysis of Immanuel Kant’s Critique of Pure Reason in the third volume of La technique et le temps, as well as to the use Max Horkheimer and Theodor W. Adorno made of Kant’s postulates, is impure, for it is a technology. What does he mean? Kant distinguished four types of mental faculties: intuition, intellect, imagination and reason, while making it clear that every thought produced needs some schema that is a product of the imagination. While Stiegler supports this claim, he notes something that Kant missed: the schemas of imagination are produced by the artefacts that constitute culture, whose production in the twentieth century was aligned with cultural capitalism. In other words, imagination now operates in conjunction with what Horkheimer and Adorno described in Dialectic of Enlightenment as the “culture industry”[57], which is founded on the “imagination’s industrialization as an industrial exteriorization of the very power to schematize, and thus as an alienating reification of knowing consciousness”[58].

Horkheimer and Adorno brought out the big guns against the Kulturindustrie, with its capital in Hollywood, in 1944: the culture industry, they argued, paralyzes the human imagination and the ability to discriminate to such an extent that the spectator becomes unable to distinguish perception from imagination, fiction from reality. This critique, though certainly biased, at times outdated, and for some, bourgeois, remains in many ways valid and difficult to ignore today, in the context of the virtual reality now being installed, and especially in relation to the Metaverse project. However, Horkheimer and Adorno missed a crucial point. The problem is not only that an industrially organized technical system, in this particular case cinema (and then, more generally, television), can subjugate the workings of the audience’s imagination and industrially direct and synchronize, so in a sense control, the flows of individual consciousness. The fundamental point is that—Stiegler argues, correcting Kant and Horkheimer and Adorno at the same time—imagination does not work without artefacts and in this sense needs to be propped up, and the primary prosthesis that keeps imagination “going” is the cinema and the “image-objects” that comprise it. This is why the question of imagination in the industrial age—the imagination that Kant called transcendental but which is above all hypermaterial—is a question of political economy. For how our imagination works or does not work, and what use we can make of it, depends to a large extent on the industrial organization of artefacts that conditions its work, for better or worse.

Interestingly, Stiegler’s theses, derived from his critique of Kant’s three critiques and from a critique of critical theory, find surprising confirmation in modern neuroscience. In his recent book with the telling title, Le cinéma intérieur. Projection privée au cœur de la conscience [The Inner Cinema: Private screening in the heart of consciousness], Lionel Naccache argues that humans are fiction makers:

In fact, we never stop producing, in an uncontrollable way, the meanings we assign to everything we experience. Meanings which, for this reason, can rightly be called fictions – not to emphasize their illusory or erroneous character (which they do not necessarily have), but to remind us of their subjective dimension: these meanings, accurate or not, make sense in our own eyes; more than anything else, they constitute the sense that things have for us[59].

For Naccache, creating these meanings is a fundamental quality of our minds. Each of them lives in and, as in the cinema, makes sense of a world which is more or less its own, though forever inhabited by others and therefore shared. The protagonist of the movie is the self. The functioning of the mind (or the “mind-brain”, as Naccache calls it), which neuroscience describes in universal terms, always manifests itself in “diversal” ways; it is inscribed in the singular fictions that constitute each and every one of us.

But is this peculiar “inner cinema” really so surprisingly similar to cinema as such, or is it rather significantly conditioned by the images the latter generates on an industrial scale? This question could be posed by Stiegler himself, who wrote in Echographies of Television, co-authored with Jacques Derrida:

The image in general does not exist. What is called the mental image and what I shall call the image-object (which is always inscribed in a history, and in a technical history) are two faces of a single phenomenon. They can no more be separated than the signified and the signifier which defined, in the past, the two faces of the linguistic sign[60].

The point here is not to contest something that actually exists, namely the difference between the mental image and the (moving and temporal) image-object, the primary unit of every film which is the result of “systematic discretization of movement” through “a vast process of grammaticalisation of the visible”[61]. The challenge for thought is not to view these two images as opposites but to see how they blend together.

Only by following this course can we realize what happens to the mental images that make up the film directed and screened, at one and the same time, by our mind; how we are transformed by the analogue-digital event; and what or who is co-writing the script at a time when our capacity for seeing seems in danger of being overwhelmed by image overflow.[62] This is because the mind, like culture, visual or otherwise, does not exist without technology, any more than attention exists without its object.

7. “The adventure of living better”

A deeper reason for today’s increasingly widespread criticism of Facebook/Meta and the other Silicon Valley capitalist tycoons is the mode of existence of contemporary image-objects, entirely controlled by algorithms that are unknown to us and therefore cannot be subjected to rational criticism or deliberation; they outcompete reason and render it deficient. Thus deprived of the essential tool of the struggle for a rational future, we fall into a madness that perhaps can only be ended by self-annihilation, as Stiegler seems to suggest in probably the grimmest of his books[63]. It is in this specific context that he calls for a new critique of modern reason as a new critique of political economy, for the primary goal of this critique is industrial reorganization: redesigning the functionality of the technical system to realign it with the functions of reason.

Just as Stiegler made a critique of Kant that was fundamental to twenty-first-century philosophy,[64] his new critique of political economy was also developed in very close dialogue with Kant. For him, therefore, the basic function of reason remains the capacity to synthesize, and it is this capacity that the new algorithmic order, as it has been constituted so far, renders systemically dysfunctional, since, in the name of efficiency-enhancing optimization, it seeks to completely automate the activities of the intellect, which are henceforth transferred to computing machines far more powerful than the intellect itself.

To explain this change occurring in a society undergoing automation, Stiegler retains Kant’s distinction between reason and intellect. Computation performed by algorithms is nothing more than an automated intellect. This kind of technological automation makes it possible to transfer to the machine the analytical functioning of what Kant defined as Verstand (intellect, sometimes translated as the understanding). The intellect can be automated precisely because its faculty is analytic, as opposed to the synthetic faculty of reason. The fundamental problem, however, is that intellect without reason, which always transcends calculation, does not produce knowledge.

Let us be clear: it is not, of course, a matter of pitting synthetic reason against analytic intellect, rejecting the calculations, measurements, and computations that are inherent to the function of intellect. Reason and intellect blend together, so the challenge for thought is to understand what rules govern such blending in the new techno-logical environment: this involves, on the one hand, seeing the epistemological limits of computational models, which entails a critique of the theoretical claims or beliefs underlying their development, and on the other hand, reflecting on the uses made of these models, which are good when they take into account the context and domain of application.

This kind of reflection indeed transcends the scope of the intellect; it belongs to the domain of reason, which, however, does not so much “control” the intellect, as Kant would have it, as make deals with it, because it is “impure”, dependent on the intellect’s techniques: it can control these techniques, thus opening a “new age of reason”, or it can be systematically killed by them, which always involves an upsurge of irrational forces. Therefore, recognizing the limits of the intellect and of the computing machines it creates must go hand in hand with recognizing the limits of reason, which is itself a technology. This is not to deny that, in making such a distinction, it is still helpful to note the function that reason, according to Alfred North Whitehead, performs: it is “the organ of emphasis upon novelty”. The functioning of this organ is what opens “the adventure of living better”[65].

It seems that the retrieval of this function of reason is of vital importance at a time when key decisions for overcoming the crisis have to be made. Any decision made by the intellect is confined to the area of repeating or reinforcing what is already there. What is at stake today, however, is making decisions that result from a synthesis, i.e. decisions that require an appeal to a reason capable, in a real situation, of going beyond what is already there. This is why we cannot hope that the adventure of living better will come about through computing technologies, which are currently leading “to a hypertrophy of the understanding and to a regression of reason in the Kantian sense – as the faculty of deciding, operating through a synthesis that is also called judgement”.[66]

8. Political and scientific implications of cognitivism

The machines using deep learning and neural networks (commonly known as artificial intelligence) that make up this infrastructure today are purely analytical and computational. In that sense, they do not think. What distinguishes thinking from intelligence is that thinking goes beyond computation and therefore makes it possible for something to emerge that is not yet there, that is future. Such a statement, however, stands in stark contrast to cognitivism, which maintains (after cybernetics) that thinking is reckoning[67], and that the mind is but a computing machine whose workings can be satisfactorily explained by invoking the model of either the computer (the computationalist perspective) or the neural network (the connectionist perspective). This discourse has turned out to be so influential that automatism is today regarded as the dominant model of cognition[68].
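The sense in which such machines only calculate can be made concrete with a minimal sketch (an illustrative toy, not a description of any production system): the forward pass of a neural network reduces entirely to multiplications, additions and a fixed nonlinearity applied in sequence.

```python
import math

def forward(x, weights_hidden, weights_out):
    """Forward pass of a tiny two-layer network: nothing but
    multiplication, addition and a fixed nonlinearity (tanh)."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in weights_hidden]
    return [sum(w * h for w, h in zip(row, hidden))
            for row in weights_out]

# Toy parameters chosen arbitrarily for illustration; training would
# only adjust these numbers, not the purely computational character
# of the procedure.
w_hidden = [[0.5, -0.2], [0.1, 0.8]]
w_out = [[1.0, -1.0]]
print(forward([0.3, 0.7], w_hidden, w_out))
```

Scaled up by many orders of magnitude, the operation remains of the same kind: a chain of calculations, with no synthesis that exceeds them.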

At the root of Stiegler’s project to rebuild the theory of computer science lies a critique of this model of cognition and of the theoretical claims behind it, which assume far-reaching analogies between machines and living organisms and between the mind and a computer. What makes these analogies possible is the vague notion of information[69], itself derived from computational models: computability theory (which studies what can be solved with the use of computers) and Shannon’s mathematical theory of communication (which describes and optimizes communication as signal transmission over noisy channels, treating the meaning carried by the signal as irrelevant, just like the type or function of the devices enabling the communication).[70]
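What the irrelevance of meaning amounts to can be illustrated with a minimal sketch of Shannon’s measure (standard information theory, written here only for illustration): entropy depends solely on symbol frequencies, so a sentence and a meaningless rearrangement of its letters carry exactly the same quantity of “information”.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol in bits: H = -sum p(x) * log2(p(x)).
    The value depends only on symbol frequencies, never on meaning."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sentence = "the machine no longer prolongs the corporeal schema"
scrambled = "".join(sorted(sentence))  # same symbols, meaning destroyed

# Both calls print the identical value: Shannon's measure cannot
# distinguish the sentence from its alphabetically sorted letters.
print(shannon_entropy(sentence))
print(shannon_entropy(scrambled))
```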

It is because of these complex interrelationships that the way to transform computer science and its applications runs through a critique of its theoretical underpinnings, which derive from theories of human cognition that were considered revolutionary in the second half of the 20th century and were linked to the simultaneous development of computers, neurophysiology, and artificial intelligence. The aim is to show the epistemological limitations of these theoretical models on the one hand and, on the other, the political and economic implications of their applications in the context of the integrated automation installed by means of digital platforms. In this context, there is a rather obvious connection between computer science and an economics whose primary raw material is not only our data but also our mental resources, a relationship that is purely ideological and fraught with long-term psychosocial consequences. The challenge, then, is to prepare the theoretical foundations for a new and necessarily cross-disciplinary epistemology that would allow us to break out of the automated clamp of computational capitalism. Such an epistemology could provide a new impetus for applied computer science and for broadly defined digital design, whose purpose would no longer be subordinated to the efficiency imperative of automated computational systems that reinforce market mechanisms, but would instead foster the functional generation of novelty.

9. Exorganogenesis and functions of the external organs

The generation of novelties is, as we already know, the function of reason described as an “organ.” Novelty, however, can be understood here in relation to theoretical biology, where it has a very specific meaning: the production of novelties is the production of new structures that enable the organism to perform new functions; they remain incalculable and, in this sense, improbable[71]. The biologist Maël Montévil situates these new structures within what he calls “possibility spaces”[72]. It is thanks to these novelties that biological organisms, organizing themselves through organs capable of performing new functions, can not only keep themselves alive but also organize life by transforming the places in which they live. This transformation, however, is only possible locally and within what Montévil and Mossio call “constraint closure”[73]. Constraints here, just like novelties, have a functional, not a metaphorical meaning: they refer us to mechanics (constraints as limitations imposed on the movement of a body or system of bodies), but in this case they are redefined to account for the specificity and singularities of the organisation and biological functions of organisms. “Closure”, in turn, means a kind of barrier that, while restricting, does not completely close off the field of change and does not limit the dynamics typical of that change.

It is thanks to the novelties generated in this field, which, in the language of Jacques Derrida’s philosophy, might be described as the creation of differences, that organisms can “negotiate” with the inexorable law of entropy. They cannot eliminate the entropic tendency, which, from a thermodynamic perspective, is inherent in nature and closely related to the course of the evolutionary process; organisms are thus subject to this tendency (because they live) and themselves produce entropy (because they evolve). But they have enough agency to delay the entropic tendency locally, by producing differences in and within the constraint closures which affect them and on which they depend. In other words, their entropic tendency goes hand in hand with an anti-entropic tendency (specific to living organisms and the systems they create), which allows the system to conserve energy and retain its dynamic potential for renewal, enabling it to improve its functioning. The two tendencies thus exist in parallel, not in opposition. In fact, they are complementary: the generation of anti-entropy cannot occur without generating entropy, some forms of which may have a key function in maintaining the life process (as in the case of chemical diffusion in the body)[74].
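The thermodynamics of open systems offers a compact way of stating this complementarity (a standard textbook decomposition, not Montévil and Mossio’s own formalism). The entropy change of a living system splits into an internal production term and an exchange term:

```latex
\[
  \frac{dS}{dt}
  = \underbrace{\frac{d_i S}{dt}}_{\substack{\text{internal production} \\ \geq\, 0 \text{ (second law)}}}
  + \underbrace{\frac{d_e S}{dt}}_{\substack{\text{exchange with} \\ \text{the milieu}}}
\]
```

The second law constrains only the internal term; a living system can nevertheless maintain or even locally lower its own entropy whenever the exchange term is sufficiently negative, that is, whenever it exports entropy to its milieu. This is precisely the sense in which anti-entropy is always paid for with entropy production.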

Although human animals are, of course, living organisms, our specificity is more than biological, because we are creators of novelties which fundamentally transform our living environment and ourselves: we use not so much our organs as instruments, and not so much biological organs as technological ones, subject to an evolution parallel to biological evolution. What is more, these organs are organized into systems, enabling the human species to steer the direction of evolution itself by means of what Charles Darwin called artificial selection, that is, by amplifying and perpetuating desirable traits in future generations of plants and animals. Artificial selection is one of the roads that have brought us to an artificial Earth.

Another road leads through exosomatization. While in his first works Stiegler defined technology as “the pursuit of life by means other than life”, in his last writings he drew on the work of the biologist Alfred Lotka and his term “exosomatic evolution”, referring to the evolution of extra-corporeal organs. Lotka introduced this term in 1945, on the basis of his earlier works, to describe how “increased [in the human species – M. K.] adaptability has been achieved by the incomparably more rapid [compared with previous times] development of… ‘artificial’ aids to our native receptor-effector apparatus”[75]. As organized human organisms, we generate novelties using those “artificial aids” in the process of exosomatization, which has, beginning with the Upper Paleolithic, transformed anthropogenesis into a process of exorganogenesis, and human organisms and organizations into exorganisms and exorganizations, which, since the industrial revolution, have been increasingly dependent on “exosomatic comfort”[76] and have created information systems that are ever more complex, and thus ever more prone to critical instabilities, balancing on the verge of informational chaos.

Indeed, exosomatic organs are characterized by an irreducible ambivalence. While they serve to postpone the rise of entropy by producing more or less local and territorially contingent differences, which for Stiegler is synonymous with the production of new skills, they can also utterly destroy our living environment, not only in a biological but also in a social and psychological sense. The process of exosomatization, which accelerated unprecedentedly in the 20th century, largely due to the invention of computer notation[77], has had two characteristics. First, it was subordinated to an economic process whose organization, based on the outdated theorems of classical physics[78], contributes to increasing entropy, as first described in Nicolas Georgescu-Roegen’s groundbreaking work[79]. Second, with the development of computing machines based on computerized recording, the process of exosomatization has undergone a fateful mutation: while previously it was about exteriorizing the computational abilities of the intellect and transferring them to machines (Marx’s famous hypothesis of the existence of a general intellect)[80], once our data and our attention became the primary resources of capitalism (the data economy and the economy of attention), exosomatization began to be based on interiorizing the functioning of computing machines whose design and operation are beyond our control, even though they fundamentally affect the development of what Lev Vygotsky called our “higher psychological functions”, which, he argued, have a social origin and are “psycho-cultural” rather than biologically or intellectually grounded[81].

We are, first of all, products of this exorganogenetic mutation, yet we still have a chance to use it to design new supports that will enhance our higher mental functions rather than impair them, and thus functionally contribute to the production of new forms of knowledge instead of annihilating them. The question of design – and therefore the question of the underlying theoretical assumptions that determine the functionality of the designed application, platform or system – seems to be crucial in this regard. It requires the adoption of a somewhat broader perspective, one that encompasses the relationship between technics, people and their living environment, which is always a technical environment. The point is that technics is not neutral by definition, because its implementation functionally changes the environment and conditions its further possibilities of transformation, opening some opportunities and shutting off others. For this reason, the still widespread conviction that what ultimately matters is simply how we use these applications, platforms, or systems is a misconception: it sustains the view of technology as a passive tool and treats the question concerning technology as a pseudo-issue, also in relation to the usual debates of political economy.

10. Information technologies as tools of the struggle against denoetization and for deproletarianization

The basic question that the theory of exorganogenesis triggers in relation to the form it has taken in computational capitalism is this: are we able to imagine information technologies that foster the production of anti-entropy on a human scale, which means the production of new forms of knowledge? In what ways could data [Latin datum: a gift, donation] that we presently donate to the GAFAM (Google, Amazon, Facebook, Apple, Microsoft) instead serve this kind of production, making knowledge and the deliberative processes it involves a foundation of a new mode of management? How do we design computational machines that foster, rather than replace, these deliberative processes, and so remain open to what is incalculable and what requires decisions to be made with other criteria in mind?

For Stiegler, knowledge means the various skills that make us not only informed but also, most importantly, capable: capable of living, designing, theorizing, contemplating, educating, nurturing, cooking, caring. The commonly made argument that knowledge is now being democratized because it is available online is flawed, because it confuses knowledge with information. Knowledge cannot be reduced to information insofar as its production requires interpreting and transforming information rather than mechanically processing it, which is in itself always a more or less critical act. Moreover, this production is never an individual activity; instead, it involves sharing and transmission, whereby knowledge can be reproduced, taking on improbable and unpredictable forms, whether it be life knowledge, practical knowledge, artistic knowledge, or scientific knowledge – even if the production of these forms of knowledge is governed by different laws and requires the observance of different rules established by various communities of knowledge, formal or informal, professional or amateur: from societies and cities to communities of scholars, Wikipedians, free software developers, associations, guilds, clubs, or allotment communities.

In this sense, knowledge as composed of skills is never outdated, provided that it is put into practice. Nor can it be reduced to cognition: it forms “noesis” and is the basis of a life that can be described as “noetic”[82]. The key task of a “new theoretical computer science” would therefore consist in reinforcing the various forms of noesis, in the awareness that new information technologies, like any technology, can serve the proliferation of noetic life and at the same time generate a functional “denoetization”, i.e., a loss of the ability to think. Here we come to a recurring motif in Stiegler’s thought: the idea of technology as a pharmakon, both the medicine and the poison, what makes us thoughtless and what enables us to fight our own thoughtlessness (even while being aware that ultimate victory cannot be ours). Technology is not the problem; the challenge is to make it the object of thought and to learn to think with it, which today, according to Stiegler, means: to think about it care-fully.

One inherent property of modern technics is that the progress of technological innovation is always faster than that of societal organizations and culture, neither of which exist without technology, or exorganogenesis. The uniqueness of today’s system-organized information technologies, which have themselves acquired organizing properties as if they were living organisms, is that they comprise a new stage of exosomatization that Stiegler has described as “exorganogenesis of noesis”[83]. Therefore, what is in mortal danger at this stage is our very ability to think care-fully about this “exorganogenesis of noesis” and to learn to think within it with the use of new exosomatic organs. The tragedy of this situation, experienced at every step of our digital and neocybernetic day-to-day reality, is well reflected in Norbert Wiener’s warning, which was to be the motto of the fourth volume of Technics and Time titled Facultés et fonctions de la noèse dans l’âge post-véridique [Faculties and functions of noesis in the era of post-truth]: “Cybernetic technology is a two-edged sword, and sooner or later it will cut you deep.”[84]

Even a cursory reading of Stiegler’s recent works reveals their increasingly desperate tone and their grim diagnoses of the “automated society” that has arrived in the “Anthropocene era”:

The Anthropocene era is that of industrial capitalism, an era in which calculation prevails over every other criteria of decision-making, and where algorithmic and mechanical becoming is concretized and materialized as logical automation and automatism, thereby constituting the advent of nihilism, as computational society becomes an automatic and remotely controlled society.[85]

This desperation is frequently accompanied by a sense of tragedy. Yuk Hui, in his “memory of Bernard,” is right to say that he was “a thinker of catastrophe” and “the greatest tragist since Nietzsche”[86]. But this should not obscure what Philippe Petit has rightly noted: that Stiegler’s thinking is “the most desperate [of all in the twenty-first century – M.K.] but also the most open to the future”[87].

Stiegler’s melancholia is not overwhelming like the one portrayed in Lars von Trier’s film of that title[88]. Rather, it is like Vincent van Gogh’s “active melancholy”[89]. We are experiencing the apocalypse and its attendant nihilism, which permeates what is de facto happening. The challenge is to see in that permeation an opportunity to, as Anne Alombert puts it, “transform the facts, always violent and arbitrary, into new and necessary laws”[90]. The critique of the theoretical foundations of computing that Stiegler attempted to initiate seeks to use this opportunity to transform computing devices into vehicles for the development of new forms of thought, and thus of new skills. The point is not to let these devices overpower us but, to quote Gilles Deleuze, “to be worthy of what happens to us”[91].

11. It’s NOT okay to panic

In the broader context of Stiegler’s project of political economy, already developed in his earlier works, the reconstruction of the theoretical foundations of computer science serves that project’s basic goal: the struggle against “generalized proletarianization”, or the loss of various forms of knowledge and of their constituent skills resulting from the “extremely rapid and violent penetration of technologies into the various social systems (family system, educational system, political system, legal system, linguistic system, etc.)”[92].

Proletarianization here refers to the process which “deprives individuals and communities of their knowledge. An individual is proletarianized when they fail to reappropriate or re-interiorize knowledge that has been exteriorized (and often automated) in a technical support.”[93] A distinctive feature of computational capitalism is that proletarianization has come to encompass not only the loss of practical and life skills, as under industrial (nineteenth-century) and consumer (twentieth-century) capitalism, but also the loss of design skills, now being automated, and of the ability to make rational decisions. In Automatic Society, Stiegler invokes the example of Alan Greenspan, former head of the Federal Reserve, who, during a Congressional hearing after the financial crash of 2008, was unable to explain what had caused the calamity; the financialization process he had overseen had evidently deprived him of that ability. Moreover, in his testimony, Greenspan stressed that the system’s widely accepted practices of using derivatives had been validated, among other things, by a Nobel Prize in economics awarded for the discovery of a new method of determining their value.[94] This kind of knowledge deprivation is also faced today by the architects of Facebook, who fail to control the operation of the algorithms they have created.

Commenting on the film Don’t Look Up, physicist Szymon Malinowski, the protagonist of the documentary It’s OK to Panic[95], whom the Polish media compared to the scientist played by Leonardo DiCaprio, remarked: “This film lays out on the table the basic mechanisms that govern social life, which has become detached from its natural foundations.”[96] However, these mechanisms are primarily technological, and this detachment is a result of information poisoning. The problem may not necessarily lie in the “…people [who] completely misunderstand how dependent we are on the climate”[97], but in the existing mode of digital industry, which simply makes this understanding impossible.

A relevant climate policy will not take concrete form outside a politics of technology, for the need to tackle the climate crisis is unlikely to become everybody’s concern so long as our attention is continually diverted from what is really important. The critique of the theoretical foundations of computer science proposed by Bernard Stiegler attempts to overcome this state of inattention by inducing inquietude and thus producing new thought. Indeed, inquietude is the precondition of a thought which, in turn, prevents inquietude from degenerating into fear: thought tames anxiety and guards us from disgrace. In this sense, it is not OK to panic, insofar as panikos means collective fear, the kind that precludes any reasonable action.

  1. B. Stiegler & Internation Collective, Bifurcate: ‘There is No Alternative’, trans. D. Ross, London: Open Humanities Press, 2021.
  2. See B. Stiegler, For a New Critique of Political Economy, trans. D. Ross, Cambridge: Polity Press, 2010, and idem (with Frédéric Neyrat), From Libidinal Economy to the Ecology of the Spirit, trans. A. De Boever, Parrhesia, London: Open Humanities Press, 2020.
  3. Idem, Automatic Society, Volume 1: The Future of Work, trans. D. Ross, Cambridge: Polity Press, 2016, pp. 33-34.
  4. Conceptual inventiveness also enables Stiegler to engage in critical dialogues with Jacques Derrida, Gilles Deleuze and Jean-François Lyotard. Stiegler carries out such a dialogue in those fields in which those philosophers’ thought appears to be either insufficient or in need of critical reconstruction, especially in relation to the technological and technoscientific transformations of the contemporary world. It is therefore a matter of, on the one hand, continuing their work and updating it critically in the new technological context, and on the other, overcoming the divisions between them, for the most part created by their respective acolytes rather than really existing between the philosophers themselves. It is difficult today to make a meaningful assessment of the legacy of post-structuralism and attempts to apply it to the analysis of the contemporary situation, especially in relation to questions of ideology, political theory and the critique of political economy, without considering the way in which that legacy was updated and transformed by Stiegler (see B. Stiegler, M.B. Kacem, Philosophies singulières. Conversation avec Michaël Crevoisier, Paris/Zurich/Berlin: Diaphanes, 2021, p. 156, and B. Dillet, ‘The Pharmacology of Poststructuralism: An Interview with Bernard Stiegler’, in: The Edinburgh Companion to Poststructuralism, ed. B. Dillet, I. MacKenzie, R. Porter, Edinburgh: Edinburgh University Press, 2013).
  5. B. Stiegler, M.B. Kacem, Philosophies singulières…, p. 156.
  6. B. Stiegler, La Pharmacologie du Front national, Paris: Flammarion, 2013, p. XI.
  7. See A. Alombert, ‘Jakie transformacje energetyczne na rzecz trzech ekologii? Entropie, ekologie i gospodarka w erze antropocen’ [What energy transitions for the three ecologies? Entropies, ecologies and economy in the Anthropocene era], Er(r)go 2022, no. 1 (44) [forthcoming].
  8. See Histoire des Techniques, B. Gille (éd.), Paris: Gallimard, 1978.
  9. See J. Ellul, The Technological System, London: Continuum, 1980.
  10. A. Supiot, The Spirit of Philadelphia: Social Justice vs. the Total Market, trans. S. Brown, New York/London: Verso, 2012.
  11. Ibid. The Declaration stresses that every person, regardless of race, creed or gender, has the right to prosperity and spiritual development, and that ensuring favourable conditions for the exercise of this right shall be the basis of national and international policies of states and peoples of the world. The International Labour Organization was established in 1919 as an autonomous organization affiliated with the League of Nations. After World War II, when the League of Nations was transformed into the United Nations, the ILO came to be affiliated with the United Nations. In his analyses, Supiot emphasizes that the dismantling of this spirit of social justice hit the former Eastern Bloc countries with particular force in the process of so-called democratization after 1989, when democracy was regarded as synonymous with market capitalism.
  12. TINA is the acronym for “there is no alternative”, a phrase attributed to Margaret Thatcher, who used it to comment on the economic reforms she was introducing in Great Britain, which is why the “Iron Lady” was also known by the nickname “Tina”. Her “no-alternative reforms” led to a social disintegration resulting in the emergence of “less than a society, a post-social society” (W. Streeck, How Will Capitalism End? Essays on a Failing System, London/New York: Verso, 2018, pp. 13–14). The contemporary reaction of forces tending towards nationalistic closure, often coupled – as in Poland – with religious fundamentalism, and with what has always driven such forces: the search for scapegoats, from refugees and Muslims to LGBT+ people, should be seen as the result of these “no-alternative reforms” and an attempt to fill the empty space left by society.
  13. “In our time, the great task for libertarians is to find an escape from politics in all its forms – from the totalitarian and fundamentalist catastrophes to the unthinking demos that guides so-called ‘social democracy’” (P. Thiel, ‘The Education of a Libertarian’, Cato Unbound, April 13, 2009, https://www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/ [accessed: 22 January 2022]). As cited in: D. Djaïz, Slow Démocratie. Comment maîtriser la mondialisation et reprendre notre destin en main, Paris: Allary Éditions, 2019, p. 21.
  14. See A. Supiot, Governance by Numbers. The Making of a Legal Model of Allegiance, trans. S. Brown, London: Bloomsbury Publishing, 2014.
  15. F. Guattari, The Three Ecologies, trans. I. Pindar and P. Sutton, London/New Brunswick, NJ: The Athlone Press, 2000, p. 46.
  16. D. Djaïz, Slow Démocratie…, p. 21.
  17. As neuroscientist Michel Desmurget alarmingly points out, the widespread availability of digital devices and the ubiquity of screens capturing our attention, especially the attention of children, affect the very foundations of human intelligence, such as the ability to use symbolic language, to concentrate, to memorize and to live in culture (here understood as a certain set of skills that help us organize ourselves on both the individual and collective levels, and to understand the world). Informational overstimulation generates intellectual understimulation, which renders the brain unable to actualize its potential. This is what Desmurget means when he worries about the “manufacturing of the digital cretin” – in the purely medical sense (cretinism as a medical condition indicating intellectual and related somatic disorders), without the meaning of an insult. His analyses also make it clear that what causes this cretinism is not the content oozing from the screens. The content itself, whose appraisal is always subjective, is of secondary importance here. The actual problem is our consumption of it. According to the research cited by the neuroscientist, in the Western world this overconsumption amounts to three hours a day in the case of children between two and seven years of age, nearly five hours for children aged eight to twelve, and nearly seven hours for adolescents aged thirteen to eighteen. On an annual basis, this amounts to one thousand hours for preschool children (which is more than one school year), 1,700 hours for middle school children (two school years), and 2,400 hours for high school students (two and a half school years). The discussion of school failure that is currently sweeping through Poland fails in that it does not consider the technological factors that determine the systemic school failure of an increasing number of children and adolescents. (M. Desmurget, La Fabrique du crétin digital. Les dangers des écrans pour nos enfants, Paris: Seuil, 2019).
  18. M. Gerritzen, G. Lovink, Made in China, Designed in California, Criticised in Europe: Amsterdam Design Manifesto, Institute of Network Cultures, https://networkcultures.org/blog/publication/amsterdam-design-manifesto/ [accessed: 22 January 2022].
  19. See Y. Hui, On the Existence of Digital Objects, Minneapolis/London: University of Minnesota Press, 2016.
  20. G. Simondon, On the Mode of Existence of Technical Objects, trans. C. Malaspina and J. Rogove, Minneapolis: Univocal Publishing, 2017, p. 9.
  21. Ibid., pp. 133-134.
  22. “This setting-upon that challenges the energies of nature is an expediting, and in two ways. It expedites in that it unlocks and exposes.” (M. Heidegger, ‘The Question Concerning Technology’, in: idem, The Question Concerning Technology and Other Essays, trans. W. Lovitt, New York/London: Garland Publishing, 1977, p. 15).
  23. J.W. Moore, Capitalism in the Web of Life: Ecology and the Accumulation of Capital, London: Verso, 2015, p. 87.
  24. É. Aeschimann, ‘Portrait. Bruno Latour, le climat mis au vote’, Libération, December 20, 2010, https://www.liberation.fr/terre/2010/12/20/le-climat-mis-au-vote_701827/ [accessed: 22 January 2022].
  25. J.-P. Dupuy, ‘Le problème théologico-scientifique et la responsabilité de la science’, Le Débat 2004, no. 2 (129), p. 181.
  26. J. Lacan, The Triumph of Religion, preceded by Discourse to Catholics, trans. B. Fink, Cambridge, UK/Malden, Mass.: Polity Press, 2013, p. 36.
  27. Chris Anderson’s famous article perfectly illustrates this type of reasoning: “Correlation is enough”, the magazine’s then editor-in-chief writes provocatively. “We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.” (C. Anderson, ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete’, Wired, June 23, 2008, https://www.wired.com/2008/06/pb-theory/ [accessed: 22 January 2022]). While such reasoning can hardly be taken seriously from the point of view of mathematics, which is a theoretical science, it is also true that the power of this kind of atheoretical approach to influence the imagination of contemporary algorithm engineers, developers of translation machines based on neural networks, or data scientists themselves seems to be enormous. Mathematician and computer scientist Cristian S. Calude, along with mathematician and logician Giuseppe Longo, have offered an incisive critique of this kind of approach (C.S. Calude, G. Longo, ‘The Deluge of Spurious Correlations in Big Data’, Foundations of Science 2017, no. 3 [22], pp. 595–612).
  28. W. Streeck, How Will Capitalism End…, p. 39.
  29. See B. Stiegler, States of Shock: Stupidity and Knowledge in the 21st Century, transl. D. Ross, Cambridge, UK, Malden, MA: Polity, 2017.
  30. J.L. Bower, C.M. Christensen, ‘Disruptive Technologies: Catching the Wave’, Harvard Business Review 1995, nos. 1–2, https://hbr.org/1995/01/disruptive-technologies-catching-the-wave [accessed: 22 January 2022].
  31. G. Deleuze, Negotiations. 1972–1990, trans. M. Joughin, New York / Chichester, West Sussex: Columbia University Press, 1995, p. 181.
  32. J.-M. Dru, Disruption live: Pour en finir avec les conventions, Paris: Campus Press, 2003.
  33. See M. Marchwicka, Transformacja cyfrowa a ludzie [Digital transformation and people], March 26, 2021, https://digitaluniversity.pl/transformacja-cyfrowa-a-ludzie/ [accessed: 22 January 2022].
  34. Ibid.
  35. This so-called resilience would supposedly help us survive the inevitable violence of disruption. The term resilience is used in many fields: ecology (the ability of an ecosystem to recover from a disturbance), psychology (an individual’s ability to adapt to a hostile environment), materials science (resistance to deformation) or in relation to information networks (the ability of networks to sustain operation and correct errors that occur). The ideological capture of the term lies in the fact that resilience is now considered to be the most desirable skill in a situation of discontinuity caused by disruption, both for organizations and for individuals. As Streeck notes, resilience should not be confused with resistance. “Resilience is not resistance but, more or less voluntary, adaptive adjustment. The more resilience individuals manage to develop at the micro-level of everyday life, the less demand there will be for collective action at the macro-level” (see W. Streeck, How Will Capitalism End…, p. 40).
  36. See B. Stiegler, Il faut s’adapter. Sur un nouvel impératif politique, Paris: Gallimard, 2018.
  37. Cf. N. Srnicek, A. Williams, ‘#Accelerate Manifesto for an Accelerationist Politics’, May 14, 2013, https://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/ [accessed: 22 January 2022] and idem, Inventing the Future: Postcapitalism and a World Without Work, London/New York: Verso, 2015.
  38. B. Stiegler, ‘L’accélération de l’innovation court-circuite tout ce qui contribue à l’élaboration de la civilisation’, Libération, July 1, 2016, https://www.liberation.fr/debats/2016/07/01/bernard-stiegler-l-acceleration-de-l-innovation-court-circuite-tout-ce-qui-contribue-a-l-elaboration_1463430 [accessed: 22 January 2022].
  39. B. Stiegler, Philosophising by Accident: Interviews with Élie During, transl. B. Dillet, Edinburgh: Edinburgh University Press, 2017, p. 35.
  40. Idem, Technics and Time, 1, The Fault of Epimetheus, trans. R. Beardsworth and G. Collins, Stanford, Ca.: Stanford University Press, 1998, p. 17.
  41. Idem, ‘Critique de la raison impure’, interview by Camille Riquier, Esprit, 2017, no. 3–4, pp. 118–129, DOI: 10.3917/espri.1703.0118; https://www.cairn.info/revue-esprit-2017-3-page-118.htm [accessed: 22 January 2022].
  42. J.-F. Lyotard, The Postmodern Condition: A Report on Knowledge, trans. G. Bennington and B. Massumi, Manchester: Manchester University Press, 1984, p. 4.
  43. See D. Cardon, À quoi rêvent les algorithmes. Nos vies à l’heure des big data, Paris: Seuil, 2015 and É. Sadin, La vie algorithmique. Critique de la raison numérique, Paris: L’Échappée, 2021.
  44. T. Gaudiaut, ‘La totalité des données créées dans le monde équivaut à…’, Statista, April 24, 2019, https://fr.statista.com/infographie/17793/quantite-de-donnees-numeriques-creees-dans-le-monde/ [accessed: 22 January 2022].
  45. Planétoscope, https://www.planetoscope.com/Internet-/1523-.html [accessed: 22 January 2022].
  46. H. Vatter, ‘Addressing Data Growth with Scalable, Immediate and Live Data Migration’, IBM, November 30, 2020, https://www.ibm.com/blogs/journey-to-ai/2020/11/addressing-data-growth-with-scalable-immediate-and-live-data-migration/ [accessed: 22 January 2022].
  47. See X. Guichet, Care in Technology, Hoboken NJ: John Wiley & Sons, 2021, section 1.1.2.2.
  48. In 2017, André Staltz, a software developer who “spends most of his time building transparent software,” noted on his blog that 70 percent of flows on the Web are controlled by just two companies: Google and Facebook (Meta), which invites reflection on whether we can still call it the Web. (A. Staltz, ‘The Web Began Dying in 2014. Here’s How’, Staltz, 30.11.2017, https://staltz.com/the-web-began-dying-in-2014-heres-how.html [accessed: 22 January 2022]).
  49. The World Health Organization introduced this term to refer to the ongoing pandemic. This linguistic device was intended to show that COVID-19 is the first pandemic spread also technologically, by means of disinformation and misinformation affecting a substantial portion of the population and contributing to the undermining of public health, both as a field of empirical knowledge and as a public institution (joint statement by WHO, UN, UNICEF, UNDP, UNESCO, UNAIDS, ITU, UN Global Pulse, IFRC, ‘Managing the COVID-19 Infodemic: Promoting Healthy Behaviours and Mitigating the Harm from Misinformation and Disinformation’, September 23, 2020, https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation [accessed: 22 January 2022]). The infodemic in this sense existed well before the COVID-19 pandemic; the pandemic has only brought it into sharper focus and made us more aware of its psychosocial implications.
  50. G. Bronner, Apocalypse cognitive, Paris: PUF, 2021.
  51. G. Simondon, On the Mode of Existence…, p. 19.
  52. Ibid.
  53. G. Simondon, On the Mode of Existence…, p. 19.
  54. B. Stiegler, M.B. Kacem, Philosophies singulières…, p. 41.
  55. See B. Stiegler, States of Shock: Stupidity and Knowledge in the 21st Century, especially chapter 6: ‘Re-reading the Grundrisse. Beyond Two Marxist and Post-Structuralist Misunderstandings’.
  56. Idem, The Age of Disruption: Technology and Madness in Computational Capitalism, trans. D. Ross, Cambridge, UK/Medford, MA: Polity Press, 2019, p. 76.
  57. See M. Horkheimer, T.W. Adorno, Dialectic of Enlightenment: Philosophical Fragments, trans. E. Jephcott, Stanford CA: Stanford University Press, 2002.
  58. B. Stiegler, Technics and Time 3. Cinematic Time and the Question of Malaise, trans. S. Barker, Stanford, CA: Stanford University Press, 2011, ch. 2, ‘Cinematic Consciousness’, p. 37.
  59. L. Naccache, Le cinéma intérieur. Projection privée au cœur de la conscience, Paris: Odile Jacob, 2021, pp. 9–10.
  60. B. Stiegler, ‘The Discrete Image’, in: J. Derrida, B. Stiegler, Echographies of Television: Filmed Interviews, trans. J. Bajorek, Cambridge UK: Polity Press, 2002, p. 147.
  61. Ibid., pp. 148-149.
  62. See P. Szendy, Pour une écologie des images, Paris: Minuit, 2021.
  63. See B. Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism.
  64. To put it – for lack of space – very briefly: in the first version of ‘The Transcendental Deduction’, one of the chapters of the Critique of Pure Reason, Kant distinguishes three syntheses: that of apprehension (in intuition), of reproduction (in imagination) and of recognition (in the pure concept created by intellect). Stiegler postulates introducing a fourth synthesis, the techno-logical and atranscendental one which “…in conditioning the synthesis of recognition, supports and articulates the three syntheses of consciousness” (Technics and Time 3…, p. 141). The techno-logical synthesis is, in other words, the ability of the imagination to externalize its own creations (images) and transform them into artefacts that themselves become prostheses of knowledge and condition its production. In Stiegler’s language, such techno-logical synthesis, being a result of the operation of what he calls “tertiary retention”, a concept building on Edmund Husserl’s theory of retention, also conditions the functioning of memory, not only in its individual dimension, but most of all in its culture-creating dimension that enables experiencing (and transforming) the recorded past (preserved in tertiary retentions) and recording one’s own experiences that will transform future generations in the same way.
  65. A.N. Whitehead, The Function of Reason, Princeton NJ: Princeton University Press, 1929, pp. 14–20.
  66. B. Stiegler, ‘Noodiversity, Technodiversity: Elements of a New Economic Foundation Based on a New Foundation for Theoretical Computer Science’, trans. D. Ross, Angelaki 2020, 25:4, pp. 67–80, DOI: 10.1080/0969725X.2020.1790836, August 6, 2020, https://www.tandfonline.com/doi/full/10.1080/0969725X.2020.1790836 [accessed 12 April 2022], p. 73; also in this volume, pp.
  67. J.-P. Dupuy, Aux origines des sciences cognitives, Paris: La Découverte, 1994, p. 16.
  68. See D. Bates, ‘Automaticity, Plasticity, and the Deviant Origins of Artificial Intelligence’, in D. Bates, N. Bassiri (eds.), Plasticity and Pathology. On the Formation of the Neural Subject, New York: Fordham University Press, 2016 and D. Bates, ‘Penser l’automaticité au seuil du numérique’, in: B. Stiegler (ed.), Digital Studies. Organologie des savoirs et technologies de la connaissance, Limoges: FYP éditions, 2014.
  69. Tracing the genealogy of the notion of information that dominated scientific debate in the twentieth century, Mathieu Triclot writes: “What do we mean when we talk about information, the processing of information, in relation to things as disparate as programs, computers, brains, networks, media, voters, proteins, organisms of society…? The list of things that pertain to information today is seemingly open-ended. The ‘information discourse’ has not only made its way into debates about information technology and networks, but also into those of life sciences concerning humans and society. What connects the neuroscientist who states that ‘the nervous system of a mollusk transmits information according to a code, analysing and processing it’ [Jean-Pierre Changeux – M. K.], a cognitive scientist who tells us that ‘the mind is essentially a system that manipulates symbols’ [Jerry Fodor – M. K.], an anthropologist who explains that ‘the second law of thermodynamics is not valid in the case of mythic operations’ because ‘the information they convey is not lost’ [Claude Lévi-Strauss – M. K.], a technophile enthusiast who demands that ‘human societies equip themselves with a better nervous system capable of processing information in real time’ [Joël de Rosnay – M. K.], or the Vice President of the United States who discovers that the ‘lingua franca’ of the new millennium ‘is made of ones and zeros’. [Albert Gore – M. K.]? Not to mention Jacques Lacan, who proposes that we capture ‘the Eros of female sexuality’ as ‘information running counter to social entropy’”. (M. Triclot, Le moment cybernétique. La constitution de la notion d’information, Seyssel: Champ Vallon, 2008, p. 5).
  70. See A. Alombert, V. Chaix, M. Montévil, V. Puig (éd.), Prendre soin de l’informatique et des générations. En hommage à Bernard Stiegler, Limoges/Paris: FYP éditions, 2021, p. 11.
  71. F. Bailly, G. Longo, ‘Biological Organization and Anti-Entropy’, Journal of Biological Systems 2009, no. 17, pp. 63–96; G. Longo, M. Montévil, Perspectives on Organisms: Biological Time, Symmetries and Singularities, Dordrecht (NL): Springer, 2014.
  72. M. Montévil, ‘Possibility Spaces and the Notion of Novelty: From Music to Biology’, Synthese 2018, no. 196, pp. 4555–4581, https://doi.org/10.1007/s11229-017-1668-5 [accessed: 22 January 2022].
  73. M. Montévil, M. Mossio, ‘Biological Organisation as Closure of Constraints’, Journal of Theoretical Biology 2015, no. 372, pp. 179–191. American biologist Stuart A. Kauffman, referring to the “constraint closure” theory, argues that “These young scientists have found a, or maybe ‘the,’ missing concept of biological organization.” (S.A. Kauffman, A World Beyond Physics: The Emergence and Evolution of Life, New York: Oxford University Press, 2019, p. x).
  74. I am grateful to Giuseppe Longo for bringing this issue to my attention.
  75. A.J. Lotka, ‘The Law of Evolution as a Maximal Principle’, Human Biology 1945, no.17, p. 188.
  76. N. Georgescu-Roegen, Energy and Economic Myths. Institutional and Analytical Economic Essays, New York: Pergamon, 1976, p. 35.
  77. Anthropologist Clarisse Herrenschmidt distinguishes three stages of information recording: first came the ways of writing down languages, invented around 3300 B.C.; then the ways of writing down numbers on minted money, thanks to the invention of arithmetic notation around 620 B.C.; the final stage is information technology notation, based on a code. This last notation was developed between 1936 and 1948 (thanks to the work of mathematician Alan Turing), and then found its extension in the notation characteristic of the web. This is why Herrenschmidt calls this type of writing “reticular” (French réticulaire = network- or web-oriented; see C. Herrenschmidt, Les trois écritures. Langue, nombre, code, Paris: Gallimard, 2007). It was from this work that Stiegler borrowed the term “reticularity”, not found in computer science, which he uses with regard to information.
  78. As Mariana Mazzucato has shown, classical physics became the yardstick of “scientificness” for twentieth-century neoclassical economics: “economists wanted to make their discipline seem ‘scientific’ – more like physics and less like sociology – with the result that they dispensed with its earlier political and social connotations” (M. Mazzucato, The Value of Everything: Making and Taking in the Global Economy, London: Penguin Allen Lane/New York: Public Affairs, 2018, p. 8). Another economist, Ivar Ekeland, also points to the triumph of the physical model in the dominant economic theory of today. He notes that one of the greatest successes of this theory is financial mathematics, allegedly a scientific validation of financialization (see I. Ekeland, ‘Du bon usage des modèles mathématiques’, Responsabilité & Environnement 2021, no. 101, http://www.annales.org/re/2021/re_101_janvier_2021.html [accessed: 22 January 2022]). The physical (reductionist) model of cognition has also informed cognitive science explanations of the mind: “Almost everything in the world can be explained in physical terms.” (D. Chalmers, The Conscious Mind: In Search of a Fundamental Theory, New York/Oxford: Oxford University Press, 1996, p. 93).
  79. N. Georgescu-Roegen, The Entropy Law and the Economic Process, Cambridge, Mass./London: Harvard University Press, 1971.
  80. “Nature builds no machines, no locomotives, railways … These are products of human industry; natural material transformed into organs of the human will over nature, or of human participation in nature. They are organs of the human brain, created by the human hand; the power of knowledge, objectified. The development of fixed capital indicates to what degree general social knowledge has become a direct force of production, and to what degree, hence, the conditions of the process of social life itself have come under the control of the general intellect and been transformed in accordance with it.” (K. Marx, Grundrisse: Foundations of the Critique of Political Economy (Rough Draft), trans. M. Nicolaus, London: Penguin/New Left Review, 1973, pp. 625-626).
  81. See L.S. Vygotsky, ‘Tool and Symbol in Child Development’ (1930) in: L.S. Vygotsky, Mind and Society: The Development of Higher Mental Processes, Cambridge, Mass.: Harvard University Press, 1978.
  82. The term noetic is derived from Greek nous, used by Aristotle to describe a soul which is “intellective” (or “rational”), distinct from the “vegetative” and “sensual” soul. Husserl, in his turn, employed the term “noesis” to mean the act of thinking, or mental processes referring to an object, that is, what is being thought. In Stiegler’s approach, although the noetic life is undoubtedly made possible by thinking, his noesis is, first, technologically conditioned and as such always has the characteristics of “technesis” and, second, for this very reason, is not restricted to purely intellectual pursuits: it includes the cultivation of practical knowledge, possibly invoking what sociologist Richard Sennett described as craftsmanship (R. Sennett, The Craftsman, New Haven/London: Yale University Press, 2008). From a philological perspective, the Latin equivalent of the Greek nous (which means sense, intuition and common sense as well as intellect) would not be intellectus, but sensus (see B. Cassin (éd.), Vocabulaire européen des philosophies. Le dictionnaire des intraduisibles, Paris: Seuil/Le Robert, 2019, pp. 1133–1134). In this view, noetic life would also be concerned with forms of sensual life. In short, noetic is anything that adds meaning to life by co-creating and sharing some kind of knowledge.
  83. As cited in: Prendre soin de l’informatique et des générations…, p. 16.
  84. As cited in: F. Conway, J. Siegelman, Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics, New York: Basic Books, 2006.
  85. B. Stiegler, Automatic Society…, pp. 8-9.
  86. Y. Hui, ‘The Wind Rises: In Memory of Bernard’, Urbanomic, August 8, 2020, https://www.urbanomic.com/document/in-memory-of-bernard/ [accessed: 20 April 2022].
  87. P. Petit, ‘Mais au fait, c’était quoi la pensée de Bernard Stiegler?’, MarianneTV, August 7, 2020, https://www.marianne.net/societe/mais-au-fait-c-etait-quoi-la-pensee-de-bernard-stiegler [accessed: 22 January 2022].
  88. Cf. A.W. Nowak, ‘Mała zmiana (pojęciowa), wielka rewolucja’, Czas Kultury 2021, no. 21, https://czaskultury.pl/artykul/mala-zmiana-pojeciowa-wielka-rewolucja/ [accessed: 22 January 2022].
  89. “So instead of giving way to despair, I took the way of active melancholy as long as I had strength for activity, or in other words, I preferred the melancholy that hopes and aspires and searches to the one that despairs, mournful and stagnant” (V. van Gogh, The Letters, Van Gogh Museum, http://vangoghletters.org/vg/letters/let155/letter.html [accessed: 20 April 2022]).
  90. A. Alombert, ‘Nous avons à devenir la quasi-cause du décès de Bernard Stiegler’, Mediapart, August 15, 2020, https://blogs.mediapart.fr/anamnesis/blog/150820/nous-avons-devenir-la-quasi-cause-du-deces-de-bernard-stiegler [accessed: 22 January 2022].
  91. G. Deleuze, lecture delivered at Université Paris 8, Vincennes-Saint-Denis on June 3, 1980. Webdeleuze, https://www.webdeleuze.com/textes/249 [accessed: 22 January 2022]. As cited in: A. Alombert, Nous avons…
  92. B. Stiegler, Ce qui fait que la vie vaut la peine d’être vécue. De la pharmacologie, Paris: Flammarion, 2008, p. 183. This passage is not included in the English edition What Makes Life Worth Living: On Pharmacology, trans. D. Ross, Cambridge: Polity Press, 2013.
  93. A. Alombert, M. Krzykawski, ‘Lexicon of the Internation: Introduction to the Concepts of Bernard Stiegler and the Internation Collective’, in: B. Stiegler, Internation Collective, Bifurcate…, p. 320.
  94. See B. Stiegler, Automatic Society, Vol. 1, pp. 1-2. Greenspan, whom Stiegler cited, was referring to the 1997 Nobel Prize awarded to Myron S. Scholes and Robert C. Merton. The way the financial system works is shown in an instructive interview with “repentant” former derivatives trader Anic Lajnef, who operated at the heart of it for many years. The interview was published on the French civic channel Le Média pour tous [Media for Everybody]: https://www.youtube.com/watch?v=KcBJ5WVys_M&ab_channel=LeMediaPourTous [accessed: 22 January 2022].
  95. It’s Okay to Panic, directed by J. L. Ramsey, 2020, https://www.youtube.com/watch?v=osm5vyJjNY4&ab_channel=RamseyUnited [accessed: 22 January 2022].
  96. Świat się chwieje [The world is teetering], broadcast by G. Sroczyński, Radio TOK FM, January 2, 2022, https://www.tokfm.pl/Tokfm/7,124813,27965760,polski-odpowiednik-naukowca-z-nie-patrz-w-gore-rozbieramy.html?fbclid=IwAR0vUj6HzfnuQR2DPQcVQIkKmokuAW5ZzVfDPFkXUKcKV0IpiErSe2HDG6k#s=BoxOpMT [accessed: 22 January 2022].
  97. Ibid.
Text commissioned by the Biennale Warszawa, translated from Polish by Jerzy Paweł Listwan

Michał Krzykawski, Associate Professor in philosophy/literature and head of the Centre for Critical Technology Studies at the University of Silesia. His research revolves around continental philosophy, the philosophy of technology, the philosophy of science and political economy. He is particularly interested in the dialogue between philosophical thinking, technology and science in the context of the epistemological, psychosocial and ecological issues raised by the current digital transformation. He has published extensively on contemporary philosophy. Co-founder of the Collective Organoesis.