The Nooscope is a cartography of the limits of artificial intelligence, intended as a provocation to both computer science and the humanities. Any map is a partial perspective, a way to provoke debate. Similarly, this map is a manifesto — of AI dissidents. Its main purpose is to challenge the mystifications of artificial intelligence: first, as a technical definition of intelligence and, second, as a political form that would be autonomous from society and the human. In the expression ‘artificial intelligence’ the adjective ‘artificial’ carries the myth of the technology’s autonomy: it hints at caricatural ‘alien minds’ that self-reproduce in silicon but, actually, mystifies two processes of actual alienation: the growing geopolitical autonomy of hi-tech companies and the invisibilisation of workers’ autonomy worldwide. The modern project to mechanise human reason has clearly mutated, in the 21st century, into a corporate regime of knowledge extractivism and epistemic colonialism. This is unsurprising, since machine learning algorithms are the most powerful algorithms for information compression.

The purpose of the Nooscope map is to secularise AI from the ideological status of ‘intelligent machine’ to one of knowledge instrument. Rather than evoking legends of alien cognition, it is more reasonable to consider machine learning as an instrument of knowledge magnification that helps to perceive features, patterns, and correlations through vast spaces of data beyond human reach. In the history of science and technology, this is nothing new: it has already been pursued by optical instruments throughout the histories of astronomy and medicine. In the tradition of science, machine learning is just a Nooscope, an instrument to see and navigate the space of knowledge (from the Greek skopein ‘to examine, look’ and noos ‘knowledge’). (…)

Instruments of measurement and perception always come with inbuilt aberrations. In the same way that the lenses of microscopes and telescopes are never perfectly curvilinear and smooth, the logical lenses of machine learning embody faults and biases. To understand machine learning and register its impact on society is to study the degree to which social data are diffracted and distorted by these lenses. This is generally known as the debate on bias in AI, but the political implications of the logical form of machine learning are deeper. Machine learning is not bringing a new dark age but one of diffracted rationality, in which, as will be shown, an episteme of causation is replaced by one of automated correlations. More generally, AI is a new regime of truth, scientific proof, social normativity and rationality, which often takes the shape of a statistical hallucination. This diagram manifesto is another way to say that AI, the king of computation (patriarchal fantasy of mechanised knowledge, ‘master algorithm’ and alpha machine), is naked. Here, we are peeping into its black box.

(Excerpt from “The Nooscope Manifested: AI as Instrument of Knowledge Extractivism”)

Matteo Pasquinelli (PhD) is a Berlin-based Professor in Media Philosophy at the University of Arts and Design, Karlsruhe, where he coordinates the research group on Artificial Intelligence and Media Philosophy (KIM). He edited the anthology Alleys of Your Mind: Augmented Intelligence and Its Traumas (Meson Press) and, with Vladan Joler, authored the visual essay ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’ (nooscope.ai). His research focuses on the intersection of cognitive sciences, digital economy and machine intelligence. For Verso Books, he is preparing a monograph on the history of AI titled The Eye of the Master.

Vladan Joler is an academic, researcher and artist whose work blends data investigations, counter-cartography, investigative journalism, writing, data visualisation, critical design and numerous other disciplines. He explores and visualises different technical and social aspects of algorithmic transparency, digital labour exploitation, invisible infrastructures and many other contemporary phenomena at the intersection between technology and society.

In 2018, in cooperation with Kate Crawford, he published Anatomy of an AI System, a large-scale map and long-form essay investigating the human labour, data and planetary resources required to build and operate an Amazon Echo device. A previous study of his, entitled Facebook Algorithmic Factory, included deep forensic investigations and visual mapping of the algorithmic processes and forms of exploitation behind the largest social network. Other studies he has authored in recent years, published by the independent research collective SHARE Lab, include research on information warfare, metadata analysis, browsing history exploitation, surveillance, and Internet architecture.

He has curated and organised numerous events and gatherings of Internet activists, artists and investigators, including SHARE events in Belgrade and Beirut. His artistic prehistory is rooted in media activism and game hacking.