The Fourth Memory (with Y. Citton) – Multitudes

So-called “artificial intelligences” (AI) signal less the replacement of human faculties than the appearance of a new category of memory, one that recursively generates discourse, images, and sounds that can resemble “creations” as much as “authentic” documents. This automation of expression and representation, based on the massive data accumulated on the Web, explains why the question of the arts, far from being anecdotal, has become consubstantial with current statistical models. In it we can perceive the emergence of a new realism that destabilizes the archives of the past as much as the perspectives of the future. The counterfactual nature of this alien realism unsettles what we previously grounded our markers of truth upon.

Retentions, Protentions, Distentions

According to Bernard Stiegler, the different retentions are modes of memory that allow reiteration, recall, and repetition. Primary retention is the immediacy of intuition or perception, such as the perception of a musical note. Secondary retention is the temporalization (of understanding) that compares, anticipates, and recalls different events, for example the notes that, in succession, form a melody. Tertiary retention is the inscription of these events on a material support that allows their technical repetition, for example the disc on which the melody is recorded.

Technical developments in recent decades have enabled the emergence of what can be considered quaternary retentions. These result, on the one hand, from the multiplication of tertiary retentions bearing not so much on parts of the world (a printed text, a photographed object, a recorded concert) as on the gestures and attentional habits that underlie our primary retentions (what I click on or swipe, the correlations between my digital gestures and those of other users, the time I spend on this or that content, etc.). Quaternary retentions result, on the other hand, from the ability to process these (meta)data in very large quantities (big data) through computational processes based on “statistical induction,” that is, on a dynamic and bidirectional interaction between processing instructions that structure a data set from above and proximity attractions observed from below between aggregates of data.
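As a purely illustrative aside, the following minimal sketch (in Python, with synthetic data and arbitrarily chosen parameters, not drawn from any actual platform) shows this two-way movement in its most rudimentary form: a structure imposed from above (a fixed number of groups) meets proximities measured from below between traces of attentional gestures.

```python
# Minimal, purely illustrative sketch of "statistical induction" as a two-way
# movement: a structure imposed from above (the number of clusters) meets
# proximities observed from below (distances between data points).
# All data here are synthetic; nothing refers to an actual platform's logs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend each row is the metadata trace of one user's attentional gestures
# (clicks, swipes, dwell times), reduced to a handful of numeric features.
traces = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(200, 4)),   # one behavioural regime
    rng.normal(loc=5.0, scale=1.0, size=(200, 4)),   # another regime
])

# "From above": we impose a structure of 2 groups.
# "From below": proximities between traces decide which trace joins which group.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(traces)

print(model.cluster_centers_)   # the aggregates that emerge from the data
print(model.labels_[:10])       # how individual traces are absorbed into them
```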

Quaternary retentions form a fourth memory, consisting of the statistical processing of tertiary retentions by so-called “artificial intelligences.” This processing no longer aims at the identical recall of what has been. The fourth memory feeds on the past of retentions in order to possibilize them in its latent space and thus be able to regenerate them: what comes back again and again is not the same indexical retentions but retentions that resemble them. It is resemblance itself, understood as mimetic representation, that is automated, marking a new stage in the complex process of industrialization.

Indeed, an AI is fed with large data sets that allow it to compute a latent space: a statistical space, structured by the proximities and attractions observable within the data, which defines probabilities according to a Bayesian logic. Without going into the details developed in Anna Longo’s remarkable work, “The Game of Induction,” it should be noted that if the images generated with the help of neural networks are credible and realistic, it is not only because they contain probabilities drawn from the countless past images accumulated in barely thirty years on the Web (whose preparatory function was no doubt akin to an extractivist drilling into, and stockpiling of, our memories), but also because they potentially contain all the images to come. That is why they can be both different and realistic. Realism then becomes the credible anticipation, in an inductive space, of a possible image.
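To make the idea of a latent space slightly more tangible, here is a minimal sketch, assuming synthetic vectors in place of real image embeddings and a simple Gaussian mixture in place of the far richer distributions learned by generative networks: a model fitted on “past” data assigns probabilities to, and samples, points that resemble that past without coinciding with any part of it.

```python
# A toy sketch of how a "latent space" can regenerate plausible but new items.
# Synthetic vectors stand in for real image embeddings; the point is only that
# sampled points resemble past data without being identical to any of it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical embeddings of "past images" accumulated on the Web.
past_embeddings = rng.normal(size=(1000, 8))

# Fit a probabilistic model of the latent space (a crude stand-in for the
# distributions learned by generative neural networks).
latent_model = GaussianMixture(n_components=5, random_state=1).fit(past_embeddings)

# Sample "images to come": points that are statistically credible because they
# lie where past images made the latent space dense, yet coincide with none.
new_points, _ = latent_model.sample(n_samples=3)
print(new_points)

# Credibility as probability: how plausible is each new point under the model?
print(latent_model.score_samples(new_points))
```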

Can we still speak of memory, retention, archive, history? Secondary and tertiary retentions, as the memorization of perceptions and as the act of keeping for oneself what is to be disseminated, are not strictly speaking obsolete, but they find themselves overtaken by the dynamics specific to quaternary retentions. These capture, forge and shape the future by running an automated loop of the future over the presence of the past – in a mix that could be called pretention (to be understood as a leapfrog from retentions to protentions). Indeed, a significant inversion occurs in many cases: circulation seems to precede what is retained, because this circulation depends on an externalization in data centers whose commercial and infrastructural presuppositions determine what there is to retain and how retentions are constituted. By providing social networks with what they are made for, we produce our memories by adapting them.

This pretentional dynamic, which structures our behaviors through an instantaneous back-and-forth between retentions (what is kept from the past) and protentions (what projects us toward a preformed future), is undergoing a major, epochal transformation. We propose the notion of distention to designate the epoch that opens with the possibilization of retentions processed by AI. Distention means not only increasing the volume or surface of a body by subjecting it to very strong tension, but also loosening the bonds that tighten a whole or that unite several things. Distention is an extension because, starting from tertiary retentions, it further multiplies the number of documents by creating a retention of retentions, an attention to attention, a memory of memory, in an inflation of metadata bearing on data.

The renaming of Google as Alphabet and of Facebook as Meta should be read as a symptom of this meta-ization catalyzed by quaternary pretentions. Distention is a retention that is recursive in its very genesis, so that it is not, like classical Stieglerian retention, the repetition of an event: it is the repetition of pretention coupled with itself, the automation of its self-producing loop. This singular repetition allows us to understand how mimetic resemblance comes to be repeated and automated as such.

Traversing the Disfactuality of Latent Space

An example: in August 2023, Lnkhey, a producer based in Nancy, published on SoundCloud and YouTube a remix in which Angèle’s voice, cloned with the open-source software Retrieval-based-Voice-Conversion, sings a song she never performed. Several million people listened to it. Angèle reacted on TikTok: “I don’t know what to think about artificial intelligence, I find it insane, but at the same time I’m scared for my job lol.” In the video, she lip-syncs the remix and then makes an amused face, as if seized by vertigo before this voice that resembles hers yet is not her own.

Another example: a Lumière brothers film upscaled to a resolution of 3840×2160 pixels and 60 frames per second. The film has not been “restored,” since the point is not to return it to its original state; it is rather an “instauration,” because elements that were originally absent have been added. The effect is striking: the film no longer has the realism of 1895, but the grain of a video shot in 1971 with a Sony Portapak. Completion, meaning the act of completing a historical document in order to repair it, leads to an anachronistic realism that changes the nature of the archive, which is no longer determined by an origin. Completion invents a historicity that does not exist at the origin, because it has been fed by images taken between 1895 and today. The result does not emanate from the most faithful possible retention of the data captured in 1895, but from a modulatable mix of the given and the statistically probable, determined by the latent space of the dataset.
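To give a concrete, if deliberately crude, sense of what such completion involves, here is a minimal sketch in Python with OpenCV. The file names are hypothetical, and naive cubic resizing plus frame blending stand in for the learned super-resolution and motion interpolation used in practice; the point is simply that the pixels and in-between frames it writes were never recorded in 1895.

```python
# A deliberately naive sketch of "completion": upscaling and frame interpolation
# invent pixels and frames that the 1895 camera never captured. Real pipelines
# use learned super-resolution and motion-compensated interpolation; cubic
# resizing and frame blending are crude stand-ins. File names are hypothetical.
import cv2

cap = cv2.VideoCapture("lumiere_1895.mp4")          # hypothetical source file
out = cv2.VideoWriter("lumiere_4k_60fps.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"),
                      60, (3840, 2160))             # declared 60 fps; a real
                                                    # pipeline would synthesize
                                                    # enough frames to honour it
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    up_prev = cv2.resize(prev, (3840, 2160), interpolation=cv2.INTER_CUBIC)
    up_next = cv2.resize(frame, (3840, 2160), interpolation=cv2.INTER_CUBIC)
    # The "in-between" frame never existed: it is statistically plausible filler.
    between = cv2.addWeighted(up_prev, 0.5, up_next, 0.5, 0)
    out.write(up_prev)
    out.write(between)
    prev = frame

out.release()
cap.release()
```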

Realism changes nature and becomes disfactual, the prefix “dis-” here signifying separation, difference, cessation, or defect within the factual itself, that is, within facts. This disfactuality ultimately touches on facticity, meaning the contingency of the correlation between thought and the facts it aims at. The images are factitious, but this facticity comes to affect reality as a whole, and that’s why it is disfactual: it dislocates something from within. In doing so, it corrodes the factuality on which our confidence in our power to exercise a certain mastery over the world is based.

With artificial intelligences, what we retain, process and metabolize are all the past forms of tertiary retentions, once they have been massively digitized in binary form and thereby made intercompatible, processable, translatable. This period can be associated with Big Data, as a project of digitizing culture, and with Web 2.0, as everyone’s participation in this memorization. That period was, in fact, only a preparatory act for statistical induction. We may be witnessing the emergence of a new form of realism, a realism of realism, which would account for the multiplication of alternative and counterfactual truths better than the simple promise of an increasingly untenable demarcation between truth and fiction.

This new metabolization can be analyzed in six stages: 1) quaternary retentions (which record our attentional gestures, our interpretive reactions, our creative re-elaborations) find themselves 2) accumulated in enormous databases, to be 3) sorted, calculated, associated, approximated by an unprecedented power of computation based on statistical induction, from which 4) latent spaces emerge, from which 5) a fourth memory generates pretentions 6) in the form of aesthetic objects (a new Angèle song) that are neither “real” recordings nor “real” creations, but unprecedented entities – which we struggle to qualify (models, proxies, synthetic products, deep fakes, proofs of concept?).

The Alienation of Credibility

The latent space is our new cultural space, and its products are disfactual. Angèle’s song existed before it really existed. It existed as a statistic or, depending on how one looks at it, as a possibility. It had to be born into reality through this cover of covers. This is the ontological significance of the already quoted post, “When Angèle’s AI on Saiyan finally becomes reality,” where the possessive that both separates and connects Angèle and AI expresses this pretention of the cultural latent. Everything exists before existing. There is, in this strange disfactual anticipation, a new pact of complicity with the public. It’s Kaaris amusing himself with his own AI, and this is by no means the mark of a fantasmatic replacement: it’s the distance from oneself, a well-known strangeness of modernity, a shift in our apparatus. Thanks to the accumulation of the past through the material supports of tertiary memories, we produce something that had never taken place, but which strangely resembles everything that could take place: Kaaris singing Inspector Gadget or a Disney cartoon. This possible already has its form of reality, but all the cultural intelligence of our era lies in this shared amusement between singers and their audiences, in this new repetition where we perform a possible that has not yet been effectuated but which has nevertheless already taken place at some point in latent space.

If, until now, our culture and its sharing were determined by tertiary memories, fruits of the industrial period, we are surely entering, with quaternary memories, a new epoch in which the aesthetic contract could be that of alienation: we reproduce machines that reproduce us. The latent space becomes a space of possibilities that contains the past, but also, undoubtedly, a part of the future and of the incalculable. One could well, for example, take a photograph with any device and send it to an AI to check whether it already exists, and to find and retrieve it. It is no longer just a matter of digitization, which renders variations discrete in the form of 0s and 1s (by cutting them up through sampling) so that they can be recombined at will (as synthetic products). It is now a matter of statistical pretentions, which distend our protentions by informing what does not yet exist according to the thirst for commercial profit of platforms exploiting their privileged access to the economy of our attentions.
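The scenario of sending a photograph to an AI to see whether it “already exists” can be sketched, in a drastically simplified form, as a nearest-neighbour search over embeddings; the vectors below are random stand-ins for what an actual image encoder would produce, and the index of 10,000 items is an arbitrary assumption.

```python
# Small sketch of the scenario evoked above: checking whether a freshly taken
# photograph "already exists" in an indexed collection by nearest-neighbour
# search over embeddings. Embeddings are faked with random vectors; a real
# system would use a learned image encoder.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical index: embeddings of already-archived images, L2-normalized.
index = rng.normal(size=(10_000, 128))
index /= np.linalg.norm(index, axis=1, keepdims=True)

# A "new" photograph whose embedding happens to lie close to an archived one.
query = index[42] + rng.normal(scale=0.05, size=128)
query /= np.linalg.norm(query)

# Cosine similarity against the whole archive; the best match "retrieves" it.
scores = index @ query
best = int(np.argmax(scores))
print(best, float(scores[best]))   # 42, with a similarity close to 1.0
```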

This very particular realism – disfactual – emerges from a fourth memory that belongs both to the spectator’s past and to the future of instituted images. It is not simply another effect of causality: it grafts a possible onto a given, while haunting the latter with the former, undermining the foundations of our indexical belief (if I hear Angèle’s voice, it must be that Angèle sang). It is therefore essential to place what might appear to be a simple technological innovation – with its litany of novelties, passing from GAN, CLIP, Disco Diffusion and Zoetrope to DALL-E 2, Imagen, Parti, etc. – in the general context of an uncertainty about factuality in which, according to certain surveys, 40% of 18-24 year olds in the United States say they think the Earth is flat. This latency should of course be linked to conspiracy theories, to fake news, to this strange expressive democratization of opinion in which everything thinkable seems bound to be thought by someone, and in which everyone seems to think only in order to react to what they believe the other thinks, in an endless Bayesian anticipation.

From Disfactual Possible to Counterfactual Realism

We still have to orient ourselves in this culture of latent space and in the paradoxical emotion that seizes us when we listen, and listen again, to Angèle’s voice, then return to the AI’s voice, going back and forth between the two, unable to settle either our emotion or the world that thus traverses us. It is a new realism and a new historicity whose structures are only now emerging – alienating not so much our identities as the very credibility of our cultural world.

Faced with the omnipresent and stifling refrains about replacement by AI, Angèle and her audience are playing a different game than that of horrified lamentation. The feelings are mixed. There is undoubtedly some fear, some amazement. But there is above all amusement in the infinite game of simulacra and resemblances – another name for culture – which neither the technocritical pastors nor the humanist priests will ever have understood. AI is not something to be thought out in advance, as if it were enough to reflect on it correctly in order to determine how it must be reformed, framed, put into legislation or into a pipeline, with an input and an output, branches, a whole logistics that is ultimately a logos and that will always be one step behind. AIs – to be read here as our Alienated Intelligences – are experimented with: we alienate them and they alienate us. In this case, they have indeed learned to sing like Angèle, and she has in a way responded to them by covering “their” common song (confounding the high priests of copyright in the process). We have been the secret witnesses of this seismic echo. We can be the explorers and the (more or less secret) agents of experimental alienations.

After the apogee of the hypermnesic accumulation of memory supports through their digitization and recording in data centers – the ultimate stage of Benjaminian reproducibility – our era industrializes, with AI, resemblance itself by way of the possible. This is undoubtedly the reason why AI – whose questions traverse and upset so many domains of human activity – has so frequently been approached in the media and with the general public through “the question of art.” Art indeed symbolically concentrates, in modernity, what is proper to humanity, as well as the mystery of its interiority, which, as we know, was the crucible of the construction of subjectivity in the West, going as far as the will to power and nihilism.

In another TikTok post, one can read: “The loop is looped.” It is not just that we are teaching AI to create images, texts, and sounds that resemble us: it is that we resemble them and that, contrary to reactionary discourses, we desire nothing other than to alienate ourselves actively. We believe neither in making AI readable through code transparency, nor in cutting ourselves off from these flows in order to regain an imaginary autonomy and a fantasized sovereignty. We want to experiment with the fact that what we believe exists is also a product of technique and of its paradoxical reproduction. We are its reprise. By metabolizing the entire history of our memory supports, AIs, understood as “our” Alienated Intelligences – and this highly problematic “our” no doubt deserves to be split between dominant and dominated, between white and non-white – are in the process of constituting a new memory, in which past and future are no longer chronological but seem to respond to each other by exchanging their roles.

It is Brian Massumi’s prophetic ruminations on the differences between the possible, the probable, the virtual, and the potential that must be remobilized here to take the measure of what is happening to us. Statistical induction plays with a possible that is (partially) mastered by probabilities. The song-that-Angèle-has-not-sung mobilizes the probable to realize other possibles. Its disfactuality nevertheless fits perfectly into the protentions of the dominant aesthetics (if not into those of a music business paying hypocritical homage to sacrosanct “copyright”). Only an experimentation – always vertiginous – with our alienations can hope to draw from the merely disfactual the potential for transformation inherent in counterfactuality. The challenge for images, sounds, and texts in the age of generative AI is not so much to be “original,” “new,” “true,” “innovative” or “beautiful” (all terms and values that have aged badly in just a few years). It is rather to be counterfactual: to survey (artisanally) the latent space so as to (automatically) manifest credible retentions of realities that have not occurred because they run contrary to the facts of established dominations.

To embody latent aspirations towards facts that contradict the protentions of the reigning order: isn’t this what was called “revolution” during the last century? To take up a distinction on which Pierre-Damien Huyghe insists today, the experimental, non-instrumental, “artistic” uses of generative AI are not so much a matter of political action (prattein) as of artistic-artisanal making (poïein). Not so much “making revolution” (in the sense of acting to bring a revolution about) as fabricating counterfactual objects that make visible, audible and thinkable, with the force of realism, the counter-worlds our societies carry within them. That deep fakes make us fear a world of “post-truth” – the political counterpart of the economic fantasy of the replacement of humans by machines – certainly testifies to a very real problem: it is essential to preserve a certain social relationship of trust in our access to factuality. No information is possible without credibility. But our anxieties and our anti-conspiracist crusades testify just as much to the inability of progressive forces to grasp the political potential of counterfactual realism – at a time when reactionary forces shamelessly exploit the mechanisms of disfactual irrealism. Though often idle, our debates on AI will not have been in vain if they help us identify – and invest in – this (not so) new terrain of struggles.