My machine has been calculating a series of images for twenty-four hours. It is Kim Novak who comes to haunt films in which she never appeared, she who haunted Vertigo and who haunted herself in her “own” role of revenance.
I cannot deny the fascinating character of these images, which evolve hour by hour on my screen. I watch them as I would the flow of a river, tumultuous and swirling, the irregularities of detail blending into the general movement of a landscape. The machine teaches one image to be another image. With each cycle, the neural network refines the resemblance a little more, and I begin to imagine that all the images we have created, in particular since the acceleration produced by industrialization, were ultimately only the constitution of a memory placed at the disposal of machines so that they might produce all the (other) possible images. The cinematheque as a dataset for the artificial imagination? Will it even be necessary to make new films if past films can feed machines that produce images? What is the status of these new images, the fruit of the memory of existing images? Do they have the ontological necessity of filmed images, or are they merely contingent?
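The cycle-by-cycle refinement described above can be caricatured in a few lines of code. This is a deliberately minimal sketch, not the author's actual system: a real network would optimize its weights against feature similarities learned from a dataset, whereas here one "image" (a random array standing in for a film still) is simply nudged toward another, so that the resemblance accumulates gradually rather than being given all at once. The names `novak` and `bardot` are hypothetical stand-ins.

```python
import numpy as np

def refine_resemblance(source, target, steps=200, lr=0.1):
    """Iteratively nudge one 'image' toward another.

    Illustrative toy only: each cycle reduces the distance between the
    current image and its target, so resemblance is produced by
    accumulation, not given in advance.
    """
    image = source.astype(float).copy()
    errors = []
    for _ in range(steps):
        grad = image - target   # gradient of 0.5 * ||image - target||^2
        image -= lr * grad      # one refinement cycle
        errors.append(float(np.abs(image - target).mean()))
    return image, errors

rng = np.random.default_rng(0)
novak = rng.random((8, 8))    # hypothetical stand-in for a film still
bardot = rng.random((8, 8))   # hypothetical stand-in for another
result, errors = refine_resemblance(novak, bardot)
```

After enough cycles the residual error is negligible: the image has learned to be the other image, with no prior model of either.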
What Bernard Stiegler called the fourth synthesis, which concerns the transcendental imagination, is determined by technical memory. Today it rests on the automation of mimesis through deep learning. We should emphasize that this automation is a revolution in the history of representation that cannot simply be aligned with earlier forms of automatic image generation. Until then, generative art produced vector images close to computer language. We could generate 3D images, but we remained within a morphological conception of the image. One had to implement a model of the image, however random it might be to varying degrees. In short, there was a programmed pre-understanding of what needed to be generated, so that generation remained Platonic: the Idea, that is, the program, preceded the result.
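The "Platonic" mode of generation described here can be made concrete. In the sketch below (an illustration, not a reconstruction of any particular generative artwork), the model of the figure, a circle, is written into the program before any image exists; randomness only varies a form that is already given, exactly the programmed pre-understanding the passage names.

```python
import math
import random

def generate_platonic_figure(n_points=12, radius=1.0, jitter=0.1, seed=None):
    """'Platonic' generative art in miniature.

    The Idea (a circle, encoded as the parametric model below) precedes
    the result; the random jitter varies the form without ever producing
    anything the program did not already anticipate.
    """
    rng = random.Random(seed)
    points = []
    for i in range(n_points):
        angle = 2 * math.pi * i / n_points
        r = radius * (1 + rng.uniform(-jitter, jitter))  # bounded variation
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

figure = generate_platonic_figure(seed=1)
```

Every output of such a program falls within limits fixed in advance, which is precisely what deep learning's statistical mimesis no longer requires.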
The Internet has somewhat disturbed this Platonism of generation. By opening up to the network, something was introduced that exceeded expectations in the form of the images that appeared. This memory of the anonymous, of the multitudes whose receptacle is the social networks, no longer belonged to the underlying idealism of computer code. There, materials of desire were to be found.
With recursive neural networks, the situation is different still, since the (temporary) result is reintroduced into production. Everything happens as if the capacity to resemble had been delegated to the machine. Now this faculty is precisely the one that serves as the foundation of the Platonic ideal Forms. How do we recognize a table in the unanticipatable diversity of its forms and variations? Because, according to Plato, we possess a prior ideal Form that makes this recognition possible. But the conception implicit in recursive neural networks teaches us that this faculty of representation (Vorstellung) presupposes neither a preliminary Idea nor an understanding of being-in-the-world and of the world as such, as Heidegger thought, but operates thanks to a simple statistical induction. When I see Kim Novak in Vertigo slowly becoming Brigitte Bardot in Le Mépris, I observe the machine creating images that did not exist and that lie at the junction between several images: Bardot’s face takes the position of Novak’s while keeping her own face, that is to say her recognizability. We know it is Bardot, even though she was never filmed in this posture.
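The two gestures of this paragraph, the result fed back into production and the image at the junction between images, can be abstracted from any particular network in a short hedged sketch. This is not the author's software: the "images" are random arrays, the step function is a simple blend, and the names `novak` and `bardot` are again hypothetical stand-ins. What the sketch retains is only the loop structure: output becomes input, and each intermediate state resembles both sources without being either.

```python
import numpy as np

def interpolate(a, b, t):
    """An image 'at the junction between several images': a point on the
    line joining two images, resembling both without being either."""
    return (1 - t) * a + t * b

def recursive_generation(step_fn, seed_image, cycles=5):
    """The recursive loop abstracted from any particular network:
    each (temporary) result is reintroduced as the next input."""
    image = seed_image
    history = [image]
    for _ in range(cycles):
        image = step_fn(image)  # the machine's output becomes its input
        history.append(image)
    return history

rng = np.random.default_rng(1)
novak = rng.random((8, 8))    # hypothetical stand-in for a Vertigo still
bardot = rng.random((8, 8))   # hypothetical stand-in for a Mépris still

# Each cycle blends the current image halfway toward the other face.
history = recursive_generation(lambda img: interpolate(img, bardot, 0.5), novak)
```

Every element of `history` between the first and the last is an image that was never filmed, produced only by the statistics of the series.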
For several years, theorists have dreamed of themselves as great educators and authorities over images, according to different strategies. They appeal, like Stiegler (in particular with the project Lignes de temps) or Debray, to a literacy of images: since we live in their medial world, we should make them intelligible so that we can appropriate them and regain our sovereignty. This desire to reconquer meaning, which brings the image back to writing, remains Platonic and anti-artistic in its background. It presupposes that meaning is grounded in itself, which implies a fundamental meaning. It also presupposes that learning images would bring them back (by force) to language. Now, what the automation of the resemblance of images allows us to understand is, on the one hand, that learning images means the power to generate new ones (this is what we call hyperproduction, or the surplus of possibilities), and on the other hand, that meaning is the product of a deficient prediction or anticipation that reintroduces its margin of error into its own structure, this margin being irreducible. To learn images is to produce other images and thus, by resemblance, to leave the series forever open. The absence of closure of these images is also the fundamental absence of “our” sovereignty. For there to be resemblance, neither ideal Forms nor even meaning is necessary. Kim Novak will never be herself again. She never had been.