For an ideomaterialist critique of technologies

Technologies seem to consist of two main poles: the ideological superstructure and the physical infrastructure. The former seems to be the cause of the latter, insofar as technology is a human production and must therefore be understood in terms of this origin. Thus artificial intelligence must be analyzed with regard to the explicit or implicit intentions of its designers and to a social context within a given history. Criticism of technologies often proceeds by this deterministic method, which consists in seeking, behind the apparent autonomy of techniques, the project defined by human beings, in order to assign responsibilities and alternatives. This is how we look for the biases of artificial intelligence (AI), whose causes are to be sought in what the designers have culpably left unthought.

I would like to refute this anthropological determinism by stressing that while techniques are indeed produced by human beings, they cannot be reduced to our intentions: we have no way of materializing our ideas perfectly in reality. The gap between idea and matter is not accidental; it largely determines the reciprocal relationship between the two, that is, their heuristics.

If we look at AI from a historical perspective, we know that two projects were in competition. The first, strong AI, which seemed the most promising and self-evident, consisted in modelling the totality of human knowledge, that is, in transferring into a machine what could be known. The second, which originated in Frank Rosenblatt’s Perceptron (1957), conceived the machine as a child that had to learn by itself from data. While the first AI dominated research and investment for decades, it is the second model that prevails today and seems to realize the dream of automated intelligence.

However, this “failure” of expert systems in favour of the learning machine is not insignificant, because it marks a radical paradigm shift in the superstructure. The ideology behind the first AI is the transferability of human intelligence into a machine. It presupposes that human intelligence is knowable by humans and that its content can be mathematically modelled. It is therefore the classic dream of an absolute science (Laplace), in which world, matter and idea merge into a transferable whole. It must be admitted that expert systems had mediocre results, because the completeness of their description of knowledge was always found wanting. The cause is undoubtedly to be sought in a naïve conception of reflexivity as transparency to oneself.
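To make the contrast concrete, here is a minimal, purely illustrative sketch of the expert-system logic (the rules and domain are invented for the example, not drawn from any historical system): knowledge is written down by hand as explicit if-then rules, and the system falls silent as soon as a case escapes what was described.

```python
# Illustrative sketch of the expert-system paradigm: knowledge encoded by hand
# as explicit rules. The rule base is invented for this example; the point is
# that the completeness of its description of knowledge is always found wanting.

RULES = [
    # (set of required symptoms, diagnosis)
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "runny_nose"}, "cold"),
]

def diagnose(symptoms: set[str]) -> str:
    for required, diagnosis in RULES:
        if required <= symptoms:       # all required symptoms are present
            return diagnosis
    return "unknown"                   # the gap: no rule covers this case

print(diagnose({"fever", "cough"}))    # -> "flu"
print(diagnose({"fever", "headache"})) # -> "unknown": the description was incomplete
```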

The neural model, or machine learning model, can of course benefit from knowledge modelling, but its logic is quite different. It does not consist in translating human intelligence into technical intelligence, but in letting a machine learn from data. While the result may seem “intelligent”, the cause of this effect is uncertain. This means that AI is not a human intelligence inside the machine, but may be a machine intelligence that cannot be compared to the human being. It is in this respect close to Turing’s model.
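By contrast, here is a minimal sketch of Rosenblatt-style learning (again invented for illustration; the data and parameters are arbitrary): no rule about the task is written down, the machine adjusts its weights from examples, and the “intelligent” behaviour of the result tells us nothing about how it was obtained.

```python
# Minimal perceptron-style learning: no rule is encoded, the weights are
# adjusted from examples. Data and parameters are chosen only for illustration.

# Training examples: (inputs, expected output) for the logical AND function.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                    # a few passes over the data
    for (x1, x2), target in examples:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output
        # Perceptron rule: nudge the weights toward the examples.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

# The learned weights reproduce the expected behaviour, but they are not a
# human-readable rule: the "intelligence" is an effect of adjustment.
print(w, b)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples])
```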

Between the first and second models, we move from a mimetic model to a model of alterity. We also move from an idealistic and absolute epistemology, in which language seems to merge with the thing, to an epistemology of surface effects, in which it does not matter whether the machine is intelligent in the human sense of the term. It is easy to understand why the first model was more desired than the second: it guaranteed a mastery of knowledge, which became identical to itself, transparent. The second model is more disturbing, because with it we would produce an intelligence that does not belong to us and does not resemble us.

Criticism of AI often consists in refuting the ideological superstructure and revealing the human intentions behind the techniques. But this criticism obscures the gap between superstructure and infrastructure; it forgets the incessant feedback loop between the two. We shape technology as much as it shapes our thinking. Criticism of AI has limited effectiveness because it almost never questions its own framework. It shares the presuppositions of what it thinks it criticizes, because it is heir, like AI, to the destiny of Western thought.

Ideomaterialist criticism consists in analysing the productive gap between superstructure and infrastructure. This gap is not treated as a defect, but as a dynamic that structures the genesis and development of technologies. Ideomaterialism, as the point of junction and disjunction between project and material, allows us to understand that even if we have a certain idea before producing a technique, that idea is realized in unexpected ways, and that we then manage the consequences of this surprise with the ill-suited ideology at our disposal. Thus Kurzweil’s Singularity is the resurgence of the first AI model.

A second example is virtual reality (VR). I will not go into this question at length, since I have already developed it extensively in other texts from the 1990s. If the superstructure of VR promised an absolutely convincing immersion without deficit, it is another experience that saw the light of day: VR produces motion sickness, and we can never forget the technical device. VR is thus an immersive and emersive experience, ambiguous, disappointing, troubling. We are there without being there. If the superstructure failed when confronted with the experience of the infrastructure, it is because its ideology held an inaccurate conception of perception. Perception was conceived as adherence to oneself. But if perception were immediate and immersive, we would forget it. For the experience to take shape, we need a memory and a time lag: I have to perceive that I perceive. This doubling of perception, which Kant had already noted, displaces us in relation to ourselves, and it is precisely this gap that VR amplifies. One can take this as a disappointment, or view it positively.

Ideomaterialism is therefore not only a critique of discourse; it is also a material and artistic practice. This practice consists in infiltrating the gaps between superstructure and infrastructure and seeking in them elements of production. Generally speaking, the incident is a foundation of art, because the incident is not an operating error but the underlying structure of technologies. We understand, then, that ideomaterialism deconstructs the instrumental and anthropological conception of technique.