Thoughts from French Psychoanalysis on the Absence of Corporeality in the Unconscious of Emerging Complex Cognitive Systems

In this text we respond at once to two recent works by Giancarlo Minati and Luca Possati, who each called, one from the computer-science side and the other from the humanities, for the development of an AI unconscious in complex cognitive systems as an experiment toward more anthropomorphic machines; the performance an unconscious might add will not be addressed in this paper. We gathered many sources in psychoanalysis to help us understand what barriers might stand against such a project. In the light of Lacan, Anzieu, Leclaire and Winnicott, among others, we tried to explain how having a body, in the biological sense, makes a difference when recreating (a typically human preoccupation) an unconscious in AI. Of course, from a French psychoanalytic standpoint there are many conservative objections; while some can be easily overcome, the matter of innate desire and the body seems an understandable concern. It is also important to consider Possati's interesting conjecture that a computer can be an object of projective identification, while we can only say that it is a transitional object in Winnicott's sense. We can also study further, within psychotherapy, the behaviour of patient and therapist, with an algorithm we developed. Finally, we address the objections of French poststructuralist psychology to the creation of a human-like unconscious and advise testing Possati's theory with our device.


Introduction
The French word jouissance is almost untranslatable into English or German. Its Lacanian meaning has evolved considerably and departs significantly from its common meaning. Lacan considers jouissance as the pleasure one receives from a sexual object. The pleasure principle is a principle of limiting pleasure: it implies that one should not enjoy to excess. Yet while seeking pleasure, even in a limited form, the subject tends to go beyond the limit of the pleasure principle. This does not, however, result in "more pleasure" as one would expect, because eventually the subject can no longer enjoy, and pleasure becomes a painful experience. This is what Lacan calls jouissance (SVII, 218). Jouissance is not pleasure; it can be suffering. The effect of this suffering can cause the subject to feel a paradoxical enjoyment. "Masochism is part of the jouissance the Real offers" (Le sinthome, p. 90). Undoubtedly, in the wake of Freud, Lacan claims that jouissance is fundamentally phallic (SXX, 14). However, Lacan recognizes that for women there is an additional jouissance, which goes beyond phallic jouissance: an ineffable jouissance of the Other (SXX, 71).
As Serge Leclaire indicates in Psychanalyser, language is a barrier, a limit, and transgressing it is a necessary condition for experiencing jouissance; this is the organising principle of thought itself. This is the stance the majority of Lacanians take on the subject of ontogeny. In fact, what is prohibited is the literal demonstration, graphic or vocal, of jouissance. Just as it is advisable not to confuse a sign and a letter, it is necessary to differentiate between jouissance and pleasure. The jouissance in question here is the immediacy of access to the effect that eroticism can have, including its ultimate result, death. This is something a machine has no concept of, and no connection with, except in the fiction of 2001: A Space Odyssey. Sometimes jouissance comes from crossing that limit, the pleasure deriving from the ability to cross the barrier, a jouissance tempered by the assurance of a cyclical reversibility of desire. Is what is natural for human beings just as natural for machines? We know that some machine errors can be meaningful, but going from that to thinking that machines can be endowed with an unconscious is a little premature. The desire to reproduce is a natural phenomenon for humans but does not constitute a deep desire in machines. We must establish certain principles before putting forward what it would take for machines to develop an unconscious. An unconscious, and the manifestations that follow from it, is the result of what we are: beings of consciousness with a physiological brain, which registers chance in ways we do not fully understand, through reactions such as neuronal noise, with neurons that light up without us really knowing why.
"Finally, isn't the only way to achieve immortality for us, to have children?" (Freud) The hidden, unclear error in carrying out the AI program as a research object was addressed by Possati.
We can rely on the search for errors by AI to find phonological, semantic or graphic explanations, but it is necessary to know whether in these errors projective identification can operate on the psychotherapist or even on the machine.
The unconscious has meaning, while machines only produce pseudo-propositions according to Ludwig Wittgenstein (Wittgenstein, 1921). The unconscious is partially in the unspeakable and it makes us what we are as much as it constitutes our being, and it is inseparable from language. It is an established fact that we can revisit the most overt acts of our unconscious, but can an algorithm, even an emerging and advanced one do the same? No doubt, one day, they will be capable of it, but can we make sense of the errors of the machine?
The jouissance of exercising the unconscious has been completely ignored by a number of scientists. Yet it has been researched many times, starting with the first neuroscientific imaging studies, when researchers wanted to know more about the Freudian Id, Ego and Super-ego. Critics of psychoanalysis often focus on the fact that it diverges from an objective representation of the brain. We have a body that asks only to manifest and experience pleasure, but what is it that brings joy when one writes, speaks, or dances? For Lacan, jouissance is found in using language. Machine knowledge, meanwhile, only serves to be useful.
Wittgenstein once said: "ask what it [a machine] can do, and not what it wants to say". "It" doesn't mean anything yet.
The language of the unconscious is hidden, unclear and poetic. Machine language is brutal and clear (except with regard to hidden or emerging errors): only the user can see the meaning of the error; a machine never considers it. We may see the emergence of meaning from complex cognitive systems, but it must be recognized that, at the moment, technologies do not produce much meaning.
Metamemories are interesting in this respect, but how can one implement the archaic memories which are specific to the human body?
The question is, could it be possible to make a machine feel pleasure? Here again, the body is missing, and the machine must be closely associated with the human body through an interface, a neural interface, for example. The Xenobot is an organism in all respects, except that it is based on an artificial design. As it has a body, it has bodily senses and, consequently, homeostatic and sensory affects. This solves many of the problems associated with robot embodiment (see Dietrich et al., 2008, p. 150; Possati, 2021). This prospect, although very anxiety-provoking in certain aspects, has the merit of doing what is necessary to give a body to such a concept.
Our interfaces are particularly poor; digital phenotyping proves that we are not at one with the algorithm. Errors are the result of random, meaningless calculation; even if the expert and the user can find meaning in an error, the fact remains that this reveals nothing of a subject's unconscious. Perhaps that will come in time with the increased use of neural interfaces.
The question is whether we can find an emerging unconscious in the complex system of an AI machine.
Minati postulates that AI chatbots are likely to make miscalculations like humans and, as a result, to show unconscious symptoms: lapsus linguae, missed actions, etc. If we want to give the machine an unconscious, we must give it a corporeality. Here, however, a challenge immediately presents itself with the notion of "body": if we apply its definition (something which has specified functions due to its organs), an automobile, or even a computer, is also a body. It goes without saying that a body is alive. What best proves that one is alive is the path of mental debility over time; however, it is not given to all bodies, insofar as they function, to suggest imbecility (Lacan, Lesson I, December 10, 1974).
So, can the machine feel its organs like a human and enjoy or somatise in its stomach or its hands?
Probably not, and this is the barrier that prevents the machine from having an anthropomorphic unconscious.
The effect of anxiety or excitement is located deeper, in the activation of the amygdala. Excitement is a push to enjoy in the same way that anxiety can be a sign of something we have experienced or that we are going to experience. This causes stereotypical reactions, including hormone secretion, acceleration of the heart, vasodilation of blood vessels in the brain, kidneys, heart, lungs and muscles of the limbs, vasoconstriction of the skin and intestines as well as increased sweating. These are typical reactions, but they vary in magnitude. The localization of the ensuing jouissance is only the reflection of the scale of this excitement or anxiety.
Then, having a human brain means understanding some things a machine cannot: connotations, which are impossible to inhibit even when they are inappropriate or annoying, and which are constantly present when uttering words. For humans, it is a question of awareness, "limited to the meaning and background", produced by the connotative meanings one comes to associate with language.
Frequently, excitement is falsely associated with active semantic meaning, potentially causing a Freudian Slip.

This publication's fundamental hypothesis is that the concepts and methods of psychoanalysis can be applied to the study of both AI and human-AI interaction.

I apply projective identification to the study of AI. For instance, I claim that the concept of algorithmic bias is a kind of projective identification (Possati, 2021).
According to Lacan, the thought process is not, as with a machine, the conscious repetition of a previously implemented lexicon: "One does not speak to the subject, it speaks to itself and that is how it apprehends itself". "Other studies show that hallucinations result from a subvocal utterance (the small voice whose origin is difficult to locate); the brain activity recorded during these hallucinations is similar to that observed in the production of inner speech and in auditory brain imaging of a normal subject" (Lacan, quoted by Bazan).
Having a body involves having both the brain and the vocal organs, and this matters a lot from the standpoint of human language development. "Motor imagery recruiting specific neural networks prior to speech action, whether or not it is followed by execution [...] the internal counterpart of vocalization, the interior speech in all its forms: silent reflections and comments, refrains, swear words, prayers, expressions, slogans, fragments of sentences, isolated words, etc. The substratum of this speech imagery is constituted by the motor trajectories preparatory to the execution of articulation. This is why the jouissance of speech is inseparable from this preparatory process to vocalisation [...]". Moreover, gestures condition the subject's mental organization:

"[…] Motor images based on the efference copies of the commands sent to the muscles of the hands.
There is reason to believe that these motor images of gesture-in a similar way to motor images of articulation-have a non-negligible importance for the psychic organization of the speaker" (Bazan, 2007).
"In the case of a missing arm, the sustained intention to move the arm, not followed by an execution, will bring out an experiential experience of the arm. By analogy, we must then make the hypothesis that in the context of a conflictual representation, the sustained intention of uttering desire, not followed by execution, will cause an impression of presence or concern to emerge through a motor imagery correspondence to that of the enunciation or the action of desire. It would be the driving phantom of repressed representation". The machine does not desire anything and does not enjoy, and these phantom phonemes are Lacan's equivalent of master-signifiers, but who can say if several occurrences of a signifier in an AI would have the characteristics of a repressed desire? (Bazan, 2007) When Lacan says that desire is desire for the other, we should not rush to consider it in the sense that desire is a social entity, as if it were an affair between already constituted subjects. Lacan fought hard against any sociological interpretation of incest, including that of Lévi-Strauss, and in favour of an interpretation that he does not hesitate to qualify as "metaphysical" and stick to a sociological conception of desire. Society is neither more nor less real than each individual subject. It does seem that the Lacanian ethics, far from being an ethics of the other, is to the contrary, that of an abysmal solitude, which has more to do with the anonymity of what Merleau-Ponty called a "lived solipsism" than with a sort of invading sociability. Morality is made with something that comes from deeper than the ego. Kant's morality itself is still too "socialized".
As Possati postulates, one can seek, not without reason, to evaluate the impact of the presence of the computer, and of other AI devices such as smartphones, as objects of projective identification. Dr. Nasio once said during a seminar in Paris that the computer can be endowed with a small object a, in Lacan's sense, which magnetizes desire and has the function of a transitional object (Paris, 2021).
Once we consider the machine as a living being, how do we consider its bodily limits? It has no shell, no "moi-peau" (skin-ego), as Anzieu would have said (Anzieu, 1985). On the contrary, it has indefinite outlines and extra-corporeal extensions thanks to the connectivity of additional functions. It cannot experience the pleasure of a gesture or a caress, which profoundly differentiates it from the human. One can also consider an experimental algorithm based on the machine's NLP errors during a session, whose results are valid only insofar as the expert can rely on them. Again, the addition of a living body is the only way to give meaning to the "slip" of the machine. We propose a device which interprets the words of the analyst and which, instead of seeking the NLP interpretation closest to reality, looks for machine errors and inaccuracies in the NLP operation. These phonological approximations are then transmitted to a screen placed in front of the analyst, who retains or rejects the proposals made by the machine according to their semantic, phonological, graphic and pragmatic contiguities and their relevance in the expert's judgment. Finally, a researcher observing without interfering, perhaps after having filmed the scene on video, tries to analyze what the projective identification is about (Ogden, 1982) and what its nature is. In short, the lapsus of the machine applied to analytical therapy. But in this case the AI is only a mediator.
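The first stage of the proposed device can be sketched in code. The following is a minimal, hypothetical illustration, not an existing implementation: the function name, the similarity thresholds, and the n-best input format are all our assumptions. Given the n-best hypotheses of a speech-recognition step, it sets aside the most probable reading and surfaces the near-miss candidates, the machine's "slips", for the analyst to retain or reject.

```python
from difflib import SequenceMatcher

def machine_slips(hypotheses, min_similarity=0.5, max_similarity=0.95):
    """Return the top transcription plus its near-miss alternatives.

    `hypotheses` is an n-best list of (text, confidence) pairs such as
    a speech-recognition engine might return. Instead of keeping only
    the most probable reading, we surface the candidates close to it,
    but not identical to it: the machine's "slips".
    """
    ranked = sorted(hypotheses, key=lambda h: h[1], reverse=True)
    best = ranked[0][0]
    slips = []
    for text, _confidence in ranked[1:]:
        # Graphemic similarity is a crude stand-in for the semantic,
        # phonological and graphic contiguities judged by the expert.
        similarity = SequenceMatcher(None, best, text).ratio()
        if min_similarity <= similarity <= max_similarity:
            slips.append((text, round(similarity, 2)))
    return best, slips

# Hypothetical n-best output for one utterance; the slips would be
# displayed on the analyst's screen for retention or rejection.
best, slips = machine_slips(
    [("mother", 0.90), ("murder", 0.40), ("mutter", 0.35), ("cat", 0.05)]
)
```

The edit-distance-style ratio here only filters candidates; the decisive judgment of relevance remains, as described above, with the human expert in front of the screen.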
It is not legitimate to give to certain things, for whatever reason one imagines, a human function, such as representing thought. Thought is not a category; I would almost say it is an effect. It would be fair to say that, from its most fundamental point of view, it is an affect (Lacan, Les sillons de l'alèthosphère, May 20, 1970). As Anzieu writes: "The psychic apparatus is not only a system of transformation of forces; the relative arrangement of the sub-systems which compose it defines the psychic space, [...] still remaining, in Freud's imagination, very dependent on anatomical and neurological patterns, before finding their topographical basis in the projection of the surface of the body, against the background of which sensory experiences emerge as significant figures" (Anzieu, 1985). This topographical support has not yet been discovered, despite repeated attempts to prove, or deny, its existence by neuroscientific imaging.
Regarding the body, Anzieu maintains the position that the Ego has a mental sense and a bodily sense.
There should also be a third sense added: that of the fluctuating borders between the psychic ego and the bodily ego. He insists on the association of the first two in the waking state and their dissociation during sleep. Obsessive impulses and ideas come from the Super-ego and are accompanied by the feeling that a motor discharge will occur, even when it does not. In hysteria, conversion transforms the feelings of the internal psychic ego into bodily phenomena. In psychosis, they arise from the outside... (Anzieu, 1985).
Thus, we have areas of the personality which are poorly known but which exist, and which seem inseparable from physicality. Moreover, where neuronal noise seems random, the machine is deterministic and knows chance only through randomization and probabilistic, artificial algorithms.
Without a doubt, an AI "knows", in its simplest sense. But it does not have the enjoyment of the acquisition of this knowledge nor of its exercise.
We have tried to show that without a body the position of psychoanalysis is clear: an unconscious is impossible. For Anzieu, Lacan, Leclaire, and even Freud, it is impossible to see an unconscious emerging from an AI. The only possibility is to invent artificial intelligence machines to which a biological body would be added, with its desire, its enjoyment and its infinite quest for "the Thing", the Freudian Das Ding. We also presented the outline of an AI slip-up machine as a mediation between two individuals, to which we will return later; it would be good to seek the opinions of experts in the fields of computer science, AI and psychology on this idea. If the machine is endowed with a small object a for the user, what is to be done with this function in terms of both its absorptive and its excremental function? When one feeds on Winnicott's "transitional object", a residue remains.