Interview with Daniel Andler

Edited by Martina Bacaro

Daniel Andler, trained in mathematics and philosophy in Paris (thèse d’État, 1975) and at UC Berkeley (PhD, 1973), taught mathematics before transitioning to philosophy. He held the chair of philosophy of science and epistemology at Université Paris-Sorbonne (now Sorbonne Université) from 1999 to 2015, becoming emeritus in 2015, and is an honorary member of the Institut universitaire de France. He was elected to the Académie des sciences morales et politiques in 2016.
Andler’s research focuses on the foundations of cognitive science, its implications for understanding human affairs, and its intersection with philosophy. His work spans models of the mind, contextuality, and reasoning, advocating a minimal naturalism to bridge cognitive and social sciences. He also explores the application of scientific knowledge to policy and education, addressing transformations driven by cognitive science and technology. He has published extensively, including the books La Silhouette de l’humain (2016) and Intelligence artificielle, intelligence humaine: la double énigme (2023), and a co-edited textbook on cognitive science (2018). He founded and directed various initiatives, such as the Department of Cognitive Studies at École normale supérieure (ENS), the “Sciences, normes, démocratie” research team, the Société de philosophie des sciences, and the Cogmaster program (ENS-EHESS-University of Paris). He was a founding member of the European Society for Philosophy and Psychology and recently directed Lato Sensu, a web journal. In 2019, he initiated a study on “Emerging technologies and collective intelligence” at the Académie des sciences morales et politiques, funded by the Fondation Simone et Cino Del Duca.

The interview is based on Daniel Andler, Il duplice enigma. Intelligenza artificiale e intelligenza umana (Einaudi, 2023).

Artificial Intelligence (AI) is undeniably experiencing a golden age, both in terms of technological advancements and its widespread popularity among the general public. Just fifteen years ago, the term "Artificial Intelligence" was mostly confined to university computer science labs and the discussions of philosophers in cognitive science. In recent years, however, it has come to denote a vast array of technical applications that have transformed—or even revolutionized—how tasks are carried out in many domains of daily life and work across much of the world. As a result, questions once reserved for the realms of philosophy and cognitive science have entered the everyday imagination of anyone interacting with AI-integrated devices: is there a difference between how my computer thinks and how I do? And if so, what is it?
Daniel Andler, professor of mathematical logic, philosophy of science, and cognitive science, emeritus professor at the Sorbonne, and member of the Académie des sciences morales et politiques, offers a potential pathway to address these questions in his book The Dual Enigma: Artificial Intelligence and Human Intelligence. In the introduction, Andler is quick to clarify that natural and artificial intelligence are concepts that should not be conflated. On the contrary, although these notions are often treated as already well understood, grasping their meanings represents one of the greatest challenges of our time. The first step in tackling this challenge lies in distinguishing between the notions of mystery, problem, and enigma.

 

I distinguish the enigma both from the mystery and from the problem. The mystery lies beyond us; the distance separating it from our capacity for understanding seems too vast to imagine bridging. The problem, on the other hand, presents itself as a task within our reach. The enigma occupies an intermediate position between the two: it astonishes us, paralyzes us, but challenges us to solve it. And what we aim for are two explanations, not just one: the first provides the key to the enigma, resolving it; the second helps us understand what made it an enigma in the first place. (Andler 2023: ix)

 

Andler proposes that the challenge can indeed be tackled, but only by considering artificial intelligence as both a pathway for research and discovery regarding our cognitive abilities on one hand, and as a reflection of the evolution of our technical capabilities on the other. This dual intent manifests in at least two distinct meanings of the term "Artificial Intelligence." On one side, AI is the object being created—a pursuit that began in the mid-1950s and still retains some of the foundational assumptions that gave rise to the endeavor: an object endowed with specific capacities, namely, the ability to perform cognitive tasks which, if executed by a human, we would unquestionably describe as intelligent. On the other side, AI is the discipline tasked with building this object, a field that has developed through its own history, embracing various theories about human intelligence and the possibility of mechanically replicating it, and which today dominates the scientific discourse on the feasibility of creating intelligent machines.
It is through an exploration of the relationship between natural and artificial intelligence that Daniel Andler constructs a critical reinterpretation of AI's history in both senses mentioned above: as a materially realized object, referred to as an AIS (Artificial Intelligent System), and as an empirical discipline aimed at developing techniques to create machines or programs capable of reasoning intelligently and acting in ways consistent with such reasoning. In this sense, the book's title encapsulates the key points the author elaborates throughout the text, engaging in a rigorous dialogue with practical AI implementations, including the most recent, such as ChatGPT.
The dual enigma framing Andler's reflections is articulated into three core theses. First, artificial intelligence itself represents an enigma, its core lying in the ambition of cognitive scientists and computer scientists to create an artificial equivalent of human intelligence. Yet, the goal always seems just out of reach, reminiscent of a modern reimagining of Achilles' pursuit of the tortoise. Second, human intelligence also constitutes an enigma, one that can be analyzed in two stages: on one hand, intelligence should not be seen as an intrinsic quality of the mind but as a property exhibited through the behavior of an agent, defining the relationship between an individual and their world. On the other hand, as a characteristic of the coupling between an agent and its environment, intelligence cannot be confined to specific, explicitly cognitive tasks but instead reflects a general adaptive capacity demonstrated in vastly different circumstances.
The third thesis posited by the author is that the two enigmas—the one defining human intelligence and the one shaping artificial intelligence—are inseparable. Evidence of this inextricable link lies in cognitive science, a discipline born in parallel with the sciences of the artificial, grounded in the interplay between the two, where each has nourished the other.
In the first part of the book, Andler provides the tools necessary to grasp the depth and complexity of these theses, tracing AI's evolution both as a discipline and as a series of artificial system implementations. In the second part, he seeks to open a space for reflection that avoids reducing the debate to the possibility or impossibility of achieving the goals set at the dawn of these sciences. Instead, he emphasizes the opportunity to reinterpret human and artificial intelligence in light of their mutual influences, the progress made in both fields, and the possibilities afforded by the creation of AISs capable of interacting with humans in diverse ways.

After a conference hosted at the University of Bologna and organized by the Centre for Knowledge & Cognition within the Department of Philosophy, we had the privilege of engaging with Professor Daniel Andler, posing a series of questions about the hypotheses and ideas explored in his book.

Segnalibri Filosofici: Professor Andler, thank you so much for being here. Writing a book on artificial intelligence today is undoubtedly one of the most challenging endeavors for a scholar. This is not only due to the field's vast and complex nature but also because, as you highlighted during your seminar, the rapid pace of technological advancements can render parts of a book outdated almost immediately after its completion. Indeed, this is precisely what happened to you during the writing process when ChatGPT was released to the general public. Recognizing the significance of this event as a turning point in the public's perception of artificial intelligence proved to be a remarkable insight, as your book is among the first to offer a thorough examination of these new AI tools from both a technical and philosophical perspective. What was your initial reaction to the release of ChatGPT? And what steps led you to decide to incorporate new sections into the book in response?

Andler: I don’t recall the exact sequence of events at the time, but my initial reaction was to try to make sense of what was happening. I began by conducting a bit of research—reading some foundational papers, including the Transformer paper—and trying to understand the technological advancements behind it. What struck me was the increasing level of technical sophistication in AI, which placed me in an unfamiliar and somewhat uncomfortable position. Up until that point, I had felt confident enough in my grasp of the technical underpinnings of AI to comment competently. But with these developments, I started to feel more like one of those philosophers who write about quantum physics without fully grasping the mathematics behind it—a state I strongly dislike. I firmly believe that if you don’t have a solid understanding of a subject, it’s better to remain silent.
This discomfort was compounded by the fact that many details about these technologies are deliberately obscured—trade secrets and empirical know-how that are not fully disclosed. This left me with a sense of unease. At the same time, I needed to grapple with a conceptual shift: while I had long viewed AI as fundamentally reactive, ChatGPT presented itself as something more. It is reactive in the sense that it predicts the next word based on a given input. But in doing so, it also generates entirely new texts, which introduces a generative aspect. This duality challenged my prior understanding of AI.
Ultimately, I realized there was something genuinely new and unexpected at play here, which required a thorough re-evaluation of my assumptions. I couldn’t simply dismiss it as “the same old story.” There was a great deal to unpack, and I had to reframe my thoughts accordingly.
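The reactive-yet-generative loop Andler describes can be made concrete with a short sketch. The Python toy below illustrates only the autoregressive mechanism, not any real language model: the vocabulary and the probability function are invented placeholders. The point is that the system only ever predicts the next token from the text so far, yet by feeding its own output back in, it produces wholly new text.

```python
import random

# Toy stand-in for a language model: given the tokens so far, return a
# probability distribution over a tiny vocabulary. A real LLM computes this
# distribution with a transformer over billions of parameters; the
# generation loop below is the same either way.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_next_token_probs(context):
    # Hypothetical placeholder: slightly penalize tokens already used.
    weights = [0.1 if tok in context else 1.0 for tok in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, max_new_tokens=10):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = toy_next_token_probs(tokens)
        # The "reactive" step: sample one next token from the distribution.
        nxt = random.choices(VOCAB, weights=probs, k=1)[0]
        if nxt == "<eos>":
            break
        # The "generative" aspect: the sampled token is appended, so every
        # later prediction is conditioned on text the system itself produced.
        tokens.append(nxt)
    return tokens

print(" ".join(generate(["the"])))
```

On this picture, all the sophistication of a large language model lies in how well it computes the next-token distribution; the loop that turns prediction into generation is exactly this simple.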

SF: This might be why you describe the current state of artificial intelligence as the dawn of a third era—one that is neither fully symbolic nor purely connectionist, but rather represents a synthesis of narrow symbolic architectures and massive generative models. My question is: what do you see as the defining characteristics of this new era? And, in your opinion, what should we expect given the trajectory we are currently observing?

Andler: We don’t have a complete account of how large language models are put together; parts of it remain trade secrets, notwithstanding the publication of academic papers and industry documentation. However, it seems clear that these systems are far from being mere old-style connectionist models on steroids. The transformer architecture itself implicitly encodes or mimics symbolic operations, and models such as GPT-4 are an assemblage of transformer-based models, harnessing connectionist resources for symbolic-like operations. The other reason I have for believing that a third era is dawning is, simply, that it is what some of the deepest thinkers in AI research have been saying over the last few years, either explicitly or implicitly, when they speculate that new ideas are required if we are to reach the ultimate goal of synthesizing full intelligence.
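The symbolic-like flavor Andler attributes to the transformer is often located in its attention mechanism, in which each position performs a soft, content-based lookup over all the others, loosely analogous to retrieving the value bound to a key. Here is a minimal single-head version in plain Python with numpy; the toy dimensions and random inputs are ours, not drawn from any particular system.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position computes a soft lookup over all key positions
    and returns a weighted mixture of the corresponding value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: rows sum to 1
    return weights @ V                                # blend the value vectors

# Three token positions with 4-dimensional embeddings (toy numbers).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)           # self-attention
print(out.shape)                                      # (3, 4)
```

Reading this softmax-weighted lookup as an approximate, differentiable key-value retrieval is one way to cash out the claim that the architecture implicitly encodes or mimics symbolic operations.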
I have no crystal ball that can tell me where the field will go. But I’ll venture two conjectures. The first is that revisiting the symbolic paradigm might run into some of the same limitations it encountered during the first phase in the history of AI, but that, on the other hand, it may be able to leverage the progress achieved by cognitive science and neuroscience since then.
The second conjecture is that, while some cutting-edge efforts will go into developing a foundation for this new paradigm, most of the coming period will consist of consolidation. I don’t expect that in five or ten years we will think about AI in a fundamentally different way. Instead, I anticipate that we will build better systems, classify them more effectively, and, most importantly, develop practical know-how for using AI systems. This includes understanding where challenges might arise and how to navigate them, much like we’ve learned to use cars.
In the early days of automobiles, there were no rules, and accidents were frequent. Even when I was younger, cars often had flaws—some would oversteer, others would understeer. Today, even the most affordable cars are remarkably reliable, and we have learned how to handle them safely. For example, we know to adjust our driving on roads that are wet, sandy, or icy.
Cars are, of course, far simpler than AI, but I believe a similar process will occur. We will develop methods to control AI, guidelines for its proper use, and a social framework for its responsible deployment. Just as we wouldn’t hand the keys of a car to a six-year-old, we won’t allow unrestricted access to AI for anyone who wants to use it indiscriminately. Over time, I think we will cultivate both an individual and collective understanding of how to manage AI effectively.

SF: I now have two more theoretical questions about your book, particularly concerning the relationship between cognitive science and artificial intelligence. In your work, you outline a series of mutual influences between these two fields, which have shaped the current diverse landscape regarding the role of intelligent technologies and the questions surrounding their future—and, by extension, our future in terms of knowledge. How would you describe the current relationship between artificial intelligence and cognitive science? Do you see it as a direct consequence of this reciprocal interaction?

Andler: Well, at the moment, there’s still a lot of wishful thinking. Many argue that there should be more input from cognitive science into AI, and perhaps even some influence from AI on cognitive science. For example, Nancy Kanwisher, a very sharp researcher, has pointed out how empirical studies of deep learning networks—particularly the way these networks gradually transform crude representations into more sophisticated ones—mirror processes observed in the visual cortex. She believes that these models allow us to explore mechanisms we cannot directly study in the human brain due to ethical and technical limitations. According to her, this is a significant and promising contribution of AI, especially through deep learning, to cognitive science.
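The gradual transformation Kanwisher points to is a layer-by-layer re-description of the input: in trained vision networks, early layers respond to simple features and later layers to increasingly abstract ones, a progression that has been compared to successive areas of the ventral visual stream. The sketch below, with random weights and invented layer sizes, illustrates only the mechanism of recording intermediate representations for such model-brain comparisons, not her actual methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

# A toy feed-forward stack: each layer re-represents its input. With trained
# weights, early layers would respond to edges and textures, later layers to
# object parts; here the weights are random, so only the stage-wise
# transformation itself is shown.
layer_sizes = [64, 32, 16, 8]
weights = [rng.normal(scale=0.3, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.normal(size=(1, 64))   # a "crude" input representation
activations = []
for W in weights:
    x = relu(x @ W)
    activations.append(x)      # record each intermediate representation

# In model-brain comparisons, each recorded layer's activations are then
# correlated with recordings from successive cortical areas.
for i, a in enumerate(activations, 1):
    print(f"layer {i}: shape {a.shape}")
```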
Conversely, there are figures like Yann LeCun who, despite being one of deep learning’s pioneers, differentiates himself from colleagues like Geoffrey Hinton and Yoshua Bengio by asserting that true artificial intelligence cannot emerge from deep learning alone. LeCun argues that we need to revisit approaches similar to those developed by cognitive scientists 40 or 50 years ago, focusing on cognitive architectures. He has even proposed a cognitive architecture of his own, developed in collaboration with cognitive scientists, including Emmanuel Dupoux, who is part of his research team at Facebook. Dupoux continuously reminds LeCun of the insights cognitive science has to offer, and LeCun acknowledges that achieving machine intelligence will require architectures grounded in cognitive principles, not just massive networks trained on billions of examples. He believes the current approach is too crude to capture the full range of intelligent behavior.
That said, these ideas will likely be carried forward by younger researchers. The field has become so dynamic that it’s challenging to keep up with every new direction. Personally, I feel that until recently I managed to stay relatively up to date, though I admit that might be pretentious, and, of course, one doesn’t know what one doesn’t know. But I sense that this is no longer the case. It’s entirely possible that somewhere, perhaps in a lab in England or Italy, groundbreaking work is being done.
One area worth mentioning is artificial life and synthetic biology. There’s a wealth of ideas and models in these fields that seem relevant to artificial intelligence, yet the two domains have yet to truly converge. If I were to make a prediction, I’d say that within 30 years, we’ll see a synthesis of artificial life and AI, potentially opening up entirely new perspectives.

SF: In your book, you describe robotics as a kind of reality check for artificial intelligence — a testing ground, so to speak. Considering the potential convergence of synthetic biology, artificial life, and deep learning networks, could robotics serve as the testing ground or reality check for these models? Specifically, could robotics help validate the idea that neural networks might model the workings of our brains, or would you see its role differently? How do you think robotics can function as a reality check for this emerging era of artificial intelligence?

Andler: I’m not entirely sure, but I believe there are some genuinely revolutionary ideas in this area that have been around for some time. Unfortunately, I’d have to look up the exact references. I recall a student of mine who worked in the United States—he completed his PhD with me but also pursued another doctorate at the University of Connecticut under Michael Turvey, who developed a radical rethinking of the scientific status of perception and action. Traces of these ideas can be found in his thesis. What we’re talking about is a fundamentally different approach to physical movements. It involves considering the vast range of possible ways to perform an action, such as raising one’s arm, rather than focusing on identifying a single optimal method. This contrasts sharply with traditional approaches, whether symbolic programming or more modern methods based on deep learning. Both still aim at discovering the “optimal” solution, either through explicit conceptual analysis—where a sequence of commands is defined, executed, and refined—or by feeding models with countless examples. The pursuit of optimality remains the guiding principle.
To move beyond this paradigm, we would need to engage more deeply with the biological reality of movement. This might lead us toward a new framework reminiscent of what Rodney Brooks advocated: instead of starting with an abstract model of intelligence and then using it to control a robot, we could begin with the robot itself, allowing its embodied interactions to give rise to cognitive functions.
I find this perspective very promising. In fact, I’d recommend looking into the work of a young colleague of mine, Mehdi Khamassi, who trained in cognitive science, particularly the study of movement, in Alain Berthoz’s lab. He’s now working in robotics and exploring innovative ideas in this field.

SF: And the last question, which is perhaps broader, concerns the title of your book, which addresses the dual enigma of human intelligence and artificial intelligence. You mentioned that this enigma can be unveiled because it is not a mystery or a miracle. However, it is still more complex than a mere problem. So, from your perspective, what tools or approaches do you believe are essential to tame this enigma and perhaps deepen our understanding of what defines us as humans and them as machines?

Andler: That's a broad question. I think what I do is typical of the work in this field. In a way, I feel that the book attempts to unravel some of the enigmas without claiming to have completely solved them. They will remain with us for some time, but I hope that I have opened the way, to some extent, for unraveling these enigmas. One approach is by rethinking human intelligence as essentially the faculty given to us by evolution to deal with situations. In fact, one of the things I hope to explore in the coming years is to elaborate on my concept of "situation"—perhaps in more detail than I was able to in the book—and to show how this could also help solve some of the other mysteries or challenges in AI, such as the commonsense problem. There’s this long-standing idea that the commonsense problem hindered symbolic AI and remains unsolved, but I think there is a very close relation between the commonsense problem and my notion of situation. So, I believe that conceptual analysis, particularly relying on the study of scientific concepts, will be helpful. Another aspect, which is more of a cautionary note, is to stop being misled by the words and grammar of the problem. Sometimes the way a problem is framed makes it unsolvable, and I think that's one of the challenges we face. It always depends on the game being played.

Martina Bacaro completed her PhD at the Department of Philosophy of the University of Bologna – Alma Mater Studiorum. Her research focused on Human-Robot Interaction (HRI) and the philosophy of cognitive science, particularly exploring enactive and participatory approaches to interaction, including the dynamics of social cognition and sense-making processes in HRI. Currently, she is a research fellow at the same department, conducting studies on the normative aspects of concepts in cognitive science and robotics within the PRIN 2022 project “Normative Kinds” (PRIN 2022SYAW7A). Her work examines how embodied, situated, and participatory perspectives can inform the development of robotic architectures and enrich theoretical frameworks in cognitive science.