Interview with Ossi Naukkarinen

Interview by Nicolò Berni and Sara Fiori

Ossi Naukkarinen is Professor of Aesthetics and former Vice President of Aalto University, Finland. He has published articles and books on the nature of aesthetics as an academic discipline, environmental aesthetics, everyday aesthetics, and mobility culture, e.g., Aesthetics as Space (Aalto University, 2020). Naukkarinen is a member of the editorial board of Contemporary Aesthetics. At the moment, he is writing a monograph about the current state of the humanities.

Segnalibri Filosofici: Your talk’s title was “What kind of humanities do we need when we have ChatGPT or other A.I. tools?” and it indeed asks a crucial question. However, the question rests on an assumption that is not so obvious: that we still need the humanities. We would like to start from here. You said that you feel optimistic and that you think the humanities will continue to play an important role in the future. Yet, you also said that there are many pessimists: what do you think is the reason for this mistrust in the future of the humanities? More importantly, what is it that makes you an optimist?

Ossi Naukkarinen: There is certainly mistrust and pessimism in certain contexts. But as I see it, it is a rather superficial mistrust that doesn’t touch the core of the humanities. It probably stems from overplaying those aspects of the humanities that are not seen as practical from the STEM and business point of view. It is true that humanists typically don’t have the skills to offer handy and quick solutions for the technical, mathematical, and economic issues tackled in companies. A STEM approach is indeed needed if we want to survive and prevent the environmental (climate, biodiversity, water) crisis from turning globally lethal. This may lead to thinking that humanists are not needed at all, in companies or elsewhere, and academic funding for the humanities is being cut in many countries. This, in turn, can lead to further decreasing numbers of students, as has happened especially in the US; people don’t dare to study the humanities because they think there are no jobs.
What is superficial in this? Even in the US, when students are asked, many would like to study the humanities and see them as meaningful for their personal development. They don’t mistrust the contents, the value, and the importance of the humanities but are simply worried about how to pay their student loans and rent. Historically, some sorts of humanities – or predecessors of what we now group together as the humanities, after Wilhelm Dilthey’s Einleitung in die Geisteswissenschaften – have always existed, and they have been understood to be important. Moreover, humanists’ works stand the test of time very well; we still read Plato, Aristotle, Seneca, Montaigne, Kant, and others. The humanities have survived all kinds of political and economic periods and environments. This is because they serve fundamental human needs. It is practically impossible to imagine a society where questions typical of the humanities would not be discussed.
This does not remove the funding challenges and other difficulties at the level of the individual humanist. But the nucleus is strong, and that makes me optimistic. Even now, there is demand for the interpretations of the world, life, and culture that humanists can offer – and there are still jobs available.
I agree with many who point out that the humanities are valuable and needed for learning critical thinking, supporting democracy, social coherence, understanding cultural heritage and other cultures, ethical and aesthetic development, and many other things. And I do not think that the environmental crisis can be solved by STEM skills alone; on top of that, we need changes in ways of thinking, philosophy, politics, world-views – areas where the humanities have a lot to offer, even if it is often impossible to prove their direct effect. But I don’t think it’s enough to just state this. Humanists must be able to show concretely how and why they are necessary. How do we support democracy? How do we strengthen ethical and aesthetic discussion? What do we do? How do we operate? Show what you can do, in practice! This is the central theme of my next book, which I hope to finish at some point next year: I try to make the value claims made in praise of the humanities, which are often very general and abstract, a bit more tangible. There are good examples that do show that, and how, humanists are specialists in addressing humanity, culture, languages, emotions, and other human-related issues in their temporal (historical) contexts – specialists in things that we, as humans, cannot but try to figure out. Moreover, the humanities approach these things in ways no other point of view covers. That’s why they are indispensable, and this is where my optimism comes from (apart from my overall optimistic world-view).

SF: Now we can get to the point. During your talk you tried to explain briefly what we usually deal with when we do humanities (we try to understand what we are, what we do, and the way we do it as humans), how we do humanities (the way we work is characterized by a certain “openness”, the use of natural languages, and a strong interest in temporality and history), and why what we do is valuable. Does this still apply today, or have things changed? There is a new wave of A.I., which is becoming increasingly autonomous, self-learning, and self-regulating, and which in the next few years could perhaps solve very quickly challenging tasks that the humanities can only tackle slowly and laboriously. That said, what kind of humanities do you think we need today and will need in the future? How do the humanities have to adapt to this new situation?

ON: A.I. is not just ChatGPT, Gemini, and other generative LLMs. Different versions of A.I. are everywhere, have been for some time, and they belong to a larger group of digital tools without which the world as we know it would not exist. Everything from basic infrastructure to your daily phone apps is computer controlled. Digital tools, in turn, belong to an even larger group of technologies, some of which exist as physical machines, some as organizing principles of activities. There is nothing fundamentally new in this, as Jacques Ellul already pointed out in the 1950s in his La technique, ou l’enjeu du siècle.
True, the latest A.I. innovations are changing many things, but essentially there is no huge difference compared with earlier times. Humanists must both adapt to the latest technologies and try to affect them so that they serve us humans (and other organisms, as post-humanists have long emphasized) and do not enslave us. This means that we must use and analyze them to understand how they work and what they can do. It also means collaboration with engineers and scientists who have knowledge about those aspects of A.I. we don’t. That is also the best way we can affect them: we (among others) should tell the scientists and engineers what A.I.s should do and what they should not, and engineers are typically more than happy when provided with new challenges to solve. All this also brings new types of jobs for humanists. We are needed to make A.I. understandable for all, from school kids to grandmothers. I quote the mission statement of the Finnish Center for Artificial Intelligence (which also includes humanists): “A.I.s we create can interact with humans in ways that are understandable, trustworthy, and ethical.” Well, yes, sounds good – let’s make it happen.
I cannot see that any kind of A.I. could solve humanistic issues for us humans just by itself. A.I.s are tools that can do astonishing things: propose new ways of thinking, effectively summarize large sets of materials, analyze and combine points of view, write texts – and no one knows what they will be capable of in the coming years. But anything they produce now and in the future is only a proposal, a suggestion that must not be accepted just like that. Their internal processes are, to some extent, autonomous, but their outcomes cannot be. It is our job, necessarily, to estimate whether the results they produce are of any use to us. A.I.s are much better at many tasks than humans are, but so are traditional power tools, cars, and Jacquard machines. It is always us humans who have to evaluate and think about what we want to do with them and why. Humanists are trained in building good arguments, not simply uttering personal beliefs, and good arguments will also be needed in the future. The humanities are not religions, where all you need is a leap of faith, or arts, where an interesting personal point of view might be the point; no, the humanities try to build arguments that show why we would benefit from thinking in the way the humanist proposes. We need to build the arguments that critically examine what kind of A.I. is good for us and why. This does not mean that humanists could prove their point in any strict sense (as mathematicians can, within certain frames), but humanists’ discourse is argumentation nevertheless, based on notions we can consider more or less true (while understanding that “true” is a philosophical problem of its own).
I would also think carefully about which kinds of things require quick, effective advancement and which benefit from slow and profound consideration, sometimes with no final and clear-cut conclusion. It would be great if A.I. offered a quick fix for the environmental crisis or found an effective cure for cancer in a day, and it already does quicken many steps in research projects. But what would a quick fix for your personal development as a human be? We all grow and learn things throughout our lives. The humanities offer tools and materials for that, not decisive solutions, quick or otherwise. You probably know the video on YouTube where Stephen Fry reads Nick Cave’s letter about ChatGPT to Leon and Charlie, his fans, I guess. I don’t agree with everything in it, but one of its main messages is very wise: struggling and hard work are valuable as such and make life interesting. If you achieve something without any effort, with A.I. or otherwise – what’s the point? Can you be proud of that? Moreover, now that it is so easy to produce, for example, academic articles with A.I., journals are already in trouble with increasing numbers of submissions. I don’t believe that we really need all that can be generated with A.I. at an accelerating pace: countless new apps, services, objects, and more.

SF: You gave your talk in the framework of a conference titled “How can we use A.I. sensibly in aesthetics?”, and the word “sensibly” was key (whether meant as “responsibly”, “intelligently”, “critically”, or as something that pertains to sensible experience). We would like to explore this side of the problem as well. As a matter of fact, digital tools and computers have changed some of the ways in which we interact with the environment. We have lost some skills and perhaps learned new ones. To take an example from your talk, we might no longer need to learn to write in other languages the way we always have, considering how well machine translation programs work. How is our relationship with the world changing, and how will it continue to change as we work with these new A.I. tools?

ON: It is impossible, at least for me, to foresee what will happen in the future. Instead, think how deeply cell phones (which are A.I.s in many respects) have already changed our behavior and our relation to the world. The ways we navigate in a new city, listen to music, communicate with friends (or “virtual friends”), try to find partners, learn new things, and do our shopping have all changed. Our gaze is on a small screen, and even our posture has changed, so that some have a so-called phone neck. Have you tried living a week without your phone lately? That requires a special effort because you are cut off from more or less everything that is normal nowadays. Is this sensible? It is not self-evident that it is not.
Our phone-dependence means that we are tightly anchored to one user interface of the whole super-complex digital world. At the same time, we typically don’t see or even think about the whole, the complicated system behind the interface. It is an invisible and a-aesthetic basis of our existence. Our everyday aesthetic perception doesn’t reach it, and in that sense, sense-based sensitivity is superficial and not very sensible. Yes, we have learned new skills, and fewer and fewer of us need the old-school skills of agrarian society. This necessarily affects our sensuality as well. We have become more vision-oriented with our screens, and the little physicality that is left is concentrated in our fingertips, which don’t feel much when they swipe the smooth surface. Compare that with a farmer, cook, or carpenter who smells, listens to, feels, and even tastes the materials that are important to them. Their relation to the environment is much more many-faceted, bodily, and sensual. Their life has more aspects of sense. No, I am not against new technologies, and I have no desire to return to the agrarian world, but I don’t want to lose too many of my old-fashioned physical abilities either. To me, they make life more versatile and interesting, and when using them I am not quite so dependent on the digital infrastructure, which, in some situations, may not be available. On top of that, I want to understand something about the layers of our world that are not easily perceivable, to make sense of the non-sensual, the a-aesthetic. Without that, I’d be at the mercy of those who know better.
With effective A.I., we can become very lazy with mental skills such as language and mathematics. Why would we learn them ourselves if we trust computers? Well, in many cases, I would not blindly trust them. How can I check when and why I can use them if I have no idea what the criteria for a correct solution are? Even a traditional pocket calculator is much better than any human at mathematical operations, but you have to know how to use it and what it does in principle. If you don’t, you cannot know which answers are correct and useful and which are not. As current and future A.I.s are so much more powerful than pocket calculators, figuring out what they do and how they operate is much more challenging, if not overwhelming – and as soon as quantum computers are combined with them, even more so – but I see no other option than to try to understand what is going on. And here, especially: what is going on in the human-computer interaction where we are the other main player?
It is good to keep in mind that in many ways A.I.s are no more complex and difficult than other things we are used to coping with (sometimes rather sensibly). We do not know everything about how the human brain works, we do not know how other animals think, nor do we know all that we should about climate change. Some scientists know quite a bit about their special field, but average people have to settle for much less. Still, most of the time, we can manage. Not always very sensibly, for sure, but somehow, and we can try to improve. I don’t see why this would be different with A.I. It is a new type of thing, but we will learn how to deal with it.

SF: Our previous question brings a further issue to the fore: how should we deal with this change? For example, you said that one of the risks not only of A.I. but of all digital tools is to make us believe that everything important can be digitized, while there is a whole world out there. Is it a matter of defining the boundary between experiential impoverishment or laziness and the opening up of new opportunities for organism-environment interaction? Or, to better understand the issue, should we shift our focus to a different type of analysis?

ON: I partly addressed this in the previous answer, but let’s continue a bit (and I will discuss these themes, too, in my forthcoming book). The relation between the analogical and the digital is interesting. If we say that not all is digitized, it can mean (at least) two different things. First, in practice, many things that could, in principle, exist in digital format have not been digitized (or created digitally in the first place), i.e., they have not been converted into data that computational systems could process as discrete data units, fundamentally on the level of electro-magnetic impulses on micro-chips (and qubits, differently). Second, there are aspects of things that cannot be digitized and thus must remain analogical.
More and more of the things that can be digitized will be. However, for practical, economic, energy, and material reasons, there are limits to that. Necessarily, a lot will remain non-computerized, and in the future, too, we will need skills to work with those non-digital things. Note that things can, in principle, be digitized without computers. This means that we approach them through a conceptual model that selects and simplifies certain aspects of them into data units that we then process by thinking (sometimes with paper and pen). This has been done in accounting, say, for thousands of years. There, we pay attention to only certain aspects of things (price, volume, ownership, etc.), and we use units and quantities defined in the SI system.
An analogical approach, typical of aesthetics, is different. In theory, perhaps all the knowledge that we as humans have can be digitized, and on some level the human brain does just that in its own complex way (it makes use of electro-chemical impulses and neuron activation). In practice, in our everyday living-worlds, in our experiences, we feel, perceive, and approach things as a continuous multi-modal flow of something for which we don’t have a simple model or algorithm to process. Before I go out, I can check the digital thermometer, which gives me a discrete, exact number telling me the temperature – but nothing more. However, when I step out, my whole body perceives a rich mix of things related to the weather: wind, moisture, the temperature, scent, the feel of the sunlight on my skin, and this may affect my mood and how I behave. Nelson Goodman has written interesting things about these different approaches in the context of art. With art, we have learned to pay attention to everything, even the tiniest nuances. Anything can be important. This is analogical, rich, replete, and dense – while the digital is simplified, selective, discrete. I cannot see why we would not use both types of approach in the future as well, and it is important to consider when each is needed and where their potentialities lie.
Interestingly, when computers (A.I.-based or otherwise) produce something, they operate digitally. Yet, whatever the outcome is, it is something we, as humans, approach with our senses, analogically. Pictures, videos, sounds, texts. Our relation with our phones is also analogical: we look at, listen to, and touch them with our bodily organs. The computational-digital and the bodily-analogical merge.

SF: Aesthetics plays a privileged role in our questions; however, it needs to be understood not simply as a philosophy of art but in a much broader sense. In 2019 you wrote an article, “Feeling (with) machines”, whose purpose was to investigate how the use of computers, which pervades our daily lives, also affects our everyday aesthetics, that is, how we experience and interpret the things, actions, and events that we interact with in our daily lives. In that article you mentioned systems partly capable of self-learning, machines that are more and more independent and active, and A.I.-generated art. Not least, you argued that they can be involved in generating aesthetic phenomena and experiences. Are the aesthetic phenomena and experiences resulting from this very process of generation peculiar, or are they indistinguishable from the others? Are there specific elements that characterize the aesthetic experience of A.I. products or operations?

ON: The dream that we can build machines that can imitate us is age-old. Think of Pygmalion. Now, we are there. A.I.s can mimic certain aspects of human behavior so well that it is often impossible to say whether a text, video, song, or picture is human-made or an A.I. creation. At the same time, there are still many contexts where the difference is clear, and there is no perfect humanoid robot. Perhaps one day there will be; let’s hope it won’t mean a Blade Runner world in all respects.
In many ways, computers have been changing our everyday aesthetics and art for a long time. Phones, as I said above, are the clearest example. Perhaps even more importantly, robots and computers have played a key role in the mass production of all consumer goods since the 1960s. Just think how profoundly that has changed everything we live with. Moreover, artists such as Desmond Paul Henry started to create art with computers as soon as they got their hands on them, also in the 1960s (I keep returning to history because that’s one thing humanists do).
Now, what is A.I. changing? It can take care of many phases of artwork production quite autonomously, and unlike in more traditional computer-based art projects, the process is a black box, meaning that the human artist does not know exactly what is behind the outcome. There are more surprises, and it is not obvious which aspects of their tool artists can really master (as a painter masters her tools and materials). Sometimes A.I. produces something that could have been created by a human artist, and we cannot tell the difference. At other times it produces something that we could not have achieved. So what?
For us humans, who interpret the outcomes, it is crucial to know what the production process was (remember the Nick Cave letter, from one perspective). This is to some extent comparable with how we must think about fakes and forgeries and about the other types of indiscernibles Arthur C. Danto has written so much about. Information about the author and the process is essential and might change everything. Objects (understood broadly) cannot be evaluated in isolation; they must be put into the context of a whole culture of interpretation. This was also what Kendall Walton pointed out in his classic article “Categories of Art”. For now, there are no relevant interpretation categories for A.I.-created art in its many forms. We haven’t yet figured out what their standard, variable, and contra-standard properties are and why, or how to distinguish them from the categories of human art. For example, it seems to be important for certain types of heavy-metal guitarists that they can play very fast (“more is more”, as Yngwie Malmsteen reminded us), and undoubtedly that is difficult. For a machine, that’s no feat, because it can be programmed to play so fast that the human ear hears only a monotonous hum. It would be good to know who or what is playing.
To me, it seems that building machines that ape (!) humans can be an interesting science and engineering challenge. In a way, though, I don’t see the point. There are already too many humans on this planet; why build replicants and waste valuable resources on that? Moreover, for many things, the optimal solution is not the one humans can achieve. An excavator is much better for digging holes than a machine that would imitate a human with a spade. Why should we think differently in the arts? A.I.s can be used for creating things that humans could not achieve, and in some cases this has been done, for example in architecture, where new types of structures have been created. That is interesting and enriches the art world.
Ok, if the goal is to produce an endless flow of trance music for clubs, A.I. is really good at that, based on what humans originally invented, but it brings us nothing new. I would rather hope for a different, two-tier approach: let’s use A.I. for creating things that humans simply cannot do and, at the same time, see whether there are still things humans can do but A.I. cannot (for the time being, it cannot, for example, take care of renovating an old house).

SF: Let us now talk about art. When we are dealing with a work of art, some concepts traditionally come into play: among them, the concepts of creativity, authorship, and expression, but the list could be much longer. In front of a Picasso painting we are generally able to use them: but what if we look at the famous “Portrait of Edmond de Belamy”? Is it a result of creativity? Is there an author? Who is it? Is the work expressive? It is known that A.I. can produce text, images, and music. Do these cases of so-called A.I.-generated art lead to a kind of rethinking of the traditional concepts of the philosophy of art and aesthetics?

ON: Again, I touched upon this above, but let me add a couple of points.
As a legacy of romanticism and modernism, the idea of individual, genius-type creativity is still overplayed in the arts (and in business consulting), even if postmodernism luckily managed to strip away some of its glory. Most art is not exceptionally creative but repeats what is conventional in the field. There’s nothing wrong with that, and it doesn’t mean that artists are not creative at all. Still, I don’t think that they are more creative than scientists, cooks, priests, or anyone else. Claiming the opposite puts unnecessary pressure on poor art students and is based on nothing.
I don’t see why A.I. could not be creative in the same way as we are. It can combine things in surprising and fresh ways. Eduardo Navas has a fairly new book on this, with lots of examples, The Rise of Metacreativity. He talks about meta-creativity and remixes, where humans and machines collaborate and combine different layers of creative work into new, constantly changing wholes. Technically, this is much easier than before, which means that more and more people are doing it. There are elements of both human and A.I. creativity put together. Again, what is created like this will eventually be evaluated by us humans (yes, even evaluation, or parts of it, can technically be outsourced to A.I., and this is what happens with students’ assignments at universities, like it or not; however, I believe that in many cases we still want to evaluate artworks and other aesthetic phenomena ourselves).
Who is the author when we use A.I.? That is often difficult to say. Still, is that so different from, say, movie production, where there is a big team of people working with all kinds of equipment (produced by other teams of people and machines at factories)? Yes, in principle, the elements there are somewhat more transparent, and we can point out who did what and how each tool functions, whereas with A.I. that remains fuzzier. Yet, I think this is only a difference of degree. In both cases, the “main” artists are managers organizing and combining (creative) work. A similar trend had been going on in contemporary art programs for a long time before A.I. appeared. It is very typical to educate artists to work more like curators who can manage processes with teams and collaborating partners rather than as skilled artisans and craftsmen (whom artists can hire to realize their ideas). Now, A.I. is a new kind of collaboration partner.
The question of authorship is also related to copyright and other legal and ethical issues. Some think that A.I.s could be treated similarly to companies, which are not humans but still have legal rights and responsibilities (not tied to their individual employees). It is interesting to see how this thinking may develop. However, it is more difficult to think that A.I.s could have ethical responsibility or that they could express something. That would mean that they are truly autonomous actors, probably with emotions and self-consciousness. Well, they might become that, but first we should know what it means. If there were a humanoid robot that behaved exactly as we do, and we could not see the difference between it and us, would it matter what is behind its behavior – a biological brain or a computer, socially learned, biology-based emotions or algorithms? I really don’t know. Luckily, such creatures do not (yet) exist.

SF: To conclude this interview, we would like to draw at least some threads together. In light of what we have said, can the humanities, and more specifically aesthetics, help us understand the A.I. phenomenon? And, on the other hand, can A.I. make us reconsider the way we do the humanities, and more particularly aesthetics? If this is the case, how?

ON: The humanities can help us understand ourselves as cultural beings, with perspectives, concepts, and tools typical of us. Art can help in that, too, and so can the sciences, sports, and many other approaches. A.I. is a great mirror that forces us to reconsider the human way of existence once again. Part of self-understanding is always thinking about how we relate to something or someone else. How do Finns differ from Italians, and in what respects are we similar? Now, A.I. is a new kind of phenomenon that we are related to, and in some respects its many variations are very similar to us, in other respects different. How we are, or should be, human is a theme that anti-, post-, and trans-humanists have been pondering for decades – and it is the same theme that troubled philosophers and religious thinkers in Antiquity and before. A.I. seems to bring some new aspects to this discussion.
Aesthetics deals with certain aspects of human (and animal) life, the ones that have to do with, for example, perception, art, beauty, enjoyment, taste, emotions, and a bunch of historically developed concepts such as the sublime and the picturesque. There’s a lot to do when we try to figure out how A.I. relates to all these things that are so important and dear to us. That is something we won’t be able to make sense of on our own; we have to collaborate with scientists, engineers, and others. Undoubtedly, that will help us understand what we are and what A.I. is in a new way. In practice, this simply means that as we use A.I.s we need to constantly analyze and discuss what is happening and what is not. Little by little, the best ways to understand what is going on will stand out.

You are welcome to take part in this discussion!

Sara Fiori: M.A. student in Philosophical Sciences, University of Bologna. She received her B.A. in Philosophy from the University of Cagliari. Her interests lie at the intersection of aesthetics, the philosophy of art, and the philosophy of literature. She aims to tackle them from a broad and multidisciplinary perspective.

Nicolò Berni: M.A. student in Philosophical Sciences, University of Bologna. He received his B.A. in Philosophy from the University of Bologna with a thesis on the concepts of constitution and passive synthesis in Husserl.