Vogon Today

Selected News from the Galaxy

StartMag

Because AI is not as smart as we think


Reflections prompted by an interesting interview in the newspaper La Repubblica with the Italian mathematician Alessio Figalli. A comment by Michele Guerriero

An interesting interview in the newspaper La Repubblica with the Italian mathematician Alessio Figalli – winner in 2018 of the Fields Medal, the Nobel Prize of mathematics – who teaches at the Zurich Polytechnic (ETH Zurich), opens intriguing perspectives on the future relationship between mathematics and Artificial Intelligence. The serious fears and great opportunities raised by this vast field of research – which involves mathematicians, computer scientists, anthropologists, scientists and philosophers – challenge all of us about our future life: how it will change our work, the way we care for ourselves, our lives.
According to Figalli, AI can certainly replace some of the human activities of researchers, students and teachers in mathematics – activities such as grading an exam paper, or finding solutions faster than a human can – but it cannot discover new things. For the Italian mathematician who teaches in Switzerland the problem is real, and that is why, he argues, it will be necessary to explain to our students the beauty of the effort involved in finding solutions. Nor does the risk concern only the application of AI to mathematics, in the rapid search for solutions to problems: it also concerns subjects such as Latin or Greek, whose working method is comparable to that of mathematics (so much for those who claimed that the old Liceo Classico was poor in mathematical training!).
But the most interesting passage of the interview is Figalli's matter-of-fact definition of what Artificial Intelligence actually does. Behind Artificial Intelligence there are models built on neural networks, "artificial models of the brain". Mathematics decides how many layers a given model should have, and how strong the connections of these networks, modeled on those of the brain, must be. But why neural networks work is unknown, says Figalli. Their functioning is an empirical fact: a neural network works because, in practice, we see that it works. And we still don't know why.
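To make the "layers" and "connection strengths" concrete, here is a minimal sketch (my own illustration, not from the interview) of a tiny feedforward network in plain Python. The architecture – two inputs, three hidden units, one output – and the random weights are arbitrary choices, exactly the kind of design decisions Figalli attributes to mathematics; note that nothing in the code explains *why* such structures work so well at scale, which is his point.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums of the inputs,
    each followed by a ReLU activation (negative values clipped to 0)."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Architecture chosen by hand: 2 inputs -> 3 hidden units -> 1 output.
# The "connection strengths" are just these numbers.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [[random.uniform(-1, 1) for _ in range(3)]]
b2 = [0.0]

def network(x):
    """Two layers applied in sequence: the whole model is
    nothing more than repeated weighted sums."""
    return layer(layer(x, W1, b1), W2, b2)

print(network([1.0, 0.5]))
```

In a real system the weights are not fixed but adjusted by training on examples; the structure of the computation, however, is just this.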
Figalli's reasoning as a mathematician recalls two interesting elements showing that AI cannot easily replace human activity. The first is that artificial intelligence is not, by definition, intelligent: its effectiveness rests on the training of models. The second, the more interesting in my view, is the characteristic trait of human intelligence: intuition, the independence with which we find solutions. Intuition has always been one of the models of human knowledge, and it is not found in artificial elaborations. It is not found in the computer, nor in the Turing Machine, the first model of intelligence built on the architecture of the mind. It is not found in robots or in AI-based applications. Machines lack insight. Intuitions are properly human, and the fact that man is aware of an experience – to borrow the definition given by David Chalmers, the Australian philosopher engaged in the study of consciousness – poses what he calls "problem intuitions". Machines and AI, (un)fortunately, simply do not have them.

This is a machine translation from Italian of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/perche-lintelligenza-artificiale-non-e-cosi-intelligente-come-pensiamo/ on Sun, 09 Jul 2023 06:31:56 +0000.