
Diego Brasioli

food for thought

Italy adopts its National Strategy for Artificial Intelligence

The recently adopted Italian Strategy for Artificial Intelligence 2024-2026 is a crucial step for our country, which aims to take on a leading role in AI and in the technological transition, also thanks to the prominent part Italy has played through its G7 Presidency. This is the full text of the Italian Strategy for Artificial Intelligence 2024-2026 (also available in English). The document reflects the Government's commitment to creating an environment in which AI can develop in a safe, ethical and inclusive way, maximizing benefits and minimizing potential adverse effects. After an analysis of the global context and of Italy's positioning, it defines strategic actions grouped into four macro-areas: Research, Public Administration, Business and Training. The Strategy also proposes a system for monitoring its implementation and an analysis of the regulatory context, outlining the framework within which it must be deployed.

War, the end of reason

In 1964 Norberto Bobbio decided to dedicate his lectures in the philosophy of law to the theme of war and peace, exploring the concept of just war and arriving, among other things, at his famous thesis on the impossibility of justifying war in an era in which the use of such powerful weapons risks calling into question the very survival of the human race. «And after every war, thought Dori, after every battle, not one, but two, three, ten, a hundred versions. Who is right, in the end? What appears seems one thing, then another, and then the perspective changes again. In the end, what does it matter who is right, if reason itself has been lost?» (Diego Brasioli, il Caffè di Tamer, Mursia 2002, 2nd ed. 2023)

Can machines become conscious?

The answer, according to Yann LeCun (Meta, Turing Award winner), is yes, through world modelling: his vision of the future of artificial intelligence.

Artificial Intelligence, Consciousness and Emotions

The great neuroscientist Antonio Damasio, in his studies on the neural bases of cognition and behavior, has highlighted the importance of emotions in the decision-making process. His theories suggest that feeling and perceiving are fundamental aspects in guiding human choices. Artificial intelligence opens new frontiers in understanding the mechanisms that animate humanity: through data analysis and machine learning, AI can understand and anticipate the emotional reactions of users. This not only makes digital systems more effective, but also allows them to help us feel and perceive in unique and personal ways. «It is time to recognize these facts and open a new chapter in the history of AI and robotics. It is clear that we can develop machines operating along the lines of "homeostatic feelings". What we need, to do so, is to provide robots with a "body" that, in order to maintain itself, requires continual adjustments and adaptations. In other words, almost paradoxically, we must add a certain degree of vulnerability to the robustness so appreciated in robotics. Today, this can be done by placing sensors throughout the robot's structure and making them detect and record the more or less efficient states of the body, integrating the corresponding functions. […] Do these "feeling" machines then become "conscious machines"? Well, not so fast. Their "feelings" are not like those of living creatures, although they develop functional elements related to consciousness (feeling is part of the path towards it). The degree of consciousness finally achieved by such machines will depend on the complexity of their internal representations, concerning both the "inside of the machine" and its "surrounding environment". It is very likely that […] this new generation of machines would constitute a unique laboratory for the study of human behavior and the human mind, in many authentically realistic scenarios.»

Thinking Machines? The Original Experiment
by Valentino Braitenberg

When we consider the controversial notion of thinking machines, Braitenberg vehicles come to mind: a fascinating concept introduced forty years ago by the Italian neuropsychologist Valentino Braitenberg (1926-2011) in his book Vehicles: Experiments in Synthetic Psychology. Braitenberg imagines relatively simple machines that move on the basis of input from their sensors, through purely mechanical actions, yet can exhibit remarkably multifaceted behaviors, to the point of appearing intentional, as if endowed with real intelligence. This illustrates how simple rules and interactions can give rise to emergent behavior, a notion relevant in fields such as robotics, artificial intelligence and biology: a different and original way of looking at reality.
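Braitenberg's idea is concrete enough to sketch in a few lines of code. The following toy simulation is only an illustration of the principle, not anything from the book: all names, positions and parameters are our own arbitrary choices. Two light sensors are wired crosswise to two wheels; nothing in the program encodes a goal, yet the vehicle steers toward the light.

```python
import math

# A minimal sketch (illustrative parameters) of a Braitenberg-style vehicle:
# two forward-mounted light sensors, cross-wired to two wheels.

LIGHT = (5.0, 5.0)  # position of a light source in the plane

def reading(px, py):
    """A sensor excited more strongly the closer it is to the light."""
    return 1.0 / (1.0 + math.hypot(LIGHT[0] - px, LIGHT[1] - py))

def step(x, y, heading, dt=0.1, wheelbase=0.05):
    """One purely mechanical update: sensors -> wheel speeds -> motion."""
    # Sensors sit slightly ahead of the body, angled left and right.
    lx, ly = x + 0.3 * math.cos(heading + 0.5), y + 0.3 * math.sin(heading + 0.5)
    rx, ry = x + 0.3 * math.cos(heading - 0.5), y + 0.3 * math.sin(heading - 0.5)
    # Crossed wiring: the LEFT sensor drives the RIGHT wheel and vice versa,
    # so the wheel opposite the light spins faster and the vehicle turns in.
    right_wheel = reading(lx, ly)
    left_wheel = reading(rx, ry)
    v = 0.5 * (left_wheel + right_wheel)            # forward speed
    omega = (right_wheel - left_wheel) / wheelbase  # turning rate
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)

# No goal is programmed anywhere, yet the vehicle homes in on the light.
x, y, h = 0.0, 0.0, 0.0
closest = math.hypot(LIGHT[0] - x, LIGHT[1] - y)
for _ in range(2000):
    x, y, h = step(x, y, h)
    closest = min(closest, math.hypot(LIGHT[0] - x, LIGHT[1] - y))
```

Running the loop, the vehicle passes close to the light even though no notion of "target" appears anywhere in the program: the apparent pursuit is entirely emergent from the sensor-to-wheel wiring.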

The environmental impact of artificial intelligence

Large amounts of water are needed to cool the data centers that power AI systems, raising concerns about the environmental cost of the boom in generative AI. Several studies point out that the growing demand for AI could have a very significant impact on the extraction of water resources from underground and surface sources in the coming years: between 4.2 and 6.6 billion cubic meters by 2027, or about half the amount consumed by the entire United Kingdom in a year!

What are the answers to the potential risks of AI? Towards open source software

The question of whether it is more advantageous, and morally acceptable, for companies that create AI programs to keep the specifications of their computer code secret or to make them freely available to software developers around the world has been debated among industry experts for many years. Concerns that artificial intelligence is becoming a security threat have fueled the debate between closed source and open source software, which has grown increasingly heated with the rapid progress of new technologies. In an open letter published in July 2024, Mark Zuckerberg, one of the founders of Facebook and CEO of Meta, set out his company's position, considered by many to be revolutionary: to liberalize the use of its programs as much as possible. In particular, the Llama 3.1 405B model is now available to developers as open source, like all the artificial intelligence models Meta has worked on so far. This means that anyone can access, view, modify and redistribute the program's source code. For example, someone who wants to use Llama 3.1 to create a new chatbot that can converse naturally with users - just as ChatGPT or Gemini do - does not have to pay Meta licensing fees for the technology they are using. Zuckerberg argues that the open source approach to AI will allow the widest possible number of developers of new technological models to use this knowledge to build their own AI programs. At the basis of this reasoning is the awareness that it is unrealistic to think that a handful of companies can keep their AI technology secret for long, especially when Silicon Valley is evidently exposed to continuous attempts at industrial espionage.
But Zuckerberg’s ambition goes beyond these, albeit important, competitive aspects, and ultimately aims to create truly accessible and transparent AI systems, for the benefit of all: “Open source will ensure that more people around the world can enjoy the benefits and opportunities of artificial intelligence, that power is not concentrated in the hands of a small number of companies, and that the technology can be implemented more uniformly and safely across society.”

How much energy does artificial intelligence consume?

A key factor to consider is the energy cost required to run new digital systems, supercomputers, and quantum computers. It has been estimated that by 2030, data centers powering AI could consume between 85 and 134 terawatt hours of energy per year, equivalent to the consumption of entire nations such as the Netherlands, Poland, or Argentina.

Moore's Law and the Singularity

Progress in artificial intelligence is accelerating tremendously, along with our difficulties in understanding and managing its development. What seemed impossible only yesterday already appears outdated today, and we do not really know what tomorrow holds. In the mid-1960s the young chemist Gordon Moore - who would become one of the pioneers of microelectronics and co-founder of Intel - made a prediction that would shape the entire semiconductor industry: the number of transistors on a silicon chip doubles approximately every two years, leading to an exponential increase in computer processing power and a decrease in cost per transistor. Moore's law, which proved surprisingly accurate in the decades that followed, greatly influenced the digital industry, with rapid advances in fields such as computing, communication and electronics. But today, after sixty years, the question arises whether it still makes sense to speak of the validity of this law. Moore himself argued in a 2005 interview that the limits of his theory are essentially physical: it will hold until chips approach the size of atoms, and therefore, as miniaturisation proceeds, it is theoretically destined to run its course.

According to scholar and entrepreneur Ray Kurzweil, AI represents a paradigm shift in this scalability-based scheme of progress. His studies emphasise the 'singularity', a virtually unlimited explosion of the power of artificial intelligence that will render the entire history and culture of human civilisations obsolete. Kurzweil imagines that we will soon (in the coming decades, and in any case by 2050) see the advent of algorithmic systems capable of autonomously improving themselves in thousandths of a second, at an ever-increasing rate. Man and his role in the world would cease to exist in the way we have always known them.
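Moore's doubling rule lends itself to a quick sanity check. In this back-of-the-envelope sketch the helper function is our own, not from any cited source; the starting figure (Intel's 4004 chip of 1971, with roughly 2,300 transistors) is the standard historical reference point.

```python
# Back-of-the-envelope arithmetic for Moore's law: a doubling of the
# transistor count every two years. The projection is purely illustrative.

def moore_projection(start_count, start_year, target_year, period_years=2):
    """Project a transistor count forward under Moore's doubling rule."""
    doublings = (target_year - start_year) / period_years
    return start_count * 2 ** doublings

# Fifty years from 1971 to 2021 means 25 doublings:
projected = moore_projection(2_300, 1971, 2021)
print(f"{projected:,.0f}")  # prints 77,175,193,600
```

Twenty-five doublings from 2,300 transistors gives about 77 billion, which is indeed the order of magnitude of the largest chips in production around 2021: exactly the surprising accuracy the text describes.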
Geoffrey Hinton, the British-Canadian scientist who won the Nobel Prize in Physics in 2024, who is considered one of the fathers of AI and whom we have already met as one of the inventors of artificial neural networks, took the striking decision to leave his position as an executive at Google in order to devote himself entirely to the one mission now close to his heart: warning the public about the most disturbing aspects of artificial intelligence. While the most immediate danger is that technology will make it increasingly difficult to distinguish reality from fake news originated by AI systems, he fears that the future holds even more alarming scenarios. According to Hinton, in fact, there is a real risk that artificial intelligence systems could autonomously learn unexpected and pernicious behaviour, and that soon human beings will literally be supplanted by a superior intelligence they are no longer able to control, like a genie escaped from the bottle: 'Many people thought the idea that these systems could become smarter than people was wrong. I myself used to think that this was an eventuality yet to come, that it would not occur until 30 or 50 years from now, or even later. Obviously, I no longer think that way, and already in a few years we could develop an artificial intelligence that is much more intelligent than man. That is extremely frightening.' This is a dystopian vision that echoes the one previously expressed by the distinguished physicist Stephen Hawking, who warned that a super-intelligent AI could become unstoppable and escape human control, with potentially catastrophic consequences, especially if its capabilities were used irresponsibly or maliciously. Hence, according to both Hawking and Hinton, the importance of carefully regulating AI to ensure that it remains at the service of humanity rather than threatening our future.
This underlying pessimism, which fuels an essentially dystopian conception of the future, is not shared by Kurzweil, who - along with other exponents of transhumanism, the philosophical current that sees in technological progress the possibility of human enhancement - instead espouses an optimistic view. He believes that artificial intelligence will help us solve hitherto insurmountable problems, for example in the biomedical field, by allowing the future integration of the human biological brain with external hardware and software, which could potentially allow us to live indefinitely. We would thus witness not only the disappearance of degenerative diseases and the most aggressive forms of cancer, but a total biological enhancement of man, towards immortality. If these predictions come true, the individuals who will live forever may already have been born and be among us.
