Giovanni De Micheli ©, EcoCloud EPFL
“It’s a UNIX system, I know this” screamed Lex in the epic finale of Spielberg’s Jurassic Park. Then, with a few command-line clicks, she saved her family, her friends, and possibly the whole of humanity from a miscarriage of technology aimed at recreating dinosaurs for profit. The movie depicted a questionable achievement of computing and biology, ruined by a power and communication breakdown triggered by dishonest behavior. The UNIX system, with its transparent interface between the human and the computer system, saved the day!
Over the last fifty years, progress has been fueled by the synergy of computing and communications. Most advances in engineering and business are enabled by powerful interconnected computing units housed in data centers. The unprecedented growth in the computing power of processors and their interconnections is the physical enabler of Artificial Intelligence (AI) systems, which have come to everyone’s attention because of their strong problem-solving abilities. As a scientist, I enthusiastically embrace the development of this technology, but I raise concerns about its boundaries, for the good of the planet and of humanity.
The ever-increasing power of computational systems has evolved through constant technological progress in device manufacturing and in architectural design. Processors have evolved into multiprocessors connected by Networks on Chips (NoCs) [1]. In turn, computing racks and data centers leverage advanced optical and electrical data-communication schemes. The Internet enables seamless communication among users, local edge systems, and data centers, thus providing the backbone for distributed operation. Social networks have not just enabled communication among individuals and groups but have fundamentally changed the nature of human social behavior. As a consequence, computing and communication are the keys to power in its broad sense, including political, military, and economic power.
From a very high level of abstraction, we can see interconnected computing as a machine. This machine provides us with an organization of information of all types, rendering us a useful service. But, like all machines, it requires energy to operate. The entropic cost (i.e., the energy to be paid for organizing information) is the minimum energy contribution required by physics. But the overall energy cost is much larger because of our current inability to design machines that are optimal in terms of energy efficiency. Decades of research efforts (for integrated circuits and software) have tackled the energy-consumption problem, and despite progress we are still far from an optimum [2], because it is an inherently hard problem in theory and in practice. Whereas the energy cost may be small for a portable system, like a cell phone, it is significant for a large data center, where electric power demand is reaching 1 GW [3], possibly requiring a dedicated power-generation plant. Thus, the scaling up of AI systems and data centers has to take into account the large costs they entail in terms of emissions and heat disposal. Comprehensive plans should include useful waste-heat recovery for both data centers and power-generation plants. From a broader perspective, political and economic power used to be rooted in energy sources (e.g., oil) fifty years ago. Today power is centered around knowledge, as captured by the metaphor: “Data is the oil of the 21st century”. But in turn, data acquisition is tied to the availability of energy resources, thus making knowledge and energy the key pillars of economic growth and political stability.
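The gap between the entropic minimum and actual consumption can be made concrete with a back-of-the-envelope calculation based on Landauer’s principle, which bounds the energy to erase one bit at kT·ln 2. The sketch below is illustrative only; the 1 GW figure and 300 K temperature are assumptions, not measurements of any particular data center.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # assumed operating temperature, kelvin

# Landauer limit: minimum energy to erase (reorganize) one bit.
landauer_j_per_bit = k_B * T * math.log(2)

# Hypothetical data center drawing 1 GW: joules consumed per second.
datacenter_joules_per_s = 1e9

# Bits that could, in principle, be reorganized each second
# if the machine operated at the physical minimum.
ideal_bits_per_s = datacenter_joules_per_s / landauer_j_per_bit

print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.2e} J/bit")
print(f"1 GW could ideally reorganize {ideal_bits_per_s:.2e} bits/s")
```

The result, on the order of 10^29 bits per second for the minimum versus the far smaller useful work of real systems, illustrates how many orders of magnitude separate today’s machines from the physical optimum.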
Correctness is quintessential in the operation of systems, whether they leverage AI or not. As systems are applied to domains affecting human integrity (e.g., vehicular control, medical systems, defense), correct design and operation are necessary. Verifying correctness is known to be a computationally hard problem, and realistically only some properties can and must be verified. Thus, it is imperative that systems (including their hardware and software components) have models supporting the formulation of verifiable properties. This is not the case for most AI systems, whose surprisingly excellent performance is not matched by a full understanding of their operation, thus limiting our ability to verify correctness.
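What a verifiable property looks like can be illustrated with a minimal sketch: a system modeled as a finite transition system, with a safety invariant checked exhaustively over all reachable states. The two-light controller and the property below are hypothetical illustrations, not examples from the article.

```python
from collections import deque

# Hypothetical traffic-light controller for a two-way intersection,
# modeled as a finite-state transition system: state = (ns, ew).
def successors(state):
    ns, ew = state
    # The green direction turns red while the other turns green.
    if ns == "green":
        yield ("red", "green")
    elif ew == "green":
        yield ("green", "red")

def invariant(state):
    # Safety property: both directions must never be green at once.
    return state != ("green", "green")

def check(initial):
    """Breadth-first exploration of all reachable states."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None  # property holds in every reachable state

ok, counterexample = check(("green", "red"))
print("property holds" if ok else f"violated in {counterexample}")
```

Such exhaustive checks are feasible only because the model is explicit and finite; the point of the paragraph above is precisely that most AI systems lack a model of this kind, so no analogous guarantee can be stated, let alone proven.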
Much has been said and written about the responsibility of the physicists who conceived atomic weapons. Much attention is given to potential nuclear offensive capability. But little is said about the responsibility of the engineers and computer scientists who design and operate systems where errors are possible and not quantified. Liability also lies in the hands of decision makers who replace human operation with agents that carry no guarantee of correctness. With any replacement of human activities by agents, there must be individuals who are accountable before the law. This issue is exacerbated by the application of AI to mass surveillance, threatening the freedom of individuals and expanding the exercise of power beyond ethical limits. Furthermore, lethal autonomous weapon systems (LAWS) are still underregulated and becoming increasingly powerful. Thus, they represent a large, possibly existential danger to humanity.
As the world evolves toward the broader use of AI systems, it is important to raise global awareness about the importance of free will. Free will is the major asset in most societies. Relinquishing free will to decisions made by systems that may be smarter than us – even for small decisions – may represent a terrible loss for humanity. While this may seem obvious, freedom from choice is a situation desired by many, out of laziness, lack of focus, or a search for convenience. Thus, education in valuing freedom of decision should go hand in hand with technical education. Next, legislation should set clear boundaries on risks, transparency, and ethical AI practices. The EU AI Act is a first step in this direction, but we will need new regulatory steps as technology advances.
Whereas the previous arguments hint at diffidence toward recent advances in AI, many positive goals have proven to be reachable or within reach. Technological progress is unstoppable, but it can be steered. The balance between techno-optimism and techno-pessimism was well presented recently by M. Vardi [4]. The crucial factor is the reward function. The type of reward, i.e., the overall goal of the cyber world, clearly has a socio-political color. For this very reason, education on the advantages and risks of technology should be prioritized and positioned in the present historical context of the world.
References:
[1] L. Benini and G. De Micheli, “Networks on Chips: A New SoC Paradigm”, IEEE Computer, vol. 35, no. 1, pp. 70–78, 2002.
[2] L. Benini, A. Bogliolo and G. De Micheli, “A Survey of Design Techniques for System-Level Dynamic Power Management”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 8, no. 3, pp. 299–316, 2000.
[3] M. Zyda, “Buddy, Can You Spare Me 44GW of Power for My AI Data Center Connection?”, IEEE Computer, vol. 50, no. 2, pp. 86–92.
[4] M. Vardi, “Techno-Optimism, Techno-Pessimism and Techno-Realism”, Communications of the ACM, vol. 69, no. 1, p. 5.