The big problem with AI, and how to solve it

George Zarkadakis
6 min read · Dec 27, 2022


AI is a transformative technology with enormous economic, scientific and social benefits, and a driving force of the fourth industrial revolution. Intelligent machine learning algorithms are already dramatically changing how private and public institutions function by automating numerous cognitive processes, generating new efficiencies, and enabling deeper, data-derived insights and predictions. Despite these benefits, there is growing concern that AI systems are alienating the wider public by transferring power from humans to machines. The geopolitical implications of this social alienation may impede the advance of AI, particularly in democratic states where public opinion matters. It is therefore vital to understand the reasons behind AI's current problem, and to examine ways to ameliorate the risk of a social backlash.

Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the foundational conference on AI at Dartmouth College in 1956 (Photo: Margaret Minsky)

The root of the problem is that the philosophical foundation of AI rests on the idea of “machine autonomy”. This idea was explicitly stated in the original AI manifesto at the historic Dartmouth Workshop of 1956 that founded Artificial Intelligence: “[the study of AI is] to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.[..]”. Since then, AI has advanced in juxtaposition to, and arguably in competition with, humans. Take for instance just two of AI’s historical milestones: the 1997 victory of IBM’s Deep Blue over Kasparov in chess, and the 2016 victory of DeepMind’s AlphaGo over Lee Sedol in Go. In both cases AI’s “triumph” was beating the best of humans at their own game (2).

There are very powerful arguments in favour of machine autonomy. From a utility perspective, autonomous AI systems are key to exploring environments where human survival is either very challenging or nigh impossible — for example deep space exploration, deep ocean exploration, or environments exposed to high radiation. Moreover, autonomous AI systems are necessary for complex pattern recognition problems in big data — for example astronomical, biological or financial data. The recent success of DeepMind’s AlphaFold in determining the structures of more than 200 million proteins from some 1 million species, covering almost every known protein on our planet, is nothing short of spectacular.

Autonomy’s dilemma

While autonomous AI systems — such as AlphaFold — make impressive contributions to science and society, a multitude of ethical problems arise from the very nature of machine autonomy. This is because AI systems are mostly designed to solve the so-called “canonical problem in AI”: a solitary machine confronting a non-social environment. In effect, autonomous AI systems are like aliens from another world landing on Earth, where humans are the obstacles. This situation often results in misalignment between machine and human objectives, especially when those objectives vary and differ significantly between nations, and even within them. Thus, whenever an AI algorithm takes an autonomous decision that may affect the wellbeing of a human being, an ethical problem arises. Think, for example, of an autonomous car deciding in a life-or-death situation, an algorithm that decides on someone’s parole based on their probability of reoffending, or an intelligent system that determines whether to issue a loan or an insurance policy. As more and more autonomous AI systems are embedded in IT processes, such ethical problems will multiply and public trust will be eroded. We are faced with a classical principal-agent problem, where the human principals may have different incentives and priorities than our machine agents. To solve this problem we need to rethink AI systems so that their goals align with ours. This can only happen if those machine intelligence systems are embedded into human systems, whether social, economic or political. In such use cases, “Autonomous AI” needs to become “Cooperative AI”.

Human-machine cooperation

Researchers have identified four elements of cooperative machine intelligence that are necessary for embedding AI systems into human society (1):

· Understanding; whereby the consequences of machine actions are taken into account;

· Communication; which suggests transparency and sharing of information in order to understand behaviour, intentions and preferences;

· Commitment; i.e. the ability to make credible promises when needed for cooperation;

· Norms and Institutions; the social infrastructure — such as shared beliefs or rules — that reinforces understanding, communication and commitment.

Examples of existing cooperative AI are collaborative industrial robots and care robots working alongside humans, or personal assistants that help us schedule our work more efficiently. Key to developing cooperative AI is implementing iterative interactions with humans while executing a task. Training such AI systems therefore requires a social environment, for instance a multi-player game, or a human-machine dialogue while training a language understanding model. Such “cooperative” AI systems tend to augment, rather than replace, human actors. Designing such systems requires a multi-disciplinary approach to avoid a purely engineering “tunnel vision”. For example, a social network where autonomous AI algorithms optimize the serving of content by maximizing “likes” will result in echo chambers and polarization. Redesigning the optimization process of an AI algorithm to include other factors, for example exposure to opposing views, would drive different, and hopefully better, social and political outcomes. Policy makers should therefore encourage cross-disciplinary research that brings psychology, sociology, game theory, biology, anthropology, and political science into AI research. This is vital for designing cooperative AI systems that optimize for socially accepted parameters.
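To give a rough sense of what such a redesign could look like, the sketch below blends a conventional engagement score with a viewpoint-diversity bonus. It is a minimal illustration under assumed names and numbers, not any platform’s actual ranking code: the `Post` fields, the stance scores and the `diversity_weight` parameter are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # predicted engagement (e.g. likelihood of a "like"), 0..1
    stance: float      # learned stance on a contested topic, -1.0 .. 1.0

def rank_feed(posts, user_stance, diversity_weight=0.3):
    """Rank posts by a score that mixes engagement with viewpoint diversity.

    With diversity_weight=0 this collapses to the pure "maximize likes"
    objective criticized above; a non-zero weight rewards posts whose stance
    differs from the user's, nudging the feed away from an echo chamber.
    """
    def score(post):
        diversity = abs(post.stance - user_stance) / 2.0  # 0..1 distance in stance space
        return (1 - diversity_weight) * post.engagement + diversity_weight * diversity
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Agrees with you", engagement=0.9, stance=0.8),
        Post("Challenges your view", engagement=0.6, stance=-0.7),
        Post("Neutral explainer", engagement=0.7, stance=0.0),
    ]
    for post in rank_feed(feed, user_stance=0.8):
        print(post.text)
```

The point of the sketch is simply that the optimization target is a design choice: what the algorithm is asked to maximize is exactly where social and political values enter the system.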

AI is by nature centralized, and thus more aligned with authoritarian, anti-liberal politics and ideologies that promote citizen surveillance and manipulation.

Moreover, human-centric design decisions for AI systems need appropriate governance. Embedding human-centred governance can help solve the current principal-agent problem in AI and realign objectives and incentives between humans and machines. One way to do so would be to implement feedback loops where human communities act as governors of AI systems. Such an idea is currently being researched by Voxiberate, a startup that is developing tools for participatory democracy on the web using a citizens’ assembly model and semantic-clustering AI algorithms. In a typical use case of community-based AI governance, a human community (e.g. the citizens of a smart city) may assess an AI system’s performance and outcomes, and decide on improvements or changes. By applying the four principles of Cooperative AI, those human governors are themselves augmented by AI systems so that they can better understand the implications of machine actions; for example, by an AI system that monitors human deliberations and clusters related perspectives to enable dialogue and consensus, or an AI system that personalizes the learning needed to bridge information and knowledge asymmetries. These internal feedback loops between AI and humans are then embedded into a wider feedback loop whereby the AI-augmented human community takes decisions on the further development and evolution of AI systems. Participatory democratic methods and web-based tools, such as the ones Voxiberate is developing, are of critical importance so that everyone in a community is fairly represented in the decision-making process of AI governance. In practical terms, and given that AI systems are trained on massive data sets, human communities may, for instance, decide how the data are collected and processed, how privacy and liberty are protected, and what features to prioritize based on human social and cultural values.
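To make the clustering step concrete, here is a minimal sketch of grouping deliberation comments by semantic similarity so that related perspectives can be surfaced together. The sample comments, the TF-IDF features and the cluster count are illustrative assumptions on my part, not Voxiberate’s actual pipeline.

```python
# A minimal sketch: cluster citizen comments into groups of related perspectives.
# Assumes scikit-learn is installed; comments and cluster count are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "The city should use the AI system to cut traffic congestion.",
    "Traffic optimisation is a good use of the algorithm.",
    "I worry the system collects too much personal location data.",
    "Privacy protections must come before any new data collection.",
]

# Represent each comment as a TF-IDF vector, then group similar comments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, comment in sorted(zip(labels, comments)):
    print(f"cluster {label}: {comment}")
```

In a real deliberation platform the vectorizer would likely be a stronger semantic model and the number of clusters would not be fixed in advance, but the shape of the feedback loop is the same: machine clustering summarizes the community’s perspectives, and the community then decides what the AI system should do next.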

Although autonomous AI systems are important and must be further advanced, intelligence is evidently social in nature; this is true across all living species on Earth, not only humans. AI research must therefore expand its scope beyond machine autonomy and into human-machine collaboration as well. As we make further progress in developing intelligent machines, we must keep this idea in mind and embed AI systems into human society in a democratic, collaborative, and productive way.

References

(1) A. Dafoe, Y. Bachrach, G. Hadfield, E. Horvitz, K. Larson & T. Graepel, “Cooperative AI: machines must learn to find common ground”, Nature, Vol. 593, 6 May 2021, pp. 33–36.

(2) G. Zarkadakis, Cyber Republic: Reinventing Democracy in the Age of Intelligent Machines, MIT Press (2020).


George Zarkadakis

PhD in AI, author of “Cyber Republic: reinventing democracy in the age of intelligent machines” (MIT Press, 2020), CEO at Voxiberate @zarkadakis