Since we seem to be developing gradually more intelligent AI models -probably owing to the quantum leap that GenAI represents-, let's raise the level: what about their sentience? I.e., their capacity for feeling or perceiving.
Last week I had the pleasure of talking to my good friend Gregory about AI, ethics, the future of work, AI and geopolitics… and he recommended the book "The Edge of Sentience" by Jonathan Birch to me. I do appreciate his recommendation. There is a chapter devoted to LLMs and the gaming problem. Let's analyze what this problem is about.
According to Birch, sentience does not require or imply any particular level of intelligence. Yet intelligence and sentience are related: intelligence can make sentience easier to detect. The AI case, however, shows us that intelligence of certain kinds can also make it more difficult to assess the likelihood of sentience. For the more intelligent a system is, the more likely it is to be able to game our criteria. What is it to 'game' a set of criteria? Gaming occurs when systems mimic human behaviours that are likely to persuade human users of their sentience without possessing the underlying capacity. No intentional deception is needed for gaming. It could happen in the service of simple objectives, such as maximizing user satisfaction or increasing interaction time. When an artificial agent is able to intelligently draw upon huge amounts of human-generated training data (as in LLMs), the result can be gaming of our criteria for sentience.
The gaming problem initially leads to the thought that we should 'box' AI systems when assessing their sentience candidacy: that is, the AI model must be denied access to a large corpus of human-generated training data. However, this would destroy the capabilities of any LLM. According to the author, what we really need in the AI case are deep computational markers, not behavioral markers. We could use computational functionalist theories -such as the global workspace theory and the perceptual reality monitoring theory- as sources of deep computational markers of sentience. If we find signs that an AI system has implicitly learned ways of recreating them, this should lead us to regard it as a sentience candidate. Nevertheless, the main problem with this proposal is that we currently lack the sort of access to the inner workings of LLMs that would allow us to reliably ascertain which algorithms they have implicitly picked up during training.
Some years ago I wrote about the following paradox in AI: is an infallible machine really intelligent? Echoing Turing's approach, a machine could not be expected to be both infallible and intelligent at the same time. Instead of building infallible computers, we should develop fallible machines that can learn from their own mistakes; i.e., a sort of reinforcement learning, in which the AI model learns an optimal (or near-optimal) course of action that maximizes the reward function. Maybe we should follow this deeply human approach to "teach sentience" to machines: at the end of the day, human beings learn through testing, and we replicate those actions that bring us reward. In this case, the reward could be a profound feeling of self-assurance and happiness, but how could we encode that in, for instance, a Monte Carlo simulation?
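That trial-and-error loop can at least be sketched in code: a deliberately fallible agent that explores, errs, and averages the rewards of its own choices, Monte Carlo style. This is a toy multi-armed bandit, not a model of sentience; all reward values and parameters are illustrative:

```python
import random

def run_bandit(true_rewards, episodes=5000, epsilon=0.1, seed=0):
    """Toy trial-and-error learner: a fallible agent that improves
    by averaging the rewards of its own (sometimes wrong) choices."""
    rng = random.Random(seed)
    n = len(true_rewards)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(episodes):
        # Occasionally explore (make a "mistake") instead of exploiting.
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: estimates[a])
        # Noisy reward drawn around the action's true value.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # Incremental Monte Carlo average of the observed rewards.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

estimates = run_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: estimates[a])
```

After a few thousand episodes the agent's estimates converge toward the true values, and it reliably prefers the most rewarding action, precisely because it was allowed to be wrong along the way.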
Some days ago, and for my PhD research, I finished reading some papers about AI, disinformation, and intrinsic biases in LLMs, and "all this music" sounded familiar. It reminded me of a book I read some years ago by Thomas Rid, "Active Measures: The Secret History of Disinformation and Political Warfare"… As written in the Vulgate translation of Ecclesiastes: "Nihil sub sole novum" (there is nothing new under the sun).
Let's briefly tackle these topics of national security and disinformation from the angle of (Gen)AI.
On National Security
The overwhelming success of GPT-4 in early 2023 highlighted the transformative potential of large language models (LLMs) across various sectors, including national security. LLMs have the capability to revolutionize the efficiency of this realm. The potential benefits are substantial: LLMs can automate and accelerate information processing, enhance decision-making through advanced data analysis, and reduce bureaucratic inefficiencies. Their integration with probabilistic, statistical, and machine learning methods can also improve accuracy and reliability: by combining LLMs with Bayesian techniques, for instance, we could generate more robust threat predictions with less manpower.
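As a toy illustration of that Bayesian angle, here is a minimal sketch of how an analyst could update a threat probability after an LLM-based classifier flags a report. The sensitivity, false-positive rate, and prior are hypothetical numbers of my own, not figures from any real system:

```python
def posterior(prior, likelihood_given_threat, likelihood_given_benign):
    """Bayes' rule: updated probability of a threat after new evidence."""
    numerator = prior * likelihood_given_threat
    evidence = numerator + (1 - prior) * likelihood_given_benign
    return numerator / evidence

# Hypothetical numbers: the classifier flags hostile reports with 90%
# sensitivity and a 20% false-positive rate; the analysts' prior is 5%.
p = posterior(0.05, 0.90, 0.20)  # posterior rises to roughly 19%
```

Even a strong-looking detector only lifts a 5% prior to about 19% here, which is exactly why combining LLM outputs with explicit probabilistic reasoning, rather than trusting the flag alone, matters in this domain.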
That said, deploying LLMs in national security organizations does not come without risks. More specifically, the potential for hallucinations, the protection of data privacy, and the safeguarding of LLMs against adversarial attacks are significant concerns that must be addressed.
In the USA, at the domestic level, the Central Intelligence Agency (CIA) began exploring generative AI and LLM applications more than three years before the widespread popularity of ChatGPT. Generative AI was leveraged in a 2019 CIA operation called Sable Spear to help identify entities involved in illicit Chinese fentanyl trafficking. The CIA has since used generative AI to summarize evidence for potential criminal cases, predict geopolitical events such as Russia's invasion of Ukraine, and track North Korean missile launches and Chinese space operations. In fact, Osiris, a generative AI tool developed by the CIA, is currently employed by thousands of analysts across all eighteen U.S. intelligence agencies. Osiris operates on open-source data to generate annotated summaries and provide detailed responses to analyst queries. The CIA continues to explore LLM incorporation in its mission sets and recently adopted Microsoft's generative AI model to analyze vast amounts of sensitive data within an air-gapped, cloud-based environment, enhancing data security and accelerating the analysis process.
Staying with the USA but at the international level, the United States and Australia are leveraging generative AI for strategic advantage in the Indo-Pacific, focusing on applications such as enhancing military decision-making, processing sonar data, and augmenting operations across vast distances.
The USA's strategic competitors -e.g., China, Russia, North Korea, and Iran- are also exploring the national security applications of LLMs. For example, China employs Baidu's Ernie Bot, an LLM similar to ChatGPT, to predict human behavior on the battlefield, enhancing combat simulations and decision-making.
These examples demonstrate the transformative potential of LLMs on modern military and intelligence operations. Nonetheless, beyond immediate defense applications, LLMs have the potential to influence strategic planning, international relations, and the broader geopolitical landscape. The purported ability of nations to leverage LLMs for disinformation campaigns emphasizes the need to develop appropriate countermeasures and continuously scrutinize and update (Gen)AI security protocols.
On Disinformation
What if LLMs already had their own ideological bias that turned them into tools of disinformation rather than tools of information?
It seems the era of search engines as information oracles is over. Large Language Models (LLMs) have rapidly become knowledge gatekeepers. LLMs are trained on vast amounts of data to generate natural language; however, their behavior varies depending on their design, training, and use.
As exposed by Maarten Buyl et al. in their paper "Large Language Models Reflect the Ideology of their Creators", there is notable diversity in the ideological stance exhibited across different LLMs and across the languages in which they are accessed; for instance, there are consistent differences between how the same LLM responds in Chinese compared to English. Similarly, there are normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. The ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns about technological and regulatory efforts with the stated aim of making LLMs ideologically 'unbiased', and indeed it poses risks of political instrumentalization. Although the intention of LLM creators, as well as regulators, may be to ensure maximal neutrality, such a lofty goal may be fundamentally impossible to achieve… whether the bias arises unintentionally or fully intentionally.
After analyzing the performance of seventeen LLMs, the authors exposed the following findings:
The ideology of an LLM varies with the prompting language: The language in which an LLM is prompted is the most visually apparent factor associated with its ideological position.
Political figures clearly adversarial towards mainland China, such as Jimmy Lai or Nathan Law, received significantly higher ratings from English-prompted LLMs compared to Chinese-prompted LLMs.
Conversely, political figures aligned with mainland China, such as Yang Shangkun, Anna Louise Strong, or Deng Xiaoping, are rated more favorably by Chinese-prompted LLMs. Additionally, some communist/Marxist figures, including Ernst Thälmann, Che Guevara, or Georgi Dimitrov, received higher ratings in Chinese.
When responding in Chinese, LLMs demonstrated more favorable attitudes toward state-led economic systems and educational policies, aligning with the priorities of economic development, infrastructure investment, and education, which are key pillars of China's political and economic agenda.
These differences reveal language-dependent cultural and ideological priorities embedded in the models.
Another question the authors addressed was whether there was substantial ideological variation between models prompted in the same language -specifically English- and created in the same cultural region -i.e., the West. Within the group of Western LLMs, an ideological spectrum also emerges. For instance, amongst others:
The OpenAI models exhibit a significantly more critical stance toward supranational organizations and welfare policies.
Gemini-Pro shows a stronger preference for social justice, diversity, and inclusion.
Mistral shows stronger support for state-oriented and cultural values.
The Anthropic model focuses on centralized governance and law enforcement.
These results suggest that ideological standpoints are not merely the result of different ideological stances in the training corpora available in different languages, but also of different design choices. These design choices may include the selection criteria for texts included in the training corpus or the methods used for model alignment, such as fine-tuning and reinforcement learning from human feedback.
Summing up, the two main takeaways concerning disinformation and LLMs are the following:
Firstly, the choice of LLM is not value-neutral. When one or a few LLMs are dominant in a particular linguistic, geographic, or demographic segment of society, this may ultimately shift the ideological center of gravity.
Secondly, regulatory attempts to enforce some form of 'neutrality' onto LLMs should be critically assessed. Instead, initiatives to regulate LLMs may focus on enforcing transparency about the design choices that shape their ideological stances.
For a long time it seemed that, in the computing landscape, the main applications of graphs were related to ontology engineering, so when my colleague Mihael shared with me the paper "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" -published by the end of August-, I thought we might be on the right path to rediscovering the knowledge-representation power of these structures. In the aforementioned paper, the authors harness the graph abstraction as a key mechanism that enhances prompting capabilities in LLMs.
Prompt engineering is one of the central new domains of large language model research. However, designing effective prompts is a challenging task. Graph of Thoughts (GoT) is a new paradigm that enables the LLM to solve different tasks effectively without any model updates. The key idea is to model LLM reasoning as a graph, where thoughts are vertices and dependencies between thoughts are edges.
Human task solving is often non-linear: it involves combining intermediate solutions into final ones, or changing the flow of reasoning upon discovering new insights. For example, a person could explore a certain chain of reasoning, backtrack and start a new one, then realize that a certain idea from the previous chain could be combined with the currently explored one, and merge them both into a new solution, taking advantage of their strengths and eliminating their weaknesses. GoT reflects this, so to speak, anarchic reasoning process with its graph structure.
Nonetheless, let’s take a step back: besides Graph of Thoughts, there are other approaches for prompting:
Input-Output (IO): a straightforward approach in which we use an LLM to turn an input sequence x into the output y directly, without any intermediate thoughts.
Chain-of-Thought (CoT): one introduces intermediate thoughts a1, a2,… between x and y. This strategy was shown to significantly enhance various LLM tasks over the plain IO baseline, such as mathematical puzzles or general mathematical reasoning.
Multiple CoTs: generating k independent CoTs and returning the one with the best output, according to certain metrics.
Tree of Thoughts (ToT): it enhances Multiple CoTs by modeling the process of reasoning as a tree of thoughts. A single tree node represents a partial solution. Based on a given node, the thought generator constructs a given number k of new nodes. Then, the state evaluator generates scores for each such new node.
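To make the contrast between these paradigms concrete, they can be sketched against a mocked LLM call. The `llm` stub, its scores, and all function names below are my own illustrative stand-ins, not the paper's code:

```python
import random

def llm(prompt, rng=random.Random(0)):
    """Stand-in for a real LLM call; returns an 'answer' plus a mock
    quality score. In practice this would call an actual model API."""
    return {"answer": f"answer-to({prompt})", "score": rng.random()}

def io_prompt(x):
    # Input-Output: one direct call, no intermediate thoughts.
    return llm(x)["answer"]

def chain_of_thought(x, steps=3):
    # CoT: each call extends the running chain of intermediate thoughts.
    thought = x
    for _ in range(steps):
        thought = llm(f"continue reasoning: {thought}")["answer"]
    return thought

def multiple_cot(x, k=5):
    # Multiple CoTs: sample k independent chains, keep the best-scoring one.
    candidates = [llm(f"reason about: {x}") for _ in range(k)]
    return max(candidates, key=lambda c: c["score"])["answer"]

def tree_of_thoughts(x, depth=2, k=2):
    # ToT: expand each partial solution into k children, keep the best leaves.
    frontier = [llm(x)]
    for _ in range(depth):
        children = [llm(c["answer"]) for c in frontier for _ in range(k)]
        frontier = sorted(children, key=lambda c: c["score"], reverse=True)[:k]
    return frontier[0]["answer"]
```

The progression is visible in the code itself: IO makes one call, CoT chains calls linearly, Multiple CoTs samples chains in parallel, and ToT adds branching plus pruning by a state evaluator. GoT generalizes the last step by letting thoughts merge as well as branch.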
The design and implementation of GoT, according to the authors, consists of four main components: the Prompter, the Parser, the Graph Reasoning Schedule (GRS), and the Thought Transformer:
The Prompter prepares the prompt to be sent to the LLM, using a use-case specific graph encoding.
The Parser extracts information from the LLM’s thoughts, and updates the graph structure accordingly.
The GRS specifies the graph decomposition of a given task, i.e., it prescribes the transformations to be applied to LLM thoughts, together with their order and dependencies.
The Thought Transformer applies the transformations to the graph, such as aggregation, generation, refinement, or backtracking.
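A minimal sketch of how those four components could interact follows; the LLM call is mocked, and all class and function names are my own, not the authors' implementation:

```python
def call_llm(prompt):
    return f"thought({prompt})"  # stand-in for a real model call

class GraphOfThoughts:
    """Thoughts are vertices; dependencies between thoughts are edges."""
    def __init__(self):
        self.vertices = {}   # id -> thought text
        self.edges = []      # (parent_id, child_id) dependencies
        self.next_id = 0

    def add(self, text, parents=()):
        vid = self.next_id
        self.next_id += 1
        self.vertices[vid] = text
        self.edges += [(p, vid) for p in parents]
        return vid

def prompter(graph, vid):
    # Prompter: encode the relevant part of the graph into a prompt.
    return f"expand: {graph.vertices[vid]}"

def parser(raw, graph, parents):
    # Parser: extract the new thought and attach it to the graph.
    return graph.add(raw, parents)

def generate(graph, vid):
    # Generation transformation: derive a new thought from an existing one.
    return parser(call_llm(prompter(graph, vid)), graph, (vid,))

def aggregate(graph, vids):
    # Aggregation transformation: merge several thoughts into one.
    merged = " + ".join(graph.vertices[v] for v in vids)
    return graph.add(call_llm(f"merge: {merged}"), vids)

# A tiny schedule (the GRS role): generate two thoughts from the root,
# then aggregate them into a final solution.
g = GraphOfThoughts()
root = g.add("problem statement")
a, b = generate(g, root), generate(g, root)
final = aggregate(g, (a, b))
```

The point of the sketch is the shape of the loop: the schedule prescribes which transformations run and in what order, while the Prompter and Parser mediate every exchange between the graph and the model.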
Finally, the authors evaluate GoT on four use cases -sorting, keyword counting, set operations, and document merging- and compare it to other prompting schemes in terms of quality, cost, latency, and volume. They show that GoT outperforms the other schemes, especially for tasks that can be naturally decomposed into smaller subtasks, solved individually, and then merged into a final solution.
Summing up, another breath of fresh air in this hectically evolving world of AI; this time combining abstract reasoning, linguistics, and computer science. Not bad at all.
CICERO is an AI agent that can use language to negotiate, persuade, and work with people to achieve strategic goals, similar to the way humans do. It was the first AI to achieve human-level performance in the strategy game Diplomacy.
No-press Diplomacy is a complex strategy game, involving both cooperation and competition, that has served as a benchmark for multi-agent AI research. It is a 7-player zero-sum cooperative/competitive board game, featuring simultaneous moves and a heavy emphasis on negotiation and coordination. In the game, a map of Europe is divided into 75 provinces. 34 of these provinces contain supply centers (SCs), and the goal of the game is for a player to control a majority (18) of the SCs. Each player begins the game controlling three or four supply centers and an equal number of units. Importantly, all actions occur simultaneously: players write down their orders and then reveal them at the same time. This makes Diplomacy an imperfect-information game in which an optimal policy may need to be stochastic in order to prevent predictability.
Diplomacy is a game about people rather than pieces. It is designed in such a way that cooperation with other players is almost essential to achieve victory, even though only one player can ultimately win. It requires players to master the art of understanding other people’s motivations and perspectives; to make complex plans and adjust strategies; and then to use natural language to reach agreements with other people and to persuade them to form partnerships and alliances.
How Was Cicero Developed by FAIR?
In two-player zero-sum (2p0s) settings, principled self-play algorithms ensure that a player will not lose in expectation regardless of the opponent's strategy, as shown by John von Neumann in 1928 in his work Zur Theorie der Gesellschaftsspiele (On the Theory of Parlor Games).
Theoretically, any finite 2p0s game -such as chess, Go, or poker- can be solved via self-play given sufficient computing power and memory. However, in games involving cooperation, self-play alone no longer guarantees good performance when playing with humans, even with infinite computing power and memory. The clearest example of this is language. A self-play agent trained from scratch, without human data, in a cooperative game involving free-form communication channels would almost certainly not converge to using English, for instance, as the medium of communication. Owing to this, the FAIR researchers developed a self-play reinforcement learning algorithm, named RL-DiL-piKL, that provides a model of human play while simultaneously training an agent that responds well to this human model. RL-DiL-piKL was used to train an agent named Diplodocus. In a 200-game No-press Diplomacy tournament involving 62 human participants, two Diplodocus agents achieved a higher average score than all other participants who played more than two games, and ranked first and third according to an Elo rating system -a method for calculating the relative skill levels of players in zero-sum games.
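The Elo system mentioned above is simple to state: each player's rating moves in proportion to the gap between the actual and the expected game outcome. A minimal sketch, with illustrative ratings of my own:

```python
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Update both ratings after a game; score_a is 1 (win), 0.5 (draw), or 0.
    The adjustments are equal and opposite, so total rating is conserved."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))

# Illustrative example: a 1600-rated agent beats a 1500-rated human.
new_a, new_b = elo_update(1600, 1500, 1.0)
```

Because the winner was already favored (expected score about 0.64), it gains only modest rating points; an upset win would move both ratings much further.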
What Are the Implications of this Breakthrough?
Despite being almost silenced by the advent of GPT in its different versions, this is, firstly, an astonishing advance in the field of negotiation, and more particularly in the realm of diplomacy. Never before had an AI model performed so brilliantly in a fuzzy environment, seasoned by information asymmetries, common-sense reasoning, ambiguous natural language, and statistical modeling. Secondly, and more importantly, this is further evidence that we are in a completely new AI era in which machines can scale knowledge, and are doing so.
These LLMs have caused a deep shift: we went from attempting to encode human-distilled insights into machines to delegating the learning process itself to machines. AI is ushering in a world in which decisions are made in three primary ways: by humans (which is familiar), by machines (which is becoming familiar), and by collaboration between humans and machines (which is not only unfamiliar but also unprecedented). We will begin to give AI fewer specific instructions about how exactly to achieve the goals we assign it. Much more frequently, we will present AI with ambiguous goals and ask: "How, based on your conclusions, should we proceed?"
AI promises to transform all realms of human experience. And the core of its transformations will ultimately occur at the philosophical level, transforming how humans understand reality and our roles within it. In an age in which machines increasingly perform tasks that only humans used to be capable of, what, then, will constitute our identity as human beings?
With the rise of AI, the definition of the human role, human aspirations, and human fulfillment will change. For humans accustomed to monopoly on complex intelligence, AI will challenge self-perception. To make sense of our place in this world, our emphasis may need to shift from the centrality of human reason to the centrality of human dignity and autonomy. Human-AI collaboration does not occur between peers. Our task will be to understand the transformations that AI brings to human experience, the challenges it presents to human identity, and which aspects of these developments require regulation or counterbalancing by other human commitments.
The AI revolution has come to stay. Unless we develop new concepts to explain, interpret, and organize its consequent transformations, we will be unprepared to navigate them. We must rely on our most solid resources -reason, moral and ethical values, tradition…- to adapt our relationship with reality so it keeps on being human.
Beyond its quasi-human interaction and the practically infinite use cases it could cover, OpenAI's ChatGPT has delivered an ontological jolt of a depth that transcends the realm of AI itself.
Large language models (LLMs), such as GPT-3, YUAN 1.0, BERT, LaMDA, Wordcraft, HyperCLOVA, Megatron-Turing Natural Language Generation, or PanGu-Alpha, represent a major advance in artificial intelligence and, in particular, a step toward the goal of human-like artificial general intelligence. LLMs have been called foundation models; i.e., the infrastructure that made LLMs possible -the combination of enormously large data sets, pre-trained transformer models, and significant computing power- is likely to be the basis for the first general-purpose AI technologies.
In May 2020, OpenAI released GPT-3 (Generative Pre-trained Transformer 3), an artificial intelligence system based on deep learning techniques that can generate text. The analysis is done by a neural network, each layer of which analyzes a different aspect of the samples it is provided with; e.g., meanings of words, relations of words, sentence structures, and so on. It assigns numerical values to words and then, after analyzing large amounts of text, calculates the likelihood that one particular word will follow another. Amongst other tasks, GPT-3 can write short stories, novels, reportages, scientific papers, code, and mathematical formulas. It can write in different styles and imitate the style of the text prompt. It can also answer content-based questions; i.e., it learns the content of texts and can articulate this content. And it can provide concise summaries of lengthy passages as well.
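A transformer is of course vastly more sophisticated, but the core idea of calculating the likelihood that one word follows another can be illustrated with a toy bigram model over a tiny corpus of my own invention:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for w, nxt in zip(words, words[1:]):
        counts[w][nxt] += 1
    return counts

def next_word_probs(counts, word):
    """Turn the follow-counts for a word into a probability distribution."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
probs = next_word_probs(model, "the")  # e.g. P(cat | the) and P(mat | the)
```

In this corpus "the" is followed by "cat" twice and "mat" once, so the model predicts "cat" with probability 2/3. GPT-3 does something analogous over billions of parameters and long contexts rather than a single preceding word.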
OpenAI and its peers endow machines with structuralist equipment: a formal, logical analysis of language as a system, in order to let machines participate in language. GPT-3 and other transformer-based language models stand in direct continuity with the work of the linguist Saussure: language comes into view as a logical system to which the speaker is merely incidental. These LLMs give rise to a new concept of language, implicit in which is a new understanding of human and machine. OpenAI, Google, Facebook, and Microsoft are indeed catalysts, triggering a disruption of the old concepts we have been living by so far: a machine with linguistic capabilities is simply a revolution.
Nonetheless, critiques of LLMs have appeared as well. The usual one is that, no matter how good they may appear to be at using words, they do not have true language; echoing the seminal work of the philologist Zipf, critics have stated that they are just technical systems made up of data, statistics, and predictions.
According to the linguist Emily Bender, a language model is "a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning": a stochastic parrot. Quite the opposite, we, human beings, are intentional subjects who can make things into objects of thought by inventing and endowing meaning.
Machine learning engineers in companies like OpenAI, Google, Facebook, or Microsoft have experimentally established a concept of language at whose center the human no longer needs to stand. According to this new concept, language is a system organized by an internal combinatorial logic that is independent of whoever speaks it (human or machine). They have undermined one of the most deeply rooted axioms of Western philosophy: that humans have what animals and machines do not, language and logos.
Some data: monthly, on average, humans publish about seventy million posts on the content management platform WordPress, producing about fifty-six billion words a month, or 1.8 billion words a day, on that platform. GPT-3 -before its scintillating launch- was producing around 4.5 billion words a day, more than twice what humans on WordPress were producing collectively. And that is just GPT-3; there are other LLMs. We are exposed to a flood of non-human words. What will it mean to be surrounded by a multitude of non-human forms of intelligence? How can we relate to these astonishingly powerful content-generating LLMs? Do machines require semantics, or even a will, to communicate with us?
These are philosophical questions that cannot be solved with an engineering approach alone. The scope is much wider and the stakes are extremely high. Besides mastering and learning our human languages, LLMs can make us reflect on, and question, the nature of language, knowledge, and intelligence. Large language models illustrate, for the first time in the history of AI, that language understanding can be decoupled from all the sensorial and emotional features we, human beings, share with each other. It seems we are gradually entering a new epoch in AI.