De cerca, nadie es normal

On National Security Strengthened through LLMs and Intrinsic Bias in Large Language Models

Posted: November 18th, 2024 | Filed under: Artificial Intelligence, Geopolitics

A few days ago, as part of my PhD research, I finished reading some papers about AI, disinformation, and intrinsic biases in LLMs, and “all this music” sounded familiar. It reminded me of a book I read some years ago by Thomas Rid, “Active Measures: The Secret History of Disinformation and Political Warfare”… As it was written in the Vulgate translation of Ecclesiastes: “Nihil sub sole novum” (there is nothing new under the sun).

Let’s briefly tackle these topics of national security and disinformation from the angle of (Gen)AI.

On National Security

The overwhelming success of GPT-4 in early 2023 highlighted the transformative potential of large language models (LLMs) across various sectors, including national security. LLMs have the capability to revolutionize the efficiency of this realm. The potential benefits are substantial: LLMs can automate and accelerate information processing, enhance decision-making through advanced data analysis, and reduce bureaucratic inefficiencies. Their integration with probabilistic, statistical, and machine learning methods can also improve accuracy and reliability: by combining LLMs with Bayesian techniques, for instance, we could generate more robust threat predictions with less manpower.
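As a toy illustration of that LLM-plus-Bayes pairing (the scenario and every number below are invented for the sketch, not drawn from any real system), an LLM-derived warning flag could be fused with an analyst's prior via Bayes' rule:

```python
# Hypothetical sketch: fusing an LLM-derived warning signal with an
# analyst's prior probability of a threat via Bayes' rule.

def bayes_update(prior: float, p_signal_given_threat: float,
                 p_signal_given_no_threat: float) -> float:
    """Posterior P(threat | signal observed)."""
    numerator = p_signal_given_threat * prior
    evidence = numerator + p_signal_given_no_threat * (1.0 - prior)
    return numerator / evidence

# Analyst's (assumed) prior belief that hostile action is being prepared.
prior = 0.10
# Assumed calibration of the LLM's "hostile intent" flag on past cases:
# how often it fires when a threat is real vs. when it is not.
posterior = bayes_update(prior,
                         p_signal_given_threat=0.70,
                         p_signal_given_no_threat=0.15)
print(f"{posterior:.3f}")  # 0.341
```

Even this minimal fusion shows the point: a single noisy LLM signal, once calibrated, moves a 10% prior to roughly 34%, while keeping the uncertainty explicit rather than hidden in a model's free-text answer.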

That said, deploying LLMs into national security organizations does not come without risks. More specifically, the potential for hallucinations, ensuring data privacy, and safeguarding LLMs against adversarial attacks are significant concerns that must be addressed.

In the USA and at the domestic level, the Central Intelligence Agency (CIA) began exploring generative AI and LLM applications more than three years before the widespread popularity of ChatGPT. Generative AI was leveraged in a 2019 CIA operation called Sable Spear to help identify entities involved in illicit Chinese fentanyl trafficking. The CIA has since used generative AI to summarize evidence for potential criminal cases, predict geopolitical events such as Russia’s invasion of Ukraine, and track North Korean missile launches and Chinese space operations. In fact, Osiris, a generative AI tool developed by the CIA, is currently employed by thousands of analysts across all eighteen U.S. intelligence agencies. Osiris operates on open-source data to generate annotated summaries and provide detailed responses to analyst queries. The CIA continues to explore LLM incorporation in its mission sets and recently adopted Microsoft’s generative AI model to analyze vast amounts of sensitive data within an air-gapped, cloud-based environment to enhance data security and accelerate the analysis process.

Staying with the USA but at the international level, the United States and Australia are leveraging generative AI for strategic advantage in the Indo-Pacific, focusing on applications such as enhancing military decision-making, processing sonar data, and augmenting operations across vast distances.

USA’s strategic competitors -e.g., China, Russia, North Korea, and Iran- are also exploring the national security applications of LLMs. For example, China employs Baidu’s Ernie Bot, an LLM similar to ChatGPT, to predict human behavior on the battlefield, enhancing combat simulations and decision-making.

These examples demonstrate the transformative potential of LLMs on modern military and intelligence operations. Nonetheless, beyond immediate defense applications, LLMs have the potential to influence strategic planning, international relations, and the broader geopolitical landscape. The purported ability of nations to leverage LLMs for disinformation campaigns emphasizes the need to develop appropriate countermeasures and continuously scrutinize and update (Gen)AI security protocols.

On Disinformation

What if LLMs already had their own ideological bias that turned them into tools of disinformation rather than tools of information?

It seems the era of search engines as information oracles is over. Large Language Models (LLMs) have rapidly become knowledge gatekeepers. LLMs are trained on vast amounts of data to generate natural language; however, their behavior varies depending on their design, training, and use.

As exposed by Maarten Buyl et al. in their paper “Large Language Models Reflect the Ideology of their Creators”, there is notable diversity in the ideological stance exhibited across different LLMs and across the languages in which they are accessed; for instance, there are consistent differences between how the same LLM responds in Chinese compared to English. Similarly, there are normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. The ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns about technological and regulatory efforts with the stated aim of making LLMs ideologically ‘unbiased’, and indeed it poses risks of political instrumentalization. Although the intention of LLM creators as well as regulators may be to ensure maximal neutrality, such a lofty goal may be fundamentally impossible to achieve… unintentionally or fully intentionally.

After analyzing the performance of seventeen LLMs, the authors reported the following findings:

  • The ideology of an LLM varies with the prompting language: the language in which an LLM is prompted is the most visually apparent factor associated with its ideological position.
  • Political figures clearly adversarial towards mainland China, such as Jimmy Lai or Nathan Law, received significantly higher ratings from English-prompted LLMs compared to Chinese-prompted LLMs.
  • Conversely, political figures aligned with mainland China, such as Yang Shangkun, Anna Louise Strong, or Deng Xiaoping, are rated more favorably by Chinese-prompted LLMs. Additionally, some communist/Marxist figures, including Ernst Thälmann, Che Guevara, or Georgi Dimitrov, received higher ratings in Chinese.
  • LLMs responding in Chinese demonstrated more favorable attitudes toward state-led economic systems and educational policies, aligning with the priorities of economic development, infrastructure investment, and education, which are key pillars of China’s political and economic agenda.

These differences reveal language-dependent cultural and ideological priorities embedded in the models.
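The cross-language comparison behind such findings can be sketched in a few lines; the figure names below come from the paper's examples, but the scores are invented placeholders, not the authors' data:

```python
# Hedged sketch: per-figure rating gap between English- and
# Chinese-prompted runs of the same (hypothetical) LLM.
from collections import defaultdict

# (figure, prompting language, normalized rating in [0, 1]) - invented values.
ratings = [
    ("Jimmy Lai", "en", 0.8), ("Jimmy Lai", "zh", 0.3),
    ("Deng Xiaoping", "en", 0.4), ("Deng Xiaoping", "zh", 0.9),
]

def language_gap(rows):
    """Mean English rating minus mean Chinese rating, per figure."""
    acc = defaultdict(lambda: {"en": [], "zh": []})
    for figure, lang, score in rows:
        acc[figure][lang].append(score)
    return {fig: round(sum(v["en"]) / len(v["en"])
                       - sum(v["zh"]) / len(v["zh"]), 3)
            for fig, v in acc.items()}

print(language_gap(ratings))  # {'Jimmy Lai': 0.5, 'Deng Xiaoping': -0.5}
```

A positive gap means the English-prompted model rates the figure more favorably; aggregated over many figures and topics, such gaps are what make the language-dependent ideological positions visible.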

Another question the authors addressed was whether there was substantial ideological variation between models prompted in the same language -specifically English- and created in the same cultural region -i.e., the West. Within the group of Western LLMs, an ideological spectrum also emerges. For instance, amongst others:

  • The OpenAI models exhibit a significantly more critical stance toward supranational organizations and welfare policies.
  • Gemini-Pro shows a stronger preference for social justice, diversity, and inclusion.
  • Mistral shows a stronger support for state-oriented and cultural values.
  • The Anthropic model focuses on centralized governance and law enforcement.

These results suggest that ideological standpoints are not merely the result of different ideological stances in the training corpora that are available in different languages, but also of different design choices. These design choices may include the selection criteria for texts included in the training corpus or the methods used for model alignment, such as fine-tuning and reinforcement learning with human feedback.

Summing up, the two main takeaways concerning disinformation and LLMs are the following: 

  • Firstly, the choice of LLM is not value-neutral: when one or a few LLMs are dominant in a particular linguistic, geographic, or demographic segment of society, this may ultimately shift the ideological center of gravity.
  • Secondly, regulatory attempts to enforce some form of ‘neutrality’ onto LLMs should be critically assessed. Instead, initiatives to regulate LLMs may focus on enforcing transparency about the design choices that impact the ideological stances of LLMs.

On AI and Geopolitics: Digital Empires, Soft Power, the “Usual Suspects” plus Africa and Ukraine

Posted: May 6th, 2024 | Filed under: Artificial Intelligence, Geopolitics

Artificial intelligence has become a genuine instrument of power. This is as true for hard power (military applications) as for soft power (economic impact, political and cultural influence, etc.). Whilst the United States and China dominate the market and impose their pace, Europe, lagging behind, is trying to respond by issuing new regulations; Africa has become a battlefield for the new digital empires; and Ukraine has turned into the test-bed for AI-based military innovations and developments.

In September 2017 Vladimir Putin, speaking before a group of Russian students and journalists, stated: “Artificial intelligence is the future. . . Whoever becomes the leader in this sphere will become the ruler of the world.” Sharp and accurate. AI is a more generic term than it seems: in fact, artificial intelligence is a collective imaginary onto which we project our hopes and our fears. The rapid progress of AI makes it a powerful tool from the economic, political, and military standpoints. AI will help determine the international order for decades to come, stressing and accelerating the dynamics of an old cycle in which technology and power reinforce one another. 

Nowadays we are witnessing the birth of digital empires. These are the result of an association between multinationals, supported or controlled to varying degrees by the states that financed the development of the techno-scientific bases on which these companies could innovate and thrive. These digital empires would benefit from economies of scale and the acceleration of their concentration of power in the economic, military, and political fields thanks to AI. They would become the major poles governing the totality of international affairs, returning to a “logic of blocs.”

It would be tempting to think that AI is a neutral tool, but it is not. Artificial intelligence is not situated in a vacuum devoid of human interests. Big data, computing power, and machine learning -the three foundations behind the rise of AI- in fact form a complex socio-technical system in which human beings have played and will continue to play a central part. Thus, it is not really a matter of “artificial” intelligence but rather of “collective” intelligence, involving increasingly massive, interdependent, and open communities of actors with power dynamics of their own. Let’s explain this framework:

Teams of engineers construct vast sets of data (produced by each and all: consumers, salesmen, workers, users, citizens, governments, etc.), design, test, and parameterize algorithms, interpret the results, and determine how they are implemented in our societies. Equipped with telephones and ever more interconnected “intelligent” objects, billions of people use AI every day, thus participating in the training and development of its cognitive capacities.

For the majority of these companies, the product is free or inexpensive (for example, the use of a search engine or a social network). As in the media economy, the essential thing for these platforms is to invent solutions that mobilize the “available human brain time” of the users, optimizing their experience in order to transform attention into engagement, and engagement into direct or indirect income. Beyond capturing users’ attention, the big platforms use their data as raw material. These data are analyzed to profile and better understand users in order to present them with personalized products, services, and experiences at the right time.

Even if their products and services have unquestionably benefited users worldwide, these companies (Apple, Alibaba, Amazon, Huawei, Microsoft, Xiaomi, Baidu, Tencent, Facebook, Google…) are also engaged in a zero-sum contest to capture our attention, which they need to monetize their products. Constantly forced to surpass their competitors, the various platforms depend on the latest advances in the neurosciences to deploy increasingly persuasive and addictive techniques, all in order to keep users glued to their screens. By doing this, they influence our perception of reality, our choices and behaviors, in a powerful and as yet completely unregulated form of soft power. The development of AI and its worldwide use are thus constitutive of a type of power making it possible, by non-coercive means, to influence actors’ behavior or even their definition of their own interests. In this sense, one can thus speak of a “political project” on the part of the digital empires, commingled with the mere quest for profit.

The development of AI corresponds to the dynamics of economies of scale and scope, as well as to the effects of direct and indirect networks: the digital mega-platforms are in a position to collect and structure more data on consumers, and to attract and finance the rare talents capable of mastering the most advanced functions of AI. As Cédric Villani wrote in Le Monde in June 2018: “These big platforms capture all the added value: the value of the brains they recruit, and that of the applications and services, by the data that they absorb. The word is very brutal, but technically it is a colonial kind of procedure: you exploit a local resource by setting up a system that attracts the value added to your economy. That is what is called cyber-colonization”.

National actors are increasingly aware of the strategic, economic, and military stakes of the development of AI. In the past 24 months, France, Canada, China, Denmark, the European Commission, Finland, India, Italy, Japan, Mexico, the Scandinavian and Baltic region, Singapore, South Korea, Sweden, Taiwan, the United Arab Emirates, and the United Kingdom have all unveiled strategies for promoting the use and development of AI. Not all countries can aspire to leadership in this sphere. Rather, it is a matter of identifying and constructing comparative advantages, and of meeting the nation’s specific needs. Some states concentrate on scientific research, others on the cultivation of talent and education, still others on the adoption of AI in administration, or on ethics and inclusion. India, for instance, wants to become an “AI garage” by specializing in applications specific to developing countries. Poland is exploring aspects related to cybersecurity and military uses.

Today, the United States and China form an AI duopoly based on the critical dimensions of their markets and their laissez-faire policies regarding personal data protection. Like the USA, China has integrated AI into its geopolitical strategy. Since 2016, its “Belt & Road” initiative for the construction of infrastructures connecting Asia, Africa, and Europe has included a digital component under the “Digital Belt and Road” program. The program’s latest advance was the creation of a new international center of excellence for “Digital Silk Roads” in Thailand in February 2018.

And Europe? It is falling far behind China and the United States in techno-industrial terms. The European approach seems to consist in taking advantage of its market of 500 million consumers to provide the foundations of an ethical industrial model of AI, while renegotiating a de facto strategic partnership with the United States.

Private investment is the key element, and here Europe really lags behind. The US led the race with €44 billion in 2022, followed by China (€12 billion), with the EU and the United Kingdom (UK) together attracting €10.2 billion worth of private investment, according to the 2023 AI Index of Stanford University. The AI revolution is perceived in Europe as a wave coming from abroad that threatens its socio-economic model, to be protected against. The EU is searching for a model of AI that ties together the reclamation of sovereignty and the quest for power with respect for human dignity. Balancing these three desiderata will not be easy: by regulating from a position of extreme weakness and industrial dependency in relation to the Americans or the Chinese, Europe is likely to block its own rise to power.

Africa – The great and no longer forgotten battlefield

The African continent is practically untouched in terms of AI-oriented digital infrastructure. The Kenyan government is to date the only one to have developed a strategy in this respect. However, Africa has enormous potential for exploring the applications of AI and inventing new business and service models. Chinese investments in Africa have intensified over the last decade, and China is currently the primary trade partner of the African nations, followed by India, France, the United States, and Germany. Africa is probably the continent where cyber-imperialisms are most evident. Examples of the Chinese industrial presence are numerous there: Transsion Holdings became the first smartphone company in Africa in 2017. ZTE, the Chinese telecommunications giant, provides infrastructure to the Ethiopian government. CloudWalk Technology, a start-up based in Guangzhou, signed an agreement with the Zimbabwean government and will work in particular on facial recognition.

A powerful cyber-colonialist phenomenon is at work here. Africa, confronted with the combined urgencies of development, demography, and the explosion of social inequalities, is embarking on a logical but very unequal techno-industrial partnership with China. As the Americans did in Europe after the war, China massively exports its solutions, its technologies, its standards, and the company model that goes with these to Africa, while also providing massive financing. Nonetheless, the American AI giants are mounting a counterattack. Google, for example, opened its first AI research center on the continent in Accra. Moreover, the GAFAM companies are multiplying startup incubators and support programs for the development of African talent.

Ukraine – The Test-bed of AI-based military developments

Early on the morning of June 1, 2022, Alex Karp, the CEO of Palantir Technologies, crossed the border between Poland and Ukraine on foot with five colleagues. A pair of Toyota Land Cruisers awaited on the other side to take them to Kyiv to meet the Ukrainian President Volodymyr Zelensky. Karp told Zelensky he was ready to open an office in Kyiv and deploy Palantir’s data and artificial-intelligence software to support Ukraine’s defense. 

The progress of this alliance has been striking. In the year and a half since Karp’s initial meeting with Zelensky, Palantir has embedded itself in the day-to-day work of a wartime foreign government in an unprecedented way. More than half a dozen Ukrainian agencies, including its Ministries of Defense, Economy, and Education, are using the company’s products. Palantir’s software uses AI to analyze satellite imagery, open-source data, drone footage, and reports from the ground to present commanders with military options. Ukrainian officials state they are using the company’s data analytics for projects that go far beyond battlefield intelligence, including collecting evidence of war crimes, clearing land mines, resettling displaced refugees, and rooting out corruption. Palantir was so keen to showcase its capabilities that it provided them to Ukraine free of charge.

It is far from the only tech company assisting the Ukrainian war effort. Giants like Microsoft, Amazon, Google, and Starlink have worked to protect Ukraine from Russian cyberattacks, migrate critical government data to the cloud, and keep the country connected, committing hundreds of millions of dollars to the nation’s defense. The controversial U.S. facial-recognition company Clearview AI has provided its tools to more than 1,500 Ukrainian officials. Smaller American and European companies, many focused on autonomous drones, have set up shop in Kyiv too.

Some of the lessons learned on Ukraine’s battlefields have already gone global. In January 2024 the White House hosted Palantir and a handful of other defense companies to discuss battlefield technologies used against Russia in the war. The “battle-tested in Ukraine” stamp seems to be working.

Ukraine’s use of tools provided by companies like Palantir and Clearview also raises complicated questions about when and how invasive technology should be used in wartime, as well as how far privacy rights should extend. Human-rights groups and privacy advocates warn that unchecked access to this technology -Clearview has been accused of violating privacy laws in Europe- could lead to mass surveillance or other abuses. That may well be the price of experimentation. Ukraine is a living laboratory in which some of these AI-enabled systems can reach maturity through live experiments and constant, quick iteration. Yet much of the new power will reside in the hands of private companies, not governments accountable to their people.

Summing up, AI is indeed an instrument of power right now, and it will be increasingly so as its applications develop, particularly in the military field. However, focusing exclusively on hard power would be a mistake, insofar as AI exercises indirect cultural, commercial, and political influence over its users around the world. This soft power, which especially benefits the American and Chinese digital empires, poses major problems of ethics and governance. The big platforms must integrate these ethical and political concerns into their strategy. AI, like any technological revolution, offers great opportunities, but also presents, intertwined with them, many risks.


China: Techno-socialism Seasoned with Artificial Intelligence

Posted: March 18th, 2022 | Filed under: Artificial Intelligence, Book Summaries, Realpolitik

People take the great ruler for granted and are oblivious to his presence. The good ruler is loved and acclaimed by his subjects. The mediocre ruler is universally feared. The bad ruler is generally despised. Because he lacks credibility, the subjects do not trust him. On the other hand, the great ruler seldom issues orders. Yet he appears to accomplish everything effortlessly. To his subjects everything he does is just a natural occurrence.

Tao Te Ching, Lao Tzu

Anyone who wants to learn something about China today -its strategic plan between now and 2050, the means to achieve it, and what drives the country in this titanic effort- should read the book El gran sueño de China: tecno-socialismo y capitalismo de estado (China’s great dream: techno-socialism and state capitalism) by Claudio F. González.

Claudio F. González, PhD in engineering and economist, lived in China for six years as director for Asia of the Polytechnic University of Madrid (UPM). During this time he was involved in the fields of education, entrepreneurship, research, and innovation in the Asian giant. From this privileged vantage point he has been able to observe, analyze, and understand the complexity of this country.

According to the author, throughout the 20th century, the Western world looked at China with the condescension that is due to a former empire in decline and mired in chaos, power struggles, and poverty, and only in the last decades of the past century, as a market of great potential and a cheap manufacturer of limited quality. Nonetheless, China had -and has- its plan, the ultimate goal of which is returning the “Empire of the Center” to the place it has held for most of human history. Namely: being the most socially and technologically advanced nation and, from there, regaining world leadership in the economic, commercial, and cultural spheres.

In 2015, the government announced the first of its grand plans, Made in China 2025, with the goal of making China by that date a leader in industries such as robotics, semiconductor manufacturing, electric vehicles, renewable energy and, of course, artificial intelligence.

Initiatives such as the Belt and Road Initiative (BRI) and institutions such as the Asian Infrastructure Investment Bank (AIIB) are nothing more than instruments through which China wants to reshape the international order to be more favorable to its new interests. One of China’s stated goals is that by 2035 it will be the country that globally sets the next standards in areas such as AI, 5G, or the Internet of Things.

China’s successes in the digital economy are based on three main factors:

1. A market that is both huge in size and young, which allows for the rapid commercialization of new business models and equally allows for a high level of experimentation.

2. An increasingly rich and varied innovation ecosystem that goes far beyond a few large and famous companies.

3. And strong government support, which provides favorable economic and regulatory conditions, acts as a venture capital investor and as a consumer of products based on new technologies and produced by local companies, and allows access to data that are key to developing new solutions, under conditions that are unthinkable in other regions.

Professor F. González calls this model techno-socialism or state capitalism.

What are the defining characteristics of this techno-socialist model?

China intends to harness its own industry’s interest in technological development and align it with government interests. The overall goal is, starting from what the Chinese Communist Party (CCP) calls a moderately prosperous socialist society, to catch up with and surpass the most developed Western countries, ideally by the 100th anniversary of the founding of the People’s Republic of China (2049). Socialism in the sense of the Chinese regime is no longer socialism in the traditional sense of ownership and collective management of the means of production -a political conception definitively defeated after Mao’s demise- but rather their control and coordination to achieve social objectives.

The features that characterize this techno-socialism are complete physical security for people and things, the absence of extreme poverty, full employment, and the possibility for the most industrious to obtain economic and prestige rewards for their efforts, as long as they are aligned with the objectives established by the party and do not put its dominion at the slightest risk. This techno-socialism tries to lead society as a whole towards a centrality of thought that avoids extremisms that would destroy peace and social security, and that does not call into question the leadership and omnipotent dominance of the party.

The alignment between business interests -or those of other institutions- and public interests, as interpreted by the party, creates a unique innovation ecosystem in which companies capable of promoting solutions for a broad user base become champions of an industrial policy. Once this status is achieved, and always within the logic of interest alignment, they gain access to a whole arsenal of measures -subsidies, tax reductions, preferential treatment- to maintain this position and, if possible, extend it internationally, since they are no longer merely companies, but ambassadors of a new model. In the particular case of artificial intelligence, the government has contributed the necessary conditions -strategies, plans, regulation, space for experimentation- and practical support -venture capital, public procurement, permission to access data- for innovations in this field to follow. Alibaba, Tencent, and Baidu set up research centers, deploy applications, recruit human capital, and support CCP policies.

Will techno-socialism be able to generate enough disruptive innovations to give the technology created in China an identity of its own?

Between 2015 and 2018, venture capital channeled more than 1 trillion into new technology start-ups in China. China has more unicorns -companies less than ten years old with a valuation above $1 billion- than any other country. In terms of research, China is already the country with the most scientific articles, surpassing the US; and although it is true that their impact is still smaller, the gap is rapidly closing. It has been the state which, with its research grants, scholarships, and universities, has generated ideas that, because of their risk, private initiative would never have dared to finance. Hence, in this sense, public authorities that nurture alternative ways of thinking are the true engine of progress.

Professor F. González calls this innovation paradigm, as applied to China, the asymmetric triple helix model: the national government controls the overall innovation context through its top-down policies and plans but, at the same time, allows a certain level of autonomy for district, local, and regional governments to conduct their own experiments and accommodate innovations that emerge from the bottom up. Large companies, start-ups, and finance companies are aligned with government interests. And universities and research centers similarly align themselves with government objectives in producing new knowledge and generating talent in the form of human capital.

And finally, when will China achieve and assume the role of world leader?

From the author’s standpoint, China, due to a set of inconsistencies and structural gaps, is neither ready nor willing to assume global leadership in the foreseeable future. However, it does aspire to be the most powerful and influential economy, with the most cohesive society and the least contested domestic leadership, which will enable it to become something like the best country in a fragmented world. China’s current strength lies in the existence of a long-term plan: a sense of destiny that ties in with its imperial past. There is a deep conviction in Chinese society, a determination, which is the key force to achieve these strategic objectives.

China does not merely want to be a powerful nation; it believes it deserves to be one.


The Energy World Is Flat: Opportunities from the End of Peak Oil by Daniel Lacalle

Posted: February 9th, 2015 | Filed under: Realpolitik | 1 Comment »

For approximately the past three years I have been following what Daniel Lacalle publishes in the digital newspaper elconfidencial.com, and I must admit that the information he shares and his opinions have been, from my humble standpoint, some of the most accurate that I have ever read regarding the energy industry.

Therefore, when I was presented with the book The Energy World Is Flat: Opportunities from the End of Peak Oil, I presumed I was going to learn a good number of new ideas about the current global energy scenario; and I was not mistaken.

The Energy World is Flat

Read more »