Posted: December 27th, 2022 | Author: Domingo | Filed under: Artificial Intelligence, Natural Language Processing | Tags: AI, artificial intelligence, Large Language Models, LLMs, natural language processing, NLP | Comments Off on Large Language Models (LLMs): an Ontological Leap in AI
Beyond the quasi-human interaction and the practically infinite use cases it can cover, OpenAI’s ChatGPT has delivered an ontological jolt whose depth transcends the realm of AI itself.
Large language models (LLMs) such as GPT-3, YUAN 1.0, BERT, LaMDA, Wordcraft, HyperCLOVA, Megatron-Turing Natural Language Generation, or PanGu-Alpha represent a major advance in artificial intelligence and, in particular, a step toward the goal of human-like artificial general intelligence. LLMs have been called foundation models; i.e., the infrastructure that made LLMs possible (the combination of enormously large data sets, pre-trained transformer models, and significant computing power) is likely to be the basis for the first general-purpose AI technologies.
In May 2020, OpenAI released GPT-3 (Generative Pre-trained Transformer 3), an artificial intelligence system based on deep learning techniques that can generate text. The generation is done by a neural network, each layer of which analyzes a different aspect of the samples it is fed: meanings of words, relations between words, sentence structures, and so on. The model assigns numerical values to words and then, after analyzing large amounts of text, calculates the likelihood that one particular word will follow another. Amongst other tasks, GPT-3 can write short stories, novels, reportages, scientific papers, code, and mathematical formulas. It can write in different styles and imitate the style of the text prompt. It can also answer content-based questions; i.e., it learns the content of texts and can articulate that content. And it can produce concise summaries of lengthy passages.
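As a hedged illustration of this next-word likelihood idea, the following minimal Python sketch queries a small, publicly available transformer (GPT-2, via the Hugging Face transformers library) for the most probable next tokens after a prompt. GPT-3 itself is only reachable through OpenAI’s API, so GPT-2 serves here as an assumed stand-in, and the prompt is arbitrary.

```python
# Minimal sketch of next-word prediction with a transformer language model.
# GPT-2 stands in for GPT-3, which is only available through OpenAI's API.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The meaning of life is"            # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Probability distribution over the vocabulary for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p = {prob.item():.3f}")
```

Scaled up by orders of magnitude in parameters and training data, this same next-token machinery underlies the capabilities listed above.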
OpenAI and its peers endow machines with a structuralist equipment: a formal, logical analysis of language as a system, so that machines can participate in language. GPT-3 and other transformer-based language models stand in direct continuity with the linguist Saussure’s work: language comes into view as a logical system to which the speaker is merely incidental. These LLMs give rise to a new concept of language, and implicit in it is a new understanding of human and machine. OpenAI, Google, Facebook, and Microsoft are indeed catalysts, triggering a disruption of the old concepts we have been living by so far: a machine with linguistic capabilities is simply a revolution.
Nonetheless, critiques of LLMs have appeared as well. The usual one is that, no matter how good they may appear to be at using words, they do not have true language; building on the seminal work of the philologist Zipf, critics have argued that they are just technical systems made up of data, statistics, and predictions.
According to the linguist Emily Bender, “a language model is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot. Quite the opposite, we, human beings, are intentional subjects who can make things into objects of thought by inventing and endowing meaning.”
Machine learning engineers in companies like OpenAI, Google, Facebook, or Microsoft have experimentally established a concept of language that no longer needs the human at its center. According to this new concept, language is a system organized by an internal combinatorial logic that is independent from whoever speaks it (human or machine). They have thereby undermined one of the most deeply rooted axioms in Western philosophy: that humans have what animals and machines do not have, namely language and logos.
Some data: on average, humans publish about seventy million posts a month on the content management platform WordPress, producing about fifty-six billion words a month, or roughly 1.8 billion words a day. GPT-3, before the scintillating launch of ChatGPT, was already producing around 4.5 billion words a day, more than twice what humans on WordPress were producing collectively. And that is just GPT-3; there are other LLMs. We are exposed to a flood of non-human words. What will it mean to be surrounded by a multitude of non-human forms of intelligence? How can we relate to these astonishingly powerful content-generating LLMs? Do machines require semantics, or even a will, to communicate with us?
These are philosophical questions that cannot be solved with an engineering approach alone. The scope is much wider and the stakes are extremely high. LLMs can not only learn and master our human languages; they can also make us reflect on, and question, the nature of language, knowledge, and intelligence. Large language models illustrate, for the first time in the history of AI, that language understanding can be decoupled from all the sensorial and emotional features we, human beings, share with each other. It seems we are gradually entering a new epoch in AI.
Posted: March 30th, 2022 | Author: Domingo | Filed under: Artificial Intelligence, Human-centered explainable AI | Tags: AI, artificial intelligence, Explainable AI, HCXAI, Human-centered Explainable AI | Comments Off on Explainable Artificial Intelligence: A Main Foundation in Human-centered AI
Human-centered explainable AI (HCXAI) is an approach that puts the human at the center of technology design. It develops a holistic understanding of who the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
Explainable AI (XAI) refers to artificial intelligence -and particularly machine learning techniques- that can provide human-understandable justification for their output behavior. Much of the previous and current work on explainable AI has focused on interpretability, which can be viewed as a property of machine-learned models that dictates the degree to which a human user -AI expert or non-expert- can come to conclusions about the performance of the model given specific inputs.
An important distinction between interpretability and explanation generation is that explanation does not necessarily elucidate precisely how a model works; it aims instead to provide useful information for practitioners and users in an accessible manner. The challenges of designing and evaluating “black-boxed” AI systems depend crucially on who the human in the loop is. Understanding the who is crucial because it governs what the explanation requirements are. It also scopes how the data are collected, what data can be collected, and the most effective way of describing the why behind an action.
Explainable AI (XAI) techniques can be applied to AI black-box models in order to obtain post-hoc explanations, based on the information those models provide. For Prof. Corcho, rule extraction belongs to the group of post-hoc XAI techniques. These techniques are applied over an already trained ML model -generally a black-box one- in order to explain the decision frontier it has inferred, using the input features to obtain the predictions. Rule extraction techniques are further differentiated into two subgroups: model-specific and model-agnostic. Model-specific techniques generate the rules based on specific information from the trained model, while model-agnostic ones only use the input and output information of the trained model, and hence can be applied to any model. Post-hoc XAI techniques in general are then differentiated depending on whether they provide local explanations -explanations for a particular data point- or global ones -explanations for the whole model. Most rule extraction techniques have the advantage of providing explanations for both cases at the same time. A minimal sketch of the model-agnostic, global variant is given below.
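As an illustration, here is a minimal, hypothetical sketch of model-agnostic rule extraction: a shallow decision tree is fitted as a global surrogate of an already trained black-box model, using only the black box’s inputs and outputs. The dataset, the choice of a random forest as the black box, and the tree depth are all illustrative assumptions, not a prescription from the techniques discussed above.

```python
# Sketch of a model-agnostic, post-hoc rule-extraction approach: a shallow
# decision tree is fitted as a global surrogate of a black-box model, using
# only the black box's inputs and outputs, and its rules are printed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True, as_frame=True)

# 1. Train the "black box" (any model would do for a model-agnostic technique).
blackbox = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Query it: the surrogate only ever sees inputs and the black box's outputs.
y_blackbox = blackbox.predict(X)

# 3. Fit an interpretable surrogate on those outputs and extract its rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_blackbox)
print(export_text(surrogate, feature_names=list(X.columns)))

# Fidelity: how often the surrogate reproduces the black box's predictions.
print("fidelity:", accuracy_score(y_blackbox, surrogate.predict(X)))
```

The printed tree doubles as a set of if-then rules covering the whole model (a global explanation), while any single path through it explains an individual prediction (a local one).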
The researchers Carvalho, Pereira, and Cardoso have defined a taxonomy of properties that should be considered in the individual explanations generated by XAI techniques:
- Accuracy: It relates to using the explanations to predict the model’s output on unseen data.
- Fidelity: It refers to how well the explanations approximate the underlying model. The explanations have high fidelity if their predictions are consistently similar to the ones obtained from the black-box model.
- Consistency: It refers to the similarity of the explanations obtained from two different models trained on the same input data set (a toy check of this property is sketched after this list). Consistency is high when the explanations obtained from the two models are similar. However, low consistency is not necessarily a bad result, since the models may be extracting different yet valid patterns from the same data set due to the Rashomon effect -seemingly contradictory information is in fact telling the same story from different perspectives.
- Stability: It measures how similar the explanations obtained are for similar data points. As opposed to consistency, stability measures the similarity of explanations using the same underlying model.
- Comprehensibility: This metric is related to how well a human will understand the explanation. It is therefore a very difficult metric to define mathematically, since it is affected by many subjective elements of human perception such as context, background, prior knowledge, etc. However, there are some objective elements that can be considered in order to measure comprehensibility, such as whether the explanations are based on the original features (or on synthetic ones generated from them), the length of the explanations (how many features they include), or the number of explanations generated (e.g. in the case of global explanations).
- Certainty: It refers to whether the explanations include the certainty of the model about the prediction or not (i.e. a metric score).
- Importance: Some XAI methods that use features for their explanations include a weight associated with the relative importance of each of those features.
- Novelty: Some explanations may include whether the data point to be explained comes from a region of the feature space that is far away from the distribution of the training data. This is something important to consider in many cases, since the explanation may not be reliable due to the fact that the data point to be explained is very different from the ones used to generate the explanations.
- Representativeness: It measures how many instances are covered by the explanation. Explanations can range from explaining a whole model (e.g. weights in linear regression) to explaining only a single data point.
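As a toy illustration of the consistency property mentioned above, the following sketch trains two different models on the same data, derives a model-agnostic explanation for each via permutation feature importance, and compares the two explanations. The dataset, the two model families, and the use of Spearman rank correlation as the similarity measure are illustrative assumptions.

```python
# Sketch of checking *consistency*: two different models are trained on the
# same data, an importance-based explanation is computed for each, and the
# two explanations are compared by rank correlation.
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0).fit(X_train, y_train),
    "gradient_boosting": GradientBoostingClassifier(random_state=0).fit(X_train, y_train),
}

# One global, model-agnostic explanation per model: permutation importance.
importances = {
    name: permutation_importance(m, X_test, y_test, n_repeats=10,
                                 random_state=0).importances_mean
    for name, m in models.items()
}

# High rank correlation between the two explanations suggests high consistency.
rho, _ = spearmanr(importances["random_forest"], importances["gradient_boosting"])
print(f"Spearman rank correlation between explanations: {rho:.2f}")
```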
In the realm of psychology there are three kinds of views of explanations:
- The formal-logical view: an explanation is like a deductive proof, given some propositions.
- The ontological view: events, or states of affairs, explain other events.
- The pragmatic view: an explanation needs to be understandable by the demander.
Explanations that are sound from a formal-logical or ontological view, but leave the demander in the dark, are not considered good explanations. For example, a very long chain of logical steps or events (e.g. hundreds) without any additional structure can hardly be considered a good explanation for a person, simply because he or she will lose track.
On top of this, the level of explanation refers to whether the explanation is given at a high-level or more detailed level. The right level depends on the knowledge and the need of the demander: he or she may be satisfied with some parts of the explanation happening at the higher level, while other parts need to be at a more detailed level. The kind of explanation refers to notions like causal explanations and mechanistic explanations. Causal explanations provide the causal relationship between events but without explaining how they come about: a kind of why question. For instance, smoking causes cancer. A mechanistic explanation would explain the mechanism whereby smoking causes cancer: a kind of how question.
As said, a satisfactory explanation does not exist by itself but depends on the demander’s needs. In the context of machine learning algorithms, several typical demanders of explainable algorithms can be distinguished:
- Domain experts: those are the professional users of the model, such as medical doctors who have a need to understand the workings of the model before they can accept and use the model.
- Regulators, external and internal auditors: like the domain experts, those demanders need to understand the workings of the model in order to certify its compliance with company policies or existing laws and regulations.
- Practitioners: professionals who use the model in the field, taking users’ input, applying the model, and subsequently communicating the result to the users -in situations such as, for instance, loan applications.
- Redress authorities: the designated competent authority to verify that an algorithmic decision for a specific case is compliant with the existing laws and regulations.
- Users: people to whom the algorithms are applied and that need an explanation of the result.
- Data scientists, developers: technical people who develop or reuse the models and need to understand the inner workings in detail.
Summing up, for explainable AI to be effective, the final consumers of the explanations (people) need to be duly considered when designing HCXAI systems. AI systems are only truly regarded as “working” when their operation can be narrated in intentional vocabulary, using words whose meanings go beyond the mathematical structures. When an AI system “works” in this broader sense, it is clearly a discursive construction, not just a mathematical fact, and the discursive construction succeeds only if the community assents.
Posted: March 18th, 2022 | Author: Domingo | Filed under: Artificial Intelligence, Book Summaries, Realpolitik | Tags: artificial intelligence, China, geopolitics, techno-socialism | Comments Off on China: Techno-socialism Seasoned with Artificial Intelligence
“People take the great ruler for granted and are oblivious to his presence. The good ruler is loved and acclaimed by his subjects. The mediocre ruler is universally feared. The bad ruler is generally despised. Because he lacks credibility, the subjects do not trust him. On the other hand, the great ruler seldom issues orders. Yet he appears to accomplish everything effortlessly. To his subjects, everything he does is just a natural occurrence.”
Tao-Te-Ching, Lao-Tse
Anyone who wants to learn something about China today, to know its strategic plan between now and 2050, the means to achieve it, and what drives this country in this titanic effort, should read the book El gran sueño de China: tecno-socialismo y capitalismo de estado by Claudio F. González.
Claudio F. González, PhD in engineering and economist, has lived in China for six years as director for Asia of the Polytechnic University of Madrid (UPM). During this time he has been involved in the fields of education, entrepreneurship, research, and innovation in the Asian giant. From this privileged vantage point he has been able to observe, analyze, and understand the complexity of this country.
According to the author, throughout the 20th century, the Western world looked at China with the condescension that is due to a former empire in decline and mired in chaos, power struggles, and poverty, and only in the last decades of the past century, as a market of great potential and a cheap manufacturer of limited quality. Nonetheless, China had -and has- its plan, the ultimate goal of which is returning the “Empire of the Center” to the place it has held for most of human history. Namely: being the most socially and technologically advanced nation and, from there, regaining world leadership in the economic, commercial, and cultural spheres.
In 2015, the government announced the first of its grand plans, Made in China 2025, with the goal of making China a leader by that date in industries such as robotics, semiconductor manufacturing, electric vehicles, renewable energy and, of course, artificial intelligence.
Initiatives such as the Belt and Road Initiative (BRI) or institutions such as the Asian Infrastructure Investment Bank (AIIB) are nothing more than instruments through which China wants to reshape the international order into one more favorable to its new interests. One of China’s stated goals is to be, by 2035, the country that sets the next global standards in areas such as AI, 5G, or the Internet of Things.
China’s successes in the digital economy are based on three main factors:
1. A market that is both huge in size and young, which allows for the rapid commercialization of new business models and equally allows for a high level of experimentation.
2. An increasingly rich and varied innovation ecosystem that goes far beyond a few large and famous companies.
3. Strong government support, which provides favorable economic and regulatory conditions, acts as a venture-capital investor and as a consumer of products based on new technologies produced by local companies, and allows access to data that are key to developing new solutions, under conditions that are unthinkable in other regions.
Professor F. González calls this model techno-socialism or state capitalism.
What are the defining characteristics of this techno-socialist model?
China intends to harness its own industry’s interest in technological development and align it with government interests. The overall goal is, starting from what the Chinese Communist Party (CCP) calls a moderately prosperous socialist society, to catch up with and surpass the most developed Western countries, ideally by the 100th anniversary of the founding of the People’s Republic of China (2049). Socialism in the sense of the Chinese regime is no longer socialism in the traditional sense of ownership and collective management of the means of production -a political conception definitively defeated after Mao’s demise- but rather their control and coordination to achieve social objectives.
The features that characterize this techno-socialism are complete physical security for people and things, the absence of extreme poverty, full employment, and the possibility for the most industrious to obtain economic and prestige rewards for their efforts, as long as they are aligned with the objectives established by the party and do not put its dominion at the least risk. This techno-socialism tries to lead society as a whole towards a centrality of thought that avoids extremisms that would destroy peace and social security, or that would call into question the leadership and omnipotent dominance of the party.
The alignment between business interests -or those of other institutions- and public interests, as interpreted by the party, creates a unique innovation ecosystem in which companies capable of promoting solutions for a broad user base become champions of an industrial policy. Once this status is achieved, and always within the logic of interest alignment, they gain access to a whole arsenal of measures -subsidies, tax reductions, preferential treatment- to maintain this position and, if possible, extend it internationally, since they are no longer merely companies but ambassadors of a new model. In the particular case of artificial intelligence, the government has contributed the necessary conditions -strategies, plans, regulation, space for experimentation- and practical support -venture capital, public procurement, permission to access data- for innovations in this field to follow. Alibaba, Tencent, and Baidu set up research centers, deploy applications, recruit human capital, and support CCP policies.
Will techno-socialism be able to generate enough disruptive innovations to give the technology created in China an entity of its own?
Between 2015 and 2018, venture capital channeled more than 1 trillion into new technology start-ups in China. China has more unicorns -companies less than ten years old with a valuation above $1 billion- than any other country. In terms of research, China is already the country with the most scientific articles, surpassing the US; although its impact is admittedly still lower, the gap is rapidly closing. It turns out that it has been the state which, with its research grants, scholarships, and universities, has generated the ideas that, because of their risk, private initiative would never have dared to finance. In this sense, public authorities that nurture alternative ways of thinking are the true engine of progress.
Professor F. González calls this innovation paradigm, as applied to China, the asymmetric triple helix model: the national government controls the overall innovation context through its top-down policies and plans but, at the same time, allows a certain level of autonomy for district, local, and regional governments to conduct their own experiments and accommodate innovations that emerge from the bottom up. Large companies, start-ups, and finance companies are aligned with government interests. And universities and research centers similarly align themselves with government objectives in producing new knowledge and generating talent in the form of human capital.
And finally, when will China achieve and assume the role of world leader?
From the author’s standpoint, China, due to a set of inconsistencies and structural gaps, is neither ready nor willing to assume global leadership in the foreseeable future. However, it does aspire to be the most powerful and influential economy, with the most cohesive society and the least contested domestic leadership, which would enable it to become something like the best country in a fragmented world. China’s current strength lies in the existence of a long-term plan: a sense of destiny that ties in with its imperial past. There is a deep conviction in Chinese society, a determination, which is the key force behind these strategic objectives.
China does not merely want to be a powerful nation; it believes it deserves to be one.
Posted: March 9th, 2022 | Author: Domingo | Filed under: Artificial Intelligence, Human-centered Artificial Intelligence | Comments Off on Human-centered AI: from Artificial Intelligence (AI) to Intelligence Augmentation (IA)
The growth and evolution of artificial intelligence has highlighted the need for AI techniques to be human-centered, mainly with the increasing adoption of particularly inscrutable opaque-box machine learning (ML) models -such as neural network models- whose behavior has become increasingly difficult to understand. For some people, this comprehension challenge will become the bottleneck to trusting and adopting AI technologies. Others have warned that a lack of human scrutiny will inevitably lead to failures in usability, reliability, safety, fairness, and other moral crises of AI.
However, what is human-centered artificial intelligence?
HCAI concerns the study of how present and future AI systems will interact with human lives in a mixed society composed of artificial and human agents, and how to keep the human in focus in this mixed society. The discussion involves both technical and non-technical people, who see AI agents from different perspectives. HCAI technologies bring superhuman capabilities, augmenting human creativity while raising human performance and self-efficacy. A human-centered approach will reduce out-of-control technologies, calm fears of robot-led unemployment, and give users the rewarding sense of mastery and accomplishment. It will also bring AI wider acceptance and higher impact by providing products and services that serve human needs.
In the past, researchers and developers focused on building AI algorithms and systems, stressing machine autonomy and measuring algorithm performance. The new synthesis gives equal attention to human users and other stakeholders by raising the value of user experience design and by measuring human performance. This new synthesis reflects the growing movement to expand from technology-centered thinking to include human-centered aspirations that highlight societal benefit. The interest in HCAI has grown stronger since the 2017 Montreal Declaration for Responsible Development of AI.
What is the difference between HCAI and AI?
From the standpoint of processes, HCAI builds on user experience design methods of user observation, stakeholder engagement, usability testing, iterative refinement, and continuing evaluation of human performance in use of systems that employ AI.
From the product standpoint, HCAI systems are designed to be super-tools which amplify, augment, empower, and enhance human performance. They emphasize human control while embedding high levels of automation.
The goal is to increase human self-efficacy, creativity, responsibility, and social connections while reducing the impact of malicious actors, biased data, and flawed software.
In an automation-enhanced world, clear interfaces could let humans control automation so as to make the most of people’s initiative, creativity, and responsibility. For instance, if AI technology developers increase their use of information visualization, their own algorithmic work will improve and they will help many stakeholders better understand how to use these new technologies. Information visualization has proven its value in understanding deep learning methods, improving algorithms, and reducing errors. Visual user interfaces have become appreciated for providing developers, users, and other stakeholders with a better understanding of, and more control over, how algorithmic decisions are made for parole requests, hiring, mortgages, and other consequential applications.
Automation is invoked by humans, but humans must be able to anticipate what happens because they remain responsible. While AI projects often focus on replacing humans, HCAI designers favor developing information-rich visualizations and explanations that are built in rather than added on. These information-abundant displays give users a clear understanding of what is happening and what they can do.
The future is human-centered. The goal is to create products and services that amplify, augment, empower, and enhance human performance. HCAI systems emphasize human control, while embedding high levels of automation, easing thus the transition from artificial intelligence (AI) to intelligence augmentation (IA).
Posted: November 19th, 2021 | Author: Domingo | Filed under: Artificial Intelligence, Machine Learning | Comments Off on On Master Algorithms, ML Schools of Thoughts, and Data Privacy
Although not a recent one (2015), The Master Algorithm by Pedro Domingos is a pleasant book to be read, mainly as a sort of basic pedagogical introduction to machine learning. As the author stated in the book, “when a new technology is as pervasive and game changing as machine learning, it’s not wise to let it remain a black box. Opacity opens the door to error and misuse.” Therefore, this initial effort to democratize this subfield of artificial intelligence is logically welcome.
Professor Domingos is a machine learning practitioner, and hence one can notice his bias with respect to other approaches to artificial intelligence; that said, it is interesting how he divides and frames the different schools of thought inside machine learning. From his standpoint, there are five schools:
- Symbolists: they view learning as inverse deduction and they take ideas from philosophy, psychology, and logic.
- Connectionists: they reverse engineer the brain and they are inspired by neuroscience and physics.
- Evolutionaries: they simulate evolution on the computer and they draw on genetics and evolutionary biology.
- Bayesians: they believe learning is a form of probabilistic inference and they have their roots in statistics.
- Analogizers: they learn by extrapolating from similarity judgements and they are influenced by psychology and mathematical optimization.
Each of the five tribes of machine learning has its own master algorithm, a general purpose learner that you can in principle use to discover knowledge from data in any domain. The symbolists’ master algorithm is inverse deduction; the connectionists’ is backpropagation; the evolutionaries’ is genetic programming; the bayesians’ is Bayesian inference; and the analogizers’ is the support vector machine.
For Symbolists, all intelligence can be reduced to manipulating symbols. Symbolists understand that you can’t learn from scratch: you need some initial knowledge to go with the data. Their master algorithm is inverse deduction, which figures out what knowledge is missing in order to make a deduction go through, and then makes it as general as possible.
Symbolist machine learning is an offshoot of the knowledge engineering school of AI. In the 1970s the so-called knowledge-based systems scored some impressive successes, and in the 1980s they spread rapidly, but then they died out. The main reason was the infamous knowledge acquisition bottleneck: extracting knowledge from experts and encoding it as rules is just too difficult, labor-intensive, and failure-prone. Letting the computer automatically learn to, say, diagnose diseases by looking at databases of past patients’ symptoms and the corresponding outcomes turned out to be much easier than endlessly interviewing doctors.
For Connectionists, learning is what the brain does. The brain learns by adjusting the strengths of the connections amongst neurons, and the crucial problem is figuring out which connections are to blame for which errors and changing them accordingly. The connectionist master algorithm is backpropagation, which compares a system’s output with the desired one and then successively changes the connections in layer after layer of neurons, so as to bring the output closer to what it should be.
Connectionist representations are distributed, mirroring what happens in the human brain. Each concept is represented by many neurons and each neuron participates in representing many different concepts. Neurons that excite one another form a cell assembly. Concepts and memories are represented in the brain by cell assemblies. Each of these can include neurons from different brain regions and overlap with other assemblies.
The first formal model of a neuron was proposed by Warren McCulloch and Walter Pitts in 1943. It looked a lot like the logic gates computers are made of, but McCulloch and Pitts’ neuron did not learn. For that, variable weights had to be given to the connections between neurons, resulting in what is called the perceptron. Perceptrons were invented in the late 1950s by Frank Rosenblatt, a Cornell psychologist. In a perceptron, a positive weight represents an excitatory connection, and a negative weight an inhibitory one. The perceptron generated a lot of excitement: it was simple, yet it could recognize printed letters and speech sounds just by being trained with examples.
In 1985, David Ackley, Geoff Hinton, and Terry Sejnowski replaced the deterministic neurons in Hopfield networks with probabilistic ones. A neural network then had a probability distribution over its states, with higher-energy states being exponentially less likely than lower-energy ones. One year later, in 1986, backpropagation was invented by David Rumelhart, a psychologist at the University of California, with the help of Geoff Hinton and Ronald Williams.
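A minimal sketch of the backpropagation mechanism described above, in plain NumPy: a tiny two-layer network is trained on XOR, the canonical task a single perceptron cannot solve. The architecture, learning rate, and number of iterations are arbitrary illustrative choices.

```python
# Toy backpropagation: compare the network's output with the desired one and
# push the error backwards, layer by layer, adjusting the connection weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))        # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))        # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(20_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: error at the output, propagated back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust each connection in proportion to its share of the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```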
Evolutionaries believe that the mother of all learning is natural selection. Their master algorithm is genetic programming, which mates and evolves computer programs in the same way that nature mates and evolves organisms. Whilst backpropagation entertains a single hypothesis at any given time, and that hypothesis changes until it settles into a local optimum, genetic algorithms consider an entire population of hypotheses at each step, and these can make big jumps from one generation to the next thanks to crossover. Genetic algorithms are full of random choices; they make no a priori assumptions about the structures they will learn, other than their general form.
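As a toy illustration of the evolutionary recipe (a population of hypotheses, selection, crossover, and random mutation), the sketch below evolves bit strings toward a fixed target. Real genetic programming evolves programs rather than bit strings, and every parameter here is an arbitrary illustrative choice.

```python
# Toy genetic algorithm: selection, crossover, and mutation over bit strings.
import random

random.seed(0)
TARGET = [1] * 20                                   # the "ideal" hypothesis
fitness = lambda bits: sum(b == t for b, t in zip(bits, TARGET))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    if fitness(parents[0]) == len(TARGET):
        break

    # Crossover + mutation: children can jump far from either parent.
    children = []
    while len(children) < 15:
        mum, dad = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))
        child = mum[:cut] + dad[cut:]
        if random.random() < 0.3:                   # occasional random mutation
            i = random.randrange(len(TARGET))
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "after", generation, "generations")
```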
Bayesians are concerned above all with uncertainty. The problem then becomes how to deal with noisy, incomplete, and even contradictory information without falling apart. The solution is probabilistic inference, and the master algorithm is Bayes’ theorem and its derivatives. Bayes’ theorem is just a simple rule for updating your degree of belief in a hypothesis when you receive new evidence: if the evidence is consistent with the hypothesis, the probability of the hypothesis goes up; if not, it goes down.
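A worked example of this belief updating, with made-up numbers (a 1% prior for a disease, and a test with 90% sensitivity and a 5% false-positive rate):

```python
# Bayes' theorem as belief updating, with illustrative (made-up) numbers.
p_h = 0.01                       # prior: P(disease)
p_e_given_h = 0.90               # likelihood: P(positive test | disease)
p_e_given_not_h = 0.05           # P(positive test | no disease)

# Total probability of the evidence (a positive test).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: the updated degree of belief after seeing the evidence.
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(disease | positive test) = {p_h_given_e:.2%}")   # roughly 15%
```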
For Analogizers, the key to learning is recognizing similarities between situations and thereby inferring other similarities. The analogizers’ master algorithm is the support vector machine, which figures out which experiences to remember and how to combine them to make new predictions. Before the support vector machine, the nearest neighbor algorithm was the preferred option in analogy-based learning.
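As a hedged illustration, the sketch below runs both analogizer methods on a small public dataset; the dataset and hyperparameters are arbitrary choices, and the point is simply the contrast between remembering all examples (nearest neighbors) and keeping only the decisive ones (the SVM’s support vectors).

```python
# Two analogizer methods side by side: k-nearest neighbours and an SVM.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

for name, clf in [("nearest neighbour", knn), ("support vector machine", svm)]:
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.2f}")

# The SVM remembers only the decisive training examples (its support vectors).
print("support vectors kept:", svm.named_steps["svc"].support_vectors_.shape[0])
```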
Up to the late 1980s researchers in each tribe mostly believed their own rhetoric, assumed their paradigm was fundamentally better and communicated little with the other schools. Today the rivalry continues but there is much more cross-pollination. For professor Domingos, the best hope of creating a universal learner lies in synthesizing ideas from different paradigms. In fact just a few algorithms are responsible for the great majority of machine learning applications.
As a coda to his pedagogical explanation of machine learning, Professor Domingos’s views about data privacy are worth highlighting. From his standpoint, our digital future begins with a realization: every time we interact with a computer -whether it is a smartphone or a server thousands of kilometers away- we do so on two levels. The first one is getting what we want there and then: an answer to a question, a product we want to buy, a new credit card. The second level, in the long run the most important one, is teaching the computer about us. The more we teach it, the better it can serve us -or manipulate us.
Life is a game between us and the learners that surround us. We can refuse to play, but then we will have to live a twentieth-century life in the twenty-first century. Or we can play to win. What model of us do we want the computer to have? And what data can we give it that will produce that model? Those questions should always be in the back of our minds whenever we interact with a learning algorithm -as they are when we interact with other people.