De cerca, nadie es normal

The Paradox of Artificial Intelligence: Is an Infallible Machine Really Intelligent?

Posted: March 16th, 2015 | Filed under: Artificial Intelligence

There we were, developing a smart virtual assistant with natural language processing and computational semantics capabilities for our corporate website, when I recalled something I had read a few days earlier about artificial intelligence and search engines in George Dyson's book Turing's Cathedral…

John von Neumann and Alan Turing were the pioneers of the current digital universe, but each in his own way. In different places: the United States for the former, the United Kingdom for the latter. In different terms: von Neumann talked only about computation, whilst Turing spoke only of artificial intelligence or, to be more precise, mechanical intelligence. And with different final goals: von Neumann investigated how machines could reproduce, whilst Turing wondered what it would take for machines to begin thinking.

Turing's Cathedral by George Dyson

Dyson puts forward the following paradoxical situation in his book: great progress has been made in fields such as speech recognition, machine translation and even stock market forecasting by processing statistical information on a massive scale. Yet how can this be intelligence, if it amounts to throwing probabilistic and statistical power at the problem and waiting to see what happens, without any underlying knowledge?

Digital computers can answer most of the questions that computer engineers pose in defined and unambiguous terms, that is, within perfectly delimited working domains. But what happens with the questions that are hard to pose?

As the author notes, for Turing the path towards artificial intelligence was to build a machine with a child's curiosity and let its intelligence evolve. The machine Turing imagined could answer any answerable question that anyone might pose. In his view, a machine could not be expected to be infallible and intelligent at the same time: instead of building infallible computers, we should develop fallible machines capable of learning from their own mistakes.

Today's search engines are probably the closest thing to those "fallible machines which can learn from their own mistakes". Their birthmark is the Monte Carlo method, developed by von Neumann and his team at the dawn of the computational era: randomised, quantified search paths whose results accumulate statistically into ever more precise answers; in other words, the ability to extract meaningful solutions from an avalanche of information, recognising that the meaning lies not in the final data points but in the intermediate paths.
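As a minimal illustration of the idea (mine, not Dyson's or von Neumann's formulation), here is a sketch of Monte Carlo estimation in Python: each random sample is a blind guess, yet the accumulated statistics converge on an ever more precise value of pi.

```python
import random

def estimate_pi(num_samples: int) -> float:
    """Estimate pi by sampling random points in the unit square.

    No single sample "knows" anything about circles; precision emerges
    only from the statistical accumulation of many random trials.
    """
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / num_samples

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10} samples -> pi ~ {estimate_pi(n):.5f}")
```

The more samples you draw, the tighter the estimate gets, which is exactly the sense in which meaning accumulates along the intermediate paths rather than residing in any single result.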

As Dyson explains, an Internet search engine is a deterministic finite-state machine, except at those moments when people make a non-deterministic choice about which results are the meaningful ones. Those clicks are immediately incorporated into the state of the deterministic machine, which thus grows a little wiser with every single click. This is what Turing called an "oracle machine".
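A toy sketch of that feedback loop might look like the following. It is purely illustrative: the class, method and field names (ToySearchEngine, record_click, click_counts) are mine and do not correspond to any real search engine's internals. The ranking itself is deterministic; only the users' clicks inject new information into its state.

```python
from collections import defaultdict

class ToySearchEngine:
    """Deterministic ranker whose only non-deterministic input is user clicks."""

    def __init__(self):
        # (query, url) -> how many times users judged this result meaningful
        self.click_counts = defaultdict(int)
        # query -> candidate URLs (assumed to come from a conventional index)
        self.index = defaultdict(list)

    def add_document(self, query_term: str, url: str) -> None:
        self.index[query_term].append(url)

    def search(self, query: str) -> list[str]:
        # Deterministic step: same state + same query -> same ranking.
        candidates = self.index.get(query, [])
        return sorted(candidates,
                      key=lambda url: self.click_counts[(query, url)],
                      reverse=True)

    def record_click(self, query: str, url: str) -> None:
        # The human's choice is the "oracle" step: it mutates the machine's
        # state, so the next ranking for this query reflects it.
        self.click_counts[(query, url)] += 1

if __name__ == "__main__":
    engine = ToySearchEngine()
    engine.add_document("turing", "https://example.org/turing-bio")
    engine.add_document("turing", "https://example.org/turing-machine")
    print(engine.search("turing"))   # initial, purely deterministic order
    engine.record_click("turing", "https://example.org/turing-machine")
    print(engine.search("turing"))   # the clicked result now ranks first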

Every time someone searches for something and finds an answer, they leave a weak but persistent trail of where a certain piece of meaning lies, and of what it is. Those pieces accumulate until the moment when, as Turing put it in 1948, "the machine would have grown up" and would begin to think.

In October 2005 Google launched its project to digitise books. Most people took it as another stroke of genius from the Page & Brin duo to democratise knowledge. That, however, was not the aim of the Big Brother founders: they wanted as many books as possible to be digitised so that an artificial intelligence could read them. Will this be the stage immediately before the development of a thinking machine? I don't know. If there is no reasoned computational learning model underneath, we'll end up with what Turing predicted: enormous probabilistic and statistical power, but no underlying knowledge.

Computationally interesting times await us. You’d better stay tuned.

