Artificial intelligence
Biography
Marvin Lee Minsky was born in New York City into a Jewish family. He attended the Fieldston School and the Bronx High School of Science, and later Phillips Academy in Andover, Massachusetts.
Minsky was an adviser on the film 2001: A Space Odyssey, and there are references to him both in the film and in the book. During filming, Minsky nearly died in an accident.[1]
Minsky is also credited with suggesting the plot of "Jurassic Park" to Michael Crichton during a walk on the beach in Malibu. At that point the dinosaurs were conceived as automata; Crichton later drew on his knowledge of biomedicine and reconceived the dinosaurs as clones.
Minsky received the 2013 BBVA Foundation Frontiers of Knowledge Award in Information and Communication Technologies. The award jury highlighted his work on machine learning, on systems integrating robotics, language, perception and planning, and on frame-based knowledge representation, work that has shaped the field of Artificial Intelligence.
Marvin Minsky (New York, 1927) is one of the fathers of Artificial Intelligence. In 1959, after completing his studies in mathematics at Harvard and Princeton, he founded the MIT Artificial Intelligence Laboratory together with John McCarthy. He has written several reference works on AI and is even mentioned in the book and film 2001: A Space Odyssey as responsible for the existence of HAL. Today, at 88, in a lucid reflection for Technology Review, he remains emphatically skeptical about the progress made in the field of Artificial Intelligence over the last few decades.
Minsky lived and worked alongside figures of the stature of John von Neumann, Alan Turing, Claude Shannon, Albert Einstein, Isaac Asimov and Carl Sagan, at a time when the first steps were being taken in many fields of research, Artificial Intelligence among them. Indeed, the term Artificial Intelligence was coined by John McCarthy in 1956 as the title of a talk in which he addressed this new discipline, explicitly distinguishing it from others such as Cybernetics.
Artificial intelligence was officially born as a discipline at a computer science conference at Dartmouth College (New Hampshire, United States) in 1956.
The fathers of this new field were John McCarthy, of Stanford University; Allen Newell and Herbert Simon, of Carnegie Mellon; and Minsky, the only one still living.
The rise and decline of AI
In essence, Minsky is convinced that Artificial Intelligence cannot progress because there are no ideas of sufficient consequence to open new avenues of research. Worse, the avenues pursued over the last few decades have led to a dead end, with no way past the obstacles that prevent the design of intelligent systems.
If one detail stands out in these recent reflections, it is how little the picture has changed since 2003, when Minsky, in a talk at Boston University, declared: "Artificial Intelligence has been brain-dead since the 1970s." Twelve years have passed since then, and Minsky's view remains the same.
There is an interesting TED talk by Marvin Minsky from 2003 in which he was already openly critical of the state of Artificial Intelligence. In his view, research had been on the wrong path since the 1960s.
The tone with which he discusses the present state of AI, at 88 years of age, is not without melancholy as he recalls the early days of Cybernetics. Many of his first students at MIT came from the Tech Model Railroad Club. Don't be misled by appearances: the members of that club, back in the fifties, were the hackers of their day, innovators and experts at designing mechanisms and machines that no one else dared to build.
As early as 1951, Minsky built a machine that sought intelligent behavior in its electronic circuits: the SNARC (Stochastic Neural Analog Reinforcement Calculator) used 3,000 vacuum tubes to simulate a neural network of 40 randomly connected nodes. Those were times when great discoveries were made every two or three days; today, the two or three days have become two or three years.
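The idea behind SNARC is simple enough to sketch in software. The toy version below is our own reconstruction of the general principle, not Minsky's actual design: randomly wired units with probabilistic connections whose firing probabilities are nudged upward when the network is rewarded.

```python
import random

# Toy reconstruction of a SNARC-style network (an illustration, not Minsky's design):
# 40 units wired at random; each connection fires probabilistically, and a reward
# signal reinforces the connections that were just active.
N_NODES = 40
rng = random.Random(1951)

# Random wiring: each node listens to three randomly chosen other nodes.
inputs = {n: rng.sample([m for m in range(N_NODES) if m != n], 3)
          for n in range(N_NODES)}
# Each connection starts with a 50% chance of transmitting activity.
prob = {(n, m): 0.5 for n in inputs for m in inputs[n]}

def step(active):
    """Propagate activity one step through the random wiring."""
    return {n for n in inputs
            if any(m in active and rng.random() < prob[(n, m)] for m in inputs[n])}

def reinforce(active, reward=0.05):
    """Reward: make the connections feeding the active nodes more likely to fire."""
    for n in active:
        for m in inputs[n]:
            prob[(n, m)] = min(1.0, prob[(n, m)] + reward)

active = {0, 1, 2}                 # seed activity
for _ in range(20):
    active = step(active)
    if len(active) > 5:            # stand-in for SNARC's reward signal
        reinforce(active)
```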
In those early days of cybernetics, it was taken for granted that everything could be mechanized; the question was never whether something was possible, but when and how it could be done, Artificial Intelligence included. Minsky describes the fifties and sixties as marvelous and fruitful, but from then on wrong decisions were made that have led to dead ends with no room to advance.
Arthur C. Clarke refers to Marvin Minsky in his novel 2001: A Space Odyssey as the "father" of HAL 9000. In the fictional 1980s of the 1968 novel, Minsky and Good had managed to build an intelligent computer system from a program capable of learning. This idea seems to be in tune with Minsky's own conception of AI.
For Minsky, all that has been done in the last ten years is to try to improve systems that are not especially good and that, in fact, have not evolved much over the last two decades. In earlier statements, back in 2003, he was already highly critical of Artificial Intelligence's turn toward expert systems, rule-based systems, neural networks, genetic algorithms and robotics. In his view, they cannot solve the problem of AI because they ignore high-level problems such as common sense.
The common sense Minsky refers to is what makes a three-year-old child more intelligent than any system designed under the auspices of AI to date. And it is precisely the lack of interest in researching aspects such as common sense that makes him so pessimistic. As Minsky declared in 2003: "The worst fad has been these stupid little robots. Graduate students are wasting three years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."
The solution: fire the experts
The years have not softened Minsky's tone when it comes to proposing solutions to put Artificial Intelligence back on the path of innovation and progress. "Big companies and bad ideas don't mix well," he says. The tactic of commercializing existing advances does not work. No researchers dare to tackle the highest-level problems of Artificial Intelligence; they settle for partial aspects instead of addressing the problem of AI from a much broader perspective.
What he proposes is simple: "Fire the experts," and at the same time support the innovators. For Minsky it is essential to bet on the new ideas of young people capable of opening other research paths where the current ones have proved unable to advance; to open Artificial Intelligence to profiles like those of the students who came from MIT's Tech Model Railroad Club; to go back, in a melancholy exercise of the imagination, to 1965 and audit the systems and ideas that followed, in order to identify where the error lay.
Books
- "Neural Nets and the Brain Model Problem". Ph.D. dissertation, Princeton University, 1954.
- "Computation: Finite and Infinite Machines". Prentice-Hall, 1967.
- "Semantic Information Processing". MIT Press, 1968.
- "Perceptrons" (with Seymour Papert). MIT Press, 1969.
- "Artificial Intelligence" (with Seymour Papert). University of Oregon Press, 1972.
- "Robotics". Doubleday, 1986.
- "The Society of Mind". Simon and Schuster, 1987.
- "The Turing Option" (with Harry Harrison). Warner Books, New York, 1992.
- "The Emotion Machine". Simon & Schuster, 2006. ISBN 0743276639.
The mind,
artificial intelligence and emotions
Interview with Marvin Minsky
Marvin Minsky is respected as one of the foremost researchers and writers in many fields of the computer sciences, particularly in Artificial Intelligence, the area which studies ways of imitating the human brain's cognitive functions in a computer. As a professor at the prestigious Massachusetts Institute of Technology (MIT), in Cambridge, USA, he founded the Artificial Intelligence Laboratory, a place where many of the ground-breaking research projects in computer sciences have occurred and still occur, such as the development of the programming languages LISP and LOGO. He is one of the founders of robotics and is the recipient of a number of awards and honors, such as the Turing Award, which is considered the Nobel of computing. He also participates in the renowned MIT Media Lab, where the media of the future are being researched.
Due to the many points of contact and interaction between the neurosciences, psychology and computer sciences in the area of Artificial Intelligence, it is no wonder that the genial mind of Prof. Minsky soon turned to the commonalities and interfaces between them and started to write about the brain and its product, the mind. His "opus magnum" in this area has been a fascinating book, "The Society of Mind", which has been translated into many languages, including Portuguese, and which contains a number of interesting theories about the organization and workings of the mind. His interest in the area is a long-standing one: the first electrical realization of an artificial neural network was made by Minsky while still a student. He has even written a novel about the building of a super-intelligence in 2023 A.D., titled "The Turing Option", in 1991.
Prof. Minsky visited São Paulo for the fourth time in May 1998, as an invited speaker at Inet'98, a conference on intelligent technologies and networking. During three grueling days, we had the opportunity, as president of the scientific committee of the conference, to accompany the indefatigable professor (despite his age) through an unending round of press interviews and conferences, as well as to ask many questions of our own. He tremendously impressed us with his intelligence, depth of thinking, originality of ideas, and personality. The result is a patchwork of an interview, stitched together from several questions asked in different settings and at different times, which we present here for the delight of the reader, who will undoubtedly be mesmerized by the brilliant mind of Marvin Minsky and his many original (and, some say, outrageous) ideas and catch phrases.
Sabbatini: Prof. Minsky, in your view, what is the contribution that computer sciences can make to the study of the brain and the mind?
Minsky: Well, it is clear to me that computer sciences will change our lives, but not because they are about computers. It is because they will help us to understand our own brains, to learn what the nature of knowledge is. They will teach us how we learn to think and feel. This knowledge will change our views of Humanity and enable us to change ourselves.
Computer sciences are about managing complicated processes, and the most complicated things around are us.
Sabbatini: Why are computers so stupid?
Minsky: A vast amount of information lies within our reach. But no present-day machine yet knows enough to answer the simplest questions about daily life, such as:
- "You should not move people by pushing them."
- "If you steal something, the owner will be angry."
- "You can push things with a straight stick, but not pull them."
- "When you release a thing you are holding in your hand, it will fall toward the earth (unless it is a helium balloon)."
- "You cannot move an object by asking it 'please come here'."
No computer knows such things, but every
normal child does.
There are many other examples. Robots make cars in factories, but no robot can make a bed, clean your house or baby-sit. Robots can solve differential equations, but no robot can understand a first-grade child's story. Robots can beat people at chess, but no robot can fill your glass.
We need common-sense knowledge – and programs that can use it. Common-sense computing needs several ways of representing knowledge. It is harder to make a computer housekeeper than a computer chess-player, because the housekeeper must deal with a wider range of situations.
Sabbatini: How large would such a knowledge base be?
Minsky: I think it would all fit on one CD-ROM. Of course, no psychological experiment has ever been done to see whether a person knows more than a CD's content (650 MB). It is practically impossible to estimate how many megabytes of information a person knows, but I think it is no more than this. If you memorized 10 books, it would take no more than 1 megabyte of memory, and very few people know even a small book by heart.
Hardware is not the limiting factor in building an intelligent computer. We don't need supercomputers to do this; the problem is that we don't know what software to run on them. A 1 MHz computer is probably faster than the brain and would do the job, provided it had the right software.
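Minsky's one-CD figure is easy to sanity-check with rough arithmetic. The page counts and compression ratio below are our assumptions, not his:

```python
# Back-of-envelope check of the "10 books ~ 1 MB" estimate
# (assumed figures: ~300 pages per book, ~2,000 characters per page, ~5:1 text compression).
chars_per_book = 300 * 2000              # ~600,000 characters, ~0.6 MB of raw text
raw_mb = 10 * chars_per_book / 1e6       # ten books: ~6 MB uncompressed
compressed_mb = raw_mb / 5               # plain text compresses to roughly a fifth
print(raw_mb, compressed_mb)             # 6.0 MB raw, ~1.2 MB compressed
```

On those assumptions the compressed size lands close to Minsky's figure, which is the spirit of his order-of-magnitude claim.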
Sabbatini: Why are there no computers already working with common-sense knowledge?
Minsky: There are very few people working on common-sense problems in Artificial Intelligence. I know of no more than five, so there are probably about ten of them out there. Who are these people? There's John McCarthy, at Stanford University, who was the first to formalize common sense using logic. He has a very interesting web page. Then there is Aaron Sloman, of the University of Birmingham, who is probably the best philosopher in the world working on Artificial Intelligence, with the exception of Daniel Dennett, though Sloman knows more about computers. Then there's me, of course. Another person working on a strong common-sense project is Douglas Lenat, who directs the CYC project in Austin. Finally, Douglas Hofstadter, who has written many books about the mind, artificial intelligence and so on, is working on similar problems.
We talk only to each other, and no one else is interested. There is something wrong with computer sciences.
Sabbatini: Is there any AI software that uses the common-sense approach?
Minsky: As I said, the best system based on common sense is CYC, developed by Doug Lenat, a brilliant guy; but he set up a company, Cycorp, and is developing it as a proprietary system. Many computer scientists have a good idea, then make it a secret and start building proprietary systems. They should distribute copies of their systems to graduate students, so that the systems could evolve and generate new ideas. We must understand how they work.
I don't believe in intellectual property. The world has gone crazy. People are patenting genes. Why? They didn't invent them!
Sabbatini: Are automatic language translation programs and chess-playing programs good examples of truly intelligent software?
Minsky: Of course not. Current machine translation technology still falls short of a reasonably good human translator, because it doesn't really understand what it is translating. Again, it would need common-sense knowledge, in addition to knowledge of the vocabulary, syntax and so on of the source and target languages. Prof. Noam Chomsky is partly to blame for the fact that we don't have good machine translation programs. He is so brilliant, and his theory of generative grammar is so good, that for 40 years it has been used by everyone in the field, shifting the focus from semantics to syntax.
In the beginning of the AI field, teaching a computer how to play complex games was a big thing. Arthur Samuel wrote a checkers-playing program in 1957. He will be remembered as the pioneer of computer gaming. We have learnt nothing in 40 years, I think, about making chess-playing programs: IBM's Deep Blue (which beat Garry Kasparov, the world chess champion) plays chess only more rapidly, not in a different way from the first chess-playing programs. These programs play well and can even beat the current world chess champion, but they do not play in the same manner as the human brain does.
Sabbatini: How does your concept of the "society of mind" relate to common-sense knowledge?
Minsky: Take the human vision system, for example. There is no computer today that can look around a room and make a map of what it sees, a feat that even a four-year-old is able to do. We have programs that can recognize faces, that can do some focal vision processing and recognition, but not this higher-order processing. Thus, human distance perception is a great example of a "society of mind". There is a suite of cooperating methods, such as gradients, border detection, haze, occlusion, shadow, focus, brightness, motion, disparity, perspective, convergence, shading knowledge, etc.
A computer program typically has one or two ways of doing something; a human brain has dozens of different methods to use.
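The flavor of such a "society" of cooperating methods can be sketched in a few lines. This is purely our illustration, not Minsky's design: several independent agents each estimate distance from a different cue, and the society settles on a robust combination of their votes.

```python
from statistics import median

# Illustrative "society" of depth-cue agents (our toy construction):
# each agent guesses distance from one cue; the group takes the median,
# so one wildly wrong cue does not ruin the estimate.
def from_disparity(scene):   # stereo: nearer objects shift more between the eyes
    return scene["baseline"] * scene["focal"] / scene["disparity"]

def from_known_size(scene):  # a person of known height looks smaller when far away
    return scene["known_height"] * scene["focal"] / scene["pixel_height"]

def from_haze(scene):        # crude atmospheric cue: hazier means farther
    return scene["haze_factor"] * 100.0

AGENTS = [from_disparity, from_known_size, from_haze]

def estimate_distance(scene):
    votes = [agent(scene) for agent in AGENTS]
    return median(votes)

scene = {"baseline": 0.06, "focal": 800, "disparity": 4.0,
         "known_height": 1.7, "pixel_height": 280, "haze_factor": 0.1}
print(estimate_distance(scene))   # one robust answer from several weak methods
```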
Sabbatini: You seem to believe that we will be able to build truly intelligent machines in the future. But humans have consciousness, an awareness of themselves. Will computers ever be able to have this?
Minsky: It's very easy to make computers aware of themselves. For example, all computers have a stack, a special area of memory where the computer can look to see its past actions. It is a trivial problem, and not a very important one. The real problem is to know how the mind knows about itself. We don't understand how this happens. People have a very shallow awareness of themselves. This is no mysterious thing.
As soon as computers get a minimum of common sense, we will know.
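The trivial kind of self-inspection Minsky mentions is a one-liner in most languages. A small illustration of our own, in Python rather than anything Minsky built:

```python
import inspect

# A program examining its own call stack: the mechanical "self-awareness"
# Minsky calls trivial (our example, not his).
def report():
    for frame in inspect.stack():
        # Each frame is a record of where the program has been.
        print(f"in {frame.function} at line {frame.lineno}")

def inner():
    report()

def outer():
    inner()

outer()   # prints frames for report, inner, outer and the module itself
```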
Sabbatini: When will this
happen?
Minsky: Never, at the present rate. The public doesn't value basic research enough to let this situation be fixed. I suggest that you get a Brazilian center to work on common-sense problems for the next 10 years!
Sabbatini: Your upcoming book will be about the role of emotions. Could you tell us a little about this?
Minsky: Emotion is only a different way to think. It may use some of the body's functions, such as when we prepare to fight (the heart beats faster, etc.). Emotions have a survival value, so that we are able to behave efficiently in some situations. Animals have better, stronger and faster emotions than us.
Therefore, truly intelligent computers will need to have emotions. This is not impossible or even difficult to achieve. Once we understand the relationship between thinking, emotion and memory, it will be easy to implement these functions in software.
Freud was one of the first computer scientists, because he studied the importance of memory. He was also a pioneer in proposing the role of emotions in personality and behavior. It is a pity that everyone listened only to his ideas on sex; Freud is really about complicated processes.
According to Freud, the mind is organized as a sandwich of three layers: the superego, which provides us with attachment, self-image and so on, and which learns social values and ideas, prohibitions and taboos, acquired mainly from our parents; under it, the ego, which mediates conflict resolution and connects to sensory input and motor expression; and under the ego, the id, which is responsible for the innate drive system, our basic urges such as hunger, thirst and sex.
This could be a model for a computer program
having personality, knowledge and emotion, social perception, moral
constraints, etc.
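As a thought experiment, the three-layer sandwich maps naturally onto a layered program. The sketch below is our illustration only, neither Freud's theory nor any system Minsky described: drives propose actions, learned prohibitions veto them, and a mediating layer picks what remains.

```python
# Toy three-layer agent (our illustration): id proposes, superego vetoes, ego decides.
def id_layer(state):
    """Innate drives: propose actions from basic urges."""
    urges_to_actions = {"hunger": "eat", "thirst": "drink", "fatigue": "sleep"}
    return [urges_to_actions[u] for u in state["urges"] if u in urges_to_actions]

def superego_layer(actions, state):
    """Learned prohibitions: veto socially unacceptable actions."""
    return [a for a in actions if a not in state["taboos"]]

def ego_layer(actions):
    """Mediation: settle on one of the surviving actions, or wait."""
    return actions[0] if actions else "wait"

state = {"urges": ["hunger", "fatigue"], "taboos": ["eat"]}   # e.g. mid-meeting
print(ego_layer(superego_layer(id_layer(state), state)))      # -> "sleep"
```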
Selected Resources
on the Internet: Artificial
Intelligence
- Yahoo! Artificial Intelligence
- Artificial Intelligence FAQ
- CMU Artificial Intelligence Repository
- The Turing Test and
Alan Turing's Home Page
- Robotics Internet Resources Page
- CYCorp, Inc
- Minds and Machines. Journal for Artificial Intelligence, Philosophy, and Cognitive Science. Kluwer Academic Publishers
- Bibliography on the Philosophy
of Artificial Intelligence, by David J. Chalmers
- Web-based
natural language conversation with intelligent agents: Eliza, TIPS (questions
about sex), Alice.
- Computer chess: history, what is computer chess, International Computer Chess Association, programming resources. Also: play against tkChess on the Web, and the historic Kasparov vs. Deep Blue game
- The 21st Century Artilect. Moral Dilemmas Concerning the Ultra Intelligent Machine, by Dr. Hugo de Garis.
- Is the Brain a Digital Computer? Classic paper by John R. Searle (a.k.a. the Chinese Room paper).
- The Brain vs. the Computer: Similarities and Differences. By Dr. Eric Chudler
- American Association for Artificial Intelligence
- UseNet
Newsgroup on AI Philosophy: comp.ai.philosophy
Hubert Dreyfus's views on artificial intelligence
Book cover of the 1979 paperback edition
Dreyfus argued that human intelligence and
expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills
could never be captured in formal rules. His critique was based on the insights
of modern continental philosophers such as Merleau-Ponty and Heidegger, and was
directed at the first wave of AI research, which used high-level formal symbols to represent reality and tried to reduce
intelligence to symbol manipulation.
When Dreyfus' ideas were first introduced in the
mid-1960s, they were met with ridicule and outright hostility.[2][3] By the
1980s, however, many of his perspectives were rediscovered by researchers
working in robotics and the
new field of connectionism—approaches
now called "sub-symbolic" because they eschew early AI research's
emphasis on high level symbols. Historian and AI researcher Daniel Crevier writes:
"time has proven the accuracy and perceptiveness of some of Dreyfus's
comments."[4] Dreyfus
said in 2007 "I figure I won and it's over—they've given up."[5]
Dreyfus'
critique
The
grandiose promises of artificial intelligence
Herbert A. Simon, for example, predicted in 1957 that within ten years:
- A computer would be world champion in chess.
- A
computer would discover and prove an important new mathematical theorem.
- Most theories in psychology would take the form of computer programs.
The press relayed these predictions in glowing reports of the imminent arrival of machine intelligence.
Dreyfus felt that this optimism was totally unwarranted. He believed that these predictions were based on false assumptions about the nature of human intelligence. Pamela McCorduck explains Dreyfus' position:
[A] great misunderstanding accounts for public
confusion about thinking machines, a misunderstanding perpetrated by the
unrealistic claims researchers in AI have been making, claims that thinking
machines are already here, or at any rate, just around the corner.[7]
These predictions were based on the success of an
"information processing" model of the mind, articulated by Newell and
Simon in their physical symbol systems hypothesis, and
later expanded into a philosophical position known as computationalism by
philosophers such as Jerry Fodor and Hilary Putnam.[8]
Because researchers believed they had successfully simulated the essential processes of human thought with simple programs, it seemed a short step to producing fully intelligent machines. However, Dreyfus argued that philosophy, especially 20th-century philosophy, had discovered serious problems with this information processing viewpoint. The mind, according to modern philosophy, is nothing like a computer.[7]
Dreyfus' four assumptions of
artificial intelligence research
In Alchemy and AI and What Computers
Can't Do, Dreyfus identified four philosophical assumptions that supported
the faith of early AI researchers that human intelligence depended on the
manipulation of symbols.[9] "In
each case," Dreyfus writes, "the assumption is taken by workers in
[AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis
among others, to be tested by the success of such work."[10]
The biological assumption
The brain
processes information in discrete operations by way of some biological
equivalent of on/off switches.
In the early days of research into neurology,
scientists realized that neurons fire in
all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similarly to Boolean logic gates, and so could be imitated by electronic circuitry at the level of the neuron.[11] When
digital computers became widely used in the early 50s, this argument was
extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols
of zero and one. Dreyfus was able to refute the biological assumption by citing
research in neurology that suggested
that the action and timing of neuron firing had analog components.[12] To be
fair, however, Daniel Crevier observes that "few still held that belief in
the early 1970s, and nobody argued against Dreyfus" about the biological
assumption.[13]
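The neuron-as-gate idea Dreyfus refuted is the McCulloch-Pitts model, which is simple to state in code. The sketch below is the standard textbook construction, not code from any of the sources discussed here:

```python
# Textbook McCulloch-Pitts threshold unit: binary inputs, fixed weights,
# all-or-nothing output (the model behind the biological assumption).
def mp_neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted sum of 0/1 inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Such units can imitate Boolean gates, which is why the brain was pictured
# as electronic circuitry operating at the level of the neuron:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a], [-1], 0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```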
The psychological assumption
The mind
can be viewed as a device operating on bits of information according to formal
rules.
He refuted this assumption by showing that much of
what we "know" about the world consists of complex attitudes
or tendencies that make us lean towards one interpretation over another.
He argued that, even when we use explicit symbols, we are using them against an
unconscious background of commonsense knowledge and that without this background
our symbols cease to mean anything. This background, in Dreyfus' view, was not
implemented in individual brains as explicit individual symbols with explicit
individual meanings.
The epistemological assumption
All
knowledge can be formalized.
This concerns the philosophical issue of epistemology, or the
study of knowledge. Even if
we agree that the psychological assumption is false, AI researchers could still
argue (as AI founder John McCarthy has) that it was possible for a
symbol processing machine to represent all knowledge, regardless of whether
human beings represented knowledge the same way. Dreyfus argued that there was
no justification for this assumption, since so much of human knowledge was not
symbolic.
The ontological assumption
The world consists of independent facts that can be represented by independent symbols.
Dreyfus also
identified a subtler assumption about the world. AI researchers (and futurists
and science fiction writers) often assume that there is no limit to formal,
scientific knowledge, because they assume that any phenomenon in the universe
can be described by symbols or scientific theories. This assumes that
everything that exists can be understood as objects, properties of
objects, classes of objects, relations of objects, and so on: precisely those
things that can be described by logic, language and mathematics. The question
of what exists is called ontology, and so Dreyfus calls
this the ontological assumption. If this is false, then it raises doubts about
what we can ultimately know and what intelligent machines will ultimately be
able to help us to do.
Knowing-how vs. knowing-that: the
primacy of intuition
In Mind Over Machine (1986), written
during the heyday of expert systems, Dreyfus analyzed the difference between human
expertise and the programs that claimed to capture it. This expanded on ideas
from What Computers Can't Do, where he had made a similar argument
criticizing the "cognitive simulation" school of AI research practiced by Allen Newell and Herbert A. Simon in the
1960s.
Dreyfus argued that human problem solving and
expertise depend on our background sense of the context, of what is important
and interesting given the situation, rather than on the process of searching
through combinations of possibilities to find what we need. Dreyfus would
describe it in 1986 as the difference between "knowing-that" and
"knowing-how", based on Heidegger's
distinction of present-at-hand and ready-to-hand.[14]
Knowing-that is our conscious, step-by-step problem
solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context-free symbols, which we manipulate using logic and language. These are the skills
that Newell and Simon had
demonstrated with both psychological experiments and computer programs. Dreyfus
agreed that their programs adequately imitated the skills he calls
"knowing-that."
Knowing-how, on the other hand, is the way we deal
with things normally. We take actions without using conscious symbolic
reasoning at all, as when we recognize a face, drive ourselves to work or find
the right thing to say. We seem to simply jump to the appropriate response,
without considering any alternatives. This is the essence of expertise, Dreyfus
argued: when our intuitions have been trained to the point that we forget the
rules and simply "size up the situation" and react.
The human sense of the situation, according to
Dreyfus, is based on our goals, our bodies and our culture—all of our
unconscious intuitions, attitudes and knowledge about the world. This “context”
or "background" (related to Heidegger's Dasein) is a
form of knowledge that is not stored in our brains symbolically, but intuitively
in some way. It affects what we notice and what we don't notice, what we expect
and what possibilities we don't consider: we discriminate between what is
essential and inessential. The things that are inessential are relegated to our
"fringe consciousness" (borrowing a phrase from William James): the
millions of things we're aware of, but we're not really thinking about right
now.
Dreyfus did not
believe that AI programs, as they were implemented in the 70s and 80s, could
capture this "background" or do the kind of fast problem solving that
it allows. He argued that our unconscious knowledge could never be
captured symbolically. If AI could not find a way to address these issues, then
it was doomed to failure, an exercise in "tree climbing with one's eyes on
the moon."[15]
History
Dreyfus began to formulate his critique in the
early 1960s while he was a professor at MIT, then a
hotbed of artificial intelligence research. His first publication on the
subject is a half-page objection to a talk given by Herbert A. Simon in the
spring of 1961.[16] Dreyfus
was especially bothered, as a philosopher, that AI researchers seemed to
believe they were on the verge of solving many long-standing philosophical problems within a few years, using computers.
Alchemy and AI
In 1965, Dreyfus was hired (with his brother Stuart Dreyfus' help) by
Paul Armer to spend the summer at RAND Corporation's Santa
Monica facility, where he would write Alchemy and AI, the first salvo of
his attack. Armer had thought he was hiring an impartial critic and was
surprised when Dreyfus produced a scathing paper intended to demolish the
foundations of the field. (Armer stated he was unaware of Dreyfus' previous
publication.) Armer delayed publishing it, but ultimately realized that
"just because it came to a conclusion you didn't like was no reason not to
publish it."[17] It
finally came out as RAND Memo and soon became a best seller.[18]
The paper flatly ridiculed AI research, comparing
it to alchemy: a
misguided attempt to change metals to gold based on a theoretical foundation that
was no more than mythology and wishful thinking.[19] It
ridiculed the grandiose predictions of leading AI researchers, predicting that
there were limits beyond which AI would not progress and intimating that those
limits would be reached soon.[20]
Reaction
The paper "caused an uproar", according
to Pamela McCorduck.[21] The AI
community's response was derisive and personal. Seymour Papert
dismissed one third of the paper as "gossip" and claimed that every
quotation was deliberately taken out of context.[22] Herbert A. Simon accused
Dreyfus of playing "politics" so that he could attach the prestigious
RAND name to his ideas. Simon says "what I resent about this was the RAND
name attached to that garbage".[23]
Dreyfus, who taught at MIT,
remembers that his colleagues working in AI "dared not be seen having
lunch with me."[24] Joseph Weizenbaum, the
author of ELIZA, felt
his colleagues' treatment of Dreyfus was
unprofessional and childish. Although he was an outspoken critic of Dreyfus'
positions, he recalls "I became the only member of the AI community to be
seen eating lunch with Dreyfus. And I deliberately made it plain that theirs
was not the way to treat a human being."[25]
The paper was the subject of a short piece in The New Yorker magazine
on June 11, 1966. The piece mentioned Dreyfus' contention that, while computers
may be able to play checkers, no computer could yet play a decent game of
chess. It reported with wry humor (as Dreyfus had) about the victory of a ten-year-old
over the leading chess program, with "even more than its usual
smugness."[20]
"A
Ten Year Old Can Beat the Machine— Dreyfus: But the Machine Can Beat Dreyfus"[28]
Dreyfus complained in print that he hadn't said a
computer will never play chess, to which Herbert A. Simon replied:
"You should recognize that some of those who are bitten by your
sharp-toothed prose are likely, in their human weakness, to bite back ... may I
be so bold as to suggest that you could well begin the cooling – a recovery of your sense of humor being a good first step."[29]
Vindicated
By the early 1990s several of Dreyfus' radical
opinions had become mainstream.
Failed predictions. As Dreyfus had foreseen, the
grandiose predictions of early AI researchers failed to come true. Fully
intelligent machines (now known as "strong AI") did not appear in the
mid-1970s as predicted. HAL 9000 (whose
capabilities for natural language, perception and problem solving were based on
the advice and opinions of Marvin Minsky) did not
appear in the year 2001. "AI researchers", writes Nicholas Fearn,
"clearly have some explaining to do."[30] Today
researchers are far more reluctant to make the kind of predictions that were
made in the early days. (Although some futurists, such as Ray Kurzweil, are
still given to the same kind of optimism.)
The biological assumption, although
common in the forties and early fifties, was no longer assumed by most AI
researchers by the time Dreyfus published What Computers Can't Do.[13] Although
many still argue that it is essential to reverse-engineer the brain by
simulating the action of neurons (such as Ray Kurzweil[31] or Jeff Hawkins[32]), they
don't assume that neurons are essentially digital, but rather that the action
of analog neurons can be simulated by digital machines to a reasonable level of
accuracy.[31] (Alan Turing had made
this same observation as early as 1950.)[33]
The psychological assumption and unconscious
skills. Many AI researchers have come to agree that human reasoning does
not consist primarily of high-level symbol manipulation. In fact, since Dreyfus
first published his critiques in the 60s, AI research in general has moved away
from high level symbol manipulation or "GOFAI",
towards new models that are intended to capture more of our unconscious
reasoning. Daniel Crevier writes that by 1993, unlike 1965, AI researchers
"no longer made the psychological assumption",[13] and had
continued forward without it. These new "sub-symbolic" approaches include:
- Computational intelligence
paradigms, such as neural nets, evolutionary algorithms and
so on, are mostly directed at simulating unconscious reasoning. Dreyfus
himself agrees that these sub-symbolic methods can capture the kind of
"tendencies" and "attitudes" that he considers
essential for intelligence and expertise.[34]
- Research
into commonsense knowledge has
focussed on reproducing the "background" or context of
knowledge.
- Robotics
researchers like Hans Moravec and
Rodney Brooks
were among the first to realize that unconscious skills would prove to be
the most difficult to reverse engineer. (See Moravec's paradox.) Brooks would spearhead a
movement in the late 80s that took direct aim at the use of high-level
symbols, called Nouvelle AI.
The situated
movement in robotics
research attempts to capture our unconscious skills at perception and
attention.[35]
- Statistical AI uses techniques related to economics and statistics to allow machines to "guess" – to make inexact, probabilistic decisions and predictions based on experience and learning. These highly successful techniques are similar to what Dreyfus called "sizing up the situation and reacting", but here the "situation" consists of vast amounts of numerical data (a minimal sketch follows this list).
This research has gone forward without any direct
connection to Dreyfus' work.[36]
Knowing-how and knowing-that.
Research in psychology and economics has been able to show that Dreyfus' (and
Heidegger's) speculation about the nature of human problem solving was essentially
correct. Daniel Kahneman and Amos Tversky collected a vast amount of hard evidence that human beings use two very different methods to solve problems, which they named "System 1" and "System 2". System 1, also known as the adaptive unconscious, is fast, intuitive and unconscious. System 2 is
slow, logical and deliberate. Their research was collected in the book Thinking, Fast and Slow, and inspired Malcolm Gladwell's
popular book Blink. As with
AI, this research was entirely independent of both Dreyfus and Heidegger.[36]
Ignored
Although clearly AI research has come to agree with
Dreyfus, McCorduck writes that "my impression is that this progress has
taken place piecemeal and in response to tough given problems, and owes nothing
to Dreyfus."[36]
The AI community, with a few exceptions, chose not
to respond to Dreyfus directly. "He's too silly to take seriously," a
researcher told Pamela McCorduck.[29] Marvin Minsky said of
Dreyfus (and the other critiques coming from philosophy) that
"they misunderstand, and should be ignored."[37] When
Dreyfus expanded Alchemy and AI to book length and published it as What
Computers Can't Do in 1972, no one from the AI community chose to respond
(with the exception of a few critical reviews). McCorduck asks "If Dreyfus
is so wrong-headed, why haven't the artificial intelligence people made more
effort to contradict him?"[29]
Another problem was that he claimed (or seemed to
claim) that AI would never be able to capture the human ability to
understand context, situation or purpose in the form of rules. But (as Peter Norvig and Stuart Russell would
later explain), an argument of this form cannot be won: just because one cannot imagine formal rules that govern human intelligence and expertise, this does not mean that no such rules exist. They quote Alan Turing's answer
to all arguments similar to Dreyfus':
"we cannot so easily convince ourselves of the
absence of complete laws of behaviour ... The only way we know of for finding
such laws is scientific observation, and we certainly know of no circumstances
under which we could say, 'We have searched enough. There are
no such laws.'"[40][41]
Dreyfus did not anticipate that AI researchers
would realize their mistake and begin to work towards new solutions, moving
away from the symbolic methods that Dreyfus criticized. In 1965, he did not
imagine that such programs would one day be created, so he claimed AI was
impossible. In 1965, AI researchers did not imagine that such programs were
necessary, so they claimed AI was almost complete. Both were wrong.
A more serious issue was the impression that
Dreyfus' critique was incorrigibly hostile. McCorduck writes "His
derisiveness has been so provoking that he has estranged anyone he might have
enlightened. And that's a pity."[36] Daniel
Crevier writes that "time has proven the accuracy and perceptiveness of
some of Dreyfus's comments. Had he formulated them less aggressively,
constructive actions they suggested might have been taken much earlier."[4]
Notes
- Note also that Dreyfus was one of the only non-computer scientists asked for a comment in IEEE's survey of AI's greatest controversies. (Hearst et al. 2000)
- Crevier 1993, p. 125.
- Crevier 1993, p. 126.
- McCorduck 2004, p. 230.
- The bulletin was for the Special Interest Group in Artificial Intelligence (ACM SIGART).
- Turing 1950, under "(7) Argument from Continuity in the Nervous System."
- McCorduck 2004, p. 236.
- Turing 1950, under "(8) The Argument from the Informality of Behavior."
References
- Brooks, Rodney
(1990), "Elephants
Don't Play Chess" (PDF),
Robotics and Autonomous Systems 6: 3–15, doi:10.1016/S0921-8890(05)80025-9,
retrieved 30 August 2007
- Crevier, Daniel
(1993), AI: The Tumultuous Search for Artificial Intelligence, New York,
NY: BasicBooks, ISBN 0-465-02997-3
- Dreyfus, Hubert (1965),
Alchemy and AI, RAND Corporation
- Dreyfus, Hubert
(1972), What Computers Can't Do, New York: MIT Press, ISBN 0-06-090613-8
- Dreyfus, Hubert
(1979), What Computers Can't Do, New York: MIT Press, ISBN 0-06-090624-3.
- Dreyfus, Hubert; Dreyfus, Stuart
(1986), Mind over Machine: The Power of Human Intuition and Expertise in
the Era of the Computer, Oxford, U.K.: Blackwell.
- Dreyfus, Hubert
(1992), What Computers Still Can't Do, New York: MIT Press, ISBN 0-262-54067-3
- Fearn,
Nicholas (2007), The Latest Answers to the Oldest Questions: A
Philosophical Adventure with the World's Greatest Thinkers, New York:
Grove Press.
- Gladwell, Malcolm (2005), Blink: The Power of
Thinking Without Thinking, Boston: Little, Brown, ISBN 0-316-17232-4.
- Hawkins, Jeff;
Blakeslee, Sandra (2005), On Intelligence, New York, NY: Owl Books, ISBN 0-8050-7853-3.
- Hearst, Marti A.; Hirsh, Haym; Bundy, A.; Berliner, H.; Feigenbaum, E.A.; Buchanan, B.G.; Selfridge, O.; Michie, D.; Nilsson, N. (January/February 2000), "AI's Greatest Trends and Controversies", IEEE Intelligent Systems 15 (1): 8–17, doi:10.1109/5254.820322.
- Horst,
Steven (Fall 2005), "The Computational Theory of Mind", in
Zalta, Edward N., The Stanford
Encyclopedia of Philosophy.
- Kurzweil, Ray
(2005), The Singularity is Near,
New York: Viking Press, ISBN 0-670-03384-7.
- McCorduck, Pamela (2004), Machines Who Think
(2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1
- Moravec, Hans
(1988), Mind Children, Harvard University Press, ISBN 0-674-57616-0.
- Newell, Allen; Simon, H. A. (1963), "GPS: A
Program that Simulates Human Thought", in Feigenbaum, E.A.; Feldman,
J., Computers and Thought, New York: McGraw-Hill
- Russell, Stuart J.; Norvig, Peter
(2003), Artificial
Intelligence: A Modern Approach (2nd ed.), Upper Saddle
River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
- Turing, Alan
(October 1950), "Computing Machinery and
Intelligence", Mind LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423,
retrieved 2008-08-18.