Monday, July 30, 2007

What's wrong with AI (AMNAP 1.0 Repost)

Skeptic magazine has a meticulously footnoted article that eviscerates the dubious claims of AI:



On March 24, 2005, an announcement was made in newspapers across the country, from the New York Times[1] to the San Francisco Chronicle,[2] that a company[3] had been founded to apply neuroscience research to achieve human-level artificial intelligence. The reason the press release was so widely picked up is that the man behind it was Jeff Hawkins, the brilliant inventor of the PalmPilot, an invention that made him both wealthy and respected.[4]

You’d think from the news reports that the idea of approaching the pursuit of artificial human-level intelligence by modeling the brain was a novel one. Actually, a Web search for “computational neuroscience” finds over a hundred thousand webpages and several major research centers.[5] At least two journals are devoted to the subject.[6] Over 6,000 papers are available online. Amazon lists more than 50 books about it. A Web search for “human brain project” finds more than eighteen thousand matches.[7] Many researchers consider modeling the human brain, or creating a “virtual” brain, a feasible project, even if a “grand challenge.”[8] In other words, the idea isn’t a new one. . .

The fact is, we have no unifying theory of neuroscience. We don’t know what to build, much less how to build it.[12] As one observer put it, neuroscience appears to be making “antiprogress” — the more information we acquire, the less we seem to know.[13] . . .

A Brief History of A.I.
Duplicating or mimicking human-level intelligence is an old notion — perhaps as old as humanity itself. In the 19th century, as Charles Babbage conceived of ways to mechanize calculation, people started thinking it was possible — or arguing that it wasn’t. Toward the middle of the 20th century, as mathematical geniuses Claude Shannon,[17] Norbert Wiener,[18] John von Neumann,[19] Alan Turing, and others laid the foundations of the theory of computing, the necessary tool seemed available.

In 1955, a research project on artificial intelligence was proposed; a conference the following summer is considered the official inauguration of the field. The proposal[20] is fascinating for its assertions, assumptions, hubris, and naïveté, all of which have characterized the field of A.I. ever since. The authors proposed that ten people could make significant progress in the field in two months. That ten-person, two-month project is still going strong — 50 years later. And it’s involved the efforts of more like tens of thousands of people. . .

According to the roboticists and their fans, Moore’s Law will come to the rescue. The implication is that we have the programs and the data all ready to go, and all that’s holding us back is a lack of computing power. After all, as soon as computers got powerful enough, they were able to beat the world’s best human chess player, weren’t they? (Well, no — a great deal of additional programming and chess knowledge was also needed.)

Sad to say, even if we had unlimited computer power and storage, we wouldn’t know what to do with it. The programs aren’t ready to go, because there aren’t any programs. . .

With admirable can-do spirit, technological optimism, and a belief in inevitability, psychologists, philosophers, programmers, and engineers are sure they shall succeed, just as people dreamed that heavier-than-air flight would one day be achieved.[88] But 50 years after the Wright brothers succeeded with their proof-of-concept flight in 1903, aircraft had been used decisively in two world wars; the helicopter had been invented; several commercial airlines were routinely flying passengers all over the world; the jet airplane had been invented; and the speed of sound had been broken.

After more than 50 years of pursuing human-level artificial intelligence, we have nothing but promises and failures. The quest has become a degenerating research program[89] (or actually, an ever-increasing number of competing ones), pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.

2 comments:

Anonymous said...

If a source of information consistently gives you misleading and incorrect information in your area of expertise, don't trust that source when it provides information on an area outside your expertise.

M.C. said...

I've been writing software since age 10, and get paid to do it now, including work on some AI domain problems in the past.