For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think. Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. He was free to think about whatever he wanted; he chose to think about thinking itself. How could a few pounds of gray gelatin give rise to our very thoughts and selves?
Roaming in his Mercury, Hofstadter thought he had found the answer: that it lived, of all places, in the kernel of a mathematical proof. Seven years later, those ideas had not so much germinated as metastasized into a 777-page book, Gödel, Escher, Bach: An Eternal Golden Braid.
GEB, as the book became known, was a sensation. Its success was catalyzed by Martin Gardner, a popular columnist for Scientific American, who very unusually devoted his space in the July 1979 issue to discussing one book, in a glowing review. Hofstadter seemed poised to become an indelible part of the culture. GEB was not just an influential book; it was a book full of the future.
People called it the bible of artificial intelligence, that nascent field at the intersection of computing, cognitive science, neuroscience, and psychology. Ambitious AI research had acquired a bad reputation. Work was increasingly done over short time horizons, often with specific buyers in mind. In GEB , Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence—at precisely the moment that such an approach, having borne so little fruit, was being abandoned.
His star faded quickly. He would increasingly find himself on the outside of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility. Deep Blue, the IBM machine that beat the world chess champion Garry Kasparov in 1997, won by brute force. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score.
It could evaluate up to 200 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
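The "score each position, pick the move with the best score" recipe can be sketched as a plain minimax search. This is a hedged illustration only: Deep Blue's actual search was massively parallel alpha-beta pruning on custom hardware, and the evaluate, moves, and apply_move parameters here are hypothetical stand-ins for a real chess engine's components.

```python
def minimax(position, depth, maximizing, evaluate, moves, apply_move):
    """Look `depth` plies ahead; return (best_score, best_move).

    `evaluate` scores a position, `moves` lists legal moves,
    `apply_move` returns the position after a move is played.
    """
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position), None

    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(position, m), depth - 1, False,
                               evaluate, moves, apply_move)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(position, m), depth - 1, True,
                               evaluate, moves, apply_move)
            if score < best:
                best, best_move = score, m
    return best, best_move
```

With a toy "game" where a position is a number, moves add 1, 2, or 3 to it, and the score is the number itself, the maximizer at depth 1 simply picks the move with the highest score; the gulf between this mechanical search and how Kasparov sees a board is exactly Hofstadter's point.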
Does that tell you something about how we play chess? Does it tell you how Kasparov envisions and understands a chessboard? He distanced himself from the field almost as soon as he became a part of it. Why? One answer is that the AI enterprise went from being worth a few million dollars in the early 1980s to billions of dollars by the end of the decade. The more staid an engineering discipline AI became, the more it accomplished.
Today, on the strength of techniques bearing little relation to the stuff of thought, it seems to be in a kind of golden age. AI pervades heavy industry, transportation, and finance. The claim is that AI started working not in spite of ditching humans as a model but because it ditched them. But the claim loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Engines that could actually extract all that information and understand it would be worth 10 times as much.
Consider that computers today still have trouble recognizing a handwritten A. But how does understanding work? We do it by having the right concepts come to mind. This happens automatically, all the time. Colleagues talk about him in the past tense. New fans of GEB, seeing when it was published, are surprised to find out its author is still alive. Which sounds exactly like the self-soothing of the guy who lost.

Douglas R. Hofstadter was born into a life of the mind the way other kids are born into a life of crime.
He grew up in 1950s Stanford, in a house on campus, just south of a neighborhood actually called Professorville. His father, Robert, was a nuclear physicist who would go on to share the 1961 Nobel Prize in Physics; his mother, Nancy, who had a passion for politics, became an advocate for developmentally disabled children and served on the ethics committee of the Agnews Developmental Center, where his sister Molly lived for more than 20 years.
Dougie ate it up. He once spent weeks with a tape recorder teaching himself to speak backwards, so that when he played his garbles in reverse they came out as regular English. Just totally possessed, totally obsessed, by this kind of stuff. Hofstadter is 68 years old. Can someone like that age in the usual way? But he has the self-seriousness, the urgent earnestness, of a still very young man.
For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington. He lives in a house a few blocks from campus with Baofen Lin, whom he married last September; his two children by his previous marriage, Danny and Monica, are now grown. Although he has strong ties with the cognitive-science program and affiliations with several departments—including computer science, psychological and brain sciences, comparative literature, and philosophy—he has no official obligations.
He spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. In his back pocket, Hofstadter carries a four-color Bic ballpoint pen and a small notebook.
In what used to be a bathroom adjoined to his study but is now just extra storage space, he has bookshelves full of these notebooks. He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for and against particular mechanisms.
In this he is the modern-day William James, whose blend of articulate introspection (he introduced the idea of the stream of consciousness) and crisp explanations made his 1890 text, The Principles of Psychology, a classic. The difference is that where James had only his eyes, Hofstadter has something like a microscope. He is like the Wright brothers with their wind tunnel: while their competitors were testing wing ideas at full scale, the Wrights were doing focused aerodynamic experiments at a fraction of the cost.
The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you, whether by being especially creative or especially dim-witted, you can see exactly why. The programs all share the same basic architecture, a set of components and an overall style that traces back to Jumbo, a program Hofstadter wrote to work on the word jumbles you find in newspapers.
And indeed they are—I just wrote a program that can handle any word, and it took me four minutes. My program works like this: it takes the jumbled word and tries every rearrangement of its letters until it finds a word in the dictionary. Hofstadter spent two years building Jumbo: he was less interested in solving jumbles than in finding out what was happening when he solved them. He had been watching his mind.
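The four-minute program described above can be sketched in a few lines: try every rearrangement of the letters until one appears in a word list. (The WORDS set here is a tiny stand-in for a real dictionary file.)

```python
from itertools import permutations

# Tiny stand-in for a real dictionary; a real solver would load /usr/share/dict/words.
WORDS = {"jumble", "gloss", "waver", "think"}

def solve_jumble(scrambled, dictionary=WORDS):
    """Brute-force jumble solver: test every permutation of the letters."""
    for perm in permutations(scrambled.lower()):
        candidate = "".join(perm)
        if candidate in dictionary:
            return candidate
    return None
```

Note the factorial blowup: a 6-letter jumble means 720 candidates, a 10-letter one over 3.6 million. The brute force works, and that is exactly the problem; it bears no resemblance to what happens when a person unscrambles a word.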
It was just them doing things. They would be trying things themselves. The architecture Hofstadter developed to model this automatic letter-play was based on the actions inside a biological cell. Some enzymes are rearrangers (pang-loss becomes pan-gloss or lang-poss), others are builders (g and h become the cluster gh; jum and ble become jumble), and still others are breakers (ight is broken into it and gh). Each reaction in turn produces others, the population of enzymes at any given moment balancing itself to reflect the state of the jumble.
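The enzyme idea can be loosely sketched in code. This is a drastically simplified illustration, not Hofstadter's actual Jumbo architecture: the working state is a list of letter clusters, and small operations (builders that glue clusters, breakers that split them, rearrangers that reorder them) fire one at a time, chosen at random, until the clusters happen to spell a dictionary word.

```python
import random

def build(clusters, rng):
    """Builder enzyme: merge two randomly chosen clusters into one."""
    if len(clusters) < 2:
        return clusters
    i, j = rng.sample(range(len(clusters)), 2)
    merged = clusters[i] + clusters[j]
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

def breaker(clusters, rng):
    """Breaker enzyme: split a randomly chosen cluster at a random point."""
    i = rng.randrange(len(clusters))
    c = clusters[i]
    if len(c) < 2:
        return clusters
    cut = rng.randrange(1, len(c))
    return clusters[:i] + clusters[i + 1:] + [c[:cut], c[cut:]]

def rearrange(clusters, rng):
    """Rearranger enzyme: reorder the clusters."""
    out = clusters[:]
    rng.shuffle(out)
    return out

def enzyme_solve(letters, dictionary, steps=100_000, seed=0):
    """Let random enzyme reactions churn until the clusters spell a word."""
    rng = random.Random(seed)
    clusters = list(letters)
    for _ in range(steps):
        op = rng.choice([build, breaker, rearrange])
        clusters = op(clusters, rng)
        word = "".join(clusters)
        if word in dictionary:
            return word
    return None
```

Even this toy version shows the flavor: no operation knows about the goal, letters are never created or destroyed, and the answer emerges (when it does) from the churn of many small, local, purposeless reactions.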
Hofstadter of course offers an analogy: a swarm of ants rambling around the forest floor, as scouts make small random forays in all directions and report their finds to the group, their feedback driving an efficient search for food. Such a swarm is robust—step on a handful of ants and the others quickly recover—and, because of that robustness, adept.
When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought , which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
But very few people, even admirers of GEB, know about the book or the programs it describes. The modern era of mainstream AI, an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day, is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field. It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines.
On the other hand, the software we want to write would be adaptable, and for that, a hierarchy of rules seems like just the wrong idea. The rules-based approach was fundamentally broken.
Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.
And it did. You could say that it started in 1988, with a project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced: how semantics, syntax, and morphology work, and how words commingle in sentences and combine into paragraphs, to say nothing of understanding the ideas for which those words are merely conduits.
So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward, you can hardly believe it. Imagine a box with thousands of knobs on it.
Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given "jump," what is the probability that "shot" comes next?
The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French? It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of the Canadian parliament. You proceed one pair at a time.
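The knob-tuning can be sketched in miniature. This toy example is in the spirit of IBM's statistical word-alignment models, not Candide's actual code: the knobs are word-translation probabilities t(f|e), initialized to be uniform, and each pass of expectation-maximization over the known sentence pairs nudges them toward values that better explain the data. The three sentence pairs below are invented for illustration.

```python
from collections import defaultdict

# Known English/French sentence pairs (invented toy data).
pairs = [
    (["the", "house"], ["la", "maison"]),
    (["the", "book"],  ["le", "livre"]),
    (["a", "book"],    ["un", "livre"]),
]

def train(pairs, iterations=10):
    """Tune the knobs t(f|e) by expectation-maximization over sentence pairs."""
    # Start uniform: every French word is equally plausible for every English word.
    t = defaultdict(lambda: 1.0)
    for _ in range(iterations):
        counts = defaultdict(float)   # expected co-occurrence counts
        totals = defaultdict(float)   # normalizers per English word
        for english, french in pairs:
            for f in french:
                norm = sum(t[(f, e)] for e in english)
                for e in english:
                    frac = t[(f, e)] / norm   # share of f credited to e
                    counts[(f, e)] += frac
                    totals[e] += frac
        # Maximization step: re-estimate each knob from the expected counts.
        t = defaultdict(float, {(f, e): counts[(f, e)] / totals[e]
                                for (f, e) in counts})
    return t

t = train(pairs)
```

After a few passes, the machine has never been told that "livre" means "book"; the knob t(livre|book) simply ends up higher than t(le|book) because "livre" keeps showing up wherever "book" does. That statistical shortcut is the whole trick, and the whole reason Hofstadter found it beside the point.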