Thursday, June 21, 2012

John Searle and Chinese Rooms

John Searle, possibly a distant relative of mine, is an American philosopher who describes the concept of the "Chinese Room".

In 1980, Searle presented the "Chinese room" argument, which purports to prove the falsity of strong AI. (Familiarity with the Turing test is useful for understanding the issue.) Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit; you follow the instructions in the book, write what it says on the scratch paper, and slide the resulting sheet out the second slit. To people in the outside world, it appears the room speaks Chinese (they slide Chinese statements in one slit and get valid responses in return), yet you do not understand a word of Chinese. This suggests, according to Searle, that no computer can ever understand Chinese or English, because being able to manipulate Chinese symbols does not entail understanding Chinese or English: all that the person in the thought experiment, and hence a computer, is able to do is execute certain syntactic manipulations.

Basically, it says that computers will never gain consciousness or understanding, though they can have the appearance of such and simulate enough of it to fool most people. The claim that they genuinely can understand is the concept of strong AI, which Searle rejects. His argument rests on the fact that computers have no physical or chemical attributes that could replicate consciousness, as the brain has.

There is no physical law, Searle insists, that recognizes the equivalence between a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system all implementing the same program.

He also describes the distinction between brute facts and institutional facts. A brute fact is that, according to standard measures (and Google), the height of Mount Everest is 29,029 feet. An institutional fact is that LeBron James has scored over 2,000 points in seven consecutive seasons.

Strong AI, as Searle characterizes it, is the claim that the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.
The Chinese room, like all modern computers, manipulates physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic. Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
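To make the point concrete, here is a minimal sketch of the room as pure syntax. The rulebook entries are my own invented examples (Searle doesn't specify any); the program matches input strings to output strings with no representation of meaning anywhere.

```python
# A toy "Chinese room": the rulebook maps input symbol strings to
# output symbol strings. The entries below are hypothetical examples.
RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I'm fine"
    "你会说中文吗": "会",      # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(input_symbols: str) -> str:
    """Look up the input in the rulebook and copy out the listed answer.

    Nothing here 'understands' Chinese: the function only matches and
    copies symbol strings, which is exactly the syntactic manipulation
    Searle describes.
    """
    # Default response if the symbols aren't in the book ("please repeat")
    return RULEBOOK.get(input_symbols, "请再说一遍")

print(chinese_room("你好吗"))
```

From the outside, the function produces valid Chinese responses; on the inside, it is a dictionary lookup, with no semantics attached to any key or value.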

To turn a computer into a true intelligence, we would have to program it less syntactically and drive it more by semantic learning or understanding.

Does Google have true intelligence?  Or is it just part of a system that makes someone with true intelligence smarter, or makes us think we are smarter based on the most common brute and institutional facts?

The speed at which human brains process information is (by some estimates) 100 billion operations per second.
The IBM Sequoia is currently the world's fastest computer, at 16.32 petaflops, where a petaflop is 10^15 floating-point operations per second. That's about 16 quadrillion operations per second: 16,000,000,000,000,000 as opposed to 100,000,000,000.
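The gap between those two figures is easy to work out; this quick check uses the numbers quoted above (both of which are loose estimates, and the two kinds of "operation" are not really comparable).

```python
# Rough ratio between Sequoia's peak rate and the estimated brain rate.
brain_ops_per_sec = 100e9      # ~100 billion operations per second (estimate)
sequoia_flops = 16.32e15       # 16.32 petaflops = 16.32 * 10^15 flops

ratio = sequoia_flops / brain_ops_per_sec
print(f"Sequoia is roughly {ratio:,.0f}x the brain's estimated rate")
```

So on raw throughput alone, the machine is over five orders of magnitude ahead, which is exactly why the Chinese Room question matters: speed clearly isn't the missing ingredient.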

Since the main arguments were first written in the late 70s and early 80s, does the concept of a Chinese Room as being the barrier for AI still hold up?

With the cost per gigaflops currently sitting around $1.80 (perhaps a bit more due to parts shortages from flooding and earthquakes), is it just a matter of time before we have a truly learning computer?

Or do we need to look less at the concept of a syntactically-programmed computer, and more at a physically-created, self-sustaining semantic intelligence?

As I came into work today, someone had put a copy of the June 2012 Scientific American on my desk, with an article entitled Building a Machine Brain. Before I even opened it, I had put together this posting, after looking at a Wikipedia article mentioning someone with a surname like mine while doing a query related to semantic data modeling.

I searched for "id brain" in Google, to see if I could find the name of what truly determines reasoning and thought outside of the physical brain, namely the concepts of id, ego, and super-ego. I found the Wikipedia article describing Freud's concepts, but further down, another article from the May 2012 Scientific American entitled The Brain's Highways: Mapping the Last Frontier.

It's funny where your brain, in conjunction with the internet, will take you.

John Searle - Wikipedia, the free encyclopedia
