Stupid, numeric, single-column, auto-incrementing identities. Those are good keys, in a warehouse model anyway.
However, in SQL Server, identity values sometimes do get reused or refilled:
http://stackoverflow.com/questions/6738732/why-are-my-sql-server-identity-values-being-reused
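One concrete way this can happen is reseeding. Here is a minimal sketch, assuming SQL Server; the table and values are my own illustration, not taken from the linked question:

CREATE TABLE dbo.Sales (
    SalesKey int IDENTITY(1,1) PRIMARY KEY,
    Amount   money NOT NULL
);

-- Generates keys 1, 2, 3.
INSERT INTO dbo.Sales (Amount) VALUES (10), (20), (30);
DELETE FROM dbo.Sales WHERE SalesKey = 3;

-- Reseeding below the current maximum means the next insert hands out key 3 again.
DBCC CHECKIDENT ('dbo.Sales', RESEED, 2);
INSERT INTO dbo.Sales (Amount) VALUES (40);

Note that the identity property by itself doesn't enforce uniqueness at all; it's the PRIMARY KEY constraint in the sketch that would reject a genuine duplicate.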
And when using integer identities, be sure that the number of rows does not exceed the limits of the int type.
The max value is 2,147,483,647. Hopefully your company is successful enough to get this many sales transactions.
If row counts are expected to exceed that, use bigint.
Its max value is 9,223,372,036,854,775,807.
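If you do go with bigint, the change is just the key's declared type. A minimal sketch, with fact-table and column names that are my own illustration:

-- bigint identity: headroom up to 2^63 - 1, versus int's 2^31 - 1.
CREATE TABLE dbo.FactSalesTransaction (
    SalesTransactionKey bigint IDENTITY(1,1) PRIMARY KEY,
    TransactionDate     date  NOT NULL,
    Amount              money NOT NULL
);

-- IDENT_CURRENT shows how close an existing identity is to its type's ceiling.
SELECT IDENT_CURRENT('dbo.FactSalesTransaction') AS CurrentIdentityValue;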
Good keys, what are they like? « Thomas Kejser's Database Blog
Friday, June 22, 2012
Thursday, June 21, 2012
John Searle and Chinese Rooms
John Searle, possibly a distant relative of mine, is an American philosopher who describes the concept of the "Chinese Room".
In 1980, Searle presented the "Chinese room" argument, which purports to prove the falsity of strong AI.[39] (Familiarity with the Turing test is useful for understanding the issue.) Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit, you follow the instructions in the book, write what it says on the scratch paper, and slide the resulting sheet out the second slit. To people on the outside world, it appears the room speaks Chinese—they slide Chinese statements in one slit and get valid responses in return—yet you do not understand a word of Chinese. This suggests, according to Searle, that no computer can ever understand Chinese or English, because, as the thought experiment suggests, being able to 'translate' Chinese into English does not entail 'understanding' either Chinese or English: all which the person in the thought experiment, and hence a computer, is able to do is to execute certain syntactic manipulations.
Basically it says that computers will never gain consciousness or understanding, though they can have the appearance of such and simulate enough of it to fool most people. The claim that they could is the concept of strong AI. Searle argues they cannot, because they have no physical or chemical attributes that could replicate consciousness, as the brain has.
There is no physical law, Searle insists, that recognizes the equivalence between a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system all implementing the same program.
He describes the concepts of brute facts versus institutional facts. A brute fact is that, according to standard measures (and Google), the height of Mount Everest is 29,029 feet. An institutional fact is that LeBron James has scored over 2,000 points in seven consecutive seasons.
Strong AI, as Searle states it, is the claim that the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.
The Chinese room (and all modern computers) manipulates physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic. Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
To turn a computer into a true intelligence, it would have to be programmed less syntactically and driven more by semantic learning or understanding.
Does Google have true intelligence? Or is it just part of a system that makes someone with true intelligence smarter, or makes us think we are smarter based on the most common brute and institutional facts?
The speed at which human brains process information is (by some estimates) 100 billion operations per second.
The IBM Sequoia is currently the world's fastest computer, at 16.32 petaflops, where a petaflop is 10^15 floating-point operations per second. That's about 16 quadrillion operations per second: 16,000,000,000,000,000 as opposed to 100,000,000,000.
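Taken at face value, the ratio is 16.32 × 10^15 ÷ 1.0 × 10^11 ≈ 163,000, so by these raw-operation estimates Sequoia is on the order of 160,000 times faster than a single human brain (granting that operations per second is a crude proxy for whatever the brain is actually doing).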
Since the main arguments were first written in the late '70s and early '80s, does the Chinese Room still hold up as the barrier to strong AI?
With the cost per gigaflops currently sitting around $1.80 (perhaps a bit more due to parts shortages from flooding and earthquakes), is it just a matter of time before we have a truly learning computer? At that price, matching the brain's estimated 100 billion operations per second (100 gigaflops) is only about $180 worth of hardware.
Or do we need to look less at the concept of a syntactically-programmed computer, and more at a physically-created, self-sustaining semantic intelligence?
As I came into work today, someone had put a copy of the June 2012 Scientific American on my desk, with an article entitled Building a Machine Brain. Before I even opened it, I had put together this posting, after running across a Wikipedia article mentioning someone with a surname like mine while doing a query related to semantic data modeling.
I searched for id brain in Google, to see if I could find the name of what truly determines reasoning and thought outside of the physical brain: the concepts of id, ego, and super-ego. I found the Wikipedia article describing Freud's concepts, but further down was another article, from the May 2012 Scientific American, entitled The Brain's Highways: Mapping the Last Frontier.
It's funny, the paths your brain, in conjunction with the internet, will take you down.
John Searle - Wikipedia, the free encyclopedia
Sunday, June 10, 2012
Zugzwang - Wikipedia, the free encyclopedia
Zugzwang - Wikipedia, the free encyclopedia:
Zugzwang (German for "compulsion to move", pronounced [ˈtsuːktsvaŋ]) is a term usually used in chess which also applies to various other games. The term finds its formal definition in combinatorial game theory, and it describes a situation where one player is put at a disadvantage because he has to make a move when he would prefer to pass and make no move. The fact that the player must make a move means that his position will be significantly weaker than the hypothetical one in which it was his opponent's turn to move.
In business, it's usually better when customers bring sales to you than when you have to sell to customers. This works with referral web sites (Facebook, LinkedIn), where friends and colleagues refer each other and grow the subscriber base. It works with pyramid/MLM schemes (Amway), where the more paying recruits you have, the better the kings and queens at the top of the pyramid fare. And it works with retail, where the best chance of a sale is when someone is actually on your web site or in your store, versus when you're forced to pay for a newspaper ad or commercial to drive traffic and possible sales.
Referrals (letting the other person move) are the best sales to get. Instead of a 75-to-1 cold-call-to-sale ratio, you might see a 4-to-1 referral-to-sale ratio.
So instead of being forced to move, ask the other person to move first.