James Randi Educational Foundation JREF Forum

JREF Forum » General Topics » Science, Mathematics, Medicine, and Technology


Tags: A.I., artificial intelligence, consciousness

View Poll Results: Is consciousness physical or metaphysical?
Consciousness is a kind of data processing and the brain is a machine that can be replicated in other substrates, such as general purpose computers. 81 86.17%
Consciousness requires a second substance outside the physical material world, currently undetectable by scientific instruments 3 3.19%
On Planet X, unconscious biological beings have perfected conscious machines 10 10.64%
Voters: 94.

Closed Thread
Old 8th May 2012, 01:29 PM   #321
quarky
Banned
 
Join Date: Oct 2007
Posts: 20,448
Originally Posted by Mr. Scott View Post
I know, though I asked you why consciousness studies in particular seemed to enrage you. What I hear from you is it's just one of many areas of scientific inquiry you'd prioritize lower than others, like feeding the hungry.

You may want to dis some areas of inquiry as conforming to a religious dogma of science. I'd argue that the yield of science, including past work where no payoff was anticipated, has brought unanticipated benefits. The track record of pure science is awesome. Compare that to the track record of traditional religions. I don't think the comparison is even close. Science works.

A conscious food distribution network might be very effective. Let the AI research continue. New tools will be used for good and bad purposes. It's always been so. (I'm not afraid of conscious sex robots, though unconscious sex robots don't seem that shabby. Joking!)
I totally dig your concerns and observations. For me and my various rages, it's never been a question of science vs. religion. It's always been about science, and how it can become more sane and ethical.

Of course, there are movements within science that lean this way.
Some of the most respected scientists would have very little issue with my spewage on this matter. Marrying ethics to science is a slippery slope, but the opposite (marrying science to business) turns out a lot of useless crap.
Or worse, harmful crap.

But fear not, my blows against the empire are pathetically ineffective.
My personal history, of course, flavors my attitude. I realize such anecdotes don't belong here, but here's one anyway:

My son-in-law, who is a brilliant and highly sought-after chemical engineer, was actually working for Bechtel when they decided to privatize water in Bolivia. He met my daughter when she was working for those exploited peasants. She's not a scientist. He was brilliant enough to catch the wave of her innate wisdom.
Old 8th May 2012, 02:06 PM   #322
Modified
Illuminator
 
 
Join Date: Sep 2006
Location: SW Florida
Posts: 4,653
Originally Posted by !Kaggen View Post
No, his advice was not about how to be successful but how to be fulfilled.
And for that purpose it is a failure as well.
Old 8th May 2012, 03:52 PM   #323
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Zeuzzz View Post
Heard about this before, got a link? Conscious behaviours in what sense?
The presentations on pages 26 and 50 of this symposium are some fairly simple ones from over half a decade ago: http://sacral.c.u-tokyo.ac.jp/pdf/Ik...sness_2005.pdf

Conscious behaviors in the sense of rehearsing possible actions in their mind using the same neural network pathways as actual perception, AKA imagination.
Old 8th May 2012, 06:15 PM   #324
Zeuzzz
Banned
 
Join Date: Dec 2007
Posts: 5,240
Originally Posted by Belz... View Post
That makes no sense. Where does this consciousness come from ?

Science shows us that it's the other way around, by the way.

Prove it's the other way round, then; I'm all ears.

Last edited by Zeuzzz; 8th May 2012 at 06:36 PM.
Old 8th May 2012, 06:36 PM   #325
Zeuzzz
Banned
 
Join Date: Dec 2007
Posts: 5,240
Originally Posted by rocketdodger View Post
The presentations on pages 26 and 50 of this symposium are some fairly simple ones from over half a decade ago: http://sacral.c.u-tokyo.ac.jp/pdf/Ik...sness_2005.pdf

Conscious behaviors in the sense of rehearsing possible actions in their mind using the same neural network pathways as actual perception, AKA imagination.

"4 Experimental Results
The implemented system currently runs on a
2.5 GHz Pentium 4 machine"

[Bolding added]

Really?

That first paper (p26) was well worded and written, theoretically fine, but it totally lacked any sort of in-depth analysis for me to comment on. They didn't even include any of the source code they used for the 'robot' in question for me to analyse. Again, all I see with these artificial-neural-network-based algorithms are people trying to model real biological neural networks with abstract models of information processing, which, although they may prove useful in a scientific sense, still lack any sort of consciousness, any more than a cleverly coded supercomputer does.
Old 8th May 2012, 06:45 PM   #326
Zeuzzz
Banned
 
Join Date: Dec 2007
Posts: 5,240
Originally Posted by Beelzebuddy View Post
There's not much point pursuing this line of reasoning. You could work him down to string theory and he could still argue there's demons plucking them.

Why do you say demons? I never mentioned anything even remotely relevant to that reference.

If you want to discuss the problems with string theory that's fine, but try to keep it to a relevant thread and not hijack this one. Or just buy this http://www.amazon.com/The-Trouble-Wi.../dp/0618551050

Last edited by Zeuzzz; 8th May 2012 at 06:55 PM.
Old 8th May 2012, 09:54 PM   #327
quarky
Banned
 
Join Date: Oct 2007
Posts: 20,448
It's kind of cool how threads about consciousness are so easily hijacked and derailed.
It's pretty hard to be off-topic.

Which is why I'd like to see kittens.
Old 8th May 2012, 10:22 PM   #328
blobru
Philosopher
 
 
Join Date: May 2007
Posts: 6,829
...from a recent Kitten Symposium on Unconsciousness:

__________________
"Say to them, 'I am Nobody!'" -- Ulysses to the Cyclops

"Never mind. I can't read." -- Hokulele to the Easter Bunny
Old 9th May 2012, 08:04 AM   #329
Beelzebuddy
Master Poster
 
 
Join Date: Jun 2010
Posts: 2,384
Originally Posted by Zeuzzz View Post
Why do you say demons? I never mentioned anything even remotely relevant to that reference.
You never mentioned anything at all. But since an empirical viewpoint would lead one to conclude that consciousness (for most definitions of this terrible, horrible term) is a product of the brain's activity, and not the other way around, you must have something in mind.
Old 9th May 2012, 10:52 AM   #330
quarky
Banned
 
Join Date: Oct 2007
Posts: 20,448
Originally Posted by Beelzebuddy View Post
You never mentioned anything at all. But since an empirical viewpoint would lead one to conclude that consciousness (for most definitions of this terrible, horrible term) is a product of the brain's activity, and not the other way around, you must have something in mind.
funny about language...

What do you mean by "...in mind."?

It's hard to discuss a background sort of consciousness without sounding all wooed-out or like a religious fundamentalist. I'm neither; more of a fun, mental sort...

I'd try to give it a go, though it's a very strenuous hypothesis, and past attempts have garnered some mockery and nastiness.

But I'm still here. My 'single quark' hypothesis is exhausting to my brain.
Maybe I'll give it a shot, after chores.
Old 9th May 2012, 10:53 AM   #331
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Zeuzzz View Post
"4 Experimental Results
The implemented system currently runs on a
2.5 GHz Pentium 4 machine"

[Bolding added]

Really?

That first paper (p26) was well worded and written, theoretically fine, but it totally lacked any sort of in-depth analysis for me to comment on. They didn't even include any of the source code they used for the 'robot' in question for me to analyse. Again, all I see with these artificial-neural-network-based algorithms are people trying to model real biological neural networks with abstract models of information processing, which, although they may prove useful in a scientific sense, still lack any sort of consciousness, any more than a cleverly coded supercomputer does.
Why do you think this? Let me phrase the question in another way:

Why do you think the causal sequences of node activation in an artificial neural network are different from the causal sequences of node activation in a biological neural network?

The essential property of a neural network is that one neuron's output leads to a change in the behavior of neurons downstream. If a conscious behavior arises due to the way a network functions, what difference does it make how or where that network is implemented?

As for the first presentation, you don't need source code. It wouldn't make sense even if you saw it, because there is no specific programming done that is relevant to the robot. That isn't how neural networks work. They trained the robot so that its goal is focusing on blue, and they trained the robot that when it turned one way it saw blue, and not blue if it turned the other way. They did *not* tell the robot to turn -- ever.

What the robot did was imagine the act of turning in either direction, and imagining a left turn led to the imagination of a blue percept, which caused the robot to then *want* to turn left -- it effectively decided to turn left because it imagined that if it turned left it would see blue, and seeing blue is its goal. And this was all done with trained neural networks -- no hard coding of any behavior.

This is true imagination, by any possible definition of the term. Genuine, real, authentic imagination. And imagination is one of the behaviors we normally attribute to consciousness. Yeah the robot can't write poetry, or even play Jeopardy as well as Watson, but Watson doesn't *imagine* things like we do. This robot did.
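For what it's worth, that decision loop can be caricatured in a few lines. Everything below is my own toy illustration, not the paper's code: the function names and the two-entry forward model are invented for the sake of the sketch.

```python
# Toy sketch of imagination-based action selection: the robot rehearses
# each candidate action through a learned forward model, predicts the
# resulting percept, and commits to the action whose imagined percept
# matches its goal. No turning behavior is hard-coded anywhere.

# Hypothetical learned associations from training: turning left was
# followed by seeing blue, turning right by seeing not-blue.
FORWARD_MODEL = {"turn_left": "blue", "turn_right": "not_blue"}

def choose_action(goal_percept, actions, forward_model):
    """Pick the action whose imagined outcome matches the goal percept."""
    for action in actions:
        imagined = forward_model[action]  # rehearse the action internally
        if imagined == goal_percept:      # does imagination predict the goal?
            return action
    return None  # no imagined action leads to the goal

print(choose_action("blue", ["turn_left", "turn_right"], FORWARD_MODEL))
# -> turn_left: "decided" because the imagined left turn predicts blue
```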
Old 9th May 2012, 10:54 AM   #332
quarky
Banned
 
Join Date: Oct 2007
Posts: 20,448
Originally Posted by blobru View Post
...from a recent Kitten Symposium on Unconsciousness:

http://i298.photobucket.com/albums/m...s-sleeping.jpg
You are a sick puppy.
That was over the top of the cuteness charts, and I must now wash off the gay.
Old 9th May 2012, 11:02 PM   #333
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
I was just thinking about how the first electronic computer, the ENIAC, was developed to "feel the future," and it did so successfully.

It was developed to calculate artillery trajectories. In other words, given a specific weight and size of bullet, amount of gunpowder, the gun angle, wind speed and angle, air temperature and humidity, where will the bullet land?

Feeling the future is a fundamental application of mathematics and computers. Great things are happening as computers do it more and more like our brains do.
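The firing-table job can be caricatured in a few lines. This is only a toy sketch under drastic assumptions (no drag, no wind, flat ground; the real ENIAC computations modeled all of those numerically), but it shows the basic "where will it land?" calculation:

```python
import math

# Toy projectile model: ignore drag, wind, and air density entirely.

def landing_range(speed_mps, angle_deg, g=9.81):
    """Horizontal distance an ideal projectile travels before landing."""
    angle = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2 * angle) / g

# A 45-degree elevation maximizes range for a fixed muzzle speed.
print(round(landing_range(100.0, 45.0), 1))  # -> 1019.4 (meters)
```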
__________________
"Why is the world so different from what we thought it was?" Ting-Ting, from the 2000 film Yi Yi.
Old 9th May 2012, 11:26 PM   #334
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
Originally Posted by !Kaggen View Post
Oh dear you missed the whole point but confirmed the inspiration for the video.
The video was not about copying the successful outlier's history mechanically and unconsciously, but about living consciously through creativity in the now in an unpredictable world.
You falsely assume that because Jobs was successful his advice had something to do with it. No, his advice was not about how to be successful but how to be fulfilled.
Success at fulfillment perhaps?

"Follow your heart (gut, feelings, do what you love, etc.)" is too often bad advice for being successful in fulfillment or any other endeavor, because the "heart" (emotional part of the brain that purportedly feels the future) evolved through chaotic evolutionary processes only guaranteed, in the past, to have endowed enough success to proliferation of the genes responsible.
__________________
"Why is the world so different from what we thought it was?" Ting-Ting, from the 2000 film Yi Yi.
Old 11th May 2012, 09:54 AM   #335
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
This is pretty interesting:

http://www.iis.ee.ic.ac.uk/yiannis/DemirisJohnson03.pdf

Although this is almost 10 years old, it talks about a very important mechanism in biological brains -- re-using the same circuitry for both action and planning, and in some cases observation and learning ( both of those are supersets of planning, though ).

The basic idea is that the circuitry of the motor cortex is used not only for controlling muscles and decoding muscle position, but also for simulating the control of muscles and the effects of that control, I.E. imagining movement. And furthermore, that the imagining of movement is used during learning I.E. "if I do this, and my arm then moves up like so, I will be in the right position."

In this case the researchers use a sequence something like this:

1) robot observes goal configuration of arm, on another robot
2) the code modules that plan the robot's arm movement are re-routed to internal locations ( they no longer control the arm, rather their output goes back into the robot brain )
3) those modules then control simulations of the arm I.E. if one would "raise" the arm, the arm isn't actually "raised", yet portions of the robot brain are activated as if it was ( for example, imagine raising your arm -- you can also imagine what your arm feels like in the raised position )
4) the results of those simulations are evaluated to see if any of them bring the arm closer to the goal configuration
5) the simulated movements that are rated the best are reinforced, and more iterations of imagination are performed
6) eventually a sequence of movements that the robot imagined would put it in the goal configuration is found, and the arm control modules are re-routed back to the real arms
7) the action is performed

This seems very convoluted, but it is important to realize that this is the exact mechanism by which animals not only plan movements but also learn movements from observing others. In this case a neural network was not used, but the high level information flow is nevertheless the same ( or at least very similar ).
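The steps above can be sketched in a few lines. This is my own paraphrase under toy assumptions (a single joint angle for the "arm", a forward model that is just addition), not the authors' implementation:

```python
# Minimal simulate-evaluate loop for steps 2-6 above: imagine each
# candidate motor command, rate the imagined result against the goal,
# and keep the best move until the imagined arm reaches the goal.

GOAL = 90          # step 1: observed goal configuration (degrees)
MOVES = (-10, 10)  # candidate motor commands (degrees per move)

def imagine_plan(start, goal, moves, max_steps=50):
    """Build a movement plan purely by internal simulation."""
    angle, seq = start, []
    for _ in range(max_steps):
        if angle == goal:
            break  # step 6: the imagined sequence reaches the goal
        # step 3: simulate each command internally instead of executing it
        imagined = {m: angle + m for m in moves}
        # steps 4-5: rate each simulation by distance to goal, keep the best
        best = min(moves, key=lambda m: abs(imagined[m] - goal))
        seq.append(best)
        angle = imagined[best]
    return seq  # step 7 would replay this sequence on the real arm

print(imagine_plan(0, GOAL, MOVES))  # nine imagined +10-degree moves
```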

In a previous post I linked to some research that is like this but in that other case they actually *did* use neural networks.

Who said we don't know much about consciousness?

Last edited by rocketdodger; 11th May 2012 at 09:55 AM.
Old 11th May 2012, 04:00 PM   #336
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
Originally Posted by rocketdodger View Post
This is pretty interesting:

http://www.iis.ee.ic.ac.uk/yiannis/DemirisJohnson03.pdf

Although this is almost 10 years old, it talks about a very important mechanism in biological brains -- re-using the same circuitry for both action and planning, and in some cases observation and learning ( both of those are supersets of planning, though ).

The basic idea is that the circuitry of the motor cortex is used not only for controlling muscles and decoding muscle position, but also for simulating the control of muscles and the effects of that control, I.E. imagining movement. And furthermore, that the imagining of movement is used during learning I.E. "if I do this, and my arm then moves up like so, I will be in the right position."

In this case the researchers use a sequence something like this:

1) robot observes goal configuration of arm, on another robot
2) the code modules that plan the robot's arm movement are re-routed to internal locations ( they no longer control the arm, rather their output goes back into the robot brain )
3) those modules then control simulations of the arm I.E. if one would "raise" the arm, the arm isn't actually "raised", yet portions of the robot brain are activated as if it was ( for example, imagine raising your arm -- you can also imagine what your arm feels like in the raised position )
4) the results of those simulations are evaluated to see if any of them bring the arm closer to the goal configuration
5) the simulated movements that are rated the best are reinforced, and more iterations of imagination are performed
6) eventually a sequence of movements that the robot imagined would put it in the goal configuration is found, and the arm control modules are re-routed back to the real arms
7) the action is performed

This seems very convoluted, but it is important to realize that this is the exact mechanism by which animals not only plan movements but also learn movements from observing others. In this case a neural network was not used, but the high level information flow is nevertheless the same ( or at least very similar ).

In a previous post I linked to some research that is like this but in that other case they actually *did* use neural networks.

Who said we don't know much about consciousness?
Yes, we know a LOT about consciousness already, and what you're describing is a type of "feeling the future."

What's cool is that when we rehearse movements in our minds, our minds are actually moving the limbs, but inhibitory impulses prevent the muscles from physically moving. When I think really deeply about playing piano, sometimes my fingers come to life and start to weakly play the notes in the air. Maybe it's because the inhibitory neurons become exhausted.
__________________
"Why is the world so different from what we thought it was?" Ting-Ting, from the 2000 film Yi Yi.

Last edited by Mr. Scott; 11th May 2012 at 04:20 PM.
Old 11th May 2012, 04:18 PM   #337
Mr. Scott
Under the Amazing One's Wing
 
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
The Chinese Room Thought Experiment

When I first heard this thought experiment I was really intrigued. It seemed persuasive and made the hard problem of consciousness very tangible.

Now, I find the Chinese Room idea stupid. Searle is a smart guy, so why does he (and so many others) find it so compelling? Would someone explain to me why it's important or persuasive?

Chinese Room on Wiki

Video demo of the Chinese Room starts at 16:45 in this cool BBC program, "The Hunt for AI."

__________________
"Why is the world so different from what we thought it was?" Ting-Ting, from the 2000 film Yi Yi.
Old 11th May 2012, 10:25 PM   #338
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Mr. Scott View Post
Yes, we know a LOT about consciousness already, and what you're describing is a type of "feeling the future."

What's cool is that when we rehearse movements in our minds, our minds are actually moving the limbs, but inhibitory impulses prevent the muscles from physically moving. When I think really deeply about playing piano, sometimes my fingers come to life and start to weakly play the notes in the air. Maybe it's because the inhibitory neurons become exhausted.
Yeah, I have known about that first pathway for a while. What I realized just recently, which is mentioned in the research, is the idea that not only are the outgoing motor impulses inhibited, but they also lead to the same downstream effects as incoming sensory percepts.

Apparently the motor networks are always recurrent, and a model of the results is always being generated from any outgoing movement signals; we just don't notice it because usually the real thing happens and the sensory percepts from actually moving a limb trump those from imagining moving a limb. Only when the real thing is inhibited do the simulated results become apparent.

This also nicely explains why a deviation from the expected is such an attention-getter for a conscious animal -- if the results of the model and the results of reality don't match up it would be trivial for a network to see it, especially since both results will arrive in the same location at approximately the same time.
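That comparator idea is almost trivial to sketch. This is my own illustration with made-up numbers: the forward model's predicted percept and the actual percept meet at the same place, so any disagreement pops out immediately.

```python
# Toy comparator: prediction and reality arrive together; disagreement
# beyond a small tolerance becomes a "surprise" signal worth attending to.

def surprise(predicted, actual, tolerance=0.1):
    """Return the mismatch signal, or 0.0 when the model matches reality."""
    error = abs(predicted - actual)
    return error if error > tolerance else 0.0

print(surprise(90.0, 60.0))   # arm blocked at 60 instead of 90 -> 30.0
print(surprise(90.0, 90.05))  # within tolerance -> 0.0, nothing to notice
```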

Fascinating.

I wonder if we can start a sticky thread about "consciousness: the facts"

Last edited by rocketdodger; 11th May 2012 at 10:27 PM.
Old 11th May 2012, 10:41 PM   #339
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Mr. Scott View Post
Would someone explain to me why it's important or persuasive?
No, because it isn't.

Searle formulated it in just about the stupidest way possible. He did that on purpose. He doesn't want people actually thinking about the issue, he wants them to be blinded with emotion and just give up.

Case in point -- why Chinese and not English? Why a man in the room, and not a robot? Why a room, and not the brain of a giant?

The whole thing is absurd.

Last edited by rocketdodger; 11th May 2012 at 11:05 PM.
Old 12th May 2012, 01:03 AM   #340
Beelzebuddy
Master Poster
 
 
Join Date: Jun 2010
Posts: 2,384
Originally Posted by Mr. Scott View Post
Now, I find the Chinese Room idea stupid. Searle is a smart guy, so why does he (and so many others) find it so compelling? Would someone explain to me why it's important or persuasive?
Natural language translation is a hard problem. We're talking "hard" with a capital Nobel.

But like many hard problems, it's theoretically possible to brute force it. To just make a giant-ass lookup table covering every possible circumstance. That's all Google does today, really. You'd be surprised how few unique phrases there actually are.

Incidentally, computer scientists around the time of this argument (1980) were all really excited about the possibility of making giant-ass lookup tables for absolutely everything. They called it "expert systems."

These computer scientists argued that a computer armed with enough of these lookup tables was intelligent. Not "indistinguishable from," not "might as well be considered," was. A computer with a sufficiently large Chinese-English dictionary would know how to translate between them.

But hold on, Searle said. Let's give this giant-ass lookup table to some jackass in a room instead. He don't know Chinese. He ain't gonna learn Chinese, not when he just looks up sentence indexes. He doesn't understand what you're asking him. Look at him, he gets paid to sit in a dark room and do whatever was the 1980 equivalent of filling out captchas all day.

So whatever we're looking for with this whole "intelligence" thing, whatever Derpy McBlackbox over there has that my pocket calculator don't, the dictionary alone doesn't have it either. Moreover, this is a general problem. Just because you have a big enough index to answer every question doesn't mean you can call it "thinking."
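The lookup-table point is easy to make concrete. Here is a hypothetical two-entry phrasebook (my own toy, not anyone's real system): it answers its known inputs perfectly and collapses on the slightest rewording, which is exactly the gap between lookup and understanding.

```python
# Toy "room": a canned question-to-answer table and nothing else.
PHRASEBOOK = {
    "how are you?": "I am fine, thank you.",
    "what time is it?": "It is noon.",
}

def room_reply(question, phrasebook):
    """Look the question up verbatim; fail on anything unlisted."""
    return phrasebook.get(question, "??")

print(room_reply("how are you?", PHRASEBOOK))  # canned success
print(room_reply("how're you?", PHRASEBOOK))   # reworded -> "??"
```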
Old 12th May 2012, 01:09 AM   #341
Roboramma
Philosopher
 
 
Join Date: Feb 2005
Location: Shanghai
Posts: 7,733
Originally Posted by Beelzebuddy View Post
So whatever we're looking for with this whole "intelligence" thing, whatever Derpy McBlackbox over there has that my pocket calculator don't, the dictionary alone doesn't have it either. Moreover, this is a general problem. Just because you have a big enough index to answer every question doesn't mean you can call it "thinking."
I don't see how this follows: the fact that one part of the machine can't be said to be intelligent doesn't mean that the machine as a whole isn't. And the fact that one part of the machine doesn't understand Chinese doesn't mean that the machine as a whole doesn't.

I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 12th May 2012, 05:35 AM   #342
!Kaggen
Illuminator
 
 
Join Date: Jul 2009
Location: Cape Town
Posts: 3,736
Originally Posted by Roboramma View Post
I don't see how this follows: the fact that one part of the machine can't be said to be intelligent doesn't mean that the machine as a whole isn't. And the fact that one part of the machine doesn't understand Chinese doesn't mean that the machine as a whole doesn't.

I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
It means that the concept of intelligence has a history which gives it more meaning than what you want to credit it with.
__________________
"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/
Old 12th May 2012, 06:28 AM   #343
Roboramma
Philosopher
 
 
Join Date: Feb 2005
Location: Shanghai
Posts: 7,733
Originally Posted by !Kaggen View Post
Quote:
I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
It means that the concept of intelligence has a history which gives it more meaning than what I want to credit it with.
Um, "intelligent" means that the concept of intelligence has a history which gives it more meaning than what I want to credit it with? Huh?

Perhaps you could simply explain what "intelligent" means beyond displaying intelligent behavior.
__________________
"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov
Old 12th May 2012, 11:29 AM   #344
!Kaggen
Illuminator
 
 
Join Date: Jul 2009
Location: Cape Town
Posts: 3,736
Originally Posted by Roboramma View Post
Um, "intelligent" means that the concept of intelligence has a history which gives it more meaning than what I want to credit it with? Huh?

Perhaps you could simply explain what "intelligent" means beyond displaying intelligent behavior.
Who said anything about beyond behavior?
Why bring behavior into the discussion?
The issue is, the meaning of intelligence.
As a word, a concept, it has a rich history of different meanings.
Which is the right one?
You got an answer?
__________________
"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/
Old 12th May 2012, 11:58 AM   #345
Beelzebuddy
Master Poster
 
 
Join Date: Jun 2010
Posts: 2,384
Originally Posted by Roboramma View Post
I figure as long as it displays intelligent behavior, it's intelligent. I don't really understand what else "intelligent" could mean.
Robustness. Ask the room something not in the phrasebook but which it can answer, a differently worded question perhaps. A strong AI which understands Chinese could answer you anyway. Weak AI, using the lookup table alone, could not. Both are common definitions of the word "intelligence," whose meaning had previously been far from clear.

I should probably add here that I don't actually support the Chinese Room argument. It's wrong. Not because of any semantic foolishness, but because he assumes the operator (human or machine) has no capacity to learn the semantics of the symbols it manipulates. This was a perfectly fair assumption for its time, because people were generally arguing such a learning capacity would not be needed.

Add in that capability, though, and with time and practice you end up with an agent with some fragmentary shard of strong AI. It may not know any of the concepts the questions or answers refer to, but it truly understands how the one should map to the other.
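Beelzebuddy's "add in that capability" point can be sketched in a few lines of Python. Everything here (the class name, the toy phrasebook) is my own illustration, not anyone's real system: the operator memorizes every rule-book lookup it performs, so after enough practice it answers correctly even with the book taken away, while still never knowing what any symbol means.

```python
# Toy "operator": starts as a pure lookup table, but memorizes every
# lookup it performs. It never knows what any symbol means; it only
# learns which symbol maps to which. All names are illustrative.

class LearningOperator:
    def __init__(self, phrasebook):
        self.phrasebook = dict(phrasebook)  # the fixed rule book
        self.learned = {}                   # mappings acquired by practice

    def answer(self, question):
        if question in self.phrasebook:
            # Rule-book hit: record it as practice.
            self.learned[question] = self.phrasebook[question]
            return self.phrasebook[question]
        # Not in the book: fall back on whatever practice has taught.
        return self.learned.get(question, "???")

op = LearningOperator({"ni hao": "hello", "zai jian": "goodbye"})
op.answer("ni hao")          # drilled once from the book
del op.phrasebook["ni hao"]  # now take that book entry away...
print(op.answer("ni hao"))   # ...the operator still answers: hello
```

With the memorization line removed, this collapses back to Searle's original setup: take the book away and the operator is mute.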

Originally Posted by !Kaggen View Post
Who said anything about beyond behavior?
Why bring behavior into the discussion?
The issue is, the meaning of intelligence.
As a word, a concept, it has a rich history of different meanings.
Which is the right one?
You got an answer?
They're all wrong. The word is a catch-all term for a large variety of behavioral and information processing steps, and these days is increasingly hijacked by people trying to push a "humans are special" agenda.

It's almost as bad as "consciousness."

Last edited by Beelzebuddy; 12th May 2012 at 12:04 PM.
Beelzebuddy is offline
Old 12th May 2012, 02:31 PM   #346
Mr. Scott
Under the Amazing One's Wing
 
Mr. Scott's Avatar
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
Chess and the Chinese Room

A few years ago I was into playing chess on Yahoo. You set up a board and wait for a human opponent of similar rank to accept your game, and away you go.

Then one day, something disturbing happened. I was kicking someone's ass, and instantly after I won a piece, he started to play absolutely perfectly and in very few moves destroyed me. I felt pretty sure that he had been playing on his own until I started to beat him, then switched to a computer. I think he just didn't want to fall in the rankings.

The interesting thing is that the magic bean of my opponent's personality went away, and I noticed it instantly. Something like playing tug of war with a person, feeling his living muscles through the rope, then the rope getting hitched to a bulldozer and you getting pulled into the mud in one mechanical stroke.

Or, it was as if there were a person who knew only a little Chinese in the room, and when they had to respond in a way over their heads, they switched to the book, compiled by experts.

...but chess is not a lookup-table task for AI. There are too many possibilities; the table would have to be as big as the universe, or something like that. I've worked on look-ahead games, and made one that had no such table. It "felt the future" by imagining every possible move its opponent might make, its possible answers, and so on. I also added emotion to it -- it put up a happy face when it expected a win, and a sad face when it saw it was losing. Unlike us, it didn't let its emotions interfere with its intelligence.
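The "feel the future" approach Mr. Scott describes is essentially minimax search. Here is a minimal sketch, using the toy game Nim rather than chess (chess rules would bury the idea), with the happy/sad face bolted on in the spirit of his program; the game choice and all names are my own illustrative assumptions, not his actual code.

```python
# Minimal minimax "feel the future" sketch. The game is Nim: take 1
# or 2 stones per turn, and whoever takes the last stone wins. There
# is no lookup table; the machine imagines every line of play to the
# end of the game.

def minimax(stones, my_turn):
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if my_turn else +1   # score from the machine's side
    scores = [minimax(stones - take, not my_turn)
              for take in (1, 2) if take <= stones]
    # The machine maximizes its score; it assumes the opponent minimizes it.
    return max(scores) if my_turn else min(scores)

def best_move(stones):
    moves = {take: minimax(stones - take, my_turn=False)
             for take in (1, 2) if take <= stones}
    take = max(moves, key=moves.get)
    face = ":)" if moves[take] > 0 else ":("   # the "emotion" display
    return take, face

print(best_move(4))   # taking 1 leaves the opponent a lost position
```

The same algorithm with a depth cutoff and a heuristic evaluation function is the core of a real chess engine; only the "imagine every line to the end" part stops being feasible.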
__________________
"Why is the world so different from what we thought it was?" Ting-Ting, from the 2000 film Yi Yi.
Mr. Scott is offline
Old 12th May 2012, 02:41 PM   #347
Mr. Scott
Under the Amazing One's Wing
 
Mr. Scott's Avatar
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
Originally Posted by rocketdodger View Post
No, because it isn't.

Searle formulated it in just about the stupidest way possible. He did that on purpose. He doesn't want people actually thinking about the issue, he wants them to be blinded with emotion and just give up.

Case in point -- why Chinese and not English? Why a man in the room, and not a robot? Why a room, and not the brain of a giant?

The whole thing is absurd.
Ah, I didn't know that, and it didn't seem like the narrator of the BBC show understood that either. Next time I watch it I'll see if I missed it.

(Chinese because it's often an example of a language that's so extremely cryptic to westerners. A man in a room because it brings home the point that the man has no understanding of the meaning of the messages he's transcribing. His magic bean of understanding is never engaged, yet the one outside the room feels it is.)

So, Searle was arguing that the Chinese Room, like expert systems, did not understand the subject, but was merely playing back the understanding of the experts that created the table. Funny how so many people misunderstand its point, like the point of Schrödinger's Cat.
Mr. Scott is offline
Old 12th May 2012, 04:30 PM   #348
Beelzebuddy
Master Poster
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 2,384
Originally Posted by Mr. Scott View Post
...but chess is not a look up table task for AI. There are too many possibilities. The table would have to be as big as the universe or something like that.
Huh? We can almost do it now. It isn't even among the most computationally complicated discrete turn-based games by a long shot - that honor goes to Go and Arimaa.
Beelzebuddy is offline
Old 12th May 2012, 05:27 PM   #349
Roboramma
Philosopher
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 7,733
Originally Posted by !Kaggen View Post
Who said anything about beyond behavior?
Why bring behavior into the discussion?
Please read what you respond to. I said that if it displays intelligent behavior, it's intelligent.

You seemed to disagree. I was hoping you'd explain why. If you didn't disagree with that, all you have to do is say so.
Roboramma is offline
Old 12th May 2012, 09:32 PM   #350
Mr. Scott
Under the Amazing One's Wing
 
Mr. Scott's Avatar
 
Join Date: Nov 2005
Location: USA
Posts: 2,566
Originally Posted by Beelzebuddy View Post
Huh? We can almost do it now. It isn't even among the most computationally complicated discrete turn-based games by a long shot - that honor goes to Go and Arimaa.
From Number of possible chess games:

Quote:
The number of legal chess positions is 10^40, the number of different possible games, 10^120.
There are only about 10^80 atoms in the universe.

But, whatever the exact number, a lookup-table implementation is not feasible for chess-playing machines, which need to feel the future to play well.
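A quick back-of-envelope check of the quoted figures:

```python
# Sanity-checking the quoted numbers: even at one table entry per
# atom in the observable universe (~10^80), the ~10^120 possible
# games fall short by forty orders of magnitude.
games = 10**120   # rough count of distinct chess games (Shannon's estimate)
atoms = 10**80    # rough count of atoms in the observable universe
print(games // atoms)   # universes' worth of atoms needed: 10^40
```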

Last edited by Mr. Scott; 12th May 2012 at 09:36 PM.
Mr. Scott is offline
Old 12th May 2012, 10:13 PM   #351
rocketdodger
Philosopher
 
rocketdodger's Avatar
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Beelzebuddy View Post

These computer scientists argued that a computer armed with enough of these lookup tables was intelligent. Not "indistinguishable from," not "might as well be considered," was. A computer with a sufficiently large Chinese-English dictionary would know how to translate between them.

But hold on, Searle said. Let's give this giant-ass lookup table to some jackass in a room instead. He don't know Chinese. He ain't gonna learn Chinese, not when he just looks up sentence indexes. He doesn't understand what you're asking him. Look at him, he gets paid to sit in a dark room and do whatever was the 1980 equivalent of filling out captchas all day.
The problem is the thought experiment is using absurdness to extinguish absurdness.

It is absurd to think that a giant lookup table is relevant to *anything* when it comes to intelligence because by definition we consider intelligence the ability to do something other than reference pre-defined behavioral reactions.

The proper counter to this stupid argument by the old computer scientists is to just point out that they are idiots. Not formulate an even more bizarre scenario that is so unclear that every armchair philosopher on the internet has spun it into supporting their own uneducated opinions.
rocketdodger is offline
Old 12th May 2012, 10:22 PM   #352
rocketdodger
Philosopher
 
rocketdodger's Avatar
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Mr. Scott View Post
Ah, I didn't know that, and it didn't seem like the narrator of the BBC show understood that either. Next time I watch it I'll see if I missed it.
Heh, I just made that up. That is my own interpretation, based on the fact that I could argue why a lookup table is not equivalent to intelligence without referencing absurd scenarios, and it would be far more clear to everyone.

Hence, there must have been an ulterior motive, I tell myself. I am wary of any philosopher interested in consciousness and cognition who doesn't immerse themselves in programming; it seems disingenuous. And Searle, like Penrose, is that type. (Penrose isn't a philosopher, but he isn't a programmer either, so any notion he has about what an algorithm can or cannot do is amateur, and that is why I don't respect him at all on this issue.)

Note that I feel sort of the same way about all these types, regardless of which side they support: Dennett, Blackmore, etc. I can't stand listening to people quote Daniel Dennett or Susan Blackmore talking about how little we really know when it comes to consciousness, and saying "see, they are even supporters of the computational model and they admit that we don't know much."

Originally Posted by Mr. Scott View Post

So, Searle was arguing that the Chinese Room, like expert systems, did not understand the subject, but were playing back the understanding of the experts that created the table. Funny how so many people misunderstand its point, like the point of Schrodinger's Cat.
Yeah but here is the thing -- was Searle clear that the instructions the guy in the room follows are merely some implementation of a lookup table? I don't recall that being explicitly part of the description, and if they are, he hasn't done a good job squashing all the bad versions of the Chinese Room that are crawling around.

Because all I ever hear from armchair philosophers is that the Chinese Room is supposed to show that *any* mechanical instructions the guy follows somehow invalidate any possible understanding of Chinese that the room might have.

In other words, I see the most common interpretation to be a suggestion that the idea of machine consciousness is absurd.

But you and I and anyone who thinks about it knows this isn't the case -- if the instructions on the cards represent something more like CPU instructions and register values, meaning the guy is actually just implementing an algorithm that could be anything, it is less clear-cut that the idea of the room understanding Chinese is absurd. And if the instructions on the cards represent something like a neural network simulation, then it isn't clear at all that the room doesn't understand Chinese. In that case it seems like the room *does* understand Chinese.

This is just one of those cases -- like every other case in this discussion, actually -- where incorrectness stems primarily from a failure of being specific when it comes to what we are talking about.
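The neural-network case above can be made concrete with a toy example. The "cards" below are nothing but fixed multiply, add, and threshold steps that an operator could follow by hand; the weights are hand-picked for illustration, not taken from any real or trained network.

```python
# Each "card" is a mindless multiply-add-threshold rule, yet followed
# mechanically they compute XOR, a behavior written on no single card.

def step(x):
    # Threshold unit: fires (1) only if its weighted input is positive.
    return 1 if x > 0 else 0

def xor_room(a, b):
    h1 = step(a + b - 0.5)        # hidden unit: fires if a OR b
    h2 = step(a + b - 1.5)        # hidden unit: fires if a AND b
    return step(h1 - h2 - 0.5)    # output: OR but not AND, i.e. XOR

print([xor_room(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

No single rule "knows" XOR; the behavior lives only in the arrangement, which is the point being made about the room.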

Last edited by rocketdodger; 12th May 2012 at 10:27 PM.
rocketdodger is offline
Old 12th May 2012, 10:34 PM   #353
rocketdodger
Philosopher
 
rocketdodger's Avatar
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Beelzebuddy View Post
Huh? We can almost do it now. It isn't even among the most computationally complicated discrete turn-based games by a long shot - that honor goes to Go and Arimaa.
Yes, and actually the best chess engines use a huge amount of lookup table references in their logic. They call it "endgame tablebase" analysis.

However, that is *not* thinking. It is no different than you turning left out of your driveway because you are used to it. At some point, when you first bought your house, you had to *think* about which direction to turn, and the same at the next turn, etc, when you went to work in the morning. But after awhile it is burned into your memory, and you just do it without thinking.

It is also worth noting that endgame tablebases don't help win games that aren't constrained by artificial rules, and they matter less and less in games with fewer artificial rules. They also don't really help that much in games where the tables can turn rapidly towards the end.
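The habit-versus-thinking analogy maps neatly onto memoization. A sketch, assuming nothing beyond the Python standard library; the "position" string and the stand-in evaluation are made up:

```python
# Habit vs. thinking: the first call actually "thinks" (runs the
# search); every repeat is a pure cached lookup, like a tablebase hit.
from functools import lru_cache

calls = []  # records every time real "thinking" happens

@lru_cache(maxsize=None)
def think(position):
    calls.append(position)                      # genuine work done here
    return sum(ord(c) for c in position) % 2    # stand-in for deep search

think("K+R vs K"); think("K+R vs K"); think("K+R vs K")
print(len(calls))   # prints 1: thought once, looked up twice
```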
rocketdodger is offline
Old 13th May 2012, 01:23 AM   #354
!Kaggen
Illuminator
 
!Kaggen's Avatar
 
Join Date: Jul 2009
Location: Cape Town
Posts: 3,736
Originally Posted by Beelzebuddy View Post
They're all wrong. The word is a catch-all term for a large variety of behavioral and information processing steps, and these days is increasingly hijacked by people trying to push a "humans are special" agenda.

It's almost as bad as "consciousness."
They are all wrong?
I see. So what's your agenda-free, right definition of "special", "intelligence" and "consciousness"?
!Kaggen is online now
Old 13th May 2012, 01:26 AM   #355
!Kaggen
Illuminator
 
!Kaggen's Avatar
 
Join Date: Jul 2009
Location: Cape Town
Posts: 3,736
Originally Posted by Roboramma View Post
Please read what you respond to. I said that if it displays intelligent behavior, it's intelligent.

You seemed to disagree. I was hoping you'd explain why. If you didn't disagree with that, all you have to do is say so.
Well, so what? If it moves, it moves.
Completely uninteresting.
The issue is: what behavior is intelligent?
!Kaggen is online now
Old 13th May 2012, 01:50 AM   #356
!Kaggen
Illuminator
 
!Kaggen's Avatar
 
Join Date: Jul 2009
Location: Cape Town
Posts: 3,736
No Dodger, if you want to study Consciousness you need to study human behavior. Reducing human behavior to neuron behavior and then trying to build models of neuron behavior which becomes human behavior is useless unless we know what human behavior is.

You're continually making the false assumption that because human brains are built of neurons, if we study the behavior of neurons we will be able to create brains.
It may be the way you write computer games, by building models from basic logical procedures, but it is useless if you don't know what the model is supposed to model.
Taking the PM approach of defining a complex human behavior such as consciousness as a simple behavior may make the idea of modeling from basic switch behavior easier, but that's irrelevant if we have yet to define the behaviors which make up consciousness.

An economist may define a human as a unit with x spending power for their economic model, but this definition is useless for a doctor who is modeling the spread of TB in a population.

Again, if you're selling games to children and you want them to be convinced the behavior they are seeing is "real", then your skill relates to their ability to be fooled. Attempting to get everyone to accept a limited definition of consciousness so that they can be fooled into believing your programming leads to consciousness is not exactly scientific. The idea of getting everyone to learn programming so they also learn how to trick people, and become convinced that tricking people is the way the real world works, is also not scientific.
The agenda amongst computationalists is clearly to justify their ability to trick people by claiming that's how the real world also works.
Remind anyone of priests?

Last edited by !Kaggen; 13th May 2012 at 01:58 AM.
!Kaggen is online now
Old 13th May 2012, 04:23 AM   #357
Roboramma
Philosopher
 
Roboramma's Avatar
 
Join Date: Feb 2005
Location: Shanghai
Posts: 7,733
Originally Posted by !Kaggen View Post
Will so what ? If it moves it moves.
Completely uninteresting.
The issue is what behavior is intelligent?
It's fine that you don't find it interesting. It was simply my comment on the "chinese room", which seems to be an argument that intelligent behavior is not necessarily a product of intelligence.
Roboramma is offline
Old 13th May 2012, 11:46 AM   #358
rocketdodger
Philosopher
 
rocketdodger's Avatar
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by !Kaggen View Post
No Dodger, if you want to study Consciousness you need to study human behavior. Reducing human behavior to neuron behavior and then trying to build models of neuron behavior which becomes human behavior is useless unless we know what human behavior is.
I completely agree.

What is the issue you are complaining about? I am not pixy, and neither are any of the very smart people doing research on machine consciousness. Understanding human behavior is the first step in all of the research that I familiarize myself with.

For example, in the paper I just discussed with Mr. Scott a few posts ago, the research was done according to known information about primate behavior, namely the way we plan and initiate movements in the context of learning by imitation. Furthermore the information includes things like MRI results, so it isn't just pie in the sky either. This is very factual stuff.

Originally Posted by !Kaggen View Post
The idea of getting everyone to learn programming so they also learn how to trick people and become convinced that tricking people is the way the real world works is also not scientific.
The agenda amongst computationalists is clearly to justify there ability to trick people by claiming that's how the real world also works.
Remind you of priests anyone?
That isn't the idea. The idea is to get everyone to learn programming because it is almost unique among human endeavors in that it *forces* the practitioner to think logically about something in order to see any results at all. And it is certainly the *only* such endeavor, from the already small set, that is so easily accessible to anyone -- anyone with a computer can start since there are thousands of free compilers and interpreters for whatever language one cares to use.

The fact is, computer science is really about wrapping your brain around algorithms, which are just sequences of events. It is about seeing how to get from point A to point B in reality, a skill far too few people have learned. I wish more scientists of all types were familiar with that skillset; I think the world would progress much faster. I can't tell you how many biology grad students I worked with, back when I was a lab assistant, spent far too much effort trying to figure out why this or that cellular process or pathway worked the way it did, when a few programming courses might have let them see how the steps of the process fit together to produce the results they were seeing.

So why should cognition be any different? It shouldn't. Our brains are made of stuff that behaves according to the laws of nature, and to figure out the ways that stuff might do stuff that leads to things like me typing a response to you simply requires an understanding of how sequences of events lead to results.

Computer science doesn't have to have anything to do with either computers or science. In fact I wish it wasn't named computer science because it is so misleading. It has to do with the study of step by step processes. The advantage I have over people who don't know how to program is that at this point I have an almost intuitive understanding of how step by step processes might lead to this or that result. If you had the same understanding, we wouldn't even be having this argument, because you would see the whole consciousness issue in a completely different light.
rocketdodger is offline
Old 13th May 2012, 01:58 PM   #359
Beelzebuddy
Master Poster
 
Beelzebuddy's Avatar
 
Join Date: Jun 2010
Posts: 2,384
Originally Posted by rocketdodger View Post
The proper counter to this stupid argument by the old computer scientists is to just point out that they are idiots. Not formulate an even more bizarre scenario that is so unclear that every armchair philosopher on the internet has spun it into supporting their own uneducated opinions.
Thanks for clarifying. Maybe insulting his audience to their faces would have been a more satisfying response to their assertions, but I doubt it would have had the same impact. Like it or not, his argument was very effective at its intended purpose, and if it's been hijacked these days by true believers, well, so what? They'd have just latched on to something else otherwise.
Beelzebuddy is offline
Old 13th May 2012, 03:58 PM   #360
AlBell
Philosopher
 
AlBell's Avatar
 
Join Date: Mar 2009
Posts: 6,362
Originally Posted by rocketdodger View Post
What you don't take into consideration is that you know nothing about 1) computing, 2) the brain, 3) advances in A.I.

If that *is* taken into consideration, it becomes clear that you are wrong.

In particular, there has been an amazing amount of progress when it comes to neural network models demonstrating fundamentally conscious behaviors in the last 10 years.
How did the program demonstrate it was fundamentally conscious? Ask for more RAM? Faster clock speed? More CD read/write space? Larger power supply? More pixels? A 132-character high-speed printer? Other requests?
AlBell is offline
Powered by vBulletin. Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
2001-2013, James Randi Educational Foundation. All Rights Reserved.

Disclaimer: Messages posted in the Forum are solely the opinion of their authors.