Old 7th January 2013, 09:37 PM   #681
!Kaggen
Illuminator
 
 
Join Date: Jul 2009
Location: Cape Town
Posts: 3,734
Originally Posted by AlBell View Post
As I mentioned earlier, back to noumena vs phenomena.
I said it before you
__________________
"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/
Old 8th January 2013, 12:38 AM   #682
punshhh
Illuminator
 
 
Join Date: Jul 2010
Location: Rural England
Posts: 4,820
Originally Posted by rocketdodger View Post
Well, if that is true, you could take my proposed solution and explain why it is invalid.

Thus far you haven't. Thus far you've made a dozen posts that can be summed up with "no no no, you don't understand the question."

Care to explain why the simple and elegant answer that the subjective experience of red is merely the red brain state observed from its own perspective is somehow inconsistent with any known science?
The confusion is over the transition of the signal from the neural state to the mind.

Your position is that the mind is simply the experience over time of a succession of brain states and the subjective experience of red is how it is to the person when this happens.

Piggy is saying that there is an additional layer of subjective interpretation in which a "personal world" with colours is generated, and that this personal world is what is known as the mind. He is also saying that this personal world is not as yet understood or explained in terms of the biochemistry of the brain.

From your perspective there is no hard problem; from Piggy's perspective there is a hard problem which is not being addressed.

Last edited by punshhh; 8th January 2013 at 12:39 AM.
Old 8th January 2013, 01:52 AM   #683
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Is Piggy claiming an extra layer? I thought the issue is that even if it is as rocketdodger says it is, there isn't any obvious means to deduce from objective observation what the subjective experience will be like, or even that there will be one. The only solution to the problem that I've seen so far is "because it is".
Old 8th January 2013, 02:21 AM   #684
Anders Lindman
Penultimate Amazing
 
 
Join Date: Sep 2010
Posts: 13,833
Originally Posted by Piggy View Post
Exactly.

The physics of "light hits retina" to "brain state"... no problem.

But from "brain state" to "I see red"... we have no theory.
The Integrated Information Theory can sort of explain that:

"The theory is based on two key observations. The first is that every observable conscious state contains a massive amount of information. A common example of this is every frame in a movie. Upon seeing a single frame of a movie you have watched you instantly associate it with a "specific conscious percept."[2] That is to say you can discriminate a single frame from a film with any other single frame, including a blank, black screen. The mind, therefore, can discriminate amongst a massive number of possible visual states. This is a tremendous amount of information being represented. Compare our visual awareness to a simple photodiode which only can discriminate the presence of light from dark. It doesn't matter if the light is a lightbulb, a scene from Ben Hur or the bright light of noon on a summer day, the photodiode represents only minimal information." -- http://en.wikipedia.org/wiki/Integra...rmation_Theory

For example, when a photodiode detects red light it has only the "awareness" of light of a certain frequency being present. The photodiode can only distinguish between light being on or off (and perhaps the intensity and frequency of the light). Compare that to, for instance, a red apple being experienced in consciousness. The mind holds massive information about what is NOT "red apple" simultaneously with the pattern recognition of the red apple, and in this way the mind is able to distinguish objects in complex ways.
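To put rough numbers on that contrast: the information available to a discriminator, measured in bits, grows with the logarithm of the number of states it can tell apart. A minimal Python sketch, where the state counts are invented purely for illustration:

Code:
from math import log2

photodiode_states = 2             # light vs dark: the photodiode's whole repertoire
distinguishable_frames = 10 ** 9  # stand-in for "any frame of any film you have seen"

print(log2(photodiode_states))       # 1.0 bit
print(log2(distinguishable_frames))  # ~29.9 bits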
Old 8th January 2013, 03:11 AM   #685
Clive
Critical Thinker
 
Join Date: Dec 2008
Posts: 349
Originally Posted by shuttlt View Post
I thought the issue is that even if it is as rocketdodger says it is, there isn't any obvious means to deduce from objective observation what the subjective experience will be like, or even that there will be one. The only solution to the problem that I've seen so far is "because it is".
I agree.

I happened to watch this video a few hours ago where David Chalmers talks about emergence. He characterises consciousness as "strong emergence" and suggests it is the only example of this kind of emergence that we've seen so far. He basically uses "strong emergence" to describe the situation where there is apparently no way to predict the emergent phenomenon even given complete knowledge of the underlying physical reality. If you want to cut to the chase, he starts talking about this in terms of Laplace's Demon at about 2:40 into the video.

Rocketdodger's "it just is" idea appears to be an example of what Chalmers categorises as "type-B materialism" in the "Moving Forward" paper, where he analyses various responses to his earlier "Facing Up to the Problem of Consciousness".

The idea that any eventual complete "Theory of Consciousness" would need something like a massive look-up table embedded in it (because there really was no more compact description available) in order to "explain" how any particular set of brain (or equivalent?) states maps onto its corresponding subjective experience doesn't gel well with our desire (expectation?) for something far more parsimonious.
Old 8th January 2013, 03:34 AM   #686
punshhh
Illuminator
 
 
Join Date: Jul 2010
Location: Rural England
Posts: 4,820
Originally Posted by shuttlt View Post
Is Piggy claiming an extra layer? I thought the issue is that even if it is as rocketdodger says it is, there isn't any obvious means to deduce from objective observation what the subjective experience will be like, or even that there will be one. The only solution to the problem that I've seen so far is "because it is".
I can't speak for Piggy, but he is suggesting a mind of sorts which does the labelling when the impulse is labelled as red. He is also suggesting that the world perceived by a person is a mental construct, and that it is our "mind" which experiences this world as reality.

Rocketdodger appears to be denying such an isolated inner world as a distinct subjective space, and proposing instead a more direct experience of, and interaction with, the environment.

"Because it is" is a way of saying that the subjective "just is" what it is like to be a biological human. Hence, if you reproduce all the biological processes in the body, perhaps in a virtual space, it will be fully conscious as a human.

To me the missing link between these two positions is the living conscious processes in the cells of the body. If rocketdodger reproduces this in his virtual human, I agree it would be conscious.

However, unfortunately, due to our evolutionary limitations such a feat is light years away.
Old 8th January 2013, 04:40 AM   #687
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Originally Posted by punshhh View Post
To me the missing link between these two positions is the living conscious processes in the cells of the body. If rocketdodger reproduces this in his virtual human, I agree it would be conscious.
Would there be any way, even theoretically, of telling that this was indeed a requirement? It sounds a bit like you are saying rocketdodger's AI experiments are p-zombies. If so, I predict the word "dualism" will get used again pretty soon.
Old 8th January 2013, 08:41 AM   #688
tsig
a carbon based life-form
 
 
Join Date: Nov 2005
Posts: 33,318
Originally Posted by Piggy View Post

You can scream "They're the same thing!" all you want, but that doesn't answer the question of why they're the same thing.
Originally Posted by rocketdodger View Post
Lol, if I bothered to have quotes in my signature, this gem would definitely be a candidate.

Congratulations!
Amazing ain't it?

This seems to be the philosopher's view of color vision:

red wavelength hits eyeball > nerve signal sent to brain > magic happens > we see red.
Old 8th January 2013, 08:47 AM   #689
tsig
a carbon based life-form
 
 
Join Date: Nov 2005
Posts: 33,318
Originally Posted by rocketdodger View Post
*Any* perspective yields a particular set of observations !!

You're asking why we see a circle instead of a square when we look at a sphere. I've got news for you -- it's an easy answer. If you think this is a "hard problem" then maybe some math courses are in order?

And guess what -- everyone who looks at a sphere sees a circle. Everyone. Except ... anyone who is inside the sphere. Then your perspective seems qualitatively different from everyone else.

So ... what's the hard problem again?
The "hard problem"

When light of the red wavelength hits the eyeball why do we see red instead of smelling bacon?

Easy answer:

We're wired that way.


It looks to me like they're asking why the bedroom light goes on when I turn on the bedroom light switch instead of the toilet flushing.
Old 8th January 2013, 08:51 AM   #690
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by shuttlt View Post
Would there be any way, even theoretically, of telling that this was indeed a requirement? It sounds a bit like you are saying rocketdodger's AI experiments are p-zombies. If so, I predict the word "dualism" will get used again pretty soon.
No, there is no way, even theoretically.

The only way of knowing whether it is a requirement is to just trust the subjective self-reporting of the entity in question. We do that for humans -- I trust that if you claim you are experiencing red, you are very probably experiencing red, which is something like the red I experience.

If we have a robot, or a virtual human, or whatever, that is wired up very closely to how the brain is wired up, and it starts telling us "hey, seriously, I'm seeing red. No, seriously." then our only option is to trust it or not trust it.

The usual counter to this is to say "but couldn't you just program the robot to say it saw red" and of course yes, you can, but that's not what we are talking about. That isn't artificial intelligence, that's a lookup table.
Old 8th January 2013, 10:10 AM   #691
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
But why would the subjective "red" that it experiences be anything like the subjective "red" that we experience? The robot couldn't tell that its subjective "red" was in fact much more similar to our "green". Externally we all agree to label the same external objects as being "red" and "green", but internally there would be no way to tell that the robot's subjective experience was the same as ours, or even that it was having a subjective experience at all.

If our only option is to trust the robot or not to trust the robot, then surely you agree with what Piggy has been saying all along? We have no theory to predict what the robot's subjective experience will be, or whether it is having one at all. If you can't even predict that the robot is having a subjective experience at all, then something significant is missing from the theory.
Old 8th January 2013, 11:16 AM   #692
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by shuttlt View Post
But why would the subjective "red" that it experiences be anything like the subjective "red" that we experience? The robot couldn't tell that its subjective "red" was in fact much more similar to our "green". Externally we all agree to label the same external objects as being "red" and "green", but internally there would be no way to tell that the robot's subjective experience was the same as ours, or even that it was having a subjective experience at all.

If our only option is to trust the robot or not to trust the robot, then surely you agree with what Piggy has been saying all along? We have no theory to predict what the robot's subjective experience will be, or whether it is having one at all. If you can't even predict that the robot is having a subjective experience at all, then something significant is missing from the theory.
But doesn't that all apply to other humans as well?

Why do I think your subjective red is anything like mine? And how can I predict that you are having one at all?

If your answer is the typical "because we are so similar" then I would say the same for the robot. If the robot is something like Number5 from "Short Circuit" then I wouldn't have much confidence that its red is like my red. If the robot is like Data then I might have a little more, but not much, since his brain wasn't the same structure as ours. If the robot has a brain with almost identical pathways for information to flow along, then I would probably trust that its red is fairly similar to mine.
Old 8th January 2013, 11:28 AM   #693
dlorde
Philosopher
 
 
Join Date: Apr 2007
Posts: 6,216
Originally Posted by shuttlt View Post
Originally Posted by punshhh
To me the missing link between these two positions is the living conscious processes in the cells of the body. If rocketdodger reproduces this in his virtual human, I agree it would be conscious.
Would there be any way, even theoretically, of telling that this was indeed a requirement? It sounds a bit like you are saying rocketdodger's AI experiments are p-zombies. If so, I predict the word "dualism" will get used again pretty soon.
It's not clear what punshhh means by 'living conscious processes' in cells. Individual cells are not conscious by common definitions of the word (i.e. a function of mind, and therefore of multicellular organisms with a complex nervous system). Many cell processes combine to support the metabolic system activities we call life; I don't think it's possible to identify any particular cellular process as 'living', let alone conscious.
__________________
Simple probability tells us that we should expect coincidences, and simple psychology tells us that we'll remember the ones we notice...

Last edited by dlorde; 8th January 2013 at 11:34 AM.
Old 8th January 2013, 11:29 AM   #694
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Also, I need to add that this claim of Piggy's regarding no theory of "red" is just false.

We have very detailed theories of exactly why the color distribution we see is the way it is. In particular, science has known for decades how the different photoreceptors in the eye respond to light of various wavelengths.

Why do we see red, green, and blue light? Not all of us do. If you are color blind, you can look at a picture and see nothing but dots where the rest of us see a clear alphanumeric message. If you had only one color receptor, you would see the world in grayscale. If your eye were structured differently, the perspective you see would be different.

So work backwards from what is known in science and try to figure it out. Why, when we look at an object, is it the shape it is? We can use math to fully explain how a 3d object is projected onto a 2d surface. Why is there the color distribution there is? We can use science to fully explain why the 3 different photoreceptors in the eye lead to that distribution when light of any wavelength, and any combination of wavelengths, strikes the eye.

All of that is known, and easy to understand. So the only question left is why one part of that distribution is assigned to "red" and another to "green" and so forth. Well, because we assigned it. If our brain recognizes different colors in the spectrum that our eyes are capable of detecting, then obviously we will be able to ... recognize those different colors.

You guys are sort of answering your own question when you say repeatedly "but there isn't any red light, there is only light of a certain frequency, that we call red." That's correct. There isn't any red light. But you can say the same for experience. Instead of saying "I'm experiencing red" I can say "I'm experiencing the sensation of light of X frequency hitting my eye," can I not?
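A hedged sketch of the "easy" part described above: each of three receptor types responds to a given wavelength with some sensitivity, and the triple of responses is what downstream processing works with. The bell-shaped curves and peak wavelengths below are rough illustrations, not real cone data:

Code:
from math import exp

def sensitivity(wavelength_nm, peak_nm, width_nm=40.0):
    # toy bell-shaped response curve for one receptor type
    return exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def receptor_responses(wavelength_nm):
    # long-, medium- and short-wavelength receptors (approximate peaks)
    return tuple(round(sensitivity(wavelength_nm, peak), 3) for peak in (565, 535, 445))

print(receptor_responses(650))  # light most of us label "red"
print(receptor_responses(450))  # light most of us label "blue"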
Old 8th January 2013, 11:43 AM   #695
Mijin
Thinker
 
 
Join Date: Apr 2012
Posts: 167
Originally Posted by tsig View Post
Amazing ain't it?

This seems to be the philosopher's view of color vision:

red wavelength hits eyeball > nerve signal sent to brain > magic happens > we see red.
If by "magic" you mean "a process we fundamentally don't understand yet" then I agree with that. As would most philosophers. As would most neuroscientists.
Old 8th January 2013, 12:17 PM   #696
Mijin
Thinker
 
 
Join Date: Apr 2012
Posts: 167
Originally Posted by rocketdodger View Post
If we have a robot, or a virtual human, or whatever, that is wired up very closely to how the brain is wired up, and it starts telling us "hey, seriously, I'm seeing red. No, seriously." then our only option is to trust it or not trust it.

The usual counter to this is to say "but couldn't you just program the robot to say it saw red" and of course yes, you can, but that's not what we are talking about. That isn't artificial intelligence, that's a lookup table.
OK, in this post you seem to be closing in on what the problem is: the difference between what you call AI and a LUT. Run with that.

Consider physical pain. In a typical situation involving a stimulus that causes pain, there are 3 stages:

1. Nervous system detects stimulus, relays data to the brain
2. The (unpleasant) feeling of pain
3. Behaviour in response

Now, some people may try to argue that step 2 is somehow intrinsic to step 1 or 3 but I think they are wrong. There are many stimuli that the brain responds to that do not result in a feeling of any kind.

Now, given this, we return to AI. Could I write a program that does step 1 and 3? Sure I could (in crude form).

Could I write a program that crudely does step 2? No, and nor can anyone else. We simply don't understand how to make a machine experience a feeling.

Now you might argue that this is the argument from incredulity, but I am not claiming that a machine cannot have a feeling.
The brain is a machine, and it has feelings.
I'm saying we don't know yet how it does this, and that's the hard problem of consciousness.
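To make the point concrete, here is a crude sketch of steps 1 and 3 only; the threshold and responses are invented for illustration, and nothing in it corresponds to step 2, the felt unpleasantness:

Code:
def detect_stimulus(intensity):
    # step 1: the "nervous system" detects the stimulus and relays data to the "brain"
    return {"signal": intensity}

def respond(data):
    # step 3: behaviour in response
    if data["signal"] > 7:  # assumed damage threshold
        return "withdraw and vocalise"
    return "carry on"

print(respond(detect_stimulus(9)))  # withdraw and vocalise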
Old 8th January 2013, 12:26 PM   #697
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Originally Posted by rocketdodger View Post
Also, I need to add that this claim of Piggy's regarding no theory of "red" is just false.

We have very detailed theories of exactly why the color distribution we see is the way it is. In particular, science has known for decades how the different photoreceptors in the eye respond to light of various wavelengths.

Why do we see red, green, and blue light? Not all of us do. If you are color blind, you can look at a picture and see nothing but dots where the rest of us see a clear alphanumeric message. If you had only one color receptor, you would see the world in grayscale. If your eye were structured differently, the perspective you see would be different.

So work backwards from what is known in science and try to figure it out. Why, when we look at an object, is it the shape it is? We can use math to fully explain how a 3d object is projected onto a 2d surface. Why is there the color distribution there is? We can use science to fully explain why the 3 different photoreceptors in the eye lead to that distribution when light of any wavelength, and any combination of wavelengths, strikes the eye.

All of that is known, and easy to understand. So the only question left is why one part of that distribution is assigned to "red" and another to "green" and so forth. Well, because we assigned it. If our brain recognizes different colors in the spectrum that our eyes are capable of detecting, then obviously we will be able to ... recognize those different colors.

You guys are sort of answering your own question when you say repeatedly "but there isn't any red light, there is only light of a certain frequency, that we call red." That's correct. There isn't any red light. But you can say the same for experience. Instead of saying "I'm experiencing red" I can say "I'm experiencing the sensation of light of X frequency hitting my eye," can I not?
If there is no way, even in theory, to tell what the robot, or anyone else, is actually experiencing as a subjective experience except to ask them and take it on trust, how can you also be saying it's all perfectly deducible?

We can tell that there is some kind of brain state associated with light in the frequencies we call red. Piggy has said this a load of times. Whether you, I or this robot that was mentioned experience similar subjective experiences, what those subjective experiences are, and why they should be how they are, are not obviously answerable questions; we just have to assume it, or not, as our intuition leads. I thought you were accepting this when you said that it had to be taken on trust. If we have a theory that accounts for it that doesn't have a big hole where all the important stuff happens, surely it doesn't have to be taken on trust?

Last edited by shuttlt; 8th January 2013 at 12:35 PM.
Old 8th January 2013, 12:31 PM   #698
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
It's not an important quibble, but formally, surely any computer running a program and taking input is equivalent to some finite (though perhaps insanely large) lookup table? I've known AI students in my time who I think would have claimed a lookup table could in that case be as conscious as anything else.
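For what it's worth, a minimal Python sketch of that equivalence, using a toy step function and deliberately tiny finite sets for the state and the input (both are my own assumptions for illustration):

Code:
def step(state, inp):
    # toy program: store the current input as the new state,
    # and output the sum of the stored value and the current input
    return inp, state + inp

STATES = range(10)  # finite state space
INPUTS = range(10)  # finite input alphabet

# Unroll the step function into a finite (state, input) -> (new state, output) table.
table = {(s, i): step(s, i) for s in STATES for i in INPUTS}

# The table and the program agree on every case.
assert all(table[(s, i)] == step(s, i) for s in STATES for i in INPUTS)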

Last edited by shuttlt; 8th January 2013 at 12:33 PM.
Old 8th January 2013, 12:44 PM   #699
ctamblyn
Data Ghost
 
 
Join Date: Nov 2009
Location: The Library
Posts: 2,006
Originally Posted by shuttlt View Post
It's not an important quibble, but formally, surely any computer running a program and taking input is equivalent to some finite (though perhaps insanely large) lookup table? I've known AI students in my time who I think would have claimed a lookup table could in that case be as conscious as anything else.
In general you'd also need some internal state, which could also be insanely large. In that case, the AI student's claim doesn't look so insane to me - at least, not on the face of it.

ETA: I mean the program would typically be a function taking the current internal state and current external inputs, and mapping those to the new internal state and outputs. You could of course imagine implementing that function as a lookup table, but the externally visible behaviour of the system wouldn't be a one-to-one input-to-output mapping. The required lookup table could be so large as to make "astronomical" look like peanuts - literally too large to fit in the observable universe and way beyond anything I'd be able to grasp intuitively. For me personally it is hard to say the AI student would be obviously mistaken.

ETA2: What a poor job I've done of explaining my point. I don't mean to claim that any lookup table is conscious, but that the mere fact that any computer program can be (in theory, at least) implemented using a large lookup table doesn't by itself rule out the possibility of a conscious computer program.
__________________
Join Team 13232 for science!

Last edited by ctamblyn; 8th January 2013 at 12:58 PM. Reason: ETA
Old 8th January 2013, 01:04 PM   #700
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Originally Posted by ctamblyn View Post
In general you'd also need some internal state, which could also be insanely large. In that case, the AI student's claim doesn't look so insane to me - at least, not on the face of it.

ETA: I mean the program would typically be a function taking the current internal state and current external inputs, and mapping those to the new internal state and outputs. You could of course imagine implementing that function as a lookup table, but the externally visible behaviour of the system wouldn't be a one-to-one input-to-output mapping. The required lookup table could be so large as to make "astronomical" look like peanuts - literally too large to fit in the observable universe and way beyond anything I'd be able to grasp intuitively. For me personally it is hard to say the AI student would be obviously mistaken.
No, I'm pretty confident that doing the whole thing as a lookup table would be just as "possible" as the method you suggest.
Old 8th January 2013, 01:07 PM   #701
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Originally Posted by ctamblyn View Post
ETA2: What a poor job I've done of explaining my point. I don't mean to claim that any lookup table is conscious, but that the mere fact that any computer program can be (in theory, at least) implemented using a large lookup table doesn't by itself rule out the possibility of a conscious computer program.
But any such program would have a 1-1 mapping, somebody correct me if I'm wrong, with such a lookup table. It's the Chinese room all over again. ;-)

Last edited by shuttlt; 8th January 2013 at 01:10 PM.
Old 8th January 2013, 01:12 PM   #702
ctamblyn
Data Ghost
 
 
Join Date: Nov 2009
Location: The Library
Posts: 2,006
Originally Posted by shuttlt View Post
No, I'm pretty confident that doing the whole thing as a lookup table would be just as "possible" as the method you suggest.
Do you mean without any internal state?

If so, how would the following algorithm be modelled as a lookup table without referring to the internal state held in a?
1. Initialise the value of a to zero.
2. Read a number from the input and store it in b.
3. Output the sum of the current values of a and b.
4. Set the new value of a to the current value of b.
5. Go back to step 2.
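A minimal Python rendering of those five steps (the function name and the use of standard input are illustrative assumptions):

Code:
import sys

def running_pair_sum():
    a = 0                   # step 1: initialise a to zero
    for line in sys.stdin:  # step 2: read a number from the input
        b = int(line)
        print(a + b)        # step 3: output the sum of a and b
        a = b               # step 4: the new value of a is the current value of b
                            # step 5: the loop returns to step 2

if __name__ == "__main__":
    running_pair_sum()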
__________________
Join Team 13232 for science!
Old 8th January 2013, 01:15 PM   #703
ctamblyn
Data Ghost
 
 
Join Date: Nov 2009
Location: The Library
Posts: 2,006
Originally Posted by shuttlt View Post
But any such program would have a 1-1 mapping, somebody correct me if I'm wrong, with such a lookup table. It's the Chinese room all over again. ;-)
To be honest I never found the Chinese Room argument convincing. The operator does not understand Chinese, but the operator is not the whole system.

ETA:

It isn't merely a lookup table, it's a lookup table combined with an algorithm such as the following:

1. Initialise the internal state.
2. Get some inputs.
3. Use the lookup table to determine what the new internal state and outputs should be, given the current internal state and inputs.
4. Go back to step 2.
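In Python, that algorithm might look something like the sketch below, assuming the lookup table is a dict keyed by (state, input) pairs (the names are illustrative):

Code:
def run(table, initial_state, inputs):
    state = initial_state                    # step 1: initialise the internal state
    outputs = []
    for inp in inputs:                       # step 2: get some inputs
        state, out = table[(state, inp)]     # step 3: table gives new state and output
        outputs.append(out)                  # step 4: loop back for the next input
    return outputs

The same input can produce different outputs on different iterations, because the state half of the table key changes; that is the sense in which the system is not a one-to-one input-to-output mapping.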
__________________
Join Team 13232 for science!

Last edited by ctamblyn; 8th January 2013 at 01:20 PM. Reason: ETA
Old 8th January 2013, 01:17 PM   #704
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by shuttlt View Post
It's not an important quibble, but formally, surely any computer running a program and taking input is equivalent to some finite (though perhaps insanely large) lookup table? I've known AI students in my time who I think would have claimed a lookup table could in that case be as conscious as anything else.
Not a single lookup table, no.

The sequence of causal events is important. As in, A --> B --> C. You can't claim that is equivalent to A --> C.
Old 8th January 2013, 01:21 PM   #705
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by shuttlt View Post
If there is no way, even in theory, to tell what the robot, or anyone else, is actually experiencing as a subjective experience except to ask them and take it on trust, how can you also be saying it's all perfectly deducible?

We can tell that there is some kind of brain state associated with light in the frequencies we call red. Piggy has said this a load of times. Whether you, I or this robot that was mentioned experience similar subjective experiences, what those subjective experiences are, and why they should be how they are, are not obviously answerable questions; we just have to assume it, or not, as our intuition leads. I thought you were accepting this when you said that it had to be taken on trust. If we have a theory that accounts for it that doesn't have a big hole where all the important stuff happens, surely it doesn't have to be taken on trust?
Everything has to be taken on trust. Don't kid yourself. The only reason you don't consider stuff like "1 + 1 == 2," "the sun comes up in the morning," and "tacos taste great" as "taken on trust" is because you are so confident of them. But it's still trust in your own sanity and senses.

Do you trust your own self reporting of your subjective experience? Think about it. You think to yourself and you are sure you are conscious. That's a form of trust -- you trust your own perception. It is as self-evident as 1 + 1 == 2. So it isn't a big deal at all to project the same amount of trust on another human -- after all you are human too. I completely trust that you are conscious when you tell me you are, as much as I trust 1 + 1 == 2.
Old 8th January 2013, 01:31 PM   #706
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by Mijin View Post
Could I write a program that crudely does step 2? No, and nor can anyone else. We simply don't understand how to make a machine experience a feeling.
But that isn't true. I for one have very good ideas on how to do it. Many others do as well.

The essential factor you're missing is the extremely dense content level of human pain. If you analytically distill each facet down into the fundamentals you see that pain is an aggregation of many sensory inputs, plus some emotional content which impacts thought processes.

And the sensory aspect can be distilled pretty low. We know pain signals typically come from the same neurons that normal sensory information comes from. Look into an extremely bright light, and it hurts. But you don't normally associate pain with light! It isn't that hard ( for me, at least ) to imagine that pain might just be a learned response to sensory neurons firing stronger than some threshold that is suggested by evolution and development.

Ask yourself what happens in your head when you feel pain. You focus on it, right? That's sort of the first step in the distillation. You can't ignore it, you can't think about other stuff, that sensory input is front and foremost in your thoughts. Have you ever felt pain when you weren't focused on it? I'm sure you have, and I'm sure you remember it didn't feel the same. In fact you might not have felt it -- it is well known that people often don't feel pain until they realize they are injured, then it hits them like a ton of bricks.

By simply paying attention to all these little subtleties you can start to build up a pretty good definition of what pain actually *is,* and how it arises becomes apparent once the definition forms.

Last edited by rocketdodger; 8th January 2013 at 01:33 PM.
Old 8th January 2013, 01:38 PM   #707
rocketdodger
Philosopher
 
 
Join Date: Jun 2005
Location: Hyperion
Posts: 6,884
Originally Posted by ctamblyn View Post
To be honest I never found the Chinese Room argument convincing.
The only people that do are those who have no clue about programming.

You don't even need the experiment to be in Chinese. Just a simple lookup table of English phrases to English phrases.

"Man the weather is horrible" -- > "Yep, my backyard got flooded."

THE OPERATOR DOESN'T NEED TO UNDERSTAND ENGLISH ZOMGWTF

It only takes a 5th grade level of thinking to figure out that the person who wrote the lookup table is the one who understands English, and they are conscious. So you can't get away from a conscious entity no matter how hard you try.
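A toy version of that phrase-to-phrase table (the fallback reply is my own assumption):

Code:
responses = {
    "Man the weather is horrible": "Yep, my backyard got flooded.",
}

def operator(phrase):
    # the operator just looks the phrase up; the understanding of English
    # lives with whoever wrote the table
    return responses.get(phrase, "...")

print(operator("Man the weather is horrible"))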
Old 8th January 2013, 01:49 PM   #708
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Originally Posted by ctamblyn View Post
Do you mean without any internal state?

If so, how would the following algorithm be modelled as a lookup table without referring to the internal state held in a?
1. Initialise the value of a to zero.
2. Read a number from the input and store it in b.
3. Output the sum of the current values of a and b.
4. Set the new value of a to the current value of b.
5. Go back to step 2.
Memory in any computer system is finite. It can be represented in the lookup table. It makes it somewhat larger, of course, but these are just different levels of unfeasible.
Old 8th January 2013, 01:53 PM   #709
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
{delete} :-)

Last edited by shuttlt; 8th January 2013 at 02:04 PM.
Old 8th January 2013, 01:54 PM   #710
ctamblyn
Data Ghost
 
 
Join Date: Nov 2009
Location: The Library
Posts: 2,006
Originally Posted by rocketdodger View Post
The only people that do are those who have no clue about programming.

You don't even need the experiment to be in Chinese. Just a simple lookup table of English phrases to English phrases.

"Man the weather is horrible" -- > "Yep, my backyard got flooded."

THE OPERATOR DOESN'T NEED TO UNDERSTAND ENGLISH ZOMGWTF

It only takes a 5th grade level of thinking to figure out that the person who wrote the lookup table is the one who understands English, and they are conscious. So you can't get away from a conscious entity no matter how hard you try.
Not wishing to derail too far, but for me I don't have any problem with the possibility of a suitable "Chinese Room" being conscious in itself. It would of course require some internal state, so it isn't merely a one-one mapping of inputs to outputs (I can't remember if internal state was present in the original formulation). If it quacks like a duck, etc.
__________________
Join Team 13232 for science!
Old 8th January 2013, 01:57 PM   #711
Piggy
Unlicensed street skeptic
 
 
Join Date: Mar 2006
Location: Ralph's side of the island
Posts: 15,924
Originally Posted by rocketdodger View Post
Originally Posted by Piggy
The "hard problem" asks why the subjective perspective yields the particular observations that it does, rather than a completely different set of observations.
*Any* perspective yields a particular set of observations !!

You're asking why we see a circle instead of a square when we look at a sphere. I've got news for you -- it's an easy answer. If you think this is a "hard problem" then maybe some math courses are in order?

And guess what -- everyone who looks at a sphere sees a circle. Everyone. Except ... anyone who is inside the sphere. Then your perspective seems qualitatively different from everyone else.

So ... what's the hard problem again?
I will try to type slowly so that you can follow....

The hard problem is this: To develop a theory which explains why the particular conscious experiences we observe are correlated specifically with the particular neural states to which we observe they correspond.

This is not a difficult question to grasp, as long as you don't have wrong ideas stuck in your head.

The fact that you cannot even repeat the question back correctly when it's stated to you, even after multiple attempts to do so, speaks volumes... but I'll get to that in a minute.

The question is certainly NOT "Why do we see a circle when we look at a sphere?" Nor is anyone unaware of the fact that all people with normal brains will see a circle when they look at one, or that a sphere looks different from the inside than it does from the outside -- in fact, the hard problem arises precisely from these facts, as has been clearly stated.

As has already been explained, the hard problem arises because we have two sets of observations which are so closely correlated.

But here's why your sphere example doesn't apply: We know the math that lets us translate from the view outside to the view inside.

You can do this math yourself, in fact.

From the outside of the sphere, we can use the information we have from our own observations to construct a model of what it will look like from the inside.

This cannot be done between observations of neural activity and conscious experience.

The thing is, we do have a theory that explains the relationship between observations inside and outside of a sphere. We have no such theory for conscious experience.
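For the sphere, the translation really is just geometry. A minimal sketch, with arbitrary numbers, of predicting what an outside viewer sees from the sphere's radius and the viewer's distance:

Code:
from math import asin, degrees

def apparent_angular_radius(sphere_radius, viewer_distance):
    # valid for a viewer outside the sphere (distance greater than radius)
    return degrees(asin(sphere_radius / viewer_distance))

print(apparent_angular_radius(1.0, 10.0))  # ~5.74 degrees: a small circular disc

No comparable calculation takes us from a description of a neural state to what the corresponding experience is like, which is the asymmetry being pointed at here.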


Your own example shows why you're wrong!

The hard problem actually isn't hard to see when you're familiar with neurobiology and you don't have a mental defense built up that requires you to deny that a theory is needed.

Which brings us to why you can't see the hard problem even well enough to paraphrase it after it's been explained to you.

Your entire (wrong) view of consciousness depends on consciousness "just happening" whenever information processing is going on. If you admit that this is simply false, and that we do need an explanatory theory for it just like every other phenomenon in the world, all your misguided thinking about consciousness falls apart.

You openly dismiss and disparage neurobiology (even though you can't tell the names of top researchers from those of pop science writers) while insisting that you're an expert because you're an AI programmer.

In fact, no one knows how to build a conscious computer or machine, or even design one, so not only does your ignorance of neurobiology mean that you have no clue what the real research is, your delusion that you're working on conscious systems means that you're basing your (wrong) ideas on an irrelevant dataset.

You have to believe that consciousness "just happens" when info processing is going on, because it's the only way that you can fool yourself into believing that your work has anything to do with consciousness. (When, in fact, it doesn't.)

This would be fine if it were just you being wrong, but I have to call you out on it because you -- and PixyMisa and a few other AI fellow travelers -- insist on descending on every thread about consciousness and clogging it up with ridiculous notions based on flawed assumptions and irrelevant data.

By now, though, I think everyone can see that you have no idea what you're talking about.

Let's be clear here....

1. Your assumption that conscious experiences "just happen", and that we need no explanation for why the human brain responds to certain wavelengths of light by producing colors, rather than some other experience, or no experience, is not only wrong, it's a mistake of basic science that no freshman should make. But you have to believe this in order to maintain the illusion that your work on robot visual systems, for example, is somehow relevant to consciousness, because in your fantasy world setting up a response to light means setting up a conscious experience because, hey, it just happens!

2. Your claim that consciousness is caused by "self-referential information processing" is false, and has long been proven -- yes, proven -- false in the lab, because it cannot be used to distinguish conscious processes from non-conscious ones. It yields too many false positives to be useful. But again, you ignore this fact (and it is a fact) because if you keep believing that SRIP=consciousness, you can continue to tell yourself that your work on machines is work on consciousness.

3. Because you ignore neurobiology (and even sneer at it) and focus instead on studies of non-conscious systems such as computers and robots, the data sets you refer to are irrelevant.

4. Because of all the above, almost everything you say about consciousness is absolutely 180 degrees wrong, from "you can build a conscious brain out of rope" to "we might be living in a computer simulation" to "you can make a computer conscious by simulating a brain on it" to "a computer could be conscious at an arbitrarily slow operating speed" and on and on and on.

Your assumptions are false, your data is irrelevant, and your conclusions are wrong.

If you say anything correct about consciousness, it's by accident.

That's why I get so tired of seeing you and Pixy et al clog up these threads.

I'm sorry to have to call you out like this -- not that your ego will allow you to admit that you're claiming to be an expert on something about which you're profoundly ignorant -- but y'all are doing a tremendous disservice to other people on this board by trotting out these arguments which are attractive at first glance but totally false, so a public debunking has to be done.

As I said, the fact that you cannot even correctly phrase the hard problem when it's been clearly outlined to you speaks volumes.

The hard problem is a threat to your fantasy world, in which you get to tinker with machines and claim that you know something about consciousness.

In reality, you don't know the first thing about it.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?

Last edited by Piggy; 8th January 2013 at 02:00 PM.
Old 8th January 2013, 01:59 PM   #712
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
So something appearing to be conscious might not mean that it is conscious, just that something else is? Would the same apply to the robot?
Old 8th January 2013, 02:02 PM   #713
shuttlt
Illuminator
 
Join Date: Aug 2008
Posts: 4,700
Originally Posted by ctamblyn View Post
Not wishing to derail too far, but for me I don't have any problem with the possibility of a suitable "Chinese Room" being conscious in itself. It would of course require some internal state, so it isn't merely a one-one mapping of inputs to outputs (I can't remember if internal state was present in the original formulation). If it quacks like a duck, etc.
You would have to have some state, I suppose, even if it is just moving between pages of the lookup table.... :-( I think I'd been considering state as just another input to the table.

Last edited by shuttlt; 8th January 2013 at 02:05 PM.
Old 8th January 2013, 02:10 PM   #714
Piggy
Unlicensed street skeptic
 
 
Join Date: Mar 2006
Location: Ralph's side of the island
Posts: 15,924
Originally Posted by shuttlt View Post
Is Piggy claiming an extra layer?
No, I'm not. Punshhh misunderstands.

Originally Posted by shuttlt View Post
I thought the issue is that even if it is as rocketdodger says it is, there isn't any obvious means to deduce from objective observation what the subjective experience will be like, or even that there will be one.
Yes, that's correct. We only know what the correlates are by observation. We have no theory that allows us to describe the transformations between neural states and conscious states.

We do have such a theory for the transformations from physical inputs to neural states, but not for that second leap.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 8th January 2013, 02:12 PM   #715
ctamblyn
Data Ghost
 
 
Join Date: Nov 2009
Location: The Library
Posts: 2,006
Originally Posted by shuttlt View Post
Memory in any computer system is finite. It can be represented in the lookup table. It makes it somewhat larger, of course, but these are just different levels of unfeasible.
I'm not insisting that the memory be infinite, just that there be some internal state. Otherwise software would just be a one-one mapping of inputs to outputs, and of course most software is not (including my toy example above) - the same input can yield different outputs on different occasions.

Incidentally - and purely out of interest - if I assume 32-bit integers in my first toy example above, the lookup table mapping the pair (a_current, b_current) to the new internal state a_next and the output value would need 2^(32+32) entries, each being 4+4 bytes in size. That's a total of 137,438,953,472 gigabytes of storage (over 10^21 bits) for the lookup table, if we chose to implement it in that way, despite the fact that the code as written (without the table) could be as small as a few dozen bytes. And of course, you'd still need some code something like my second example to actually make use of the table (i.e. to produce the same observable behaviour as before).
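A quick check of that arithmetic (illustrative only):

Code:
entries = 2 ** (32 + 32)           # one entry per (a_current, b_current) pair
bytes_per_entry = 4 + 4            # new state plus output, 4 bytes each
total_bytes = entries * bytes_per_entry
print(total_bytes // 2 ** 30)      # 137438953472 gigabytes
print(total_bytes * 8 > 10 ** 21)  # True: over 10^21 bits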

ETA: I see I cross posted with you, due to my slow typing.
__________________
Join Team 13232 for science!

Last edited by ctamblyn; 8th January 2013 at 02:15 PM.
Old 8th January 2013, 02:15 PM   #716
Piggy
Unlicensed street skeptic
 
 
Join Date: Mar 2006
Location: Ralph's side of the island
Posts: 15,924
Originally Posted by Anders Lindman View Post
The Integrated Information Theory can sort of explain that
Actually, Tononi and Balduzzi admit that IIT does not address the hard problem.

I think IIT is promising, but I see a big potential Achilles heel.

T&B have not established that all systems which contain integrated information must be conscious. It may turn out that there is "extra phi" generated from, say, choruses singing in unison.

If that's the case, then IIT will not be able to distinguish between conscious and non-conscious systems.

Interestingly, though, even if that turns out to be the case, it may still be true that IIT is able to quantify consciousness anyway, as one type of integrated information system.

In any case, the developers of IIT do not see it -- at least in its current form -- as a solution to the hard problem.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 8th January 2013, 02:16 PM   #717
Anders Lindman
Penultimate Amazing
 
 
Join Date: Sep 2010
Posts: 13,833
Here is a presentation about the Integrated Information Theory:

Integrated Information Theory of Consciousness, Giulio Tononi [full lecture] -- http://www.youtube.com/watch?v=dfv_uZEkUPg

One thing that was shocking to me was that the theory may show that computers can have real consciousness! It is the integration of massive information into a SINGLE experience that gives rise to consciousness. I think of consciousness as the awareness of information, and that awareness is a STATE. A state is not "made" of something; instead it is an emergent property of a system.

Does the Internet have consciousness, for example? No, not likely. Not because it lacks information, but because it isn't (at least not yet) connected in such a way as to make the quantity Phi, as defined by the Integrated Information Theory, high enough to bring about consciousness.
Old 8th January 2013, 02:19 PM   #718
Piggy
Unlicensed street skeptic
 
 
Join Date: Mar 2006
Location: Ralph's side of the island
Posts: 15,924
Originally Posted by tsig View Post
Amazing ain't it?

This seems to be the philosopher's view of color vision:

red wavelength hits eyeball > nerve signal sent to brain > magic happens > we see red.
So you still believe in a "red wavelength" of light?

Amazing indeed.

Do you also believe that there is pain in hypodermic needles?

I have no idea about philosophers, but the neurobiologist's view goes like this:

Light of wavelength X and frequency Y hits eyeball > Nerve signal sent to brain > Transformation for which we have no math or any other sort of theory > We see red.

And that is accurate.
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 8th January 2013, 02:24 PM   #719
Piggy
Unlicensed street skeptic
 
 
Join Date: Mar 2006
Location: Ralph's side of the island
Posts: 15,924
Originally Posted by tsig View Post
The "hard problem"

When light of the red wavelength hits the eyeball why do we see red instead of smelling bacon?

Easy answer:

We're wired that way.
1. There is no such thing as a "red wavelength".

2. Wired what way, exactly?
__________________
.
How can you expect to be rescued if you don't put first things first and act proper?
Old 8th January 2013, 02:30 PM   #720
punshhh
Illuminator
 
 
Join Date: Jul 2010
Location: Rural England
Posts: 4,820
Originally Posted by dlorde View Post
It's not clear what punshhh means by 'living conscious processes' in cells. Individual cells are not conscious by common definitions of the word (i.e. a function of mind, and therefore of multicellular organisms with a complex nervous system). Many cell processes combine to support the metabolic system activities we call life; I don't think it's possible to identify any particular cellular process as 'living', let alone conscious.
What I am referring to is the notion that the basis of the rich experience we have of being conscious as human beings is the accumulated chemical activity throughout the entire body (more specifically, the electrical activity), not the higher mental functions of the brain, although this higher mental activity is also necessary for our self-conscious awareness and thinking (subjective world).

Thus the higher mental functions on their own would not produce consciousness, or self-consciousness, or the subjective world.

My reason for this notion is observation of the evolution of life. Life forms have been evolving the various aspects of awareness for many millions of years. The higher mind functions are only a recent phenomenon, the icing on the cake.

The icing is not the cake.