
View Full Version : On Consciousness



tsig
2nd July 2012, 04:13 PM
No I said exactly what I said, and you've just interpreted it through your ego and materialistic filter to mean the opposite of what I meant.

Consciousness and subconsciousness precede the brain and give rise to it. And you can't prove otherwise.

Neurochemistry and our brain is a great example of consciousness manifesting an extremely complex material system that we're doing a great job researching.

The only person that evoked magic was you, just then.

Your magic is highlighted.

AlBell
2nd July 2012, 04:14 PM
If what happens in our brains doesn't reflect reality then how can we do physics?
It's been asked before, and I've seen no answer:

"How can the third-person requirements of the scientific method be reconciled with the first-person nature of consciousness?" (Win)

comp.lits seem to think averring there is no first-person consciousness does it.

tsig
2nd July 2012, 04:16 PM
Yes I know your position on this. But your evasive perspective and syntactic smoke and mirrors do not provide you with an ontological foundation to your philosophy. Which I told you right at the outset of our discourse on this forum.

It is all very smooth, slippery and disingenuous and leaves a gaping hole in your logic. Which you try to cover over with appeals to meaninglessness, incoherence and an assertion that logically we should (can) only consider what has been verified scientifically or mathematically.

Now are we going to go around the mulberry bush a fourth time (only this time on the issue of personal philosophies)? Or are you going to accept that humanity cannot make the assumption that physical matter is the sum total of what exists and is involved in reality as we know it, but that we should realise our limitations, state that such assumptions are tentative and provisional on further discoveries and realisations, and acknowledge that we are barely scratching the surface of the subtle reality we find ourselves in.



I know precisely. Do you, does your philosophy have sound foundations?

Nice example of evasive perspective and syntactic smoke and mirrors.

tsig
2nd July 2012, 04:19 PM
It would be awesome, it is what I came to the JREF to do, but what did I find?

Yourself.

tsig
2nd July 2012, 04:21 PM
Oh... very well, what is your methodology? As long as it does not include statements about robots being conscious or sensations being the same as abstract models, I am all for discussing whatever you want.

So we can discuss anything we want as long as it's what you want to discuss.

tsig
2nd July 2012, 04:23 PM
For someone so masterful as yourself, you are leaving yourself vulnerable by wrongly describing my motives.

Reality! Look at my avatar, what you see is just a random arrangement of twigs and moss?

We're back to the fairies now?

tsig
2nd July 2012, 04:35 PM
Please keep in mind that when I refer to all conscious experience as "hallucination", I'm using somewhat metaphorical language to demonstrate something which is true, but not intuitive.

Kind of like you sometimes have to do when explaining other counterintuitive scientific truths such as universal expansion, wave-particle duality, and so forth.

But in truth, indeed, all of your conscious experiences are "false", in that they exist only in your experience, and nowhere outside of it.

Also, it is not true that hallucinations are "not related to sensation"... they can be distortions of sensations.

So it really is accurate to say that everything we experience is a hallucination -- as was demonstrated in the thought experiment about heat, poison, and the drug.

If we were to take a drug that made us feel nauseated by fire, or that made us feel like we were being burned when we approached a toxin, we would consider that a hallucination.

And yet our actual experience is absolutely no different, because our experience of burning when we get too close to heat, or nausea when exposed to the smell of rotting meat, is an arbitrary one determined by evolution.

If evolution had taken a different turn, had solved the problem a different way, we wouldn't think twice about the fact that, of course, getting near a fire makes you violently ill, and rotting food produces a burning sensation.

The feeling you get that you open your eyes and are somehow looking out at a world that's distant from you... that's all illusion. That's not how vision happens at all.

Light bounces against your eyeballs, which causes patterns of neural firing, which (in part) cause your body to produce colors and sensations of brightness and so forth... none of which exists anywhere in the world outside your body.

Your regular experience is 100% hallucination... it's just that it is a functional hallucination, not a dysfunctional one.

What happens in the brain is part of a cause-effect chain which often involves causes outside the brain.

This does not make it a "reflection of reality".

If you drop a rock in a pond, are the ripples a "reflection of the stone"?

If you say no, then you have no grounds to say that what happens in the brain is a "reflection" of what happens outside of it.

How does dropping a rock in a pool have anything to do with reality being a hallucination?

If you think that what happens in the brain does not represent the real world, how does anyone survive? They might see a cliff and hallucinate solid ground.

tsig
2nd July 2012, 04:39 PM
Actually, it's not a tautology, because in our everyday lives we have the impression that our conscious experience is not confined to our experience... we have the sensation of looking out at the world which is out there, for example... when we look at a tree, it seems to us that we are somehow opening our eyes and experiencing something outside of us... but we're not.


You can't tell the difference between yourself and a tree?:eye-poppi

quarky
2nd July 2012, 05:08 PM
If what happens in our brains doesn't reflect reality then how can we do physics?

Damn good retort, I must say.

If I was actually trying to defend my position, I'd answer "We can't".

However, physics is analogous to police work, on steroids.
We hope that it has no agenda.
It will see beyond the false positives and the poor witnesses.
It will dig deeper for the truth. DNA.

Sure, that's biology...but physics would be even more perverse about getting to the crux of the matter.

Which, more likely than not, there isn't one.

Piggy
2nd July 2012, 06:05 PM
"A square is a rectangle."

Piggy: " As you can't help but know, this idea is demonstrably false because there are rectangles that are not squares."

*sigh*

Sorry, that's not analogous.

If PM means to say "SRIP is necessary for consciousness" that's fine. I don't know of anyone who would argue against that, although some might want to preface that with "it seems at the moment that".

But it doesn't seem to me that this is what he has been claiming.

Piggy
2nd July 2012, 06:12 PM
That is about a robot some researchers got to move itself based on "imagining" the state that the resulting movement would lead to, using nothing but neural networks wired up somewhat similar to a known pathway in the mammalian brain. If you want me to explain in more detail I can, but suffice to say the research is 7 years old and yet it pretty much invalidates every single point you have ever tried to make in these discussions. In particular, it is 1) neurons that exist only in code controlling a real robot and 2) doing it in a manner that is similar to how our neurons might control us, at least on a very limited and simple scale.

If you don't mind my saying, this is where your blind spot shows up again.

Our brains imagine unconsciously all the time. That's how we navigate our daily lives, and that's why we're surprised by a looming shape in the hall or a missing elevator.

We can see this now. Brains use the same circuits for imagination as experience, and researchers have been able to observe some activity there as we (nonconsciously) imagine what's going to happen next at every waking moment of the day. (When we're dreaming, on the other hand, we don't seem to care what's going to happen next.)

But we're not in any way consciously aware of this. It's not a brain process which our brain bothers to hallucinate about. And given the inevitable results if it did, it's no wonder that evolution didn't select for such an arrangement.

So that's really cool work, it's just off-topic here.

It's amazing to me that you don't seem to understand that this research, while important for understanding the brain, no argument there, is nonetheless not about consciousness, because consciousness is not necessary here.

Piggy
2nd July 2012, 06:16 PM
I just told you. Neither you nor Pixy is giving anything besides quotes and opinions, but both of you speak of actual science. Yet we don't see actual science being shown.

Dude, on the issue you raised, i.e. Pixy's "central claims", I had no choice but to give you those quotes because it's the best possible evidence to show that one of those claims is false.

The only other option I can think of is to offer as evidence the cognitive neuroscience section of your nearest university library -- reading that should demonstrate my point as well, so that's my other citation.

If you want to discuss the science, you'll have to ask a different question, because there is no science dealing with Pixy's central claims, because they're false.

Belz...
2nd July 2012, 06:19 PM
There is the entire field of computational neuroscience, which Piggy seems to believe can't possibly work.

Great ! Help me, Pixy. Give me some links and references.

Belz...
2nd July 2012, 06:20 PM
Consciousness exists independent of the brain?

Yeah, tell that to the guy who just woke up from a 12 year coma !

Belz...
2nd July 2012, 06:21 PM
Dude, on the issue you raised, i.e. Pixy's "central claims", I had no choice but to give you those quotes because it's the best possible evidence to show that one of those claims is false.

Ok, then. I'll chalk that up as "consciousness is mysterious". I hope Pixy answers my challenge with more substance, but if he does I won't have any opposing evidence to judge both your claims by.

Solitaire
2nd July 2012, 06:34 PM
I've never heard anyone suggest that c. could be replicated easily, or out of old junk.

But, why not a general purpose computer with some very sophisticated programming?


It turns out to have a fatal flaw. (http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?_r=1&ref=technology)

Lolcats don't really rule the internets. Do they? :covereyes

Piggy
2nd July 2012, 06:46 PM
Oh, I see. "The brain is not like a computer" is not an actual claim, then ?

Let's see. What does a computer do ? What does a brain do ? See the resemblance ? I'm not asking you how brains and computers are different. But considering the definition of "computation" and "computer", I'm asking you how you can claim that this is not what a brain does.

That's right, "the brain is not a computer" is not a claim, because they are obviously different things (which is why we have different words for them) and if someone wants to say one's the other, then we need an explanation.

Brains are bits of organic goo that are great at Frisbee and bad at math; computers (real ones we have now) are quite the opposite.

What if I said your brain was a rocking chair? Would "My brain is not a rocking chair" then become a position that demands evidence? No, you'd want to know why something thinks they're equivalent.

And if you step back and look at what brains and computers-- real ones -- actually do, well, it's not all that similar.

If you want to say that two objects are the same thing, then you've got to show either structural or functional equivalence.

A prosthetic isn't a leg structurally. It's made of plastic and steel and it has springs, but not muscles or bones or nerves. But it's a leg functionally because it can do what a biological leg actually does.

Objects that differ wildly in structure can all be brushes, because they all do the things that brushes do.

Now if we start by comparing real brains to real computers, as they exist today -- and that's the safest place to begin -- we find that they're not similar in structure or in function. (See Frisbee/math above.)

Researchers are developing more and more advanced machines -- and I use that term advisedly -- which can perform an expanding range of tasks overlapping with what our bodies can do, but it's slow going, very early, and more importantly they haven't begun to develop any experiments dealing with consciousness.

So if we say "the brain is a computer" in a broader sense, that they belong to the same category even if the ones we have today are not (yet) functionally or structurally equivalent -- perhaps technology will one day make them functionally equivalent -- then what do we mean?

It means we have to ask "What is a computer?" and explain why the brain and my PC are both computers, but my desk and the ocean and the stars are not.

Without that, the claim "the brain is a computer" doesn't really mean anything.

And frankly, that's what I've been waiting for: What is this "computer" which a brain and a PC both are, but my heart is not? And what insight about consciousness do we gain from the identification?

You raise the point of computation. If brains perform computations, doesn't that mean they are computers?

Actually, no. If you mean "computation" in the sense that Wolfram means it. In that sense, the laws of physics are viewed as the rule set determining the changes between any state and the next state of the system.

So you can talk about the computational capacity of hurricanes, for instance. And Wolfram does.

If you mean symbolic computation, however, then you have a big problem on your hands, because you necessarily invoke a programmer/user into the system when you do that, and we don't have any such thing in any of the organs of our body, including the brain.

The neurobio approach simply deems symbolic computation unnecessary, and that's been working just fine.

It is interesting, to say the least, that computers have allowed us to make machines that behave in some ways like us precisely because they have computers and not brains. That's damn impressive.

But you'll notice that in the case of the leg we conclude that prosthetics are legs, not that legs are prosthetics.

Similarly, why not say that computers are brains, or can be brains? Hard to argue against that.

But what would it really mean to say that a brain is a computer?

Piggy
2nd July 2012, 07:02 PM
I would say it would have something to do with the definition of the word:

Consciousness is the quality or state of being aware of an external object or something within oneself.

Of course, by such a simple definition, any computer is conscious, but the "self-aware" part of the definition pretty much screams "introspective", no ?

That's why neurobio uses a functional definition.

I mean, if your body can respond in ways that clearly take something outside of it into account, then you can always argue that you are "aware" of it in some way... I mean, how could you say you're totally "unaware" of it?

But we know for a fact that the body can interact successfully with all sorts of things without involving consciousness. That's been found in a very wide range of experiments.

Computationalists seem to want to then jump to "self-awareness" as the next possible requisite for consciousness, perhaps because they can fit "data about the self" into their model.

But in the brain, it doesn't work that way.

Core consciousness is handled in the brain stem, and is evolutionarily early. Self-awareness, which we seem to share with dolphins and chimpanzees and gorillas at the moment but who knows what we'll find, is higher-order in the brain and later in evolutionary terms, so it's not a requirement for consciousness.

Consciousness is what happens when you wake up or dream, and what isn't happening when you're deeply asleep. It's color and sound and odor and texture and hot and cold and all that.

We're investigating the neural correlates, but we don't know anywhere near enough to even think about designing a machine that we have any reason to believe could generate any of that.

Piggy
2nd July 2012, 07:06 PM
Actually, the question of whether or not we can make conscious machines or whether our idea of consciousness applies to more things than we initially thought is crucial in understanding consciousness in general, and in people in particular.

I know, but we can't work backward.

We must understand what the brain is doing first.

Without that, we can't design conscious machines, or say definitively which other things are conscious and which are not.

Right now, we have no conscious machines to study, no designs, no way to think about designs.

So sure, threads about conscious machines can be interesting.

But if we want to discuss what's actually known about consciousness, I don't see any reason to spend any time hypothesizing about machines we can't even imagine how to design.

Piggy
2nd July 2012, 07:19 PM
How does dropping a rock in a pool have anything to do with reality being a hallucination?

If you think that what happens in the brain does not represent the real world, how does anyone survive? They might see a cliff and hallucinate solid ground.

As your highlight notes, I'm using "hallucinate" in a not-entirely-literal sense here.

But in a very real way, I mean what I say.

Someone upthread defined a hallucination as a perception of something that's not real.

And that's fine for everyday use, but when we're trying to be accurate about what happens in the brain during conscious experience, we have to be more careful and more precise.

To illustrate why this definition won't serve, let's imagine you're sitting out on your back porch with a friend who decides, for some reason, to take a very mild dose of LSD. Just enough to change the quality of some of the colors and sounds, things like that, so he can still have a conversation but his experience is in some ways different.

If he experienced a yellow sky, you'd call that a hallucination.

But hold on. How can you do that?

You can't claim he's hallucinating on the grounds that the sky isn't really yellow... because it's not really blue either!

All you can say is that his experience is not normative.

As long as he can interact with his yellow sky as successfully as you can with your blue one, neither your view nor his is "skewed"; they are just different.

The whims of evolution could have given us a yellow sky, after all. Why not? In that case we'd consider a blue sky to be a "hallucination".

So what we're really saying in common usage is that a hallucination is a non-normative experience (at the least).

My point here is that our everyday experience is essentially a normative hallucination, because it allows our bodies to get along in the world, but it bears no resemblance to that world whatsoever.

It is an entirely different thing.

And it is indeed like the rock.

You bounce light off a tree and onto an eyeball, the light doesn't at all resemble the tree.

The eyeball reacts with a cascade of electrochemical activity that doesn't resemble the light or the tree.

The brain produces colors and brightness which don't resemble the electrochemical activity or the light or the tree.

We think it does, because we only have access to the last bit, and because it does work for us as we navigate spacetime, so we believe that the stuff out there actually does somehow resemble our conscious experiences.

But of course it doesn't.

That's how it's like the rock and the water.

The rock causes the water to ripple, but the ripples don't resemble the rock, and they are not representations of the rock.

The light and the neural activity have the same relationship.

Ditto the ripples and the patterns on the bank, and the neural activity and stuff like color and odor and pain and sorrow.

Piggy
2nd July 2012, 07:23 PM
You can't tell the difference between yourself and a tree?:eye-poppi

And neither can you, because "your experience of a tree" is simply part of what you are at that moment, if by "you" we mean the person I'm corresponding with at this moment.

When I'm looking at a tree, the green is me, the sound is me, the brightness is me....

When those things are, I am. When they stop, I stop.

Piggy
2nd July 2012, 07:31 PM
Ok, then. I'll chalk that up as "consciousness is mysterious". I hope Pixy answers my challenge with more substance, but if he does I won't have any opposing evidence to judge both your claims by.

Please don't misunderstand me.

You asked a question to which the only rational answer was to cite neurobiologists saying that the mechanism of consciousness is not known.

I can't help that, this is what you asked.

If you want to discuss the science, I'd love to.

What would you like to talk about?

rocketdodger
2nd July 2012, 09:00 PM
If you don't mind my saying, this is where your blind spot shows up again.

If you don't mind my saying, this is where you start off your response like every other post you make -- explicitly declaring everyone else to be wrong.

Our brains imagine unconsciously all the time. That's how we navigate our daily lives, and that's why we're surprised by a looming shape in the hall or a missing elevator.

So you are saying imagination plays no part in consciousness. Kind of a non-starter, since it is such an absurd proposition...

We can see this now. Brains use the same circuits for imagination as experience, and researchers have been able to observe some activity there as we (nonconsciously) imagine what's going to happen next at every waking moment of the day. (When we're dreaming, on the other hand, we don't seem to care what's going to happen next.)

Again -- you are claiming that this implies those circuits have nothing to do with consciousness ...

But we're not in any way consciously aware of this. It's not a brain process which our brain bothers to hallucinate about. And given the inevitable results if it did, it's no wonder that evolution didn't select for such an arrangement.

You know what is funny, in my last post I had actually made a sarcastic comment about how you will run back to some vague "evolution selecting for consciousness" argument the first chance you get, but I edited it out to be polite.

So that's really cool work, it's just off-topic here.

You didn't even read the paper, and you wouldn't understand it if you did, so don't patronize me.

It's amazing to me that you don't seem to understand that this research, while important for understanding the brain, no argument there, is nonetheless not about consciousness, because consciousness is not necessary here.

I think you misunderstand my intentions. I didn't expect you to change your position, or even think of changing your position, because I know you don't do that. In fact I expected you to just write off this real research as somehow irrelevant in your quest for the magic bean. Thanks for being so predictable !

The reason I posted that was so any observers would see that you are consistently dishonest when it comes to your representation of the opposition, and furthermore to show that there is real research that illustrates that the majority of your arguments don't hold much water.

I'm not really interested in a long discussion because I would rather read *actual research* than get bogged down in a typical piggy-esque merry-go-round of walls-of-text that, once fully parsed, contain zero new information for the reader. I also don't really care to be told, after I spend the time understanding some research that some very smart person has done, that "I don't understand why it is irrelevant." Hey, piggy, I'm the AI programmer, I'm the one that actually read the paper, I get to decide whether it is relevant or not.

PixyMisa
2nd July 2012, 10:46 PM
Great ! Help me, Pixy. Give me some links and references.
Sure.

Wikipedia gives a decent overview of computational neuroscience.

The idea is that - since (a) mind is what brain does, and (b) the brain is a computer - if you model the activity of the brain at different levels of abstraction, you will get similar results.

Take the Blue Brain Project, for example, which started out by modelling a rat neocortical column at the molecular level and intends to continue scaling up until it models an entire brain (still in molecular detail). To quote project lead Henry Markram, "If we build it correctly it should speak and have an intelligence and behave very much as a human does." That quote does nothing to establish my position, but it does point out that researchers in the field believe that this sort of simulation will produce a conscious mind. (That's not Markram's primary reason for studying the brain, but it's his position.)

Here he is talking at a supercomputing conference if you're interested: _rPH1Abuu9M.

And here's an article in Seed magazine (http://seedmagazine.com/content/article/out_of_the_blue/?utm_source=SB-bottom&utm_medium=linklist&utm_content=magazine&utm_campaign=internal%252Blinkshare) about the project.

I also just discovered this: http://www.scholarpedia.org/article/Encyclopedia_of_computational_neuroscience

I haven't had a chance to do more than skim it yet, but it looks reasonable so far.

PixyMisa
2nd July 2012, 11:27 PM
About 12 minutes into that talk, Markram covers the point I was stressing earlier: That computational neuroscience is about building computational models at varying levels of granularity and testing them against each other, against real behaviour and against real biology.

Simulating a human brain at the molecular level will be enormously expensive, but it will allow us to derive more abstract models that accurately represent particular functions - to test and confirm what behaviours can be abstracted out without affecting the resulting behaviour.
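The granularity point can be illustrated with a toy sketch (my own illustration, not Blue Brain code, and with entirely arbitrary parameter values): a leaky integrate-and-fire neuron simulated step by step, alongside a far more abstract firing-rate model, so the two levels of description can be tested against each other on the same inputs.

```python
# Toy illustration of modelling at two levels of granularity:
# a leaky integrate-and-fire (LIF) neuron vs. an abstract rate model.
# All parameter values are arbitrary choices for the demo.

def lif_spike_count(current, steps=1000, dt=0.001,
                    tau=0.02, threshold=1.0, reset=0.0):
    """Count spikes from a simple LIF neuron under constant input."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        # Membrane potential relaxes toward the input level.
        v += (dt / tau) * (current - v)
        if v >= threshold:
            spikes += 1
            v = reset
    return spikes

def rate_model(current, gain=50.0, threshold=1.0):
    """Abstract model: firing rate as a thresholded linear function."""
    return max(0.0, gain * (current - threshold))

# Test the abstraction against the detailed model across inputs.
for i in [0.5, 1.5, 3.0]:
    print(i, lif_spike_count(i), round(rate_model(i), 1))
```

The abstract model is only a rough fit to the detailed one here; the interesting part is exactly the process Markram describes, namely checking which behaviours survive the abstraction and which do not.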

Mr. Scott
2nd July 2012, 11:33 PM
If you want to discuss the science, you'll have to ask a different question, because there is no science dealing with Pixy's central claims, because they're false.

I get a whiff of potential circular logic there, but I think it's just wrong. Maybe computer science is not really science.

Is pixy's central claim that the brain is a computer?

I love it when tasks that are notoriously difficult for computers are cracked by computers simulating neural networks.

Please don't suggest that not knowing how to do something means it can't be done.

I just read How Many Computers to Identify a Cat? 16,000 (http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?pagewanted=1&_r=2&ref=technology). Thanks for the link, guys!

The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.

One cool thing about computers is that any type of universal computer can simulate/emulate any other type. For example, an Apple can emulate a PC and vice versa, a PC can emulate a Nintendo, etc. Indeed, general purpose computers can emulate special purpose computers without difficulty.

Would someone explain again why a computer couldn't emulate a brain? Is it just because, if one could, we'd feel less special?
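The emulation claim above can be made concrete with a toy sketch (the instruction set is invented for this demo, not taken from any real machine): a tiny stack machine defined as data, emulated by a few lines of Python, which is itself running on whatever physical computer you happen to have.

```python
# Toy illustration of emulation: a tiny stack machine defined as data,
# run by a Python interpreter. The point is only that one computer can
# execute the behaviour of another.

def run(program):
    """Emulate the toy stack machine; each instruction is (op, arg)."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack[-1]

# (2 + 3) * 4 computed on the emulated machine:
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
        ("PUSH", 4), ("MUL", None)]
print(run(prog))  # -> 20
```

Whether a brain is the kind of system this universality argument covers is, of course, exactly what the thread is disputing; the sketch only shows what "emulate" means for machines we already agree are computers.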

punshhh
3rd July 2012, 01:28 AM
This statement is meaningless, because there isn't a real distinction between "computational literalism" and the "neurobiological approach."

In fact I don't even know what "computational literalism" means. I just googled it and I don't see anything that looks relevant. Google doesn't lie, at least not these days.

Furthermore I don't even know what you mean, because even after years of reading your rants I still honestly don't understand what your overall point is. People doing research in neurobiology and people doing research in computing deal with the same fundamental concepts of causation as every other scientist.

EDIT -- just so you can't say otherwise, here is a link: http://www.aist-pain.it/en/files/CONSCIOUSNESS/Consciousness,_Emotion_and_Imagination.pdf

That is about a robot some researchers got to move itself based on "imagining" the state that the resulting movement would lead to, using nothing but neural networks wired up somewhat similar to a known pathway in the mammalian brain. If you want me to explain in more detail I can, but suffice to say the research is 7 years old and yet it pretty much invalidates every single point you have ever tried to make in these discussions. In particular, it is 1) neurons that exist only in code controlling a real robot and 2) doing it in a manner that is similar to how our neurons might control us, at least on a very limited and simple scale.

I agree in principle with your position, accepting my lack of knowledge of the detail.

However I return to my principal point: you are proposing that an intelligent entity which behaves the same as a conscious entity is essentially identical in terms of performance.
Why would it ever need to be conscious?
Are you assuming or hoping that it would somehow become conscious as a sum of its parts?

In the examples we have of conscious entities, consciousness precedes intelligence. The evidence suggests that the principal requirement for consciousness is life and that consciousness is a consequence of life: an evolutionarily advantageous accident, occurring in a chemical system or soup in which there is little intelligence, and computation only in the broadest sense of physical causality in action.

I see no reason why AI would ever require consciousness (not the next few hundreds or thousands of years anyway).

PixyMisa
3rd July 2012, 01:34 AM
However I return to my principal point: you are proposing that an intelligent entity which behaves the same as a conscious entity is essentially identical in terms of performance.
Why would it ever need to be conscious?
Because otherwise you can't joots (jump out of the system). You'd be perpetually sphexicised.

PixyMisa
3rd July 2012, 01:37 AM
I get a whiff of potential circular logic there, but I think it's just wrong. Maybe computer science is not really science.

Is pixy's central claim that the brain is a computer?
Which would be a very odd thing for anyone to dispute.

A computer is any system that performs computation.
The brain is a system that performs computation.
Therefore the brain is a computer.

dlorde
3rd July 2012, 02:37 AM
Name a process which is not an event.
It seems to me that a process is generally taken to be a causally connected sequence of events. But ISTR this has already been discussed, and it's Pixy you should address this to.

But yeah, consciousness does not have to be introspective at all.

Just out of curiosity, why would you think that it would have to be?

I don't think introspection is the right word - in my experience introspection refers to redirecting the focus of attention from the sensorium to the internal model of self, i.e. it's a higher level of abstraction, consciousness reflecting on itself.

The self-referencing that I see as the basis of consciousness involves feedback - feeding back the evaluated results of behavioural responses to environmental stimuli into the process that selects or generates behavioural responses. This permits adjustment of behavioural response based on the results of previous behaviour, leading to behavioural flexibility. At a minimally conscious level, this might just be a positive or negative association between environmental stimulus and behavioural response that biases subsequent behaviour selection. At a higher level of complexity, an accumulated history of such information can be used as an internal model of the organism's interaction with the environment, allowing the development of an ongoing sense of self, predictive 'what if' modeling, and at the highest levels, theory of mind and introspective self-awareness, self-referencing at a higher level of abstraction.

It's the amount and detail of the information accumulated and the sophistication of the evaluations involved that determine the level of consciousness, but it's all based on self-referential feedback.

I suppose the fly in all this ointment is the definition of consciousness itself. You may agree or disagree with some or all of the model I outlined, but if we each mean something different by consciousness, we're going to talk past each other anyway.
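dlorde's feedback model above can be sketched in code. This is my own minimal toy (the class and names are mine, not dlorde's): evaluated outcomes of behavioural responses are fed back into the process that selects responses, biasing future behaviour — the "positive or negative association" level of the model.

```python
import random

# A minimal sketch (my toy, assuming dlorde's description): behavioural
# responses are selected per stimulus, and the evaluated outcome of each
# response feeds back to bias subsequent selection.
class FeedbackAgent:
    def __init__(self, responses):
        self.responses = responses
        # (stimulus, response) -> accumulated evaluation score
        self.associations = {}

    def act(self, stimulus):
        # Prefer the responses with the best accumulated evaluation so far.
        scored = [(self.associations.get((stimulus, r), 0.0), r)
                  for r in self.responses]
        best = max(s for s, _ in scored)
        return random.choice([r for s, r in scored if s == best])

    def evaluate(self, stimulus, response, outcome):
        # Feed the result of the behaviour back into the selector.
        key = (stimulus, response)
        self.associations[key] = self.associations.get(key, 0.0) + outcome

agent = FeedbackAgent(["approach", "avoid"])
for _ in range(10):
    r = agent.act("heat")
    # Environment punishes approaching heat, rewards avoiding it.
    agent.evaluate("heat", r, -1.0 if r == "approach" else +1.0)

print(agent.act("heat"))  # prints "avoid" -- behaviour has been biased
```

The higher levels of the model (internal self-model, 'what if' prediction, introspection) would layer further structure on this same feedback loop.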

dlorde
3rd July 2012, 02:46 AM
Computational literalism (or opportunistic minimalism, to use Dennett's term) is the notion that your brain literally is a computer, and that consciousness may be reproduced in machines by reproducing only the syntax but not the semantics of the process... in other words, timing doesn't matter, the shape of the brain doesn't matter, you can make a conscious brain out of rope, etc.
OK. Well, for the record, I think timing is likely to be essential, the architecture of the system is crucial, and I can't conceive of a conscious brain made out of rope.

Having said that, timing and architecture are crucial to the effective function of many computer systems; and, syntax or semantics aside, it seems to me that computational equivalence is the essential issue.

dlorde
3rd July 2012, 02:54 AM
When one of the aspects of waking existence that helps to define everything else is attempted to be defined, you will often encounter similar types of statements. Sensation, for whatever reason, is fundamental to how we go about figuring things out. Sensation as a part of our models, in terms of its ontological status, need not be as fundamental (although there is nothing precluding it either).
I understand, sometimes it's hard not to end up explaining or defining things in terms of themselves; but then it's not really a useful explanation or definition. But I got the gist of it.

Btw I meant 'circular' rather than 'tautologous' - it was late, I'd finished the wine ;)

dlorde
3rd July 2012, 03:00 AM
Of course, by such a simple definition, any computer is conscious, but the "self-aware" part of the definition pretty much screams "introspective", no ?

I dunno. Does being aware of 'something within oneself' equate to what we generally mean by 'self-awareness'? I think of self-awareness as being aware of oneself [as an entity], i.e. more than being aware of 'something within'.

Maybe it is just ambiguous phrasing.

Belz...
3rd July 2012, 03:08 AM
That's right, "the brain is not a computer" is not a claim, because they are obviously different things (which is why we have different words for them) and if someone wants to say one's the other, then we need an explanation.

Rocketdodger has already mentioned the square vs rectangle thing. When someone says the brain is a computer, he or she is not saying that it's a pentium, and you know it.

Brains are bits of organic goo that are great at Frisbee and bad at math, computers (real ones we have now) are quite the opposite.

And yet they both do the same kind of work: computing.

If you want to say that two objects are the same thing, then you've got to show either structural or functional equivalence.

No. See, if I want to say that helicopters are aircraft, I don't have to show you that they are identical to planes.

Now if we start by comparing real brains to real computers, as they exist today -- and that's the safest place to begin -- we find that they're not similar in structure or in function. (See Frisbee/math above.)

Again, how ? They do the same kind of work.

It means we have to say "What is a computer" and explain why the brain and my PC are both computers, but my desk and the ocean and the stars are not.

According to wiki, computation is a form of calculation, which is "a deliberate process for transforming one or more inputs into one or more results, with variable change." Personally I'd add that it has to include some form of abstraction.

You raise the point of computation. If brains perform computations, doesn't that mean they are computers?

Actually, no.

Well, then, I see where our disagreement lies.

So you can talk about the computational capacity of hurricanes, for instance. And Wolfram does.

We've been there before. It's like saying running is a process, and then some philosopher jerk comes in and points out that everything, down to elementary particles, is a process. We know that, but we can use the narrower definition for the purposes of talking about running, can't we ? I don't know why we can't do it with computing.

Similarly, why not say that computers are brains, or can be brains? Hard to argue against that.

I wouldn't have a problem with that, but that would be using a definition of computation that specifically excludes brains, which would be weird.

PixyMisa
3rd July 2012, 03:08 AM
OK. Well, for the record, I think timing is likely to be essential, the architecture of the system is crucial, and I can't conceive of a conscious brain made out of rope.
You can't build a computer out of nothing but rope, so Piggy's objection there is nonsensical.

But timing is syntax. Or if that's too hard to swallow all at once, timing can be expressed as syntax. So that objection is also null.
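One way to illustrate "timing can be expressed as syntax" (my sketch, not Pixy's) is a discrete-event scheduler: wall-clock timing becomes explicit timestamp data processed in order, so the result no longer depends on how fast the host machine runs.

```python
import heapq

# A minimal discrete-event scheduler: timing constraints are re-expressed
# as timestamps in the data, and events are processed in timestamp order.
def simulate(events):
    """events: list of (time, label) tuples. Returns them in causal order."""
    heapq.heapify(events)
    order = []
    while events:
        order.append(heapq.heappop(events))
    return order

log = simulate([(3.0, "spike C"), (1.0, "spike A"), (2.0, "spike B")])
print(log)  # [(1.0, 'spike A'), (2.0, 'spike B'), (3.0, 'spike C')]
```

This is how hardware simulators and neural simulations routinely handle real-time behaviour in systems with no inherent clock.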

Belz...
3rd July 2012, 03:09 AM
Computationalists seem to want to then jump to "self-awareness" as the next possible requisite for consciousness, perhaps because they can fit "data about the self" into their model.

Makes sense to me, since it's what we usually mean with the word.

Core consciousness is handled in the brain stem, and is evolutionarily early. Self-awareness, which we seem to share with dolphins and chimpanzees and gorillas at the moment but who knows what we'll find, is higher-order in the brain and later in evolutionary terms, so it's not a requirement for consciousness.

Sorry, it doesn't seem to follow, for me. Who cares if it's late development ?

Consciousness is what happens when you wake up or dream, and what isn't happening when you're deeply asleep. It's color and sound and odor and texture and hot and cold and all that.

Is it ?

Belz...
3rd July 2012, 03:10 AM
Please don't misunderstand me.

You asked a question to which the only rational answer was to cite neurobiologists saying that the mechanism of consciousness is not known.

I can't help that, this is what you asked.

If you want to discuss the science, I'd love to.

What would you like to talk about?

I'd like to do the bolded part. Saying that the mechanism is not known is fine, but not quite true because we do know quite a bit about it, even if we don't understand it completely.

dlorde
3rd July 2012, 03:22 AM
So what we're really saying in common usage is that a hallucination is a non-normative experience (at the least).

My point here is that our everyday experience is essentially a normative hallucination...

:confused:

1. An hallucination is a non-normative experience.
2. Everyday experience is a normative hallucination.

Therefore, everyday experience is a normative non-normative experience...

This really doesn't work for me.

Nor does this: The brain produces colors and brightness...

Jeff Corey
3rd July 2012, 04:04 AM
... Sensation as a part of our models in terms of its ontological status, need not be as fundamental (although there is nothing precluding it either).

The sensation of reading that led to a complete lack of perception on my part.
I'm not kidding.
I thought I knew what all the words meant, apart from each other, but the gestalt is somehow less than the sum of itz parts.

dlorde
3rd July 2012, 04:20 AM
The sensation of reading that led to a complete lack of perception on my part.
I'm not kidding.
I thought I knew what all the words meant, apart from each other, but the gestalt is somehow less than the sum of itz parts.

My feelings exactly.

Mr. Scott
3rd July 2012, 07:11 AM
OK. Well, for the record, I think timing is likely to be essential, the architecture of the system is crucial, and I can't conceive of a conscious brain made out of rope.

Having said that, timing and architecture are crucial to the effective function of many computer systems, and syntax or semantics, it seems to me that computational equivalence is the essential issue.

You just need some 'magination to picture a conscious rope machine. Not knowing how something can be done does not imply it can't be done.

The original meaning of "computer," before Babbage, was a person who, given numbers and algorithms, generated more numbers. This is what modern computer hardware and software does, in nested, recursive structures.

(!Kaggen would be interested to know that Babbage's automated computer was intended to replace human computers who made calculating tables to help ocean navigators "feel the future" of where their ships would end up)

Today's thought experiment is:

Making a conscious computer, with pre-Babbage components. Start with a computing collective, a building (not unlike Gilliam's Crimson Permanent Assurance -- let's call it Crimson Conscious Assurance or CCA) of people sitting at desks doing simple, repetitive tasks. Each plays a neuron, and has connections (synapses) to other desks, near and far, to receive and send messages (action potential spikes), and formulae to establish new connections and retire old ones. Some desks receive messages from outside CCA (sensory), some deliver to the outside (muscle control). Some generate global messages (emotion hormones) and some receive them. No one at any desk understands the purpose of any of their inputs or outputs. Now, scale it down so it fits in a brain case and speed it up so they're as fast as real neurons.

I bet it could wing dive, but would someone explain how my CCA could not be conscious?
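The CCA thought experiment can be caricatured in a few lines. This is my own toy (names and numbers are mine, not Mr. Scott's spec): each "desk" is a leaky integrate-and-fire unit that only sums incoming messages and forwards a spike past a threshold; no desk knows what the network as a whole is doing.

```python
# A toy "desk" from the CCA: sums incoming messages, fires past a
# threshold, forwards spikes along its connections, and otherwise leaks
# back toward rest. It has no view of the network's overall purpose.
class Desk:
    def __init__(self, threshold=1.0, leak=0.5):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.outgoing = []  # (target desk, synapse weight)

    def connect(self, target, weight):
        self.outgoing.append((target, weight))

    def receive(self, signal):
        self.potential += signal

    def step(self):
        fired = self.potential >= self.threshold
        if fired:
            for target, weight in self.outgoing:
                target.receive(weight)
            self.potential = 0.0
        else:
            self.potential *= self.leak  # decay toward rest
        return fired

# Three desks in a chain: sensory -> relay -> motor.
sensory, relay, motor = Desk(), Desk(), Desk()
sensory.connect(relay, 1.0)
relay.connect(motor, 1.0)

sensory.receive(1.0)  # a message from outside the building
# Stepping the desks in chain order lets the spike sweep the whole chain.
spikes = [d.step() for d in (sensory, relay, motor)]
print(spikes)  # [True, True, True]
```

Scaled to ~86 billion desks with realistic connectivity and plasticity rules, this is exactly the CCA; the question of whether it would be conscious is the point of the thought experiment, not something the sketch settles.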

Jeff Corey
3rd July 2012, 07:17 AM
It doesn't exist.

rocketdodger
3rd July 2012, 08:30 AM
I agree in principle with your position, accepting my lack of knowledge of the detail.

However I return to my principal point, that you are proposing an intelligent entity, which behaves the same as a conscious entity, is essentially identical in terms of performance.
Why would it ever need to be conscious?
Are you assuming or hoping that it would somehow become conscious as a sum of its parts?

In the examples we have of conscious entities, consciousness precedes intelligence. The evidence suggests that the principal requirement for consciousness is life and that consciousness is a consequence of life. An evolutionarily advantageous accident. Occurring in a chemical system or soup in which there is little intelligence and computation only in the broadest sense of physical causality in action.

I see no reason why AI would ever require consciousness (not the next few hundreds or thousands of years anyway).

It isn't entirely clear that it is possible to get the behavior of a conscious entity without the entity actually being conscious.

That statement is certainly true in a practical sense but probably also in a theoretical sense due to possible self reference issues ( for example, try imagining how to program an AI to answer questions about the current conversation you are having with it -- such an approach just explodes with complexity that no human or even AI programmer could even hope to cope with ).

It isn't that I propose the entity will "become" conscious as the sum of its parts. Rather I propose that the sum of its parts simply is consciousness.

As to the latter portion of your post, whether or not consciousness precedes intelligence depends on what you mean by intelligence. Adding two numbers, sure. But how to move your arm to get at a fruit, no -- that kind of ability stretches back as far as animals themselves.

And as far as computation in life -- life itself is fundamentally a system that uses computation to increase its chances of existence into the future. Thus computation is more intrinsic to life than anything else. The living cell is a computer that is constantly changing its behavior as a response to internal and external conditions. Too much salt? Not enough nutrients? Cell division timer elapsed? Molecules from neighbors?

When we speak about "computation" it isn't in the computer-ish sense of "1 + 1 == 2" it is in the sense of "a system goes from one attractor to another attractor based on some input." An "attractor" is a thermodynamics term that is sometimes a local energy minimum, but always a set of states that the system will converge to if left alone.

In that sense there is a ton of computation going on, but if you think about it almost all of it is somehow part of either life or the stuff that life builds.

Belz...
3rd July 2012, 08:40 AM
It doesn't exist.

Well, you should tell architects that it's useless to try and check if the buildings they are planning could stand, since they don't exist yet.

Jeff Corey
3rd July 2012, 08:42 AM
Are they conscious? Because you said they don't exist yet.

Mr. Scott
3rd July 2012, 10:54 AM
It doesn't exist.

Is that your answer to all thought experiments?

They are designed to stretch the mind and expose ideas, concepts, and principles. If you don't want to play, then STFU and let someone who wants to play answer.

The question is, if it existed, would it be conscious? If not, why not?

Belz...
3rd July 2012, 11:20 AM
Are they conscious? Because you said they don't exist yet.

Analogy 1, Jeff 0.

quarky
3rd July 2012, 11:48 AM
I just had this flash about consciousness...

It could go on forever, via this thread, which is why god invented death and other forms of not-consciousness.

Jeff Corey
3rd July 2012, 05:42 PM
oops

Jeff Corey
3rd July 2012, 05:43 PM
Is that your answer to all thought experiments?

They are designed to stretch the mind and expose ideas, concepts, and principles. If you don't want to play, then STFU and let someone who wants to play answer.

The question is, if it existed, would it be conscious? If not, why not?

Does a P-zombie eat braaaain, or perform Ein Gedankenversuch and dream of P-zombie ovines?

And Mr. Scott, please don't ever tell me to STFU again,or I shall be forced to graphically abuse you. www.youtube.com/watch?v=WdS7ffB-usY
They left out, "Stupid git." at the end.

Mr. Scott
3rd July 2012, 07:52 PM
Does a P-zombie eat braaaain, or perform Ein Gedankenversuch and dream of P-zombie ovines?

And Mr. Scott, please don't ever tell me to STFU again,or I shall be forced to graphically abuse you. www.youtube.com/watch?v=WdS7ffB-usY
They left out, "Stupid git." at the end.

OK, sorry.

rocketdodger
3rd July 2012, 08:59 PM
heh I love that STFU acronym...

punshhh
4th July 2012, 02:46 AM
It isn't entirely clear that it is possible to get the behavior of a conscious entity without the entity actually being conscious.

Can you offer a situation in which consciousness is a requirement? I realise that in this discussion there is no precise definition of consciousness.

That statement is certainly true in a practical sense but probably also in a theoretical sense due to possible self reference issues ( for example, try imagining how to program an AI to answer questions about the current conversation you are having with it -- such an approach just explodes with complexity that no human or even AI programmer could even hope to cope with ).

I was considering AI self programming and learning from its environment and past results.

It isn't that I propose the entity will "become" conscious as the sum of its parts. Rather I propose that the sum of its parts simply is consciousness.

This needs looking at more closely, I feel. So if one assembles a system which performs a certain set of computations, it would necessarily be conscious, provided the computational requirements for consciousness are met?

Do we know what these requirements are?

As to the latter portion of your post, whether or not consciousness precedes intelligence depends on what you mean by intelligence. Adding two numbers, sure. But how to move your arm to get at a fruit, no -- that kind of ability stretches back as far as animals themselves.

I am of the opinion that the earliest forms of life are conscious of their environment. To keep it simple, I am saying that consciousness began with the first single cellular organism.

And as far as computation in life -- life itself is fundamentally a system that uses computation to increase its chances of existence into the future. Thus computation is more intrinsic to life than anything else. The living cell is a computer that is constantly changing its behavior as a response to internal and external conditions. Too much salt? Not enough nutrients? Cell division timer elapsed? Molecules from neighbors?

Yes, this brings up two issues, the first of which was pointed out by piggy. Namely that with such a broad definition of computation, a hurricane or a waterfall would be conscious, as a result of exhibiting computation of some form.

Secondly, which I mentioned in the previous thread, there may be something about matter (some property) which results in consciousness when the correct combination of chemical reactions take place. Surely this property of matter would be a requirement for your position. If matter did not exhibit this property any consciousness may well be impossible.

We do not know what this property is, or distinguish a form of matter with the property from one without the property. Therefore we cannot predict that electrically mimicking those chemical reactions would result in the same effects, ie consciousness.

When we speak about "computation" it isn't in the computer-ish sense of "1 + 1 == 2" it is in the sense of "a system goes from one attractor to another attractor based on some input." An "attractor" is a thermodynamics term that is sometimes a local energy minimum, but always a set of states that the system will converge to if left alone.

Agreed.

In that sense there is a ton of computation going on, but if you think about it almost all of it is somehow part of either life or the stuff that life builds.

Yes, my position is that life has manipulated matter or systems of matter in the distant past, or outside our timespace system, and that the property mentioned in my previous paragraph was introduced or manipulated by life in the first place.

PixyMisa
4th July 2012, 03:02 AM
Can you offer a situation in which consciousness is a requirement? I realise that in this discussion there is no precise definition of consciousness.
Asphexishness. (http://en.wikipedia.org/wiki/Digger_wasp#Uses_in_philosophy)

Mr. Scott
4th July 2012, 04:36 AM
Somehow we got distracted from my thought experiment (http://plato.stanford.edu/entries/thought-experiment/) (a variation on the Chinese Room (http://en.wikipedia.org/wiki/Chinese_room))


Making a conscious computer, with pre-Babbage components. Start with a computing collective, a building (not unlike Gilliam's Crimson Permanent Assurance -- let's call it Crimson Conscious Assurance or CCA) of people sitting at desks doing simple, repetitive tasks. Each plays a neuron of a human brain, and has connections (synapses) to other desks, near and far, to receive and send messages (action potential spikes), and formulae to establish new connections and retire old ones. Some desks receive messages from outside CCA (sensory), some deliver to the outside (muscle control). Some generate global messages (emotion hormones) and some receive them. No one at any desk understands the purpose of any of their inputs or outputs. Now, shrink it down so it fits in a brain case and speed it up so they're the speed of real neurons.

Would it be an unconscious p-zombie? Why?

Would a p-zombie insist that, if it were implemented as a non-biological machine, it would be, unlike itself, a p-zombie?

Bodhi Dharma Zen
4th July 2012, 06:58 AM
Which would be a very odd thing for anyone to dispute.

A computer is any system that performs computation.
The brain is a system that performs computation.
Therefore the brain is a computer.


:rolleyes:

A radiator is any system that performs cooling.
The brain is a system that performs cooling.
Therefore the brain is a radiator.

http://www.philosophicalmisadventures.com/?p=13

Statements in the form of "X is Y" are not valuable assertions. What we need are statements of the form "X does Y in these circumstances but not those".

punshhh
4th July 2012, 07:21 AM
Asphexishness. (http://en.wikipedia.org/wiki/Digger_wasp#Uses_in_philosophy)

Ah that pesky wasp. Can you link to Dennett talking about it?

Jeff Corey
4th July 2012, 10:09 AM
OK, sorry.

Apology accepted and the same to you if I gave offense.

Jeff Corey
4th July 2012, 10:12 AM
:rolleyes:

A radiator is any system that performs cooling.
The brain is a system that performs cooling.
Therefore the brain is a radiator.

http://www.philosophicalmisadventures.com/?p=13

Statements in the form of "X is Y" are not valuable assertions. What we need are statements of the form "X does Y in these circumstances but not those".

I like the way you think.

rocketdodger
4th July 2012, 01:30 PM
Can you offer a situation in which consciousness is a requirement? I realise that in this discussion there is no precise definition of consciousness.

I think the behavior of something like a squirrel would probably require the same level of consciousness that a real animal squirrel possesses. There is actually a ton of complexity that needs to go into the simple animal act of being able to move around, look at stuff, and touch stuff. We haven't been able to get robots, or even simulated squirrels, to move even close to as efficiently and elegantly as a real squirrel, and that's because the approaches used thus far ( non-conscious approaches ) just don't scale.

Certainly passing the Turing test will require a level of consciousness similar to that of a human. To have a robot sitting in front of you at a table, and ask it questions about stuff, and have it respond at least at the level of a teenage human, is simply never going to happen unless the robot is genuinely conscious like a teenage human. You can accept my authority as a professional AI programmer when I say that.

I was considering AI self programming and learning from its environment and past results.

Ok, well there you go -- at a sufficient complexity level, that kind of AI will implicitly be conscious.

As soon as the things it "learns" about include "itself," it is conscious on some level. Whether or not it is conscious like a human then depends on what exactly its internal representations of itself are like, how those representations are used, and what sorts of sensory input it has access to.

This needs looking at more closely I feel. So if one assembles a system which performs a certain set of computations, it would necessarily be conscious provided the computational requirements for consciousness are met?

Yes.

Do we know what these requirements are?

We have a pretty good idea. I will make a dedicated post soon to answer this in more detail.

I am of the opinion that the earliest forms of life are conscious of their environment. To keep it simple, I am saying that consciousness began with the first single cellular organism.

There isn't anything wrong with that position, since even the first single cellular organism was vastly different than anything else in the universe ( due to how it used a sequence of computations to "live" ). Don't listen to anyone that tries to tell you cells are just like any other system of molecules, it is absolute rubbish.

Yes, this brings up two issues, the first of which was pointed out by piggy. Namely that with such a broad definition of computation, a hurricane or a waterfall would be conscious, as a result of exhibiting computation of some form.

Don't worry about what piggy says, it isn't relevant. I never said all computation leads to consciousness. It is only a sequence of computations that satisfy a very strict set of constraints that lead to consciousness ( or rather, are conscious ).

A hurricane can be said to perform maybe a few basic computations, since it does settle into an attractor state if the temperature and humidity are above the threshold required to keep its thermodynamic engine going. But that's all -- a few computations. That is orders of magnitude less than what you need for even the most basic idea of consciousness, even PixyMisa's SRIP definition requires more than 10 computations ( since a transistor is roughly one computation ), he has said so himself.

And it isn't clear to me that a waterfall performs any computation at all. What are the attractor states of the waterfall? But even if it did, it wouldn't be anything near a sequence that would lead to consciousness.

Secondly, which I mentioned in the previous thread, there may be something about matter (some property) which results in consciousness when the correct combination of chemical reactions take place. Surely this property of matter would be a requirement for your position. If matter did not exhibit this property any consciousness may well be impossible.

We do not know what this property is, or distinguish a form of matter with the property from one without the property. Therefore we cannot predict that electrically mimicking those chemical reactions would result in the same effects, ie consciousness.

Well I completely agree -- the property required is the ability to settle into stable or meta-stable attractor states. It's called computation. Anything that has that property can be used to build something that is conscious. But remember, it isn't merely computation that leads to consciousness, it is a sequence of computations that satisfy a strict set of constraints.

The position that there might be something essential to the chemical makeup of neurons isn't logically consistent because there are many cases where neurons are alive yet the brain is not conscious at all. The fundamentals of consciousness must be related to the difference between a living firing neuron in a non-conscious brain and a living firing neuron in a conscious brain. That difference, as far as anyone can tell, is merely the coordinated causal sequence of firings. Random firings lead to nothing, organized sequences of firings lead to consciousness. This strongly suggests that if you have any other medium that can support such causal sequences you have a medium that also supports consciousness.

Yes, my position is that life has manipulated matter or systems of matter in the distant past or outside our timespace system and that the property mentioned in my previous paragraph was introduced or manipulated by life in the first place.

Life does one fundamental thing that no other system does, and it is easy to understand if you think hard on it:

The computations in Life result in an increased likelihood of the system remaining in a configuration where the same class of computations can be repeated. Thus "Life" is really just a class of configurations of a system that behave in a way that statistically prolongs the existence of that class of configurations. It isn't magical ( well it is, but not like people think of magic ), it isn't special ( well, it is, but not like people think of special ), and it certainly isn't divine. In fact, it is sort of inevitable -- if you have a universe where systems with stable or meta-stable attractor states exist, and those systems can interact with each other, given enough time those systems will configure themselves to form a greater system that does what life does. It is nothing but statistics.

Life is insanely good at it, and it is really easy to prove: take an average cell and stop the processes of life within it. It decomposes into chemical mush almost instantly. Now look at yourself -- each cell in your body has been in a similar configuration for billions of years. Find me anything in the universe -- anything -- that is as fragile in its environment as a living cell yet has survived in the same configuration that long.

However, this does take a non-trivial amount of coordinated computation. I haven't bothered to estimate the number of coordinated computations that take place in a living cell, even a prokaryotic one, but it is certainly in the thousands. Far more than what takes place in any non-living system ( excluding the systems life has created, of course, for example computers ).

PS -- to head off any irrelevant responses from the peanut gallery, I should add that a hurricane is not alive, either. The computations carried out by a hurricane lead to nothing. It doesn't change its behavior as a result of any of them. If the hurricane runs out of moisture, and starts to die, that itself is the computation -- some other system could use the hurricane to determine "there isn't enough moisture here" but the hurricane itself makes no use of it. Living cells, on the other hand, change their behavior as a result of the computations that take place inside them.

Belz...
4th July 2012, 02:51 PM
Excellent post, Dodger. Can you give me a short definition of what you mean by "attractor" ?

PixyMisa
4th July 2012, 08:56 PM
:rolleyes:

A radiator is any system that performs cooling.
The brain is a system that performs cooling.
Therefore the brain is a radiator.
The brain is indeed a radiator.

Statements in the form of "X is Y" are not valuable assertions. What we need are statements of the form "X does Y in these circumstances but not those".
Agreed; precision is good, and X does Y in general is more precise than X is Y.

But I was making a narrow point: The assertion that the brain is not a computer is simply false.

PixyMisa
4th July 2012, 08:57 PM
Ah that pesky wasp. Can you link to Dennett talking about it?
I'll see what I can find.

It's a great example of the differences in behaviour between conscious and unconscious systems; that's why it keeps popping up.

dlorde
5th July 2012, 04:57 AM
The brain is indeed a radiator.
Quite. In general, anything that is warmer than its environment is a radiator.

rocketdodger
5th July 2012, 07:02 AM
Excellent post, Dodger. Can you give me a short definition of what you mean by "attractor" ?

From http://en.wikipedia.org/wiki/Attractor:

An attractor is a set towards which a variable, moving according to the dictates of a dynamical system, evolves over time. That is, points that get close enough to the attractor remain close even if slightly disturbed.

The important thing about the concept of an "attractor" is that it lets us objectively define certain classes of states in a system. Meaning, if a system is in any state that is within the "attraction" space of an attractor, it *will* converge back to the attractor if left alone.

The "objectivity" is important because since these classes of states ( I call them "meta-states" ) are objectively defined, any other system in the universe that is suitably configured is capable of recognizing whether a system is in such a meta-state. In other words, it doesn't take a human to recognize that a rock is in the "solid" meta-state because the attractor for every state in that meta-state is the same -- something like a solid, crystalline rock. And it doesn't take a human to recognize when it is in a "liquid" meta-state because the attractor for that meta-state is different from the one for the "solid" meta-state. Those are objectively defined meta-states.

To be specific, when I say "recognize" I really mean the behavior of changing from one attractor to another somewhere in the recognizing system as well. For example, if a rock melted, going from one attractor to another, and there was some external system interacting with the rock, and inside that other system there was also a change from one attractor to another due to the change in rock meta-states, that other system "recognized" the melting of the rock.

And that, in a nutshell, is how you start with stardust and eventually get life.
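Attractors and basins are easy to demonstrate concretely. This is my own toy system, not rocketdodger's: gradient descent on the double-well potential V(x) = x⁴/4 − x²/2, which has two attractors, x = −1 and x = +1. Every starting point on one side of zero converges to that side's attractor — two states in the same basin share a "meta-state" in rocketdodger's sense.

```python
# A double-well system with two attractors at x = -1 and x = +1.
# V(x) = x**4/4 - x**2/2, so dV/dx = x**3 - x; gradient descent
# converges to whichever attractor's basin the start point lies in.
def settle(x, rate=0.1, steps=1000):
    for _ in range(steps):
        x -= rate * (x**3 - x)
    return x

a = settle(0.2)    # same basin as b, despite a different start
b = settle(1.7)
c = settle(-0.4)   # the other basin
print(round(a, 3), round(b, 3), round(c, 3))  # 1.0 1.0 -1.0
```

An external "recognizer" needs only to check the sign of the settled value to classify the meta-state — no human interpretation required, which is the objectivity point being made above.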

Belz...
5th July 2012, 08:44 AM
That's a bit too technical for my small brain. :(

Fudbucker
5th July 2012, 09:00 AM
The brain is indeed a radiator.

This is wrong. The use of "is" here makes an equivalence claim that you are forced into making (more on that in a second).

The brain is clearly NOT a radiator any more than the brain is a flash drive or a calculator.

As Bodhi points out, just because two things share a commonality does not mean you can make an equivalence claim about them. Dogs also have a mechanism for getting rid of heat, but I don't think anyone's ever told their neighbor to keep his radiator from crapping on the lawn. There's so much more to a dog than "radiating heat", just as there's so much more to a brain than "radiating heat".

Agreed; precision is good, and X does Y in general is more precise than X is Y.

The precise way of saying what you want to say is "The brain IS LIKE a radiator". This is, of course, true and avoids an absurd equivalence claim. The reason I don't think you said this from the start is it implies that the brain is also NOT like a radiator. This is also true. Radiators don't think, feel, store memories, etc.

You are stuck having to claim the brain IS a radiator because your computationalist philosophy depends on an earlier equivalence claim you made: "the brain IS a computer". If the brain is only LIKE a computer, then that raises the obvious question: "How is it NOT like a computer?" Which leads to "Are those non-computer attributes important to consciousness?". That is a direction, from lurking in this thread, I can tell you don't want to go. It's a shame too, because as Piggy pointed out, that's where the current science is. The people who are doing the grunt work on the brain are very hesitant to make any strong claims about consciousness. Have you asked yourself why that might be, Pixy?

But I was making a narrow point: The assertion that the brain is not a computer is simply false.

Not at all. Can computers think? Feel? Are they conscious? Can they appreciate a bottle of two-buck Chuck? Get mad at you and cause the hard drive to seize up? No to all of those.

So no, it's not "simply false" the brain isn't a computer. It's actually "simply true". Brains are LIKE computers, but they are NOT computers.

Also, hello to the forum!

Bodhi Dharma Zen
5th July 2012, 09:25 AM
The brain is indeed a radiator.

Agreed; precision is good, and X does Y in general is more precise than X is Y.

But I was making a narrow point: The assertion that the brain is not a computer is simply false.


And you still fail to see the point. If it is equally correct to assert that the brain is "a radiator" as it is to claim that it is "a computer", then it is neither of those.

In other words, for the convenience of a particular description we can say "X does Y under Z circumstances", but not claim an identity. Claiming "X is Y" is not just narrow, it is also obtuse.

Bodhi Dharma Zen
5th July 2012, 09:27 AM
This is wrong. The use of "is" here makes an equivalence claim that you are forced into making (more on that in a second).

The brain is clearly NOT a radiator any more than the brain is a flash drive or a calculator.

As Bodhi points out, just because two things share a commonality does not mean you can make an equivalence claim about them. Dogs also have a mechanism for getting rid of heat, but I don't think anyone's ever told their neighbor to keep his radiator from crapping on the lawn. There's so much more to a dog than "radiating heat", just as there's so much more to a brain than "radiating heat".



The precise way of saying what you want to say is "The brain IS LIKE a radiator". This is, of course, true and avoids an absurd equivalence claim. The reason I don't think you said this from the start is it implies that the brain is also NOT like a radiator. This is also true. Radiators don't think, feel, store memories, etc.

You are stuck having to claim the brain IS a radiator because your computationalist philosophy depends on an earlier equivalence claim you made: "the brain IS a computer". If the brain is only LIKE a computer, then that raises the obvious question: "How is it NOT like a computer?" Which leads to "Are those non-computer attributes important to consciousness?". That is a direction, from lurking in this thread, I can tell you don't want to go. It's a shame too, because as Piggy pointed out, that's where the current science is. The people who are doing the grunt work on the brain are very hesitant to make any strong claims about consciousness. Have you asked yourself why that might be, Pixy?



Not at all. Can computers think? Feel? Are they conscious? Can they appreciate a bottle of two-buck Chuck? Get mad at you and cause the hard drive to seize up? No to all of those.

So no, it's not "simply false" the brain isn't a computer. It's actually "simply true". Brains are LIKE computers, but they are NOT computers.

Also, hello to the forum!

Excellent post, and welcome to the forum!! :)

PixyMisa
5th July 2012, 11:10 AM
This is wrong. The use of "is" here makes an equivalence claim that you are forced into making (more on that in a second).
Bzzzt! Sorry, but thank you for playing!

A horse is a mammal.
A horse is a quadruped.
A horse is a domestic animal.

All factual statements, but mammal, quadruped, and domestic animal, while they are overlapping sets, are merely that.

The precise way of saying what you want to say is "The brain IS LIKE a radiator".
No. That is wrong.

The precise way of saying what I want to say is the brain radiates.

You are stuck having to claim the brain IS a radiator because your computationalist phlilosophy depends on an earlier equivalence claim you made: "the brain IS a computer".
The brain is a computer.

Not at all. Can computers think?
Of course.

Feel?
Certainly.

Are they conscious?
Many are.

Can they appreciate a bottle of two-buck Chuck?
Only poetically, unless you're talking about a gas spectrometer.

Get mad at you and cause the hard drive to seize up?
They can try, but they can cause it only to the degree that you can cause your legs to fall off.

Oh, and welcome! You're wrong on every single point, but welcome anyway!

PixyMisa
5th July 2012, 11:12 AM
And you still fail to see the point. If it is equally correct to assert that the brain is "a radiator" as it is to claim that it is "a computer", then it is neither of those.
Ah, another lost soul who needs to read The Relativity of Wrong (http://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm).

Belz...
5th July 2012, 11:16 AM
And you still fail to see the point. If it is equally correct to assert that the brain is "a radiator" as it is to claim that it is "a computer", then it is neither of those.

That's odd, because the iPhone is a GPS, a phone, a personal computer, and a host of other things, without being "neither" of them.

Bodhi Dharma Zen
5th July 2012, 11:19 AM
Ah, another lost soul who needs to read The Relativity of Wrong (http://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm).

Funny, you talk about souls. Even funnier that you now appeal to relativity of knowledge while (if I remember correctly) you used to be a naive materialist. Oh wait, you still are.

Bodhi Dharma Zen
5th July 2012, 11:20 AM
That's odd, because the iPhone is a GPS, a phone, a personal computer, and a host of other things without being neither of them.

The iPhone can perform such functions, without it being anything else but an iPhone. What's strange about that?

Bodhi Dharma Zen
5th July 2012, 11:52 AM
Ah, another lost soul who needs to read The Relativity of Wrong (http://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm).

Oh, and BTW, as much as I love Asimov, his views are not necessarily accurate. For instance, his conclusion:

"Naturally, the theories we now have might be considered wrong in the simplistic sense of my English Lit correspondent, but in a much truer and subtler sense, they need only be considered incomplete."

It is nonsense. There is no such thing as "a universal objective truth" as he appears to claim, and so knowledge is neither complete nor incomplete. Our claims either represent facts or they do not. Objective ontological claims are one of the mistakes of naive materialists.

Oh, and please do not resort again to appeal to authority; I believe you are fairly aware of logical fallacies. Argue with your own tools, can't you?

rocketdodger
5th July 2012, 12:07 PM
That's a bit too technical for my small brain. :(

Ok here is a simple example: Suppose you have a marble and a flat pan with a dent in it. Call this system S.

There are ( effectively ) an infinite number of states that this system can be in -- the marble can be positioned at an infinite number of translations and rotations somewhere on the surface of the pan.

However, a certain subset of that infinite set of states is special -- in each of these states, the marble's translation is exactly the same -- located smack in the middle of the dent. This subset is also infinite, because the marble can still take on any rotation, but it is a smaller "infinity" than the set of all states ( that doesn't really matter, I'm just pointing it out ).

That special subset is called an "attractor." It is called this because if the marble is anywhere "near" the dent, or already "in" the dent at all, it will roll right into the middle of the dent once left alone. And certainly if it starts smack in the middle of the dent, any small change will not break it free of the "attraction" of the dent -- it will just roll right back in.

Thus there is another special subset of states -- all the potential states where the marble is close enough to the attractor to be "attracted." If the system is in one of these states, we know it will eventually go back to one of the states of the attractor subset. Meaning, we know if the marble is close to the dent, it will roll into the dent, assuming it is left alone.

That set of states -- when the marble is close enough to the dent to be pulled into it by gravity -- is objectively different from all the other states the system might be in, because they all share the same attractor. The marble *will* roll into the dent if it is near it. It won't magically roll out, it won't magically jump from one dent to another, etc.

To make it even more simple, think of it this way:

< marble is near dent > ----> given enough time -----> < marble is settled in dent >

contrast this with

< marble is not near dent > -----> given enough time -----> < marble is not near dent >
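The two cases diagrammed above can be sketched directly. The force law and basin radius here are invented for illustration, but the behaviour is the one described: near the dent the marble rolls in; on the flat pan it stays put.

```python
def dent_force(x, dent_center=0.0, basin_radius=1.0):
    # Inside the basin the marble is pulled toward the dent centre;
    # on the flat part of the pan there is no net force.
    d = x - dent_center
    return -d if abs(d) < basin_radius else 0.0

def settle(x, dt=0.1, steps=500):
    # Let the system evolve "given enough time".
    for _ in range(steps):
        x = x + dt * dent_force(x)
    return x

print(round(settle(0.8), 3))  # 0.0  -> marble near the dent rolls in
print(settle(5.0))            # 5.0  -> marble far from the dent stays put
```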

Belz...
5th July 2012, 02:22 PM
The iPhone can perform such functions, without it being anything else but an iPhone. What's strange about that?

Nice try. The iPhone IS a GPS. It doesn't act like one. It IS one. It IS also a phone. That seems to negate the argument that the brain cannot BE a computer and BE a radiator at the same time.

Belz...
5th July 2012, 02:23 PM
Ah, ok that's much clearer, now, Dodger. Thanks for taking some of your time. :)

Bodhi Dharma Zen
5th July 2012, 03:45 PM
Nice try. The iPhone IS a GPS. It doesn't act like one. It IS one. It IS also a phone. That seems to negate the argument that the brain cannot BE a computer and BE a radiator at the same time.


Err... No. The iPhone includes a GPS unit.

rocketdodger
5th July 2012, 03:59 PM
Err... No. The iPhone includes a GPS unit.

To be specific, an iPhone is a programmable hardware device that includes a sensor for locating GPS satellites and is capable of running software that can make use of that input.

However, so is any other GPS device. There is no such thing as a GPS device that can do nothing but GPS -- they are all programmable as well, and they all merely run software that makes use of their GPS sensor's input.

So if we call one of those old-school GPS locator thingies that people had to lug around 10 years ago a "GPS unit", then so might we call an iPhone a "GPS unit".

Fudbucker
5th July 2012, 04:37 PM
Bzzzt! Sorry, but thank you for playing!

A horse is a mammal.
A horse is a quadruped.
A horse is a domestic animal.


Again, you are making equivalency errors. If you want to be precise, you should say the following:

Bzzzt! Sorry, but thank you for playing!

A horse is a KIND of mammal.
A horse is a KIND of quadruped.
A horse is a KIND of domestic animal.


Otherwise you run into the problem Bodhi demonstrated:

Your mom is a mammal.
A dog is a mammal.

Your mom is a dog???

Stop playing fast and loose with language.


All factual statements, but mammal, quadruped, and domestic animal, while they are overlapping sets, are merely that.

Factually, a horse is Equus ferus caballus. Do you think a horse is a radiator?

More to the point, when you claim the Brain IS a radiator, you are not using precise language. So let's introduce some precision. Is your claim the following:

For any X, if X is a brain, X is a radiator?

How about

For any X, if X is a radiator X is a brain?

I'm guessing you'll say "yes" to the first and "no" to the second. The reason the second doesn't follow from the first is because brains are LIKE radiators (that is, they belong to at least one overlapping set: things that shed heat). Brains are not EQUIVALENT to radiators. If they were, you would say "yes" to both.

This is why your claim that "brains are computers" falls apart:

For any X, if X is a brain, X is a computer.

You are claiming an equivalence between brains and computers. But if that were true, we should be able to make the following claim:

For any X, if X is a computer, X is a...brain? :boggled:

You see the problem now? Brains and computers are not equivalent.

Brains also belong to many overlapping sets (can think, is made of neurons, is mostly water, etc.) Know what else is mostly water? A diet coke. So a brain is a diet coke? :boggled:

No, a brain is a brain. That's why we have a name for it. It's not a computer, not a radiator, not a hard drive.


No. That is wrong.

The precise way of saying what I want to say is the brain radiates.

And a car engine vibrates when it runs. Is a car engine a vibrator? :rolleyes:


The brain is a computer.

You continually assert this, but have yet to back it up with anything, or respond to the obvious point that brains do things computers can't even hope to accomplish. I'd like to see the science behind this. Do any neurologists agree with this? If so, who? Let's have some citations. Your opponent in this debate has cited many authorities. I don't remember you citing anything except some wiki articles. So step up to the plate!


Of course.


Certainly.


Many are.

The motherlode! Rather than modify your claim that "brains are computers", you go down the path of reductio ad absurdum: computers can think, feel, and are conscious! I'm curious: who won the Nobel for creating the first thinking feeling conscious computer? Because I missed that particular one. Or, far more likely, you are wedded to a pet theory to such an extreme that rather than shy away from the absurd conclusions of said theory, you embrace them with the zeal of a religious fanatic.

What, things evolve over time?? Certainly not! Take this banana, which has been perfectly designed for us to eat!

Well, played Kirk- I mean Pixy ;)

Seriously, does anyone else think that any of the current computers we have now are thinking feeling conscious machines?

Anyone at all?

Modified
5th July 2012, 05:49 PM
Stop playing fast and loose with language.

Is English your native language? I don't think you know what "is a" means. It does not mean "is equivalent to", it means "is an instance of" or "is a type of". A dog is a mammal. A mammal is an animal. This is the way "is a" is used in both logic and formal CS lingo, as well as in normal conversation.
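Modified's reading of "is a" as "is an instance of" can be sketched with Python's own type system (the class names here are just for illustration):

```python
class Mammal: pass
class Quadruped: pass

# A horse IS a mammal and IS a quadruped -- membership, not equivalence.
class Horse(Mammal, Quadruped): pass

h = Horse()
print(isinstance(h, Mammal))      # True: a horse is a mammal
print(isinstance(h, Quadruped))   # True: a horse is a quadruped
# The relation is not symmetric -- a mammal need not be a horse:
print(issubclass(Mammal, Horse))  # False
```

This is why "your mom is a mammal" plus "a dog is a mammal" does not yield "your mom is a dog": two members of the same set are not thereby equivalent to each other.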

Belz...
5th July 2012, 05:49 PM
Err... No. The iPhone includes a GPS unit.

The brain includes a radiator and a computer. :rolleyes:

Belz...
5th July 2012, 05:51 PM
Otherwise you run into the problem Bodhi demonstrated:

Your mom is a mammal.
A dog is a mammal.

Your mom is a dog???

Stop playing fast and loose with language.

Says the guy who plays fast and loose with language. Wow, two posts in and you're already into logical fallacies.

You know full well that "the brain is a computer" meant "the brain is part of the set of computers".

Piggy
5th July 2012, 05:58 PM
If you don't mind my saying, this is where you start off your response like every other post you make -- explicitly declaring everyone else to be wrong.

We disagree. We're talking about the things we disagree on. But I'm not just "declaring" you wrong, I'm explaining why that's so.

So you are saying imagination plays no part in consciousness. Kind of a non-starter, since it is such an absurd proposition...

That's not at all what I said.

What I said, quite clearly, was this: Imagination operates in our brains independently of consciousness. In fact, we now know that our brains are unconsciously imagining the next moment all the time.

We can consciously imagine -- which looks a little different neurally, just as conscious and non-conscious attention look a little different neurally -- but just because a brain (or machine) is imagining doesn't mean it is conscious.

So if you've designed a machine to imagine, that's important, but unless you've also designed it to be conscious, the experiment has nothing directly to do with the study of consciousness.

Again -- you are claiming that this implies those circuits have nothing to do with consciousness ...

Nothing has to be implied. We can observe brains while this is going on, and we can talk to those brains to get information about the experience, and we can watch how the body responds to everything that's happening.

So we know that imagination acts independently of consciousness, and that consciousness is a process that sometimes is involved with imagination and sometimes not.

Which stands to reason, because if you experienced your brain's predictions about what's going to happen in the next moment, the world would look very weird.

But the upshot is that re-creating imagination in a machine doesn't have anything to do with consciousness unless you've also designed the machine to be conscious and you can show that the two processes are synched up in your machine.

You know what is funny, in my last post I had actually made a sarcastic comment about how you will run back to some vague "evolution selecting for consciousness" argument the first chance you get, but I edited it out to be polite.

So you think that is odd that evolution has produced animals whose brains constantly imagine the next moment, but fail to make that process available to the animal's conscious experience?

You didn't even read the paper, and you wouldn't understand it if you did, so don't patronize me.

First of all, I'm not patronizing you. It really is cool, I wasn't being sarcastic.

But unless your description is wrong, I don't need to read the paper to know that getting a machine to imagine isn't the same as getting it to be conscious. We know this because we've observed objects we know to be conscious (human brains) performing that task and it doesn't require consciousness.

Of course maybe I'm wrong, maybe there's something about that in there. If there is, point me to it, please.

I think you misunderstand my intentions. I didn't expect you to change your position, or even think of changing your position, because I know you don't do that. In fact I expected you to just write off this real research as somehow irrelevant in your quest for the magic bean. Thanks for being so predictable !

The reason I posted that was so any observers would see that you are consistently dishonest when it comes to your representation of the opposition, and furthermore to show that there is real research that illustrates that the majority of your arguments don't hold much water.

I'm not really interested in a long discussion because I would rather read *actual research* than get bogged down in a typical piggy-esque merry-go-round of walls-of-text that, once fully parsed, contain zero new information for the reader. I also don't really care to be told, after I spend the time understanding some research that some very smart person has done, that "I don't understand why it is irrelevant." Hey, piggy, I'm the AI programmer, I'm the one that actually read the paper, I get to decide whether it is relevant or not.

I'm sorry, but research on non-conscious machines is not research on consciousness.

Until and unless we figure out what the brain's doing when it performs consciousness, we can't even imagine designing conscious machines to study, much less building them.

Yes, you're the AI programmer. But you are not a cognitive neurobiologist. And intelligence is not the same thing as consciousness.

When you make a claim that getting a machine to do something is relevant to studies of consciousness, I'll take the word of the neurobiologists regarding whether that particular task requires consciousness or not.

Bodhi Dharma Zen
5th July 2012, 06:17 PM
The brain includes a radiator and a computer. :rolleyes:

Hilarious!!!!!!!!!!!!!!!!!!!!!


wait... you are not joking.

Piggy
5th July 2012, 06:21 PM
So the Blue Brain Project, for example, which started out with modelling a rat neocortical column at the molecular level and intends to continue scaling up until it models an entire brain (still in molecular detail). To quote project lead Henry Markram, "If we build it correctly it should speak and have an intelligence and behave very much as a human does." Which quote does nothing to establish my position, but does point out that researchers in the field believe that this sort of simulation will produce a conscious mind. (That's not Markram's primary reason for studying the brain, but it's his position.)

Thanks for the video post.

My understanding is that this is not his position, actually, although I can't swear to that.

Gazzaniga discusses Blue Brain in "Human", and he says this (italics from original):

Markram and his institute, collaborating with IBM and their Blue Gene/L supercomputer, have now taken on the task of reverse engineering the mammalian brain. This project has been dubbed the Blue Brain Project, and it rivals the human genome project in its complexity. To begin with, they are creating a 3-D replica of a rat brain with the intention of eventually being able to create one of a human brain. "The aims of this ambitious initiative are to simulate the brains of mammals with a high degree of biological accuracy and, ultimately, to study the steps involved in the emergence of biological intelligence." It is not an attempt to create a brain or artificial intelligence, but an attempt to represent the biological system. From this, insights about intelligence and even consciousness may be drawn.

It's interesting that your quote from Markram, which is probably later, expresses the opinion that you would have "intelligence" from such a machine, as well as behavior.

I would agree with him here against Gazzaniga... but then again, I can't be sure how Gazzaniga was using the word "intelligence".

I'd have to see a direct quote from Markram, tho, to accept that he would posit that the simulator would become conscious.

After all, we can imagine all sorts of machines (biological or synthetic) which could respond in adaptive ways to the sky without producing blue... or any other type of experience.

Piggy
5th July 2012, 06:23 PM
About 12 minutes into that talk, Markram covers the point I was stressing earlier: That computational neuroscience is about building computational models at varying levels of granularity and testing them against each other, against real behaviour and against real biology.

Simulating a human brain at the molecular level will be enormously expensive, but it will allow us to derive more abstract models that accurately represent particular functions - to test and confirm what behaviours can be abstracted out without affecting the resulting behaviour.

And this is where you and I firmly agree.

It's nice that there are some points like that.

I Ratant
5th July 2012, 06:27 PM
You can't build a computer out of nothing but rope, so Piggy's objection there is nonsensical.

But timing is syntax. Or if that's too hard to swallow all at once, timing can be expressed as syntax. So that objection is also null.
.
Build a computer out of rope.
The nautical term "knots" comes from timing the movement of a ship relative to knots on a rope. The knots/rope are the computer.
Anything pretty much can be a computer.
Need to measure a distance now, but lack any scale or ruler?
Use a dollar bill, then at your leisure convert bill-lengths into the usual units of length.

I Ratant
5th July 2012, 06:30 PM
heh I love that STFU acronym...
.
I posted that as "stfu,a" on The Big Bang Theory forum, describing how normal people would react to a long-term exposure to Sheldon.

Piggy
5th July 2012, 06:33 PM
I get a whiff of potential circular logic there, but I think it's just wrong. Maybe computer science is not really science.

Is pixy's central claim that the brain is a computer?

I love it when tasks that are notoriously difficult for computers are cracked by computers simulating neural networks.

Please don't suggest that not knowing how to do something means it can't be done.

I just read How Many Computers to Identify a Cat? 16,000 (http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?pagewanted=1&_r=2&ref=technology). Thanks for the link, guys!

One cool thing about computers is that any type of universal computer can simulate/emulate any other type. For example, an Apple can emulate a PC and vice versa, a PC can emulate a Nintendo, etc. Indeed, general purpose computers can emulate special purpose computers without difficulty.

Would someone explain again why a computer couldn't emulate a brain? Is it just because, if one could, we'd feel less special?

I wasn't sure exactly what Belz meant by "PixyMisa's central claim", so I started with the claim that it was widely accepted that the brain is a computer.

I've been frustrated with that claim because I've never gotten a satisfactory answer to the question "What exactly does that mean?"

I accept that it's a computer in the Wolfram sense, but in that case so are oceans and trees and stars.

But I haven't seen a definition yet that disallows everything like that, but includes my brain and my PC.

And even if computers are brains -- the inverse claim -- what would it mean, for example, to say that your leg is a prosthetic? So I want to know precisely what this means and what the implications are for consciousness.

But I can't cite any research demonstrating that the brain is not a computer, for the same reason I can't cite any research demonstrating that the brain is not a rocking chair.

Nobody in neurobiology is trying to disprove the claim that the brain is a computer.

And regarding those tough tasks cracked by computers... do you think it makes them more functionally similar to brains if they are very good at tasks which brains are not that good at?

Regarding a computer emulating a brain, why not ask me why a computer couldn't emulate my truck and get me to work every morning?
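The emulation claim quoted within this post -- that any universal computer can emulate any other -- is standard computability theory, and can be illustrated with a toy interpreter. The three-instruction language below is invented purely for illustration:

```python
def run(program, x):
    # A general-purpose interpreter: it executes ANY program written in
    # this (invented) three-instruction language, so it can emulate any
    # special-purpose machine expressible in that language.
    acc = 0
    for op, arg in program:
        if op == "LOAD":
            acc = x if arg == "input" else arg
        elif op == "ADD":
            acc += x if arg == "input" else arg
        elif op == "HALT":
            break
    return acc

# A "special-purpose" doubler machine, expressed as data that the
# general-purpose interpreter can emulate:
doubler = [("LOAD", "input"), ("ADD", "input"), ("HALT", None)]
print(run(doubler, 21))  # 42
```

Whether such emulation bears on consciousness is, of course, exactly the point in dispute in this thread.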

Piggy
5th July 2012, 06:38 PM
A computer is any system that performs computation.
The brain is a system that performs computation.
Therefore the brain is a computer.

Well, this brings us back to Wolfram.

The entire universe performs computations, and so does everything inside it.

Everything moves from state to state according to the laws of physics, so you have rule-governed transformations between states.

The brain is a physical object which obeys the laws of physics, so it's performing computations all the time, and these are the computations that make the difference between, say, recognizing that it's not dark and recognizing the moon.

If you want to say, however, that the lump of matter in your skull is performing symbolic calculations -- that is, that the actual nerve tissue is doing this -- that's a whole nother ball of wax.

The difference between the type of computer which my PC is, and the type of computer which a brain or a spleen or a star is, is that in order for the PC to be an "information processor" a user/programmer must also be part of the system.

This is not true of the brain, so it's in the other category, as far as is known.
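The "rule-governed transformations between states" mentioned above are exactly what a cellular automaton makes explicit. A minimal sketch of Wolfram's Rule 110 (a system known to be computation-universal), using a ring of cells:

```python
# Rule 110: each new cell is a fixed function of its old 3-cell neighbourhood.
RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    # One rule-governed transformation from the current state to the next,
    # with wrap-around neighbourhoods (the cells form a ring).
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

print(step([0] * 10 + [1]))  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
```

Such a system is a "computer" in the Wolfram sense without any user or programmer in the loop, which is the distinction being drawn here.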

I Ratant
5th July 2012, 06:41 PM
Nice try. The iPhone IS a GPS. It doesn't act like one. It IS one. It IS also a phone. That seems to negate the argument that the brain cannot BE a computer and BE a radiator at the same time.
.
It's a hammer.
A door-stop.
Not much of a flotation device.. (voids the warranty)
It isn't -only- any one of these, and having those uses doesn't make something else an iPhone.
Like a shoe. It can be a hammer and a door stop. And a toy.
The brain radiates.
But that's not all it does (for most people).

Piggy
5th July 2012, 06:51 PM
I don't think introspection is the right word - in my experience introspection refers to redirecting the focus of attention from the sensorium to the internal model of self, i.e. it's a higher level of abstraction, consciousness reflecting on itself.

The self-referencing that I see as the basis of consciousness involves feedback - feeding back the evaluated results of behavioural responses to environmental stimuli into the process that selects or generates behavioural responses. This permits adjustment of behavioural response based on the results of previous behaviour, leading to behavioural flexibility. At a minimally conscious level, this might just be a positive or negative association between environmental stimulus and behavioural response that biases subsequent behaviour selection. At a higher level of complexity, an accumulated history of such information can be used as an internal model of the organism's interaction with the environment, allowing the development of an ongoing sense of self, predictive 'what if' modeling, and at the highest levels, theory of mind and introspective self-awareness, self-referencing at a higher level of abstraction.

It's the amount and detail of the information accumulated and the sophistication of the evaluations involved that determine the level of consciousness, but it's all based on self-referential feedback.

I suppose the fly in all this ointment is the definition of consciousness itself. You may agree or disagree with some or all of the model I outlined, but if we each mean something different by consciousness, we're going to talk past each other anyway.

I agree with that last point.

But maybe we can use your example there to tease that out.

Let's look at this bit:

The self-referencing that I see as the basis of consciousness involves feedback - feeding back the evaluated results of behavioural responses to environmental stimuli into the process that selects or generates behavioural responses. This permits adjustment of behavioural response based on the results of previous behaviour, leading to behavioural flexibility.

Essentially, we're talking about responding and learning here.

We now know, from studies on subliminal learning, that our brain cells are capable of reshaping themselves into new behavioral patterns in response to interaction with the world, without bothering to involve consciousness.

This conclusion is drawn from research such as studies on people who are exposed to series of images, including some that did not appear long enough to be consciously perceived, and yet they manage to alter their responses to the images in various ways by learning the associations, such as how to predict that a giraffe would come up next, for example, after a subliminal red square.

Consciousness is built on that, but at the end of the day, what defines consciousness is the generation of phenomena/behavior such as color, sound, texture, and the like.

All these things that are often called qualia are simply behaviors of one of our bodily organs.

We can't accept any substitutes. That's the message we have to take away from all the failed attempts to make consciousness equal anything else, whether that's perception, memory, attention, learning, or imagination.

Consciousness is what happens when we dream, and when we wake up, and it consists of brightness and color and sound and smell and hot and cold and pain and emotions.

We don't know how that happens yet.

We do know that it happens, though.

But we don't know how, and today we are even further than ever from knowing why.

We know consciousness runs on a time delay, so it can't be influencing responses in real time; I agree, then, that it must be involved in planning, at the least.

Piggy
5th July 2012, 06:54 PM
Having said that, timing and architecture are crucial to the effective function of many computer systems; whether we frame it as syntax or semantics, it seems to me that computational equivalence is the essential issue.

Well, I've always said that I don't see why only biological machines can be conscious.

In fact, I suspect that the answer will turn out to be simpler than we suspect -- that the answer will make the apparent chaos much less chaotic.

So when you say "computational equivalence", what exactly do you mean?

I agree, as long as we're using Wolfram's definition of "computation".

If you're using another, I think we might begin to differ in how we think about it.

Piggy
5th July 2012, 07:12 PM
Rocketdodger has already mentioned the square vs rectangle thing. When someone says the brain is a computer, he or she is not saying that it's a pentium, and you know it.

Yes, I do know that. Been quite clear about it, in fact.

To say "the brain is a computer" cannot mean "your brain is a PC". It can only mean that your brain and your PC are both computers.

I don't think there's any doubt about that.

The problem is, we've been asking for quite some time for a reasonable definition of "computer" which will include my brain and my PC, yet exclude stars and oceans and rocks.

Church-Turing doesn't provide it.

Wolfram doesn't provide it.

So what is it?

And yet they both do the same kind of work: computing.

Be specific, please.

If you mean they both change states according to a set of rules, well, everything in the universe does that.

If you mean they both allow information to be encoded by a programmer and decoded by a user, then no, they don't both do that.

No. See, if I want to say that helicopters are aircraft, I don't have to show you that they are identical to planes.

That's right. The "aircraft" category only requires the functional equivalence of being flying (air) machines (craft). Those 2 subcategories each have more narrow functional equivalences (if they have any... you might only be dealing with structural equivalences there).

Again, how ? They do the same kind of work.

Do they really?

Not in the world I live in.

In the world I live in, they do very different types of work, or when they do the same work they do it differently.

According to wiki, computation is a form of calculation, which is "a deliberate process for transforming one or more inputs into one or more results, with variable change." Personally I'd add that it has to include some form of abstraction.

Well, minus your addition, then the entire universe is a computer and so is everything in it.

And I'm not sure how your addition would function, so I can't comment on it.

We've been there before. It's like saying running is a process, and then some philosopher jerk comes in and points out that everything, down to elementary particles, is a process. We know that, but we can use the narrower definition for the purposes of talking about running, can't we ? I don't know why we can't do it with computing.

Wolfram (http://en.wikipedia.org/wiki/Stephen_Wolfram) is not a "philosopher jerk", btw.

And this point is actually crucial.

If you're going to say, as you do, that brains and PCs are both computers because they both compute, then you're going to have to use a definition other than Wolfram's, and other than the definition which applies to PCs as info-processors (because that demands a programmer/user in the system who understands the symbology).

And if you're going to do that, then you need to say what this definition is.

Piggy
5th July 2012, 07:22 PM
You can't build a computer out of nothing but rope, so Piggy's objection there is nonsensical.

But timing is syntax. Or if that's too hard to swallow all at once, timing can be expressed as syntax. So that objection is also null.

Rocket Dodger claimed, in the Robot Consciousness thread, I think, that you could build a conscious brain out of rope.

Dunno if he's reconsidered that position.

As for timing being expressed as syntax, that's fine, but for real-world physical processes like consciousness (or any other bodily function) you have to reproduce the timing in a sufficiently robust medium in order to make the same thing happen.

Piggy
5th July 2012, 07:26 PM
Makes sense to me, since it's what we usually mean with the word.

You can study self-awareness, and that's very interesting, but it's not a useful definition of consciousness itself.

At the moment, we have no reason to believe -- and every reason not to believe -- that self-awareness is necessary for the production of color, sound, pain, and so forth.

And indeed it does matter if you know which parts of the brain handle what, and when these areas developed in the process of evolution.

It's not an accident that core consciousness is handled in the physical core of the brain, an evolutionarily early area, and self-awareness involves real-estate that developed much later.

Piggy
5th July 2012, 07:29 PM
I'd like to do the bolded part. Saying that the mechanism is not known is fine, but not quite true because we do know quite a bit about it, even if we don't understand it completely.

Actually, that's not accurate.

We have no clue what the mechanism is, and currently no theory on which to base hypotheses.

We have learned quite a bit about the neural correlates due to neurobio research, but as Ned Block says, we have no way even to "fantasize" about why any given neural state is associated with any given conscious state, and not some other, or none.

The mechanism is indeed utterly mysterious, and it is important that we recognize that this is the current state of the science.

Fudbucker
5th July 2012, 07:32 PM
Says the guy who plays fast and loose with language. Wow 2 posts in and already you're into logical fallacies.

You know full well that "the brain is a computer" meant "the brain is part of the set of computers".

It's not clear at all. There are three ways (maybe more) "is" can be used:

Definitional:

Water IS H20.
They are one and the same. All instances of water are instances of H2O and vice-versa.

Property:

Water IS clear.
One of the properties of water is that it is clear.

Member of a set:

Water IS a liquid.
Water belongs to the set of all things that are liquid.

So, no it's not clear what "A brain is a computer" means. However, it can't be an equivalence claim. I think you agree with that. Computers aren't brains.

The only thing that makes even remote sense is "Brains are a kind of computer", like you suggested. So let's look at that claim.

First of all, what does "computer" even mean? Is an abacus a computer? What about an electronic calculator? Are my fingers a computer (I can count on them)? Is everything a computer? There's been no clear definition of "computer" that's been offered. If there's been one, I apologize for missing it and would like the post number (as long as it's more robust than "something that computes"!)

Second of all, what KIND of a computer is the brain? If I say a cow is a KIND of mammal, you can rightly ask "What kind of mammal?" and I can tell you: the kind with four legs, multiple stomachs, herbivore, etc. It's not clear what KIND of computer the brain is. Is it like an abacus, a Babbage machine, a Turing machine, a Dell tower?

Third, it's just wrong. A functioning brain really is nothing like a computer:
- is made of neurons
- utilizes trinary (cbcl.mit.edu/cbcl/news/files/liu-tp-picower.)
- communicates in digital AND ANALOGUE (sciencedaily.com/releases/2006/04/060412223937)
- is self aware
- is conscious
- can imagine
- can feel
- thinks
- no software/hardware distinction

And probably a bunch of other differences that are more sophisticated. Google has many results.

Saying a brain is a kind of computer is like saying a Cheetah is a kind of car because they both can go highway speeds.

Unless you think thinking feeling conscious machines already exist. Do you think that? Are you preparing for the Butlerian Jihad!!;)

Fudbucker
5th July 2012, 07:34 PM
The brain includes a radiator and a computer. :rolleyes:

How can the brain INCLUDE a computer if you think the brain is a KIND of computer? Does a Mustang include a car???

Piggy
5th July 2012, 07:44 PM
:confused:

1. An hallucination is a non-normative experience.
2. Everyday experience is a normative hallucination.

Therefore, everyday experience is a normative non-normative experience...

This really doesn't work for me.

Nor does this:

Well, let me put it this way...

We think of "hallucinations" as deviations from the accurate perception of reality.

The problem with this notion is that our everyday experience does not reproduce reality, so we must be wrong about hallucinations if we think of them that way.

I mentioned the case of your hypothetical hippie buddy who decides to take a small hit of LSD, so that he sees a yellow sky.

But suppose aliens examined your brains, and the sky, and inquired about your perceptions. Unless they coincidentally had brains which produced blue or yellow when they looked at the sky, and were therefore biased themselves (an unlikely scenario we'll simply ignore), they would have no way to determine which alternative was the norm for humans and which was a "hallucination".

That's what I mean when I say that our everyday experience is a "normative hallucination".

Other animals hallucinate differently.

What it means to be human, as opposed to a cat or dog or bird, is (at least in part) to hallucinate the particular way we do.

So onto the other part, "the brain produces colors and brightness"....

Colors, smells, pain, these exist nowhere but in the brain.

Where else could these things possibly exist?

Probably the most difficult reality for me to accept is that my brain produces brightness. When you walk into a blinding afternoon sun from a dark matinee, it sure feels like something from outside is piercing you like a spear.

But it's not, of course.

The excitation of neurons produces not only the pain, but also the brightness itself.

There is no other possible source for it.

Radical materialism is often harsh and counterintuitive in its conclusions. And the investigation into consciousness is not spared.

Piggy
5th July 2012, 07:48 PM
Also, hello to the forum!

And hello to you!

Nice to have you here.

PixyMisa
5th July 2012, 08:35 PM
Again, you are making equivalency errors.
No. That's just you.

Stop playing fast and loose with language.
I'm using her how she is spoke.

More to the point, when you claim the Brain IS a radiator, you are not using precise language. So let's introduce some precision.
Sure: The brain radiates. We can also express this by the factually and grammatically correct sentence "The brain is a radiator."

This is why your claim that "brains are computers" falls apart
No, that's just you.

Brains and computers are not equivalent.
Only if you don't know what a computer does.

You continually assert this, but have yet to back it up with anything, or respond to the obvious point that brains do things computers can't even hope to accomplish.
Evidence?

The motherlode! Rather than modify your claim that "brains are computers", you go down the path of reductio ad absurdum: computers can think, feel, and are conscious!
Computers can think and feel, and many of them are conscious.

I'm curious: who won the Nobel for creating the first thinking feeling conscious computer?
All computers, by definition, think. All computers with real-world sensors feel. All computers programmed using reflective techniques (that is, many of them, and any complex real-time system with proper monitoring and failsafes) are conscious.

No-one won a Nobel prize for this because (a) there's no relevant prize and (b) it's not at all remarkable.

It's not that they did something magical, it's that thinking, feeling, and consciousness aren't magical either.

PixyMisa
5th July 2012, 08:38 PM
.
Build a computer out of rope.
The nautical term "knots" comes from timing the movement of a ship relative to knots on a rope. The knots/rope are the computer.
Anything pretty much can be a computer.
Need to measure a distance now, but lack any scale or ruler?
Use a dollar bill, then at your leisure convert the dollars into the usual units of length.
The rope then is a ruler. You can consider the rope to be a component in a speed-calculating computer, but by itself the rope just lies there on the deck of the ship.

Fudbucker
5th July 2012, 08:48 PM
No. That's just you.


I'm using her the way she is spoke.


Sure: The brain radiates. We can also express this by the factually and grammatically correct sentence "The brain is a radiator."


No, that's just you.


Only if you don't know what a computer does.


Evidence?


Computers can think and feel, and many of them are conscious.


All computers, by definition, think. All computers with real-world sensors feel. All computers programmed using reflective techniques (that is, many of them, and any complex real-time system with proper monitoring and failsafes) are conscious.

No-one won a Nobel prize for this because (a) there's no such prize and (b) it's not at all remarkable.

It's not that they did something magical, it's that thinking, feeling, and consciousness aren't magical either.

I admire the force of your convictions, but I don't think you'll have much support.

Again I ask, does anyone think feeling thinking conscious machines already exist? This is a skeptics' forum, is it not? I can't imagine such an assertion going unchallenged.

If anyone thinks so, I would like to know which particular computers are conscious, what evidence they base such an assertion on, and links to experts who agree with such a position. So far, the only links posted haven't supported this claim.

PixyMisa
5th July 2012, 08:56 PM
If anyone thinks so, I would like to know which particular computers are conscious
Fallacy of division.

Edit: Sorry, my mistake. I misread Fudbucker's post.

Fudbucker
5th July 2012, 09:01 PM
Fallacy of division.

How is that a fallacy? You said "many of them [computers] are conscious". I'm simply asking which computers are conscious.

PixyMisa
5th July 2012, 09:03 PM
How is that a fallacy? You said "many of them [computers] are conscious". I'm simply asking which computers are conscious.
Whoops, sorry! My mistake, I somehow misread that as "components". That would be a fallacy of division... But that hardly matters, since it ain't what you wrote.

Modified
5th July 2012, 10:00 PM
It's not clear at all. There are three ways (maybe more) "is" can be used:

Definitional:

Water IS H20.
They are one and the same. All instances of water are instances of H2O and vice-versa.

Property:

Water IS clear.
One of the properties of water is that it is clear.

Member of a set:

Water IS a liquid.
Water belongs to the set of all things that are liquid.

So, no it's not clear what "A brain is a computer" means.


The first two are ways "is", not "is a", can be used.

Water is a H2O. - No

Water is a clear. - No

Water is a liquid. - Yes

Honestly, how can you have a conversation in English if you don't know what "is a" means?

Fudbucker
5th July 2012, 11:25 PM
The first two are ways "is", not "is a", can be used.

Water is a H2O. - No

Water is a clear. - No

Water is a liquid. - Yes

Honestly, how can you have a conversation in English if you don't know what "is a" means?

No:

A bachelor IS AN unmarried man.

That's a definitional claim.

In the same way, "a brain IS A computer" can be taken to mean two different things: equivalence, or a member of a set.

However, I think it's clear Pixy isn't making a definitional claim, but rather stating that a brain is a kind of computer. That raises the questions that have been posed by other posters (which Pixy, to give him credit, has answered):

He claims a brain is a member of the set of computers that include CONSCIOUS, THINKING, FEELING machines.

The question is, does such a set actually exist? Pixy has the burden of proof, and IMHO, I don't think he's met it, or even come close to demonstrating such computers exist.

PixyMisa
6th July 2012, 01:30 AM
However, I think it's clear Pixy isn't making a definitional claim, but rather stating that a brain is a kind of computer.
Well, they're really the same thing here, since computers are defined by their function.

That raises the questions that have been posed by other posters (which Pixy, to give him credit, has answered):

He claims a brain is a member of the set of computers that include CONSCIOUS, THINKING, FEELING machines.
Yes.

The question is, does such a set actually exist?
Brains exist, and are computers, so trivially yes.

PixyMisa
6th July 2012, 01:34 AM
In answer to your question of evidence:

Consciousness is the ability to think about your own thoughts. Self-awareness. Computer programs do this via the technique of reflection. Not all programs do this, but many complex and/or real-time systems do, because it's as useful a technique for computers as it is for animals.

Feeling is either awareness of an abstract representation of sensory data, or the conscious reflection of the same. So again, many computers do this.

Thinking is just processing data, so all computers do this by definition.
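For what it's worth, the reflective technique mentioned above is mundane in software. Here is a toy sketch (names and numbers invented; no claim that this is conscious in the human sense): a controller that inspects records of its own decisions, rather than the environment, and retunes itself when its own output has been oscillating.

```python
# Toy illustration of reflection as described above: a system that
# examines records of its OWN processing and modifies its behaviour
# accordingly. All names and constants are hypothetical.
class MonitoredController:
    def __init__(self, threshold):
        self.threshold = threshold
        self.history = []  # records of the system's own decisions

    def decide(self, reading):
        decision = "heat_on" if reading < self.threshold else "heat_off"
        self.history.append((reading, decision))
        return decision

    def reflect(self):
        # The self-referential step: the controller looks at its own
        # recent outputs, not at the world, and lowers its threshold
        # if it has been flip-flopping (a crude anti-oscillation fix).
        recent = [d for _, d in self.history[-6:]]
        flips = sum(1 for a, b in zip(recent, recent[1:]) if a != b)
        if flips >= 3:
            self.threshold -= 1
        return flips

c = MonitoredController(threshold=20)
for reading in [19, 21, 19, 21, 19, 21]:
    c.decide(reading)
print(c.reflect(), c.threshold)  # prints "5 19": 5 flips seen, threshold retuned
```

Whether this kind of self-monitoring deserves the word "consciousness" is exactly the definitional dispute in this thread; the mechanism itself, though, is ordinary engineering.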

Belz...
6th July 2012, 02:54 AM
Hilarious!!!!!!!!!!!!!!!!!!!!!


wait... you are not joking.

You have provided no reason why your argument applies to brains and not to iPhones, Zen. I'm just pointing that out in the vain hope that perhaps you will explain yourself more succinctly and make a coherent argument.

Belz...
6th July 2012, 02:59 AM
Regarding a computer emulating a brain, why not ask me why a computer couldn't emulate my truck and get me to work every morning?

I'm sure you'll agree that a truck and a mind are two very different things and that comparing them like you did is not entirely relevant ?

If you mean they both change states according to a set of rules, well, everything in the universe does that.

Yeah but using that definition makes calling your PC "computer" also useless.

Do they really?

Not in the world I live in.

My brain does calculations. Computers do too. My brain does comparisons. Computers do too. My brain interfaces with my sensors and peripherals. Computers do too. Aero fits well in your purse. Milk ? Why not ? :p

Belz...
6th July 2012, 03:01 AM
At the moment, we have no reason to believe -- and every reason not to believe -- that self-awareness is necessary for the production of color, sound, pain, and so forth.

Agreed. Awareness but not self-awareness. It kinda kills the idea of qualia, in a way.

We have no clue what the mechanism is, and currently no theory on which to base hypotheses.

That can't be right. Between you saying we know nothing about it, and Pixy saying we've got a firm grasp of it, there has to be a middle ground where reality lies.

Belz...
6th July 2012, 03:03 AM
It's not clear at all.

I'm sorry, but if your grasp of the English language is that poor it will be difficult to have a discussion.

Water IS H20.
They are one and the same. All instances of water are instances of H2O and vice-versa.

Synonyms are like that.

Water IS a liquid.
Water belongs to the set of all things that are liquid.

No, no, no. You mean Water is a KIND of liquid, right ?

Or did you miss that it's like "Brain IS a computer" ?

So, no it's not clear what "A brain is a computer" means. However, it can't be an equivalence claim. I think you agree with that. Computers aren't brains.

Oh, brother.

Belz...
6th July 2012, 03:04 AM
Third, it's just wrong. A functioning brain really is nothing like a computer:
- is made of neurons
- utilizes trinary (cbcl.mit.edu/cbcl/news/files/liu-tp-picower.)
- communicates in digital AND ANALOGUE (sciencedaily.com/releases/2006/04/060412223937)
- is self aware
- is conscious
- can imagine
- can feel
- thinks
- no software/hardware distinction

We get it, Fud. The brain is not A PC. No one is suggesting that.

Belz...
6th July 2012, 03:05 AM
How can the brain INCLUDE a computer if you think the brain is a KIND of computer? Does a Mustang include a car???

It would be great if you could follow the discussion between me and Zen rather than jump at the end and, without context and unable to understand my post, decide to comment on it.

PixyMisa
6th July 2012, 03:10 AM
But I can't cite any research demonstrating that the brain is not a computer
Because the brain is a computer.

Regarding a computer emulating a brain, why not ask me why a computer couldn't emulate my truck and get me to work every morning?
I'd rather ask you why you believe that's a coherent question.

rocketdodger
6th July 2012, 08:28 AM
Rocket Dodger claimed, in the Robot Consciousness thread, I think, that you could build a conscious brain out of rope.

Dunno if he's reconsidered that position.


Ropes, pulleys, and buckets, to be specific.

I suppose you could find a clever way to make pulleys and buckets out of rope in the first place, so yeah, you could build a brain out of nothing but rope if that is what you mean.

I can tell you right now Pixy will agree with me. We have been telling you for years that the requirement for the building blocks of a computer is that they can switch. If you can find a way to make a rope switch, then go for it. If not, then whatever point you were trying to make isn't valid.
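To make the switching point concrete, here is a sketch: given any two-input switch (NAND here, simulated in software), the usual logic gates, and from them arithmetic, fall out of wiring copies of it together. The functions are illustrative, not a model of any particular machine, rope-based or otherwise.

```python
# Sketch of the point above: anything that can switch can serve as a
# computer's building block. Here the "switch" is a NAND function, and
# a 4-bit adder is wired up from nothing else.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    return xor(a, b), and_(a, b)      # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, or_(c1, c2)

def add4(xs, ys):
    # Add two 4-bit numbers, least significant bit first.
    out, carry = [], 0
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(add4([1, 1, 0, 0], [1, 0, 1, 0]))  # 3 + 5 = 8 -> [0, 0, 0, 1, 0]
```

The substrate (transistors, relays, buckets on pulleys) is irrelevant to the logic; all that matters is that the switching behaviour is reliable.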

rocketdodger
6th July 2012, 08:32 AM
Until and unless we figure out what the brain's doing when it performs consciousness, we can't even imagine designing conscious machines to study, much less building them.



This is why it isn't even worth discussing things with you, piggy.

Someone brings up good research.

You respond with "blah blah blah, I didn't read it, I don't need to, it wasn't an MRI and it wasn't done by someone with a "biologist" in their title, so it isn't relevant."

Modified
6th July 2012, 08:39 AM
Because the brain is a computer.


I'd rather ask you why you believe that's a coherent question.

The fact that it's not coherent demonstrates that "a brain is a computer" is meaningful, but "a truck is a computer" is not.

rocketdodger
6th July 2012, 09:03 AM
Again I ask, does anyone think feeling thinking conscious machines already exist? This is a skeptics' forum, is it not? I can't imagine such an assertion going unchallenged.


There are no machines that are conscious in all the ways humans are. There aren't even machines that are conscious in all the ways a squirrel or a bird might be.

There are machines that are conscious in the way a paramecium might be.

If you don't think a paramecium is conscious, then that implies you don't think there are any machines that are conscious in that way.

There are a number of machines that some people say are conscious in certain ways that aren't exactly the same as the ways animals are conscious, because they are using a definition for "consciousness" that the machines happen to satisfy. It is easy -- I can define "consciousness" as spinning, and that means a centrifuge is conscious.

If you don't agree with the definitions these people use, then you obviously don't think those machines are conscious. However, the machines still do whatever they do -- your choosing to not apply a term to what they do doesn't change reality.

As soon as you get away from the generic "consciousness" and think hard about what the specific inquiry is, things become much more clear.

rocketdodger
6th July 2012, 09:12 AM
That can't be right. Between you saying we know nothing about it, and Pixy saying we've got a firm grasp of it, there has to be a middle ground where reality lies.

Well, you have to look at the sources.

Pixy writes code for a living and one could say he even does research in computer science, and he thinks in terms of the most fundamental behavior that could be required for consciousness.

Piggy can't be bothered to even read research done by others, never mind perform his own, and he thinks in terms of every behavior that might be required for full human consciousness.

Obviously the conclusion those two extremes reach will be very different from each other.

The "middle ground" you speak of can be reached by 1) taking the time and effort to at least try and understand the science behind the issue and 2) being specific and honest about the behaviors in question.

If there is any fault in Pixy's approach, it is that he assumes people will take his words literally. Saying a sufficiently advanced thermostat is aware of itself means only that. However most people read it and think he means "like a human," which is absurd, so they then think he is absurd. Should Pixy need to say "aware of itself, but not like a human; in particular the thermostat can't see, it has no body map, it can't sense anything but temperature, it has no imagination, and in general its reference to "self" is so simple and alien to a human that we actually shouldn't even think of it as being "aware" of itself, except that in a formal sense this term does apply"? I dunno. If he said that, he certainly would head off 90% of the pointless responses people make. On the other hand, he wouldn't need to say that if people bothered to educate themselves on what self reference actually is.

I tend to think that compromising "downward" is bad policy, people should always strive to educate themselves to where they can understand what someone else is talking about. But then again, I'm a Democrat.

Fudbucker
6th July 2012, 09:28 AM
Originally Posted by Fudbucker
However, I think it's clear Pixy isn't making a definitional claim, but rather stating that a brain is a kind of computer.

Well, they're really the same thing here, since computers are defined by their function.

You getting this, Belz? Cause I have no idea what Pixy means here. I read "A brain is a computer" to mean a brain is a KIND of computer (which is how you said you read it). But now Pixy seems to be making a definitional claim: they're the same thing.

Pixy, are you are making an equivalence claim?

Just to be clear, is your position:
For any X, if X is a computer, X is a brain?

??

I Ratant
6th July 2012, 09:38 AM
How can the brain INCLUDE a computer if you think the brain is a KIND of computer? Does a Mustang include a car???
.
The Mustang is the whole megillah.
It contains a motor.
The motor produces heat.
The heat is removed by the radiator.
The brain is only a part of the body.
It consumes energy and creates heat.
This heat must be shed... or radiated, if you will.

I Ratant
6th July 2012, 09:42 AM
The rope then is a ruler. You can consider the rope to be a component in a speed-calculating computer, but by itself the rope just lies there on the deck of the ship.
.
All of my computers and calculators and ropes just lie there getting dusty until I decide to do something with them.
I decide that in my onboard computer, my wetware. :)
Which in the winter time I keep a cap on, lest the heat generated by living and using it escapes and I get cold in the head area.

Fudbucker
6th July 2012, 09:47 AM
In answer to your question of evidence:

Consciousness is the ability to think about your own thoughts. Self-awareness.

I don't agree with this. I can be conscious of basic perceptions (hot, cold, pain, etc.), which don't have any underlying self-aware thoughts attached to them.

Computer programs do this via the technique of reflection. Not all programs do this, but many complex and/or real-time systems do, because it's as useful a technique for computers as it is for animals.

But you haven't demonstrated that computers are able to think. They're just a collection of binary switches switching away (of course, brains are just collections of neurons, but we are in the unique position of KNOWING brains can think). At what point does a computer begin to "think"? Could ENIAC think? Does a pocket calculator think?

Feeling is either awareness of an abstract representation of sensory data, or the conscious reflection of the same. So again, many computers do this.

But you can feel absent sensory data. Memories can arouse feelings. And your definition lacks the essence of what it is to "feel" something. Pain hurts. It feels bad. Can a computer also experience pain? If so, what kind of computer? If a computer can feel, do you think they can suffer? That is the logical extension of your claim. That opens a unique can of worms. We generally think it's immoral to cause needless suffering...

Thinking is just processing data, so all computers do this by definition.

I'm not JUST processing data when I think of my family, and feel all the emotions those thoughts bring. Your definition is lacking.

Even if it's true that computers are thinking, feeling, conscious machines, it doesn't follow that the brain is a kind of computer. The architecture is all wrong. Brains aren't binary collections of switches, they don't need programs to function, and they're analog and digital.

PixyMisa
6th July 2012, 10:19 AM
I don't agree with this. I can be conscious of basic perceptions (hot, cold, pain, etc.), which don't have any underlying self-aware thoughts attached to them.
You can be aware of those perceptions without self-awareness. But you can't know that you're aware of those perceptions without self-awareness.

That one little trick, that self-referential loop: That's consciousness.

But you haven't demonstrated that computers are able to think.
Of course I have. That's what computers do.

But you can feel absent sensory data. Memories can arouse feelings. And your definition lacks the essence of what it is to "feel" something. Pain hurts. It feels bad. Can a computer also experience pain?
Sure, absolutely.

If so, what kind of computer?
Any suitably-programmed general-purpose computer can do this. Per the Church-Turing thesis, all general-purpose computers are equivalent in power (though not necessarily capacity).

If a computer can feel, do you think they can suffer?
Definitely.

That is the logical extension of your claim. That opens a unique can of worms. We generally think it's immoral to cause needless suffering.
Yep.

I'm not JUST processing data when I think of my family, and feel all the emotions those thoughts bring. Your definition is lacking.
Really?

Then what are you doing that is (a) relevant and (b) isn't just processing data?

Because the brain is a computer, a data processing engine. We can chase the signals from neuron to neuron, we can inspect the bias levels applied by various neurochemicals, and that is what is going on in there. When you think of your family - or anything else - that's data processing.

Even if it's true that computers are thinking, feeling conscious machines, it doesn't follow that the brain is a kind of computer.
True. But the brain is a computer.

The architecture is all wrong.
Eh?

Brains aren't binary collections of switches
It basically is a binary collection of switches. It's a pulse-coded switched digital network.

they don't need programs to function
Of course they do. The program is encoded in the structure of the brain itself, kind of like ENIAC, but, oh, several times more complex.

they're analog and digital.
Mostly digital. Not that it matters a lot; electronic computers can be analog or digital, and you can process analog data on a digital computer and digital data on an analog computer.
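A crude sketch of what "pulse-coded" can mean here, using a textbook-style leaky integrate-and-fire toy (all constants are made up; this is not a model of real neurons): each output pulse is all-or-nothing, digital, while the pulse *rate* varies with input strength.

```python
# Toy integrate-and-fire neuron: fires an all-or-nothing pulse (digital)
# whose rate encodes input strength. Threshold and leak are invented.
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x   # leaky integration of input
        if potential >= threshold:
            spikes.append(1)               # all-or-nothing pulse
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)
    return spikes

weak   = integrate_and_fire([0.3] * 10)
strong = integrate_and_fire([0.6] * 10)
print(sum(weak), sum(strong))  # prints "2 5": stronger input, higher pulse rate
```

The output stream is strictly ones and zeroes, yet a graded quantity rides on the timing, which is one way a network can be "mostly digital" while carrying analog information.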

This is a computer:

[YouTube video: GcDshWmhF4A]

So is this:

[YouTube video: BlbQsKpq3Ak]

So is this:

[YouTube video: cYw2ewoO6c4]

So is this (http://www.blikstein.com/paulo/projects/project_water.html).

So is this (http://news.bbc.co.uk/2/hi/science/nature/358822.stm).

And so, very importantly (a lot of people get stuck at this point), is this: [YouTube video: LGkkyKZVzug].

Edit: This is a computer inside a computer inside another computer, recorded on a computer, loaded onto a computer, and then sent through a series of computers so that your computer can watch it on your computer.

[YouTube video: xP5-iIeKXE8]

Wait for it.... Waaaait for it... :eek:

Neat, huh?

PixyMisa
6th July 2012, 10:48 AM
If he said that, he certainly would head off 90% of the pointless responses people make.
Alas, no. (See any number of arguments here where I've said "I mean precisely this, nothing more, nothing less" and the other party has simply gone on attacking the position I've just explained fourteen times I do not hold.)

I Ratant
6th July 2012, 11:00 AM
Of course I have. That's what computers do (think).


...
.
Computers work with ones and zeroes.
The significance of any operation is assigned outside the computer, by the programmer.

Belz...
6th July 2012, 11:23 AM
You getting this, Belz? Cause I have no idea what Pixy means here.

That much is clear.

Read Rocketdodger's post just above yours.

Belz...
6th July 2012, 11:24 AM
Just to be clear, is your position:
For any X, if X is a computer, X is a brain?

No. All As are Bs does not entail that Bs are As.

rocketdodger
6th July 2012, 11:47 AM
.
Computers work with ones and zeroes.
The significance of any operation is assigned outside the computer, by the programmer.

That's equivalent to saying the significance of anything you do was performed by whoever or whatever set up the laws of physics.

Is that really the position you want to take?

PixyMisa
6th July 2012, 11:47 AM
.
Computers work with ones and zeroes.
Computers work with anything that can be represented by ones and zeroes, which includes the Universe, all its laws, and everything in it.

PixyMisa
6th July 2012, 11:55 AM
You getting this, Belz? Cause I have no idea what Pixy means here. I read "A brain is a computer" to mean a brain is a KIND of computer (which is how you said you read it). But now Pixy seems to be making a definitional claim: they're the same thing.

Pixy, are you are making an equivalence claim?

Just to be clear, is your position:
For any X, if X is a computer, X is a brain?
If you're taking the usual definition and not a metaphor, a brain is a particular implementation of a computer (biological, cellular, switched network).

But functionally, they're the same thing. All my examples above, and that thing in your head, all computers, and all equivalent in power (not capacity, but in their basic ability to instantiate algorithms).

This is kind of important. (http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Success_of_the_thesis )

There is no magical hey-this-is-like-a-computer-only-better. It's not mathematically possible. If the brain does it, then one way or another, any computer built on any other substrate can do it.

I Ratant
6th July 2012, 12:36 PM
That's equivalent to saying the significance of anything you do was performed by whoever or whatever set up the laws of physics.

Is that really the position you want to take?
.
Unless I'm mistaken, the laws of physics are THE laws the universe runs by.
Our brain/imagination may be able to conceive of things outside those laws, but we have no way to get around them; any thwarting or workaround happens only in thought.

I Ratant
6th July 2012, 12:37 PM
Computers work with anything that can be represented by ones and zeroes, which includes the Universe, all its laws, and everything in it.

.
True. Knots on a rope as mentioned.
Ones and zeroes are just voltage levels which are interpreted.

Mr. Scott
6th July 2012, 12:46 PM
Regarding a computer emulating a brain, why not ask me why a computer couldn't emulate my truck and get me to work every morning?

Here's how I analyze this question: A device can be defined in terms of its inputs, process, and outputs.

E.g., a radiator device has the intended purpose of inputting concentrated heat energy and using entropy to output diffuse heat energy. Almost every physical thing in the universe has the side effect of radiating (even a black hole, says Mr. Hawking).

A truck's purpose is to take the inputs of gasoline, driving controls, and storage space, and to output movement of what it's storing around our world on roads and such.

The brain's function, once again, is to take input from the five senses, and output muscle and hormone controls to the body. The earliest central nervous systems, e.g. C. elegans, had this function, as do all more evolved brains of higher complexity, like ours. The brain's "purpose" is to pass on the genes that made it, and among its many ways of doing this is to dazzle mates with mastery in fine arts by controlling muscles that hold paint brushes, musical instruments, etc.

Massimo Pigliucci has argued that emulating a conscious brain in a computer is akin to emulating photosynthesis in a computer. You don't get sugar and you don't get consciousness.

Photosynthesis inputs light, water, and carbon dioxide, and outputs sugars and oxygen. A computer can't emulate this. It can simulate it (like a computer can simulate ping pong), but it won't produce sugar.

A computer, by its most familiar understanding, inputs information (keyboard, camera, mike, Internet, etc.) and outputs information (screen, speakers, etc). The brain, likewise, inputs information (senses) and outputs information (muscle signals). In this sense, the brain and the standard computer are the same. In theory, a computer could replace the brain, inputting senses and outputting muscle control. We already do this in bits, with prosthetic eyes, ears, arms and fingers. Also, in theory, a brain could replace a computer in a robot (e.g. Robocop). This is what we mean when we say the brain is a computer: information in, information out. What we are arguing is the process inside the brain which produces, as a side effect, consciousness, and whether or not consciousness is computation or an output in and of itself.

The anti-computationalists apparently see consciousness itself as an output of the brain (see this thread's post #1 (http://forums.randi.org/showthread.php?t=234306)) like sugar is an output of photosynthesis. I've yet to see them bring it home. All I've seen is, essentially, appeals to emotion.

rocketdodger
6th July 2012, 01:07 PM
.
Unless I'm mistaken, the laws of physics are THE laws the universe runs by.
Our brain/imagination may be able to conceive of things outside those laws, but we have no way to get around them, other than by thought, which is all we do about any thwarting or work arounds.

Err, you totally missed my point, which is:

You claim that the fact that all programs can be traced back to code written by a programmer, even programs that write programs, somehow implies that the only significant causal events in the life of a program are the events of the programmer writing the code.

I understand, and even agree with this, for code like "if Ford Automotive stock is over $35.94 a share, buy 20 shares."

However for code like " if a > b return true else return false " that is less clear. In particular, if I as a programmer wrote that code, and I don't know exactly what a and b are for each instance of execution, then I don't see how the most significant causal event was me writing the code. I didn't even know what values a and b would end up taking, so how on Earth is my writing the code the significant event?
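The point can be made concrete with a few lines of Python (a hypothetical illustration, not code from the thread): the source of the comparison is fixed once, but which comparison it actually performs is determined by data the programmer never saw.

```python
def greater(a, b):
    # The programmer fixed only this rule; the values being compared
    # arrive at run time, often from other programs or sensors.
    return a > b

# The same source line performs a different comparison on each call,
# including calls on data the author never anticipated.
results = [greater(3, 2), greater(-1.5, 0), greater("apple", "banana")]
```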

That is tantamount to saying that since the particles inside me follow the physical laws of the universe, and supposing those laws have some origin, then the only significant causal events related to me are whatever happened billions of years ago that led to those laws existing.

I disagree with that, because whatever led to those laws certainly couldn't imagine that I would be here typing this post. Even if the universe is 100% deterministic, the significant causal events related to me are what is happening now.

I Ratant
6th July 2012, 02:44 PM
I can't see a computer, with the rules it must be assembled with, coming up with:
"It was a dark and stormy night"... or "Call me Ishmael"... or "In the beginning..",
particularly the last which has what a computer would recognize as physical absurdities, much less plots that have been pre-figured in the author's mind to go in a specific direction, which may or may not be possible in a physical world.
A computer that might do that would have so few constraints its output would resemble that herd of monkeys more often than not.
Dawkins got his program to find a particular phrase in Shakespeare, but he had to direct the computer to that goal, external to the programming of the phrase generator.
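Dawkins's phrase-finding program can be sketched in a few lines. This is a hedged reconstruction of the idea (the "weasel" program), not Dawkins's original code; note that the target phrase is indeed supplied from outside, external to the phrase generator itself.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"  # the externally supplied goal
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Count characters matching the target phrase.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent, flipping each character with small probability.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in parent)

def weasel(brood=100):
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Keep whichever of parent and offspring best matches the target.
        parent = max([parent] + [mutate(parent) for _ in range(brood)],
                     key=score)
        generations += 1
    return generations
```

Cumulative selection against the fixed target finds the phrase in a few hundred generations at most, where blind monkey-typing would take effectively forever.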

dlorde
6th July 2012, 03:03 PM
< marble is near dent > ----> given enough time -----> < marble is settled in dent >

contrast this with

< marble is not near dent > -----> given enough time -----> < marble is not near dent >

Thanks for an exceptionally clear explanation of attractors.

Takes me back to the heady days when fractals, chaos, and strange attractors were new and exciting ;)
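The marble-and-dent picture quoted above maps directly onto a tiny attractor simulation. A toy Python sketch (the potential and parameters are invented for illustration): a parabolic dent surrounded by a flat plain.

```python
def slope(x, center=0.0, radius=1.0):
    # A parabolic dent at `center`, flat plain everywhere else.
    return 2 * (x - center) if abs(x - center) < radius else 0.0

def settle(x, steps=1000, rate=0.1):
    # "Given enough time": repeatedly roll the marble downhill.
    for _ in range(steps):
        x -= rate * slope(x)
    return x

near = settle(0.5)  # starts inside the basin of attraction
far = settle(5.0)   # starts on the flat plain, out of reach of the dent
```

The bottom of the dent is the attractor: every start inside its basin converges to it, while a marble outside the basin stays where it is, matching the two contrasted cases in the quote.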

dlorde
6th July 2012, 03:20 PM
... I'd have to see a direct quote from Markram, tho, to accept that he would posit that the simulator would become conscious.

Well, the Seed magazine article (http://seedmagazine.com/content/article/out_of_the_blue/P2/) Pixy linked to says: “There is nothing inherently mysterious about the mind or anything it makes,” Markram says. “Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don’t know why you wouldn’t be able to generate a conscious mind.”

It seems Markram expects a human-scale Blue Brain to be conscious, as the article continues: “I think it will be just as interesting, perhaps even more interesting, if we can’t create a conscious computer,” Markram says. “Then the question will be: ‘What are we missing? Why is this not enough?’”

rocketdodger
6th July 2012, 03:47 PM
I can't see a computer, with the rules it must be assembled with, coming up with:
"It was a dark and stormy night"... or "Call me Ishmael"... or "In the beginning..",
particularly the last which has what a computer would recognize as physical absurdities, much less plots that have been pre-figured in the author's mind to go in a specific direction, which may or may not be possible in a physical world.
A computer that might do that would have so few constraints its output would resemble that herd of monkeys more often than not.
Dawkins got his program to find a particular phrase in Shakespeare, but he had to direct the computer to that goal, external to the programming of the phrase generator.

Well, let me ask you some questions.

Have you ever programmed?
Have you ever programmed any kind of A.I.?
Have you ever programmed any kind of A.I. that learns?
Have you ever programmed any kind of A.I. that learns using artificial neural networks?

If you answer "no" to any of those, then concluding that a computer can't do something the human brain can seems to be a bit premature to me.

If you answer "yes" to all of them, then I would certainly be interested in a discussion involving the results you got that led to this conclusion of yours.
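For readers unfamiliar with the last item on that list, an "A.I. that learns using artificial neural networks" can be as small as one artificial neuron. Here is a toy sketch (a single perceptron learning logical AND from examples; hypothetical code, not any specific research system):

```python
def train(examples, epochs=50):
    """Train a single artificial neuron (a perceptron) on labelled data."""
    w = [0, 0, 0]  # bias weight, then one weight per input
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
            err = target - out  # -1, 0, or +1
            # Learning step: nudge each weight to reduce the error.
            w[0] += err
            w[1] += err * x1
            w[2] += err * x2
    return w

def predict(w, x1, x2):
    return 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0

# Teach the neuron logical AND from examples alone; no rule for AND
# appears anywhere in the source code.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train(AND)
```

The behavior (computing AND) is nowhere in the program text; it emerges in the weights through training, which is the nub of the learning question being asked.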

rocketdodger
6th July 2012, 03:54 PM
Well, the Seed magazine article (http://seedmagazine.com/content/article/out_of_the_blue/P2/) Pixy linked to says:

It seems Markram expects a human scale Blue Brain to be conscious, as the article continues:

I doubt piggy considers what Markram says to be evidence of what Markram means...

dlorde
6th July 2012, 04:16 PM
So when you say "computational equivalence", what exactly do you mean?
I mean it in a pragmatic Wolfram sense; he says ... most systems are computationally equivalent. For example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer.

Clearly we are not in a position to use weather systems to compute anything non-trivial, but if we can use a digital computer to compute the same things as a working human brain (and the Blue Brain project suggests we might), and if it is the computational workings of the human brain that give rise to consciousness, then, as Markram says, "I don’t know why you wouldn’t be able to generate a conscious mind".

Piggy
6th July 2012, 04:31 PM
In answer to your question of evidence:

Consciousness is the ability to think about your own thoughts. Self-awareness. Computer programs do this via the technique of reflection. Not all programs do this, but many complex and/or real-time systems do, because it's as useful a technique for computers as it is for animals.

Feeling is either awareness of an abstract representation of sensory data, or the conscious reflection of the same. So again, many computers do this.

Thinking is just processing data, so all computers do this by definition.

That's simply not a definition of consciousness that works with brains, which are the only objects we know of that produce conscious experience.

Consciousness may (or may not) be required to "think about your own thoughts", but that process is not what consciousness is.

Consciousness cranks up when you wake up, or when you dream, and it's not happening when you're in deep sleep or under general anesthesia. It is marked by the production (by your brain) of colors, sounds, odors, pain, proprioception, and other phenomena which are unique to it.

We can group these under the general category of "experiences", and the experiences of other species will overlap with ours to varying degrees, presumably more so for gorillas than for cats or crows.

That's the phenomenon -- or cluster of phenomena -- that needs to be explained.

Designing and building computers which simply refer to data sets describing their own states is not relevant to the study of consciousness, because no one has any reason to believe (and every reason to doubt) that this will result in a machine which behaves in such a way that it performs experiences of any sort.

And in fact, it's not true that thinking is just processing data... at least, not conscious thinking, anyway.

The result of processing data is, necessarily, data. And data must be read to be of use (or even to be data).

There is no one there to read our brains, so discussion of "data" there is necessarily metaphorical to some extent, although it is immensely useful!

But as I've repeatedly said, this way of looking at consciousness which you describe is utterly unproductive in the lab and field, so it is not used. Progress is being made, however, using the functional definition.
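For reference, the "reflection" technique named in the quoted definition is an ordinary programming facility: a program reading its own state and structure at run time. A minimal Python illustration (the class and attributes are invented for this sketch); whether such self-reference amounts to "thinking about your own thoughts" is precisely what the two posters dispute.

```python
class Agent:
    def __init__(self):
        self.mode = "awake"
        self.goal = "reply to forum thread"

    def introspect(self):
        # Reflection: the object reports on its own attributes and
        # its own callable methods, at run time.
        state = dict(vars(self))
        abilities = [name for name in dir(self)
                     if callable(getattr(self, name))
                     and not name.startswith("__")]
        return {"state": state, "abilities": abilities}

report = Agent().introspect()
```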

dlorde
6th July 2012, 04:37 PM
We think of "hallucinations" as deviations from the accurate perception of reality.
...
That's what I mean when I say that our everyday experience is a "normative hallucination".
OK; that makes sense, unlike 'normative non-normative experience'

So onto the other part, "the brain produces colors and brightness"....

Colors, smells, pain, these exist nowhere but in the brain.

Where else could these things possibly exist?

I'm just emphasising the need for care and clarity here - the brain only produces red if you stab it. It experiences the sensation or perception of red[ness] when a red object is seen (or when the relevant areas of the visual cortex are active).

Piggy
6th July 2012, 04:43 PM
I'm sure you'll agree that a truck and a mind are two very different things and that comparing them like you did is not entirely relevant ?

A rock and a hammer are two very different things, but I wouldn't want to get hit with either one. ;)

For our purposes, the truck and the brain are enough alike to make the point here. And that's because if you want real-time behavior in actual spacetime, then a computer can be part of making that happen, but you can't just program that.

You can't program the real-world behavior of a truck... a purely programmed truck can't get you to work. And a purely programmed pancreas can't function in your body.

A computer might be part of an artificial pancreas, or a working model of my truck, but you can't program real-world behavior and have it occur with only enough hardware to "run the logic" and none to carry it out.

And ditto for a brain. Unless you've got some reason for me to believe that this particular glob of matter operates under some very special rules.

Everything in the cluster of behaviors we call consciousness -- including colors, sounds, smells, pain, and so forth... which are all behaviors, and not properties of things out there in the world -- is, as far as anyone can tell, the result of the normal physical behavior of your brain, just like everything else your body does.

Just because we haven't figured out what's going on yet, we are not therefore relieved from our obligation not to jump to the conclusion that something other than brute physics is at work here.

We still assume that the phenomenon of consciousness is caused by the same physics that underlie all other phenomena we observe.

Yeah but using that definition makes calling your PC "computer" also useless.

Damn right. But that's how Turing saw it, you know.

If you want to use that view, which is Wolfram's view, then everything computes.

But if you're talking about symbolic information processing, which is what my PC does, then you're talking about a system which requires an encoding and decoding mind, and there's no such service for what's going on in our brains.

There is as much symbolic information in our brains as there is in our fingernails.

My brain does calculations. Computers do too. My brain does comparisons. Computers do too. My brain interfaces with my sensors and peripherals. Computers do too. Aero fits well in your purse. Milk ? Why not ? :p

That's pretty damn weak. Are there any two objects that you couldn't draw up a list of that caliber for?

dlorde
6th July 2012, 04:45 PM
Again I ask, does anyone think feeling thinking conscious machines already exist?

I'm happy to accept that computers can feel and think. I don't think they're conscious yet because I don't accept Pixy's definition (self-referential information processing) as sufficient; necessary, but not sufficient. Having said that, it's hard to pin down anything more than that as a requirement, so I can understand why he takes such a 'minimalist' approach.

Piggy
6th July 2012, 04:48 PM
I'm just emphasising the need for care and clarity here - the brain only produces red if you stab it. It experiences the sensation or perception of red[ness] when a red object is seen (or when the relevant areas of the visual cortex are active).

No, if you stab my brain, my brain does not produce red. It stops producing red because I lose consciousness.

If you're looking at my brain when it gets stabbed, then your brain will produce (or more accurately, perform) red.

There is no such thing as "redness" for your brain to "experience"... there are no "red objects" for you to see.

In various circumstances, your brain will perform red. That's all.

Similarly, if you stick me in the arm with a knife, I feel pain, but the pain isn't in the knife. It isn't even in my arm. Pain doesn't, and can't, exist in either of those places.

The knife in my arm triggers a nervous response, which triggers my brain to perform pain.

Similarly, when light comes from certain sections of this screen in front of me, it triggers a nervous response, which triggers my brain to perform red.

But red doesn't exist in the screen any more than pain exists in the knife.

It makes no more sense to say the one than to say the other.

Piggy
6th July 2012, 05:08 PM
Between you saying we know nothing about it, and Pixy saying we've got a firm grasp of it, there has to be a middle ground where reality lies.

No, there doesn't.

Nobody yet knows the mechanism, and we don't even have any firm theoretical basis for proposing one.

That's just the state of things right now.

I gave you direct citations from heavyweights, several as recent as 2011, stating frankly that this is the case.

The other side hasn't provided any citations of anybody.

Piggy
6th July 2012, 05:21 PM
Because the brain is a computer.

So say ye.

3 things I ask....

Say clearly what you mean by "computer".

Cite your evidence that the brain is one.

Explain what this contributes to our understanding of consciousness.


I'd rather ask you why you believe that's a coherent question.

Sure.

If you're talking about making a machine conscious, which is to say making it hallucinate in some way, whether normative-waking or dream-state or what we'd commonly call a hallucination -- or any other state we may be unfamiliar with -- then you're talking about making it behave a certain way.

Conscious phenomena like colors and sounds and textures and such are real-spacetime behaviors. Yes, color is a bodily function.

To get real-spacetime behaviors of any sort out of a machine, you can't just use enough hardware to "run the logic" as you have said... you have to run the logic and perform the action.

This is true of machines, including computers, that we have now and it will be true of any machine, including computers, we ever build.

So far, neurobiological research has discovered no reason to impose any symbolic or informational or computational layer in between the physical-energetic behavior of the brain and the cluster of phenomena we call conscious awareness.

Progress is being made without any of that, although the concepts of computation, computational power, and information processing have greatly contributed to our understanding of how the brain works, to our ability to talk about the brain and consciousness, and our capacity to do research.

But still, at the end of the day, if you want a liver to do what a liver does in a real body -- be it a human one or a robot one -- you have to build a liver, you can't just program it.

All neurobiological research thus far has given no indication that the same is not true of the brain and all of its real-spacetime function, including consciousness.

So if you want to build a truck to get you somewhere, you can't program it, you have to build it. And by the same token, if you want to build a thing that actually has experiences, you can't just program it, you have to build it that way.

dlorde
6th July 2012, 05:22 PM
Similarly, when light comes from certain sections of this screen in front of me, it triggers a nervous response, which triggers my brain to perform red.

I guess 'perform' will do for 'experience'. Makes more sense than 'produce'. It may sound pedantic, but how one thinks about these things makes a difference.

Piggy
6th July 2012, 05:24 PM
This is why it isn't even worth discussing things with you, piggy.

Someone brings up good research.

You respond with "blah blah blah, I didn't read it, I don't need to, it wasn't an MRI and it wasn't done by someone with a "biologist" in their title, so it isn't relevant."

If you think it's relevant, then let's look at it.

I'll go back and pick it up again (you were right, I didn't finish the first page) and if you don't mind just tell me in a few sentences what was accomplished and how you think it's relevant to the study of consciousness.

And we'll pick it up from there.

Piggy
6th July 2012, 05:28 PM
Pixy writes code for a living and one could say he even does research in computer science, and he thinks in terms of the most fundamental behavior that could be required for consciousness.

Piggy can't be bothered to even read research done by others, never mind perform his own, and he thinks in terms of every behavior that might be required for full human consciousness.

This is so crazy I can hardly believe it.

PixyMisa has an idiosyncratic definition of consciousness -- as I've shown -- which is not accepted by anyone in the field, so he has no frame of reference for thinking about what is or isn't required for consciousness. Writing code for a living makes as much difference as building cars.

And no, I don't do my own research, but I do read other people's research on consciousness. Research on AI is another thing entirely.

Piggy
6th July 2012, 05:31 PM
Massimo Pigliucci has argued that emulating a conscious brain in a computer is akin to emulating photosynthesis in a computer. You don't get sugar and you don't get consciousness.

Exactly.

At the end of the day, no matter how you slice it, it really is that simple.

rocketdodger
6th July 2012, 06:06 PM
No, there doesn't.

Nobody yet knows the mechanism, and we don't even have any firm theoretical basis for proposing one.

That's just the state of things right now.

I gave you direct citations from heavyweights, several as recent as 2011, stating frankly that this is the case.

The other side hasn't provided any citations of anybody.

Saying that the mechanism is not yet known, and that we don't have a "firm" basis for proposing one, is a far cry from knowing nothing about it.

Furthermore I question why you perpetually consider people who do little besides write popular books "heavyweights."

I personally would consider the people doing the actual research the "heavyweights," if the term is even applicable, and I doubt they share such a conservative viewpoint because the very fact that they are doing research suggests they have some hypothesis to test at least.

Piggy
6th July 2012, 06:18 PM
Well, the Seed magazine article (http://seedmagazine.com/content/article/out_of_the_blue/P2/) Pixy linked to says:

It seems Markram expects a human scale Blue Brain to be conscious, as the article continues:

Hmm... those statements were sometime after Gazzaniga, so I'll have to excuse MG for now.

And I will count this as a cite (finally) for the computationalists.

I do find this statement odd, tho:

“Once we can model a brain, we should be able to model what every brain makes. We should be able to experience the experiences of another mind.”

There's a big and obvious problem with this idea.

Our brains use some of the same real-estate, not surprisingly, when we remember or imagine an event as we use when we experience that event.

That's why you'd never be able to describe what something smells like to a person who's never been able to smell. The bits of their brain that would need to be active to understand what you're saying -- because language doesn't work by "standing for" things, it works by activating brain cells -- just don't activate.

So no, looking inside the activity of a cat's brain isn't going to make us able to know how a cat experiences the world, because all we have are our own tools, which create a human experience.

We'd have to rebuild our own brains to warp our experience catward.

Think about birds, they have some sort of way of sensing magnetic fields. Let's just suppose, for the sake of speculation here, that they're consciously aware of that somehow... that changes in magnetic fields "seem like something" to them, like chemicals in our nose seem like smells, and air bouncing in our ears seems like sounds, and such.

Even if we figured out exactly what brain activity corresponded with their experience of magnetic fields, we have no way of currently imagining how we would answer the question "what does that seem like to them?".

So I'm skeptical of that view.

As is Markram, to some extent, it seems....

And yet, Markram is candid about the possibility of failure. He knows that he has no idea what will happen once the Blue Brain is scaled up. “I think it will be just as interesting, perhaps even more interesting, if we can’t create a conscious computer,” Markram says. “Then the question will be: ‘What are we missing? Why is this not enough?’”

I think the answer might be here:

“There is nothing inherently mysterious about the mind or anything it makes,” Markram says. “Consciousness is just a massive amount of information being exchanged by trillions of brain cells."

I agree with the first statement completely.

The second statement, though, has problems, unless you're talking about information in the Wolfram sense, and in that case it's trivial because so is everything else your brain does.

If there's a theory for how and why we get phenomena in the consciousness cluster, such as yellow, mustiness, or the sound of thunder, from particular configurations of information (independent of particular physical behaviors) and not from others, I'd like to know what it is.

And I'd like to know why consciousness is more likely to depend on a supposed informational layer supported by the physical behavior rather than directly from the physical behavior itself.

Pure Argent
6th July 2012, 06:19 PM
Say clearly what you mean by "computer".

While I can't answer for Pixy, I'd say that the definition of "computer" is pretty simple. It's a device which can accept, store, and process data according to a pre-determined set of rules. Storing may not be absolutely necessary; I don't think it really matters for the purpose of this discussion, but there it is.

Cite your evidence that the brain is one.

Well, the brain can accept data (the five senses), store it (memory), and process it according to a predetermined set of rules (various neurons and nerve clusters interacting).

Explain what this contributes to our understanding of consciousness.

It lets us know that it arises from a complex system like the brain, can be altered by making changes to the rules or damaging the system, and can theoretically arise from artificial systems as well as natural ones.

So if you want to build a truck to get you somewhere, you can't program it, you have to build it. And by the same token, if you want to build a thing that actually has experiences, you can't just program it, you have to build it that way.

I'm not entirely sure what you're arguing here. Or, rather, I'm pretty sure I understand what you're saying; I just don't get how it applies to the situation at hand.

Do you think that a robot could ever be conscious? Say that it was given analogues for the ear, eye, nose, mouth, hand, leg, and so on, as well as an artificial brain that could receive, process, and react to all this information in exactly the same way that a human brain could.

Would this robot be conscious? It has been "built", after all, and has the capability to "go somewhere" - experience and react to things. If you think it isn't conscious, why do you think that?

rocketdodger
6th July 2012, 06:26 PM
If you think it's relevant, then let's look at it.

I'll go back and pick it up again (you were right, I didn't finish the first page) and if you don't mind just tell me in a few sentences what was accomplished and how you think it's relevant to the study of consciousness.

And we'll pick it up from there.

They wired up some artificial neural networks according to a hypothesized pathway that they thought might be part of how the mammalian brain controls motor movements via imagination, hooked the simulation up to a robot arm with a camera sensor, and it worked. The robot imagined that if it turned a certain way it would see an object of a color that it "liked", and because of that imagined reward it actually turned.

I don't think it is necessarily directly relevant to the search for phenomenal consciousness. Rather, I think it is relevant to all of the arguments you have made against the computational model, for starters.

You have explicitly stated that simulating neurons can't lead to real world results, that what happens in simulations is just ones and zeros that only humans can make sense of. Well, you are wrong about that, because these guys hooked up a robot to simulated neurons, and it did something on its own.

You have explicitly stated your distaste for AI research because it goes on in some sort of vacuum from brain research. Well, you are wrong about that, because these guys made an artificial neural network based on known brain pathways, in fact the paper goes into a good amount of detail describing such pathways and even provides citations of its own for the "brain research," as you call it, that the researchers educated themselves with.

You have explicitly stated that computing is somehow not "real time" and thus a program is incapable of emulating what is necessary for consciousness. Well, you are wrong about that, because these guys simulated the neurons in software, and those neurons didn't have any problem interacting with the world in "real time."

You have explicitly stated that there is no indication by any research thus far that the functions of the brain necessary for consciousness can be divorced from its "real-spacetime function" which you have explicitly stated is surely more than what known computers can provide. Well, you are wrong about that, because these guys seem to have emulated at least a few of the functions required for that thing we call "imagination" in a computer.

Whether or not the research explains much in terms of consciousness for you isn't really the point. The point is that the research clearly shows that just about every single one of your counterarguments isn't valid. That's why I brought it up. Turns out there are dozens of projects like this, just go to the AISB home site and look at their symposiums over the last decade.

So my question would be, given that there is actual research going on where people are looking at the mammalian brain and emulating its functions with artificial computers and getting behavioral results that confirm their hypotheses, why do you think that simply making post after post trying to argue to the contrary is productive? Once you move past this silly need to control the conversation and "win" a debate at any cost maybe you will start to learn a few things, eh?

Piggy
6th July 2012, 06:27 PM
Clearly we are not in a position to use weather systems to compute anything non-trivial, but if we can use a digital computer to compute the same things as a working human brain (and the Blue Brain project suggests we might), and if it is the computational workings of the human brain that give rise to consciousness, then, as Markram says, "I don’t know why you wouldn’t be able to generate a conscious mind".

When I asked what you meant by "computational equivalence" it's really the same thing as asking what you mean by "compute the same things".

I think you'd agree with me that you can only "compute" stomach acid in the Wolfram sense of "compute".

If you're talking about the IP sense of "compute" then the outcome of any computation is symbolic, and requires a reader.

So to "compute consciousness" makes sense if we're talking about Wolfram computation.

But to "compute consciousness" makes no sense if we're talking about info processing of a symbolic type. That's categorically.

If there's another way to define "computing" that allows for digital computer and human brains to be "computing the same thing" in a non-trivial sense, then that's certainly interesting.

In the meantime, I would anticipate that the odds of the Blue Brain generating consciousness are about as good as a computer simulation of a power plant generating electricity -- it'll happen if they get the hardware right.

And since Markram specifically discusses putting the brain into a body, well, they might just do that.

Piggy
6th July 2012, 06:28 PM
I guess 'perform' will do for 'experience'. Makes more sense than 'produce'. It may sound pedantic, but how one thinks about these things makes a difference.

I prefer "perform" over "produce" myself, and prefer both over "contain", a term I wish we'd do away with in this context.

rocketdodger
6th July 2012, 06:52 PM
If there's a theory for how and why we get from particular configurations of information (independent of particular physical behaviors) and not from others to phenomena in the consciousness cluster such as yellow, mustiness, or the sound of thunder, I'd like to know what it is.

And I'd like to know why consciousness is more likely to depend on a supposed informational layer supported by the physical behavior rather than directly from the physical behavior itself.

Why do you insist on thinking there is a difference between information and physical behavior?

I have no idea where you get this from. Whatever your source is, it is entirely wrong.

Piggy
6th July 2012, 07:17 PM
Saying that the mechanism is not yet known, and that we don't have a "firm" basis for proposing one, is a far cry from knowing nothing about it.

Furthermore I question why you perpetually consider people who do little besides write popular books "heavyweights."

I personally would consider the people doing the actual research the "heavyweights," if the term is even applicable, and I doubt they share such a conservative viewpoint because the very fact that they are doing research suggests they have some hypothesis to test at least.

I've never said we know nothing about it, of course.

And although there are some hypotheses being tossed around, the focus at the moment is just to continue investigating NCCs and see where it goes, which is pretty raw stuff.

Besides, I quoted them directly so the viewpoints expressed are theirs.

Anyway let's see if the folks I cited have done any actual research....

Christof Koch (http://www.klab.caltech.edu/~koch/) of Caltech, author of Biophysics of Computation: Information Processing in Single Neurons (http://www.klab.caltech.edu/~koch/biophysics-book/) -- his research focuses on visual systems, information theory, and questions of machine sentience (http://spectrum.ieee.org/static/sing_koch).

Ned Block (http://www.nyu.edu/gsas/dept/philo/faculty/block/) from NYU -- you might be interested in his panel appearance along with Koch and info-theorist Giulio Tononi in the "Consciousness and Intelligence (http://techtv.mit.edu/videos/13236-consciousness-and-intelligence)" seminar of last year's "Brains, Minds, and Machines" symposium at MIT, which btw I think demonstrates the actual current intersection of neurobiology and info theory.

Bernard Baars (http://vesicle.nsi.edu/users/baars/cv.html) of Berkeley who's done all kinds of things (http://www.google.com/#hl=en&sclient=psy-ab&q=bernard+baars&oq=bernard+baars).

V.S. Ramachandran (http://cbc.ucsd.edu/ramabio.html) of UC San Diego's Center for Brain and Cognition, who has done some research (http://cbc.ucsd.edu/research.html).

Michael Gazzaniga (http://www.psych.ucsb.edu/~gazzanig/) who was on the original split-brain study team, led the continuing research, and is the current president of the Cognitive Neuroscience Institute.

Steven Pinker (http://pinker.wjh.harvard.edu/about/current_cv/cv%20Steven%20Pinker.htm) of Harvard, advocate of the computational model of mind

Mr. Scott
6th July 2012, 11:36 PM
Exactly.

At the end of the day, no matter how you slice it, it really is that simple.

It's as if you didn't read my refutation of Massimo's comparison of consciousness to sugar, or you didn't understand it.

Photosynthesis outputs sugar.

What does the brain output?

How is consciousness like sugar?

PixyMisa
7th July 2012, 12:50 AM
I can't see a computer, with the rules it must be assembled with, coming up with:
"It was a dark and stormy night"... or "Call me Ishmael"... or "In the beginning..",
particularly the last which has what a computer would recognize as physical absurdities, much less plots that have been pre-figured in the author's mind to go in a specific direction, which may or may not be possible in a physical world.
The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.

PixyMisa
7th July 2012, 12:57 AM
Say clearly what you mean by "computer".
Any system that is Turing complete. That's what anyone who understands computers means by the term.

Cite your evidence that the brain is one.
The brain is Turing complete. This was understood by everyone in all related fields the minute we came up with the term "Turing complete".

Explain what this contributes to our understanding of consciousness.
Consciousness is a self-referential informational process. From Descartes' cogito to Dennett and Hofstadter, that's what we've always meant by the term. Consciousness is a function of the brain. The brain is a computer.

It's not hard, Piggy. Complicated, yes, but not hard. All you have to do is stop believing in magic.
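For readers unfamiliar with the term, "Turing complete" has a concrete cash value: a system qualifies if it can emulate a table-driven read/write/move loop like the one below. This is a toy bit-flipping machine, invented purely for illustration, not a claim about how brains are organized.

```python
# A minimal Turing machine simulator. Any substrate that can implement this
# loop (silicon, neurons, clockwork) is universal in the relevant sense.

def run_tm(tape, rules, state="start", pos=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")                 # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells))

# A toy machine that inverts a binary string, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("0110", flip))  # prints 1001_
```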

PixyMisa
7th July 2012, 01:12 AM
When I asked what you meant by "computational equivalence" it's really the same thing as asking what you mean by "compute the same things".

I think you'd agree with me that you can only "compute" stomach acid in the Wolfram sense of "compute".
Well, you can of course also simulate it. But for things defined by their physical properties (e.g. acid) there's a divide that does not exist for things defined by their informational properties (minds, consciousness).

If you're talking about the IP sense of "compute" then the outcome of any computation is symbolic, and requires a reader.
Only if you're a dualist. Are you a dualist?

So to "compute consciousness" makes sense if we're talking about Wolfram computation.
And in the symbolic sense.

But to "compute consciousness" makes no sense if we're talking about info processing of a symbolic type. That's categorically.
Categorically wrong, yes.

In the meantime, I would anticipate that the odds of the Blue Brain generating consciousness are about as good as a computer simulation of a power plant generating electricity -- it'll happen if they get the hardware right.
On what basis? Other than the usual arguments from personal incredulity and ignorance?

Belz...
7th July 2012, 02:53 AM
I can't see a computer, with the rules it must be assembled with, coming up with:
"It was a dark and stormy night"... or "Call me Ishmael"... or "In the beginning..",

Well I can't see humans ever reaching the moon, myself.

:rolleyes:

Belz...
7th July 2012, 02:59 AM
For our purposes, the truck and the brain are enough alike to make the point here.

No, they're not. The truck is a thing that moves from A to B while transporting something. The brain is a thing that gets data and outputs other data. How are they alike ?

You can't program the real-world behavior of a truck... a purely programmed truck can't get you to work. And a purely programmed pancreas can't function in your body.

True. You could program a very precise simulation of a truck, but not a real truck.

And ditto for a brain. Unless you've got some reason for me to believe that this particular glob of matter operates under some very special rules.

Not "special rules", but "different processes". See, the brain, as I said above, gets, processes, and outputs symbolic data. It just so happens that this is exactly what computers do, and nothing else in the universe does only that. To me that's a pretty convincing case for their similarity, and why one can emulate, simulate or be the other.

Just because we haven't figured out what's going on yet, we are not therefore relieved from our obligation not to jump to the conclusion that something other than brute physics is at work here.

Quite true.

If you want to use that view, which is Wolfram's view, then everything computes.

Right. I don't like that definition because it's not very useful, by way of being too broad.

But if you're talking about symbolic information processing, which is what my PC does, then you're talking about a system which requires an encoding and decoding mind, and there's no such service for what's going on in our brains.

Yes, it requires an encoding and decoding mind. But it need not be an external mind, right ?

There is as much symbolic information in our brains as there is in our fingernails.

I'm sorry, what ?

That's pretty damn weak.

Didn't like the 80s commercial reference ?

Are there any two objects that you couldn't get up a list of that caliber for?

For computing ? No.

Belz...
7th July 2012, 03:03 AM
No, if you stab my brain, my brain does not produce red. It stops producing red because I lose consciousness.

If you're looking at my brain when it gets stabbed, then your brain will produce (or more accurately, perform) red.

Did you just make a completely irrelevant distinction ?

No, there doesn't.

Well it's hard to imagine having a reasoned conversation with you about this, then, isn't it ?

The other side hasn't provided any citations of anybody.

That is untrue. I've seen one on the same page as your post.

Belz...
7th July 2012, 03:10 AM
So if you want to build a truck to get you somewhere, you can't program it, you have to build it. And by the same token, if you want to build a thing that actually has experiences, you can't just program it, you have to build it that way.

I just had an idea for an analogy and I'm not sure if it'll work. Here goes...

A truck is a metal frame over an axle and four wheels, transports stuff and goes places, right ?

Let's do the consciousness/computer thing in reverse.

Let's say I build an organism in a lab. It's a vertebrate with a tough skin with hard plates. It has flaps on either side that you can open to get inside its frontal cavity, in which there are ganglions you can push or pull to trigger certain functions of its nervous system. In front are two white bioluminescent organs. Same thing for the rear, but red. There is another cavity at the back. You can input a slush-like food in an orifice to the side, which feeds the organism, and comes out as fart from the rear. Under it are four legs arranged in a way that they can spin on themselves without damage, so they turn like wheels. Using these attributes, you can shove stuff in its back cavity, hop inside, press a few ganglions and go see your father-in-law in Jersey.

Is it a truck ?

I'd say you'd have a number of truck enthusiasts, amongst others, claiming it can never be a truck because it's organic, etc.

Belz...
7th July 2012, 03:13 AM
The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.

Huh ?

Mr. Scott
7th July 2012, 05:15 AM
In various circumstances, your brain will perform red. That's all.

Who or what is the brain "performing" red to? You have to explain not just the performer and how it performs, but also the audience, and how THAT works. And why performance of qualia needed to evolve in the first place. The biological sciences have yet to isolate a performer or an audience in the brain of such things, have they?

How do you know the internal subjective experience of a non-biological data processing machine cannot possibly seem, to it, like magic?

PixyMisa
7th July 2012, 06:06 AM
Huh ?
No Quack. (http://thedailywtf.com/Articles/Classic-WTF-No-Quack.aspx)

Piggy
7th July 2012, 06:28 AM
Why do you insist on thinking there is a difference between information and physical behavior?

I have no idea where you get this from. Whatever your source is, it is entirely wrong.

"Information" is a term we can use in a variety of ways, as you know.

In the Wolfram sense, it's totally equivalent to physical activity and properties.

If you use the term to refer to degrees of entropy, then the equivalence is not complete.

If you use the term in such a way that it allows an informational "world of the simulation" to exist which can (and must) be described in radically different ways from the physical activity of the simulator -- as y'all do -- then you (not I) have divorced the information from the physical behavior in a radical way.

If we use the term "information" in the sense of something that can be encoded and decoded, then that information becomes independent of physical reality in very important ways.

I can send the same message to you -- say, "Help!" -- using the motion of flags, or an arrangement of rocks on a beach, or by flashing lights in certain patterns, or by causing air to bounce inside your ears in a particular way. That means that there must be a "difference between information and physical behavior" when we're talking about this kind of information, or else all those different physical set-ups couldn't carry the same kind of information.

However, there's a caveat when talking about this last sort of information, because it requires an encoder and decoder.

Someone who doesn't speak the same language as me (whether that's English or semaphore) won't get the message, unless they use other shared information such as our human understanding of what a frantic cry sounds like, which is another kind of encoding and decoding.

In other words, there actually is no "information" in the rocks or the lights or the flags or the air, just like there's no pain in the knife.

So we can talk about the brain objectively (without inserting any encoder and decoder) in the Wolfram sense, and we can speak metaphorically about "information in the brain" when we're really referring to the brute physical activity, and we can talk about information exchanged in a system that contains more than one brain.

But to talk about this last kind of information existing objectively in the isolated brain, well, that makes very little sense.
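The medium-independence point above can be sketched concretely: given a shared code, the very same message survives translation into entirely different carriers, and without the code neither carrier holds any message at all. The "flash" and "rock" encodings below are invented for illustration.

```python
# The same abstract message carried by two different "physical" encodings.
# Both sides must share the code (here, Morse); the carriers themselves
# mean nothing to anyone who lacks it.

MORSE = {"H": "....", "E": ".", "L": ".-..", "P": ".--."}
DECODE = {v: k for k, v in MORSE.items()}

def encode(text):
    return [MORSE[c] for c in text]

def to_flashes(code):
    # dots become 1-unit flashes, dashes 3-unit flashes
    return [[1 if s == "." else 3 for s in letter] for letter in code]

def to_rocks(code):
    # dots become small rocks ("o"), dashes big rocks ("O")
    return ["".join("o" if s == "." else "O" for s in letter) for letter in code]

def from_flashes(flashes):
    return "".join(DECODE["".join("." if f == 1 else "-" for f in letter)]
                   for letter in flashes)

def from_rocks(rocks):
    return "".join(DECODE[letter.replace("o", ".").replace("O", "-")]
                   for letter in rocks)

msg = encode("HELP")
assert from_flashes(to_flashes(msg)) == from_rocks(to_rocks(msg)) == "HELP"
```

Strip away the MORSE/DECODE tables and all that remains is flashes and rocks, which is the sense in which the information is not "in" the carrier.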

PixyMisa
7th July 2012, 06:36 AM
If you use the term in such a way that it allows an informational "world of the simulation" to exist which can (and must) be described in radically different ways from the physical activity of the simulator -- as y'all do -- then you (not I) have divorced the information from the physical behavior in a radical way.
Yes, it's simulated. However, this does not divorce the information from the information.

However, there's a caveat when talking about this last sort of information, because it requires an encoder and decoder.
Which can be pretty much any switching mechanism. The operation of a computer does not need to be decoded, because it is itself a decoder.

Piggy
7th July 2012, 07:10 AM
I'd say that the definition of "computer" is pretty simple. It's a device which can accept, store, and process data according to a pre-determined set of rules. Storing may not be absolutely necessary; I don't think it really matters for the purpose of this discussion, but there it is.

Well, the brain can accept data (the five senses), store it (memory), and process it according to a predetermined set of rules (various neurons and nerve clusters interacting).

What do you mean by "data"?

If I consider food "data", then my body can accept it, store it, and process it according to a predetermined set of rules.

But that seems odd to say because we normally use the term "data" to refer to sets of symbols -- to what we find in spreadsheets and on printed pages and burned into discs and such.

A baseball hitting you in the head is not generally considered "data". The number of beanballs a pitcher has thrown in his career (an abstraction) is considered "data".

And because "data" requires abstraction, or encoding, it must be decoded -- if all intelligent life were to vanish from the universe tomorrow, our data would vanish with us.

So the problem with your model is that the brain's "input" is the equivalent of getting hit by a baseball -- it's just a whole bunch of physical bombardments.

No "data" goes into that organ of your body.

Once that organ is up and running properly, then we can talk about producing data, but not when we're talking about how the organ itself runs.

My PC, on the other hand, has been specially designed by human beings to help us handle data.

It lets us know that it arises from a complex system like the brain, can be altered by making changes to the rules or damaging the system, and can theoretically arise from artificial systems as well as natural ones.

But we already knew that consciousness arises from a complex system like the brain, because we already knew that it's a function of the brain. So there's no contribution on that count.

And we already knew that consciousness can be altered by physically manipulating or damaging the brain through lab experiments.

And we already assumed that this behavior, like all others, can (in theory, unless we discover different) be mimicked in some sort of other physical system, if properly designed and built.

We get all of this from observing the brain. So there are no contributions here.

I'm not entirely sure what you're arguing here. Or, rather, I'm pretty sure I understand what you're saying; I just don't get how it applies to the situation at hand.

Do you think that a robot could ever be conscious? Say that it was given analogues for the ear, eye, nose, mouth, hand, leg, and so on, as well as an artificial brain that could receive, process, and react to all this information in exactly the same way that a human brain could.

Would this robot be conscious? It has been "built", after all, and has the capability to "go somewhere" - experience and react to things. If you think it isn't conscious, why do you think that?

No, that's not my point.

Let me rephrase it....

As long as you're talking about signal-in/signal-out, it's fine to swap a computer out for some part of a system.

For instance, I can replace my car-body paint crew by replacing their brains and their arms with computers and mechanical arms.

That's because to run the paint sprayers I just need the computer to accept electrical signals in the right pattern and put them out in the right pattern.

That's why it's so (relatively) easy to build a machine that does what your knee does when the doctor taps it with a mallet.

Making your leg jump requires only that a nerve signal go up one line and trigger other signals that go back down another. (You do consciously feel what's going on, but you don't have to -- it's not necessary for the process to occur.)

So why can't I get a computer -- and a computer alone -- to haul my trash to the dump, or produce sugar, or filter my blood?

In other words, why can't I swap a computer (with only enough hardware to run the logic and nothing else) for my truck, or a leaf, or my liver?

It's because these processes are not just signal-in/signal-out... real work is required for these outcomes, even though they are all quite different in nature.

And although we don't know what it is yet, exactly, or how it's done, consciousness does not seem to be "signal out". (If it is, what is the signal and where is it going to, and how does it produce colors and sounds and smells and pain and such?)

In another thread I used this analogy to illustrate the point....

Suppose there's a town in a mountain pass separating the interior farmland from the port on the coast. All farm traffic must go through this town.

Inside the town, there were various points where taxes had to be paid (in produce) and you might get "change" back in the form of other produce -- e.g. if you don't have a cantaloupe to pay a tax, you pay a watermelon and get back two tomatoes.

Also, when farmers would meet at intersections, they would often barter among themselves.

The taxes changed all the time, and so did the barter, so you never knew what you'd end up with in your wagon when you left... you just had to wait and see.

So that's a vegetables-in/vegetables-out system.

But eventually, folks got tired of the traffic, so one day the mayor had an idea. They'd build a moat around the town, float all the farmers' crops and their wagons on barges from one gate to another, and give the farmers lists of their crops, both type and weight/number.

That way, they could do all their transactions on paper, and just pick up the right mix of veggies and their wagon on the other side.

Essentially, they've swapped an information system for a physical system.
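The mayor's paper system amounts to pure bookkeeping, which can be sketched in a few lines. The produce names and tax rule below are invented for illustration.

```python
# The town's taxes done "on paper": transactions operate on a manifest
# (pure bookkeeping) instead of on the vegetables themselves.

def pay_tax(manifest, owed, change):
    """Swap produce on paper: remove what's owed, add the 'change' back."""
    m = dict(manifest)
    for item, n in owed.items():
        if m.get(item, 0) < n:
            raise ValueError("cannot pay %d %s" % (n, item))
        m[item] -= n
    for item, n in change.items():
        m[item] = m.get(item, 0) + n
    return m

wagon = {"watermelon": 1}
# No cantaloupe to pay with, so: one watermelon in, two tomatoes back.
wagon = pay_tax(wagon, owed={"watermelon": 1}, change={"tomato": 2})
print(wagon)  # prints {'watermelon': 0, 'tomato': 2}
```

The manifest tracks everything the tax offices care about, which is exactly why it fails the moment something downstream needs the weight of actual vegetables on a scale.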

But there's a catch... this only works if no real work needs to be done with the original inputs along the way.

(Think back to the truck and the leaf... you can't use a computer alone to substitute for either of these because the computer does symbolic work, and we need real work done here.)

So imagine if the mayor had contracted out the work of building a moat, but hadn't told the contractor why. And suppose he requested some sort of engine to make the moat flow in a circle.

The contractor thinks, "I know, there are weigh scales at every tax office and on every corner in town to facilitate the taxes and barter on vegetables... we'll capture the energy of the movement of the scales to make this moat flow."

So that's how he designs the system.

Well, now we're in trouble because the new information-based system can't do that work, so the moat doesn't flow.

As far as we know, consciousness is a direct result of the physical activity of the brain. We have no reason to believe otherwise.

As far as we can tell, the electrochemical activity in your brain directly results in behaviors like blue and green and pain and the smell of coffee and the feeling of disgust and the sound of birds singing.

Consciousness is not signal-in/signal-out.

Conscious phenomena are the result of real physical work by the brain, and if you swap it out with a computer which is only designed to "run the logic" and isn't built with the right hardware to actually make blue and green and pain happen, they won't happen.

That's why the comparison with the truck and the leaf is apt.

Here's a second way of thinking about it....

Suppose Major Tom is on a spacewalk to repair a space probe. When he's done, he sees there's a dangerous space squid between him and the mothership.

So he requests to be beamed back. They convert his physical structure somehow, preserve the information on how it's assembled, and re-assemble it back on the ship, and he's home safe.

But suppose he decides that the space squid is a threat to the ship, and he must kill it.

In that case, you can't convert to your information system... he has to go there himself in his suit and kill the thing.

You could, of course, beam him aboard and alter the information so that he appears in the same state he would have been in if he had killed the squid -- he'll be tired and a bit roughed up and he'll have a memory of killing the squid.

But the squid will still be there.

To actually do that work, you can't switch to an informational system.

That's why you can use a machine with a computer in it as a substitute for a liver, but you can't use just a computer with only enough hardware to run the logic, because real-world behavior can't be accomplished with logic alone.

That's why you can use a machine with a computer in it to do what my truck does, but you can't use a computer alone with only enough hardware to run the logic.

Ditto for a leaf... photosynthesis cannot be performed with logic alone.

There is absolutely no reason to think that the behaviors of conscious awareness are any different. They must require real work, not just logic.

ETA: That's where this "magic bean" nonsense comes from. The computational literalists believe that logic alone can create consciousness -- although they have no theory of how or why -- so they dub the physical process the "magic bean" since they think we don't need it.

Piggy
7th July 2012, 07:17 AM
It's as if you didn't read my refutation of Massimo's comparison of consciousness to sugar, or you didn't understand it.

Photosynthesis outputs sugar.

What does the brain output?

How is consciousness like sugar?

How is sugar like hauling brush to the dump?

It's not.

But neither can be accomplished by logic alone... in fact, no real-world phenomenon can be.

There's no reason to believe -- and every reason not to believe -- that colors and sounds and smells and pain and pleasure and all the other conscious behaviors can be performed by logic alone without real work.

Piggy
7th July 2012, 07:18 AM
They wired up some artificial neural networks according to a hypothesized pathway that they thought might be part of how the mammalian brain controls motor movements via imagination, hooked the simulation up to a robot arm with a camera sensor, and it worked. The robot imagined that if it turned a certain way it would see an object of a color that it "liked", and because of that imagined reward it actually turned.

Great post. I don't have time to respond now, but I will when I get back later today.

I think we can actually use this post to point out where some of our differences aren't, and where they are.

More later.

rocketdodger
7th July 2012, 07:49 AM
"Information" is a term we can use in a variety of ways, as you know.

In the Wolfram sense, it's totally equivalent to physical activity and properties.

If you use the term to refer to degrees of entropy, then the equivalence is not complete.

If you use the term in such a way that it allows an informational "world of the simulation" to exist which can (and must) be described in radically different ways from the physical activity of the simulator -- as y'all do -- then you (not I) have divorced the information from the physical behavior in a radical way.

If we use the term "information" in the sense of something that can be encoded and decoded, then that information becomes independent of physical reality in very important ways.

I can send the same message to you -- say, "Help!" -- using the motion of flags, or an arrangement of rocks on a beach, or by flashing lights in certain patterns, or by causing air to bounce inside your ears in a particular way. That means that there must be a "difference between information and physical behavior" when we're talking about this kind of information, or else all those different physical set-ups couldn't carry the same kind of information.

However, there's a caveat when talking about this last sort of information, because it requires an encoder and decoder.

Someone who doesn't speak the same language as me (whether that's English or semaphore) won't get the message, unless they use other shared information such as our human understanding of what a frantic cry sounds like, which is another kind of encoding and decoding.

In other words, there actually is no "information" in the rocks or the lights or the flags or the air, just like there's no pain in the knife.

So we can talk about the brain objectively (without inserting any encoder and decoder) in the Wolfram sense, and we can speak metaphorically about "information in the brain" when we're really referring to the brute physical activity, and we can talk about information exchanged in a system that contains more than one brain.

But to talk about this last kind of information existing objectively in the isolated brain, well, that makes very little sense.

I'm just so tired of having to wade through walls of text in order to make sense of your responses to what should be extremely simple questions. You insist on smothering people with words, typically when you are just wrong about something.

Piggy, you can't divorce information from physical behavior because everything is physical behavior.

You are as wrong as wrong can be. "We" don't consider the world of the simulation to be different from the physical activity of the simulator, I have no idea why you keep saying this even though we keep saying we don't mean it. How stubborn does someone need to be to repeatedly tell multiple people that they mean something other than what they explicitly say they mean?

rocketdodger
7th July 2012, 07:57 AM
But neither can be accomplished by logic alone... in fact, no real-world phenomenon can be.

There's no reason to believe -- and every reason not to believe -- that colors and sounds and smells and pain and pleasure and all the other conscious behaviors can be performed by logic alone without real work.

Here you go again -- what the heck are you talking about? Who has ever said that logic can be performed without real work?

The behavior of making absurd stuff up and then refuting it just so you can say "see, I'm right" is less than useful.

Here, let me try it: There's no reason to believe that you can get green paint by mixing red and black paint.

By saying that, I implicitly suggest that you are stupid enough to think that the opposite is true, eh? Why else would I have said it, if it didn't need to be said, eh? And now anyone reading the posts might think I am a little bit smarter than you, because something I said in a post is obviously true. I obviously have a grasp on reality because I said something obviously true ( never mind that nobody ever said the opposite ). Hooray for me!!

dlorde
7th July 2012, 08:04 AM
If you're talking about the IP sense of "compute" then the outcome of any computation is symbolic, and requires a reader.

What, exactly, do you mean by a 'reader'? One can hook up a symbol manipulating computer to physical effectors and have its computations cause physical actions. I'm happy to accept that the interface to an effector is a 'reader' (although 'interpreter' seems more precise), but it leads me to query why you say the outcome of a computation requires a reader. Computations directed to controlling effectors require those effectors in order to cause an effect, if you remove the effector interface the computation can still be done, although to no useful effect.

In the case of consciousness, who or what is the 'reader'? It seems clear, as evidenced by 'locked-in' syndrome, that consciousness doesn't require physical effectors, but seems able to be maintained effectively in isolation. I can only think that by your requirement above, consciousness must be reading itself - a clear expression of SRIP :D

rocketdodger
7th July 2012, 08:31 AM
I've never said we know nothing about it, of course.

You have said we don't know much, in a number of ways, a number of different times.

And although there are some hypotheses being tossed around, the focus at the moment is just to continue investigating NCCs and see where it goes, which is pretty raw stuff. Forgive me if I don't take your word for it.

Besides, I quoted them directly so the viewpoints expressed are theirs. No offense, but I certainly think you are guilty of quote-mining in almost every single case. I absolutely cannot stand quote mining ...

Anyway let's see if the folks I cited have done any actual research.... You realize that it is a double edged sword, and if you are not good at swordplay ( I don't think you are ), you are more likely to hurt yourself, right?

Christof Koch (http://www.klab.caltech.edu/~koch/) of Caltech, author of Biophysics of Computation: Information Processing in Single Neurons (http://www.klab.caltech.edu/~koch/biophysics-book/) -- his research focuses on visual systems, information theory, and questions of machine sentience (http://spectrum.ieee.org/static/sing_koch).

Ok, he definitely sounds like he would know a thing or two. I stand corrected on that front.

On the flipside, just a cursory glance at his research page should convince anyone capable of reading English that he feels the current state of research is beyond "we don't know much," as you claim he would say. He certainly doesn't seem to be just slogging through NCC data with little clue as to how to put it all together.

Ned Block (http://www.nyu.edu/gsas/dept/philo/faculty/block/) from NYU -- you might be interested in his panel appearance along with Koch and info-theorist Giulio Tononi in the "Consciousness and Intelligence (http://techtv.mit.edu/videos/13236-consciousness-and-intelligence)" seminar of last year's "Brains, Minds, and Machines" symposium at MIT, which btw I think demonstrates the actual current intersection of neurobiology and info theory.

I disagree with you on this one. This guy is neither a programmer nor a biologist. He is a philosopher and psychologist. Him saying "we don't know much" carries about as much weight with me as Romney saying he could fix global warming.

Bernard Baars (http://vesicle.nsi.edu/users/baars/cv.html) of Berkeley who's done all kinds of things (http://www.google.com/#hl=en&sclient=psy-ab&q=bernard+baars&oq=bernard+baars).

If you think I don't know about Baars then you really are crazy, especially since the global workspace model is used all over the place in the world of machine consciousness, in fact I am trying to get my colleagues to agree to experiment using it for game AI.

But here is the thing, piggy -- he hasn't actually written a paper since 2002, according to that cv page of his. And here is another thing, piggy -- he is a really smart guy whose position changes over time, you can see that in the history of his work.

So are you going to sit there and honestly tell me you can find quotes of his to the extent of "we don't know much" from anytime in the near past? Even a quote more than 5 years old is irrelevant, given the speed at which research moves forward today.

It sort of boggles my brain that you would consider Bernard Baars as a supporter of the notion that we are still just slogging through NCC data trying to make heads or tails out of it.

V.S. Ramachandran (http://cbc.ucsd.edu/ramabio.html) of UC San Diego's Center for Brain and Cognition, who has done some research (http://cbc.ucsd.edu/research.html).

Yeah, let's talk about quote mining piggy. Quote mining is when you look through something someone has written just to find isolated quotes that could be interpreted as supporting your position. It isn't inherently bad, except that often ( as is the case with you ) the context of the quote is completely changed. Case in point:

Ramachandran's site says "It is ironic that although we now have a vast amount of factual information about the brain, even the most basic questions about the human mind remain unanswered."

Whoah, it looks like you might be correct!! Hmmm.. wait a second though, I should keep reading...

"Why do we laugh, i.e., make a rhythmic sound and bob our heads in certain situations? Why do we cry? Why does a salty liquid flow down our cheeks when sad? How does the human brain create and respond to art? Why do we enjoy music? What causes us to dance? What makes some of us so amazingly creative in mathematics, science, and poetry? How are metaphors represented in the brain? What is "body image" and why does it get distorted in anorexia nervosa? How did language evolve?"

Ohhhh, I see what he means. He means what humans would consider the most basic questions, not the fundamental questions. I think it would be slightly dishonest to attribute the position of "we don't know much" to someone like Ramachandran based on a quote taken so out of context, don't you? ( not sure why I'm asking you, since you did it !! ) Let's keep reading though....

"Then there are more basic questions. How do we see color? Why can we pay attention to only one thing at a time? How do we recognize faces so effortlessly?"

Ahh, those are the kinds of things I had in mind. So we don't know much about them??

"Neuroscientists and psychologists have, in the past, shied away from such questions, but our center has become well known for tackling questions such as these experimentally, questions that have traditionally been the preoccupation of philosophers."

Hmmm....so they are actually doing research on these things? Call me crazy, but that doesn't sound like a lab full of people who think "we don't know much."

Michael Gazzaniga (http://www.psych.ucsb.edu/~gazzanig/) who was on the original split-brain study team, led the continuing research, and is the current president of the Cognitive Neuroscience Institute.

I don't even need to touch this one. Please cite a recent quote ( not mined, please keep it in context ) where he says anything remotely similar to "we don't know much."

Steven Pinker (http://pinker.wjh.harvard.edu/about/current_cv/cv%20Steven%20Pinker.htm) of Harvard, advocate of the computational model of mind

Same. Please cite a recent quote ( not mined, please keep it in context ) where he says anything remotely similar to "we don't know much."


So I admit I was partially wrong -- you do actually reference people who know, at least partially, what they are talking about. However I would strongly argue that you are taking their sentiments completely out of context to support your own position. Prove me wrong. Start using recent statements and link to the full articles where you got them. Avoid quote-mining.

EDIT -- what I am trying to figure out is why you are a proclaimed materialist yet you seem to have a desire to "dumb down" the current state of research, almost as if you want to convince people that we know far less than we really know. It is peculiar ...

Belz...
7th July 2012, 08:43 AM
What do you mean by "data"?

If I consider food "data", then my body can accept it, store it, and process it according to a predetermined set of rules.

And if I consider cows to be tables, I can put napkins, set up dinner and eat on one.

Belz...
7th July 2012, 08:46 AM
What, exactly, do you mean by a 'reader'? One can hook up a symbol manipulating computer to physical effectors and have its computations cause physical actions.

A better answer than mine.


So nobody has anything to say about my biological truck ? :(

Mr. Scott
7th July 2012, 09:38 AM
I can't see a computer, with the rules it must be assembled with, coming up with:
"It was a dark and stormy night"... or "Call me Ishmael"... or "In the beginning..",
particularly the last which has what a computer would recognize as physical absurdities, much less plots that have been pre-figured in the author's mind to go in a specific direction, which may or may not be possible in a physical world.
A computer that might do that would have so few constraints its output would resemble that herd of monkeys more often than not.
Dawkins got his program to find a particular phrase in Shakespeare, but he had to direct the computer to that goal, external to the programming of the phrase generator.

Ah, the old "fine arts" argument against consciousness as data processing. It's a fallacy class: argument from ignorance and argument from personal incredulity. It follows the form:

"I don't know (or can't imagine) how computers could write beautiful prose, so computers can't write beautiful prose, and human brains must do it with some form of magic (outside the known laws of physics)."

The argument unfortunately implies that people who don't produce fine art are not fully conscious.

There are some poem generators online. Here's a nice Haiku Generator (http://www.everypoet.com/haiku/default.htm) I found. I don't doubt that some of its poems would make it hard to tell they were computer-generated.
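The weasel program mentioned in the quote above takes only a few lines. This is a minimal sketch of Dawkins' algorithm (the population size and mutation rate are illustrative choices, not his exact parameters); note that the target phrase is supplied by the programmer, which is exactly the "external goal" the quoted post objects to:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Number of characters that match the target position-for-position
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Randomly replace each character with a small probability
    return ''.join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

parent = ''.join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    # Keep the parent among the candidates so fitness never decreases
    candidates = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(candidates, key=score)
    generation += 1

print(f"Reached target in {generation} generations")
```

Cumulative selection toward a fixed target converges in a few hundred generations, whereas pure random typing ("the herd of monkeys") essentially never would.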

dlorde
7th July 2012, 09:39 AM
So nobody has anything to say about my biological truck ? :(

Apart from the gratuitous surgical tweaking of neurons and the novel method of locomotion, I couldn't really see much difference, in principle, between it and pack animals used for transport, e.g. donkeys, horses, elephants, camels, bullocks, etc.

It did bring to mind Boston Dynamics' 'Big Dog' - should we call that a truck?

Mr. Scott
7th July 2012, 09:45 AM
I just had an idea for an analogy and I'm not sure if it'll work. Here goes...

A truck is a metal frame over an axle and four wheels, transports stuff and goes places, right ?

Let's do the consciousness/computer thing in reverse.

Let's say I build an organism in a lab. It's a vertebrate with a tough skin with hard plates. It has flaps on either side that you can open to get inside its frontal cavity, in which there are ganglions you can push or pull to trigger certain functions of its nervous system. In front are two white bioluminescent organs. Same thing for the rear, but red. There is another cavity at the back. You can input a slush-like food in an orifice to the side, which feeds the organism, and comes out as fart from the rear. Under it are four legs arranged in a way that they can spin on themselves without damage, so they turn like wheels. Using these attributes, you can shove stuff in its back cavity, hop inside, press a few ganglions and go see your father-in-law in Jersey.

Is it a truck ?

I'd say you'd have a number of truck enthusiasts, amongst others, claiming it can never be a truck because it's organic, etc.

Yes, a truck is substrate-independent. If it's above a certain size and licensed to transport stuff on the roads, it's a truck, even if it's made of wet, fleshy carbon-based and DNA-directed materials.

The brain processes data. It would, in theory, process data the same way, generating fine arts and reports of internal subjective experiences, if it were made of metal and ran on gasoline, I maintain.

Mr. Scott
7th July 2012, 09:53 AM
If it's true that the brain "performs" red (and qualia in general) then there must be a consequence in thermodynamics.

Some energy has to be expended performing red, and some of this would have to be recovered (with losses) in the process of appreciating the performance.

Wouldn't this have to be happening if the arguments of qualiaphiles were correct? Is there any evidence for the metabolism needed for qualia generation and appreciation, beyond what's involved in action potentials and neurotransmitter activity?

Fudbucker
7th July 2012, 09:56 AM
Ah, the old "fine arts" argument against consciousness as data processing. It's a fallacy class: argument from ignorance and argument from personal incredulity. It follows the form:

"I don't know (or can't imagine) how computers could write beautiful prose, so computers can't write beautiful prose, and human brains must do it with some form of magic (outside the known laws of physics)."

The argument unfortunately implies that people who don't produce fine art are not fully conscious.

There are some poem generators online. Here's a nice Haiku Generator I found. I don't doubt that some of its poems would challenge one to determine if it was computer-generated.

To be charitable, I think Iratant was talking about computers as they currently are.

A couple Haikus from the haiku generator:

fisherwomen climb
giddily, moose flying plum
lying conjurers

pumpkins recover
wistfully, laughter festers
rosemary frolics

blazing strut gravestone
dwindles, frivolous lustrous
dunes pulsate, mealy

I'm not too impressed
Haiku program is lacking
Hail new machine lords!

(OK, that last was my own)

So computers aren't at the "dark and stormy night" level yet. However, there's no reason to think they won't be. Maybe fairly soon.
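For what it's worth, generators like the one linked above are usually just templated random word choice. Here's a hypothetical sketch of how such a generator might work (the word list, syllable counts, and retry strategy are all illustrative assumptions, not the linked site's actual method):

```python
import random

# Each word is stored with a syllable count; lines are built by drawing
# random words until the 5-7-5 syllable budget is exactly met.
WORDS = [("moon", 1), ("dune", 1), ("pumpkin", 2), ("festers", 2),
         ("frolics", 2), ("rosemary", 3), ("wistfully", 3), ("recover", 3)]

def line(target_syllables):
    """Draw random words until the count lands exactly on target;
    overshoot means discard the attempt and retry."""
    while True:
        words, count = [], 0
        while count < target_syllables:
            word, syllables = random.choice(WORDS)
            words.append(word)
            count += syllables
        if count == target_syllables:
            return ' '.join(words)

haiku = [line(5), line(7), line(5)]
print('\n'.join(haiku))
```

No grammar, no intent, just constrained randomness, which is why the output reads like "dunes pulsate, mealy" rather than "a dark and stormy night."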

Piggy
7th July 2012, 10:22 AM
Piggy, you can't divorce information from physical behavior because everything is physical behavior.

You are as wrong as wrong can be. "We" don't consider the world of the simulation to be different from the physical activity of the simulator, I have no idea why you keep saying this even though we keep saying we don't mean it. How stubborn does someone need to be to repeatedly tell multiple people that they mean something other than what they explicitly say they mean?

You keep saying you don't mean it, and then you make arguments which presume it.

That's what troubles me.

Piggy
7th July 2012, 10:52 AM
Here you go again -- what the heck are you talking about? Who has ever said that logic can be performed without real work?

You're not seeing what I have plainly said.

Of course logic must be performed with real work.

But if you only demand (or have) enough real work in the system to run the logic, then all you have done is run the logic.

If you want to run the logic and do something like make enzymes or move objects or play sounds, then you need enough work to run the logic and enough work to make the enzyme or move the object or play the sound.

Nobody disagrees with that, of course.

Where you and I differ is that you believe that no additional work is required for conscious behavior like colors and sounds and textures to be performed, above and beyond what's necessary to "run the logic" because these phenomena result directly (and exclusively) from the logic, which is not entirely dependent on substrate -- in other words, you can run the logic with a brain or a computer or whatever, as long as it runs, that's all you need.

I've called this the "pure programming" point of view, for short, on other threads.

If this is accurate, then machines can be programmed to be conscious, they need not be otherwise specially built to be conscious. Creating conscious machines becomes a question entirely of computer engineering rather than mechanical-electrical engineering (except as the latter serves the needs of the former).

The neurobiological approach sees things a bit differently, and this is what I've been saying about the requirements for doing work....

Per the NB approach, the neural correlates of consciousness (NCCs) aren't so much seen as a substrate supporting logic which in turn enables conscious behavior to occur, but as the direct mechanism itself -- one which is often much simpler to talk about in terms of information, but which nevertheless must be plain old physics at the end of the day.

From your point of view, comparing consciousness to photosynthesis or hauling trash to the dump makes no sense.

Because from your point of view, unlike photosynthesis or hauling trash, consciousness is a pure-programming problem.

Which means that running a digital simulation of the brain will create consciousness -- with no extra work required for doing so, beyond what's needed to run the simulation -- even though a digital simulation of a leaf will never make sugar and a digital simulation of a truck will never haul your trash to the dump.

From a neurobio point of view, the challenge to building conscious machines is a bit different, because we assume that electro-mechanical engineering will be crucially important over and above its contribution to supporting the logic.

In other words, the machine brain will have to behave in real spacetime in some ways like the biological brain to make these behaviors actually occur, and this is the bit of work (despite your dubbing it a "magic bean") which I'm referring to.

So the simulation will not actually create an instance of blue or pain or the smell of coffee or anything else, unless it's got the hardware to make that happen in real spacetime.

That's the difference.

Sorry if it takes a while to explain it.

Piggy
7th July 2012, 11:23 AM
They wired up some artificial neural networks according to a hypothesized pathway that they thought might be part of how the mammalian brain controls motor movements via imagination, hooked the simulation up to a robot arm with a camera sensor, and it worked. The robot imagined that if it turned a certain way it would see an object of a color that it "liked" and because of that imagined reward it actually turned.

Sounds good.

I don't think it is necessarily directly relevant to the search for phenomenal consciousness. Rather, I think it is relevant to all of the arguments you have made against the computational model, for starters.

You have explicitly stated that simulating neurons can't lead to real world results, that what happens in simulations is just ones and zeros that only humans can make sense of. Well, you are wrong about that, because these guys hooked up a robot to simulated neurons, and it did something on its own.

No, that's not what I've said, explicitly or otherwise.

Of course they can lead to real-world results -- happens every day. It makes as much sense as being able to send letters.

What I've said is that when you do insert a simulation into part of a process, it has to be the case that you need signal-in and signal-out.

The trouble comes if you need any actual physical mechanism to do real work in spacetime anywhere along that part of the system that's been swapped with a simulator.

If you do, then your system breaks because the simulated stuff is in informationland and can't do any work... the only work being done is the simulator being run.

So yeah, there's no problem with simulated neurons "leading to real-world results".

But there's a problem with simulators allegedly performing work that they're not performing in spacetime (which from your point of view isn't a problem, I know).

You have explicitly stated your distaste for AI research because it goes on in some sort of vacuum from brain research. Well, you are wrong about that, because these guys made an artificial neural network based on known brain pathways, in fact the paper goes into a good amount of detail describing such pathways and even provides citations of its own for the "brain research," as you call it, that the researchers educated themselves with.

My objections aren't with AI research, never have been.

I've consistently said it's interesting, important, and useful.

My objection is to its application on this forum to the question of consciousness, that's all.

I object to irrelevant discussions based on AI research that doesn't have any bearing (or precious little) on consciousness research.

And I haven't claimed that AI researchers ignore brain research. I've complained that the comp.lit camp on this forum much prefers to discuss research on non-conscious machines than to discuss current neurobiology, and I'm dead right about that, as far as I can see.

Those are two very different things.

You have explicitly stated that computing is somehow not "real time" and thus a program is incapable of emulating what is necessary for consciousness. Well, you are wrong about that, because these guys simulated the neurons in software, and those neurons didn't have any problem interacting with the world in "real time."

I don't recall anyone saying that computing is not real time.

I do recall discussions about whether a conscious machine could stay conscious at any arbitrarily slow operating speed.

In any case, nothing in this experiment is shocking to me, it doesn't go against anything I've said so far, as far as I'm aware.

You have explicitly stated that there is no indication by any research thus far that the functions of the brain necessary for consciousness can be divorced from its "real-spacetime function" which you have explicitly stated is surely more than what known computers can provide. Well, you are wrong about that, because these guys seem to have emulated at least a few of the functions required for that thing we call "imagination" in a computer.

That's not quite what I've said, but close enough for our purposes here anyway.

Yeah, I've said that running logic alone won't get consciousness done, just like logic alone won't make a machine sing or run or perform any other behavior.

That's why I always talk about conscious machines rather than conscious computers.

But this only means that, for the particular function of consciousness, you're going to need properly set up hardware.

What these guys are doing really has nothing to do with that question, I'm afraid. It's not that what they're doing isn't damn interesting and useful -- it certainly is -- but it doesn't have any bearing on the question of the hardware component of consciousness.

Whether or not the research explains much in terms of consciousness for you isn't really the point. The point is that the research clearly shows that just about every single one of your counterarguments isn't valid. That's why I brought it up. Turns out there are dozens of projects like this, just go to the aisb home site and look at their symposiums over the last decade.

And that's fine, and very cool, but my objections have never been what you seem to think they are. It's a matter of relevance, nothing else.

So my question would be, given that there is actual research going on where people are looking at the mammalian brain and emulating its functions with artificial computers and getting behavioral results that confirm their hypotheses, why do you think that simply making post after post trying to argue to the contrary is productive? Once you move past this silly need to control the conversation and "win" a debate at any cost maybe you will start to learn a few things, eh?

I'm not arguing against that fact.

I know there are folks doing this research.

Some of the people I cite are involved in it, and have written about it.

Yes, this is "actual research" but it is not particularly relevant actual research. And I wonder why it ends up dominating every thread about consciousness to the exclusion of much more relevant research on the human brain itself, which is indeed focused on the question of consciousness.

Once we crack the big questions about consciousness neurobiologically, then we'll be able to design similar machine tests in that area.

I sincerely hope I'm dead before that happens, though.

PixyMisa
7th July 2012, 12:36 PM
You're not seeing what I have plainly said.

Of course logic must be performed with real work.

But if you only demand (or have) enough real work in the system to run the logic, then all you have done is run the logic.
Yep.

Where you and I differ is that you believe that no additional work is required for conscious behavior like colors and sounds and textures to be performed, above and beyond what's necessary to "run the logic" because these phenomena result directly (and exclusively) from the logic, which is not entirely dependent on substrate -- in other words, you can run the logic with a brain or a computer or whatever, as long as it runs, that's all you need.

I've called this the "pure programming" point of view, for short, on other threads.

If this is accurate, then machines can be programmed to be conscious, they need not be otherwise specially built to be conscious. Creating conscious machines becomes a question entirely of computer engineering rather than mechanical-electrical engineering (except as the latter serves the needs of the former).
Yep.

The neurobiological approach sees things a bit differently
Nope.

That's the problem. There is no "differently". Either at some level it's computational, or it's magic.

tensordyne
7th July 2012, 01:42 PM
That's the problem. There is no "differently". Either at some level it's computational, or it's magic.

So I guess then trucks and leaves are magical. I simply did not know.

tensordyne
7th July 2012, 02:12 PM
If it's true that the brain "performs" red (and qualia in general) then there must be a consequence in thermodynamics.

Some energy has to be expended performing red, and some of this would have to be recovered (with losses) in the process of appreciating the performance.

Wouldn't this have to be happening if the arguments of qualiaphiles were correct? Is there any evidence for the metabolism needed for qualia generation and appreciation, beyond what's involved in action potentials and neurotransmitter activity?

Interesting idea, but I do have to note that if the neurobio approach is correct, qualia generation might be the physical result of action potentials and neurotransmitter activity without any need to bring in the computational role of the same. Of course, I would add that qualia generation as per thermodynamics and CEMI might come in the form of energy generated to create the EM field.

Piggy
7th July 2012, 03:05 PM
Any system that is Turing complete. That's what anyone who understands computers means by the term.


The brain is Turing complete. This was understood by everyone in all related fields the minute we came up with the term "Turing complete".

Forgive me if I don't just take your word for that last bit about the brain.

And it's funny that your claim about the brain isn't mentioned in the article you cite, but my claim about the whole universe is.

And the folks over at Stanford don't seem to agree with you (http://plato.stanford.edu/entries/church-turing/#Bloopers).

A myth seems to have arisen concerning Turing's paper of 1936, namely that he there gave a treatment of the limits of mechanism and established a fundamental result to the effect that the universal Turing machine can simulate the behaviour of any machine. The myth has passed into the philosophy of mind, generally to pernicious effect.

In fact, their treatment of this "myth" sounds like a pretty good paraphrase of many of your claims in this forum.

So unless you can demonstrate that this is so, and unless you can cite leaders in the field agreeing with you, I'm betting that the Stanford guys are right and you're wrong.
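For reference, "Turing complete" has a concrete operational meaning: a system able to emulate a universal machine. The bar is surprisingly low, which is part of what the dispute above turns on. Here's a minimal sketch, an interpreter for the core of Brainfuck, a canonical toy Turing-complete language (I/O input commands omitted for brevity; this illustrates the term, it settles nothing about brains):

```python
def run(program, tape_len=30000):
    """Interpret a Brainfuck program (subset: > < + - . [ ])
    and return its printed output as a string."""
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    # Pre-match brackets so loops can jump in O(1)
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]  # skip loop
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]  # repeat loop
        pc += 1
    return ''.join(out)

program = "+" * 13 + "[>+++++<-]>."  # 13 * 5 = 65 -> ASCII 'A'
print(run(program))  # prints "A"
```

Eight instructions suffice for universality, which is why "Turing complete" by itself says so little about how any particular system (silicon or neural) actually gets its work done.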

Piggy
7th July 2012, 03:11 PM
Consciousness is a self-referential informational process. From Descarte's cogito to Dennett and Hofstadter, that's what we've always meant by the term. Consciousness is a function of the brain. The brain is a computer.

It's not hard, Piggy. Complicated, yes. but not hard. All you have to do is stop believing in magic.

That is what you claim to be the contribution of seeing the brain as a computer?

I'm not impressed.

I don't see any contribution there, just restatement of your (still) unsubstantiated assertion.

If you want to call physics "magic", OK -- that's still my bet for what causes consciousness.

If you want to say it's caused by information, well, there are several problems here, not the least of which is the fact that there is no blue in the information which you claim causes blue.

Nothing in the sky performs blue. The light from the sky doesn't perform blue. My eyeballs don't either. Neither do the bits of my brain that aren't involved in consciousness.

What's worse, that same "information" which sometimes causes blue can cause any number of other behaviors besides blue, or no behaviors in the consciousness cluster at all.

If blue -- like all other phenomena that are part of consciousness, is caused by information, then how do you get from information which has nothing to say about blue, to blue?

Piggy
7th July 2012, 03:13 PM
Well, you can of course also simulate it. But for things defined by their physical properties (e.g. acid) there's a divide that does not exist for things defined by their informational properties (minds, consciousness).

What makes you imagine that consciousness is defined by "informational properties"?

What about the smell of coffee is fundamentally "informational"?

Piggy
7th July 2012, 03:22 PM
On what basis? Other than the usual arguments from personal incredulity and ignorance?

On what basis do I conclude that the Blue Brain simulation won't result in a conscious machine?

On the basis that it makes more sense, at this point, to posit that the phenomenon of consciousness is the result of the physical rather than informational architecture of the brain, even if the specifics of each instance of conscious behavior depend on the informational architecture.

It's a simpler way of looking at it, more in line with what we know about how the universe works, doesn't require any radical hypotheses, and it's working in the lab and the field.

The neurobio approach is making advances by asking "what is the brain doing when we're conscious in different ways, at different times?"

That's the whole point of the investigation of neural correlates, which is the dominant activity in the field at the moment.

Consciousness is behavior. To get a machine -- biological or not -- to exhibit real behavior in the real world, you have to build the right physical architecture some way or other.

That's not very controversial.

Piggy
7th July 2012, 03:31 PM
No, they're not. The truck is a thing that moves from A to B while transporting something. The brain is a thing that gets data and outputs other data. How are they alike ?

The brain doesn't get any data. The brain's input is entirely physical, and that input is only "encoded with information" in the same way that ripples in a pond are "encoded with information" about the rock that was dropped in the pond.

The nonconscious parts of the brain do their thing, and the electrochemical reactions bounce around, and some leave the brain, and muscles move.

We don't need to talk about data and information to explain that; it's just much, much easier if we do.

Other parts of the brain cause other types of behaviors such as odors and sounds and feelings of joy or dread -- that's what we call consciousness.

And that bit is handled entirely in the brain.

So although the behaviors of the brain and the truck and leaf are very different, it's still true that none of them can be replaced entirely by a computer simulation, because a computer running a simulation does different types of work than all 3.

No matter what the work is, if it has to be done during the point where you're replacing any real component with a digital simulation of that component, the work won't get done.

That's why you can't replace my truck's engine with a simulation of an engine and expect it to do everything it did before (like haul brush) and why you can't replace my body's brain with a simulation of a brain and expect it to do everything it did before (like perform colors and sounds).

Sure, my truck can do some things if you replace its guts with a computer simulation -- could still work the lights and the wipers and play the radio... but it can't do everything.

Ditto for a brain. Not all functions will work if you remove it and replace it with a digital sim.

Piggy
7th July 2012, 03:33 PM
Not "special rules", but "different processes". See, the brain, as I said above, gets, processes, and outputs symbolic data. It just so happens that this is exactly what computers do, and nothing else in the universe does only that. To me that's a pretty convincing case for their similarity, and why one can emulate, simulate or be the other.

Does the brain really do that?

And if so, is that what the components that perform consciousness do?

Can you demonstrate that this is so?

Piggy
7th July 2012, 03:41 PM
Who or what is the brain "performing" red to? You have to explain not just the performer and how it performs, but also the audience, and how THAT works. And why performance of qualia needed to evolve in the first place. The biological sciences have yet to isolate a performer or an audience in the brain of such things, have they?

How do you know the internal subjective experience of a non-biological data processing machine cannot possibly seem, to it, like magic?

That's like asking who or what the brain is performing breathing to.

The body just carries out its tasks, that's all. It just performs the way it's built to perform.

As I said before, the model of consciousness as (normative) hallucination eliminates the need for any audience.

When your body wakes up from deep sleep, it starts up certain behaviors that weren't occurring before, such as colors and sounds and the smell of coffee and a small pain in your back and so on.

When these behaviors are happening, you (the person you think of yourself as) are happening. When you fall asleep and these behaviors stop, you stop.

Something similar happens when you dream.

Which means that all these things that start when you're conscious and stop when you're unconscious are consciousness and are you.

After all, my eyes don't really look out into the world and remotely detect what's really out there.

My conscious experience is me, and is everything that appears to be going on "out there".

Everything I feel like I'm seeing, hearing, smelling, tasting, feeling... that's what I am. There's no difference between me and my conscious experience. You could say, I am the qualia.

The sense of a self is one of the things that consciousness can produce (or perform), but that's all.

Once you really come to terms with the fact that absolutely nothing you experience has any existence outside your experience, and that there's no difference between your experience and you, then a lot of apparent problems simply become non-problems.

Piggy
7th July 2012, 03:46 PM
Yes, it's simulated. However, this does not divorce the information from the information.

No, but if you say that the simulator is "really" doing something that the machine is not physically doing, and that this "real" behavior conforms to the physics of the thing being simulated instead, then you've got a serious problem.

Which can be pretty much any switching mechanism. The operation of a computer does not need to be decoded, because it is itself a decoder.

I was speaking specifically of computers used as info processors in the ways that PCs often are. We input symbols, we get back symbols. The computer couldn't care less what they mean to us, and would have no way of knowing anyway.

rocketdodger
7th July 2012, 03:48 PM
You keep saying you don't mean it, and then you make arguments which presume it.

That's what troubles me.

Or, you are just misunderstanding the arguments.

rocketdodger
7th July 2012, 03:57 PM
You're not seeing what I have plainly said.

... snip ....

OK I understand what you are saying now.

The neurobiological approach sees things a bit differently, and this is what I've been saying about the requirements for doing work....

Per the NB approach, the neural correlates of consciousness (NCCs) aren't so much seen as a substrate supporting logic which in turn enables conscious behavior to occur, but as the direct mechanism itself -- one which is often much simpler to talk about in terms of information, but which nevertheless must be plain old physics at the end of the day.

Um, but I don't get this "viewpoint" from any of the researchers you have ever cited.

It seems to me like you are looking at NCC research and developing your own ideas that the non-logic portion of the physical behavior is somehow crucial.

EDIT -- it just occurred to me that perhaps you are simply a little confused, and think that the computationalists view the neurons of the brain as just the "hardware" running "software" which is merely the impulses traveling between them. This is absolutely not the case. The computationalist position is that the concepts of "hardware" and "software" do not apply to the biological brain, at least not in the way it does to silicon computers. The computational model, however, just assumes that the physical connectivity between neurons, which is entirely essential to the functioning of a neural network, can be emulated successfully in software. But that doesn't imply that it isn't important. On the contrary, the actual physical ( or emulated physical ) connectivity of neurons is the most important factor.

From a neurobio point of view, the challenge to building conscious machines is a bit different, because we assume that electro-mechanical engineering will be crucially important over and above its contribution to supporting the logic.

Lol, who is "we?" You and Baars? Had coffee with him lately?

I see no evidence that any researcher doing work in what you call "neurobio" has taken the position that there is something essential about the neuron "over and above its contribution to supporting the logic." I think you are fabricating this position and attributing it to a whole bunch of people that actually support the computational model but they would just rather do research on real brains than artificial ones.

Piggy
7th July 2012, 04:34 PM
What, exactly, do you mean by a 'reader'? One can hook up a symbol manipulating computer to physical effectors and have its computations cause physical actions. I'm happy to accept that the interface to an effector is a 'reader' (although 'interpreter' seems more precise), but it leads me to query why you say the outcome of a computation requires a reader. Computations directed to controlling effectors require those effectors in order to cause an effect, if you remove the effector interface the computation can still be done, although to no useful effect.

Btw, where are the symbols that you think the computer is manipulating?

Is it really manipulating symbols when it's operating, or is the manipulation of symbols more appropriate to the programming of the machine?

Btw, it's important to note that I didn't say "the outcome of a computation requires a reader" -- I was only talking about symbolic, as opposed to physical, computations.

If we take Turing straight, no chaser, then we can (and many scientists do) describe the entire universe in computational terms, where quite literally everything that happens is just part of a machine moving from state to state depending upon a set of rules (the laws of physics) that govern all transformations.

In fact, it's because the universe is always performing computations that we can piggy-back symbolically on some of them -- and modern computers offer us a way to do this very fast and predictably -- which gives us symbolic computations.

This is more narrowly what Turing was describing.

He started with the notion that everything that could be done by a human calculator (a person performing calculations with pencil and paper) could be done by a machine simply by setting it up to change the symbols in the right order, without the need for the machine to know what any of it meant.

But to be useful, this kind of computation does require users who know what the symbols at the end of the process are supposed to mean, otherwise the symbolic aspect of the computation is useless and the physical computation underlying it is all you're left with.

That's what I mean by symbolic computation requiring a reader.
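To make that concrete, here is a toy machine (a hypothetical three-rule device, not a model of anything in the brain) that increments a binary numeral purely by rewriting symbols. The machine never "knows" that the tape means a number; only a reader does:

```python
# A toy Turing-style machine: increments a binary numeral on the tape.
# The machine only follows rewrite rules; the fact that the tape
# "means" a number exists only for us, the readers.

def increment(tape):
    """Apply the rules right-to-left: 1 -> 0 with carry; 0 or blank -> 1, halt."""
    cells = list(tape)
    pos = len(cells) - 1
    while pos >= 0 and cells[pos] == "1":
        cells[pos] = "0"        # rule: read 1, write 0, move left (carry)
        pos -= 1
    if pos < 0:
        cells.insert(0, "1")    # rule: read blank, write 1, halt
    else:
        cells[pos] = "1"        # rule: read 0, write 1, halt
    return "".join(cells)

print(increment("1011"))  # prints "1100" -- only a reader sees "11 + 1 = 12"
```

The rewriting happens correctly whether or not anyone interprets the result, which is the sense in which the symbolic layer needs a reader and the physical layer doesn't.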

I don't think the neural work in the brain is symbolic, and I don't think the electrical work in a computer running a paint sprayer is symbolic either.

And yet what I'm asking my pocket calculator to do is essentially symbolic. That is, from my point of view. What it actually does, without regard to me, is not symbolic at all.

In the case of consciousness, who or what is the 'reader'? It seems clear, as evidenced by 'locked-in' syndrome, that consciousness doesn't require physical effectors, but seems able to be maintained effectively in isolation. I can only think that by your requirement above, consciousness must be reading itself - a clear expression of SRIP :D

From the physicalist point of view, what's important to understand in the production of consciousness are the physical computations of the brain, not the symbolic computations, so you don't need a reader.

The brain has received no codes, no symbols, and it is not going to produce any.

The result of a physical computation is simply a new shape of the physical stuff involved in the computation. That's how the brain works, by constantly reshaping itself physically.

For folks with locked-in syndrome, the physical electro-chemical activity in their brains is sufficient in the right real estate to allow consciousness to occur, but their motor responses are shut down.

Feedback loops are everywhere in the brain, including areas that have no effect on consciousness, so yeah, you could say SRIP is going on here, but that's a trivial observation.

We don't know how it is that the brain performs these behaviors, such as pain and sound and color, but we know it can do it entirely on its own.

In any case, the "reader" issue doesn't come up for consciousness precisely because there's no reason to invoke symbolic computation to attempt to explain it.

Piggy
7th July 2012, 04:39 PM
No offense, but I certainly think you are guilty of quote-mining in almost every single case. I absolutely cannot stand quote mining ...

Then I suggest you get your own damn quotes from widely published neuroscientists who say that the mechanism of consciousness is known and that it's self-referential information processing.

Don't come accusing me of quote-mining if you're not going to pony up with anybody saying that SRIP is the known mechanism underlying consciousness.

And they'd better have credentials at least as good as the folks I cited.

At certain points I have to stop talking with you because you resort to stuff like this. I'll come back later.

Piggy
7th July 2012, 04:46 PM
Lol, who is "we?" You and Baars? Had coffee with him lately?

:D Made me laugh, too, when I read it.

I see no evidence that any researcher doing work in what you call "neurobio" has taken the position that there is something essential about the neuron "over and above its contribution to supporting the logic." I think you are fabricating this position and attributing it to a whole bunch of people that actually support the computational model but they would just rather do research on real brains than artificial ones.

I don't have a problem with the research. I think if you look at that MIT symposium video I posted, you'll see three guys saying stuff we both agree with.

I don't disagree with the computational model as it's being used in research on the brain.

My disagreements are with certain conclusions drawn about consciousness from the research by some folks on this forum, that's all.

As for any researchers working from "the position that there is something essential about the neuron 'over and above its contribution to supporting the logic'," no I don't know of anyone either.

Mr. Scott
7th July 2012, 06:10 PM
When your body wakes up from deep sleep, it starts up certain behaviors that weren't occurring before, such as colors and sounds and the smell of coffee and a small pain in your back and so on.


Those are behaviors? Not by any common definition of that word.

I'd still say you'd need a qualia detector to know that you have qualia at all.

Where do you find the certainty that a complete computer emulation of a human brain (say, with a real color camera and speech synthesis) couldn't possibly report an experience of qualia?

In Dennett's talk on free will (http://www.youtube.com/watch?v=jrCZYDm5D8M&feature=related), he mentioned the headline of a review of his book on consciousness that made me laugh out loud: "Yes, we have a soul, but it's made of lots of tiny robots."

tensordyne
7th July 2012, 06:26 PM
Video of Chalmers stating how it is as far as consciousness research goes.

http://youtu.be/r4SLOr2icnY

tensordyne
7th July 2012, 07:50 PM
Those are behaviors? Not by any common definition of that word.


The topic of behavior and consciousness has come up on this forum in the recent past (that I am aware of). Consciousness is not behavior (contrary to the behaviorists, those cousins of the computationalists); it is an internal state of being. A black box could just be sitting there, experiencing a sunny day (and thus being conscious). You would not be able to tell from outside the box that this is happening, which is why consciousness is not fundamentally about behavior.


I'd still say you'd need a qualia detector to know that you have qualia at all.


You have one qualia detector, your own consciousness. There is something it is like for you to experience red, right?


Where do you find the certainty that a complete computer emulation of a human brain (say, with a real color camera and speech synthesis) couldn't possibly report an experience of qualia?


Emulate or simulate? If you are simulating the brain (or anything else) there is never any reason to expect that the simulation will do anything beyond hopefully giving good predictions of what is being simulated. As far as emulation goes, you can program computers to say all sorts of things, that doesn't mean a thing.

The question is not whether a machine like the one described above could report back whatever you want it to in terms of experience; the question is, why should you believe it? The same goes for humans and other animals. I have my own answer for that; I am sure you have your own as well.

Fudbucker
7th July 2012, 08:10 PM
It basically is a binary collection of switches. It's a pulse-coded switched digital network.



Not according to what others are saying:

"Human brains store information using many different strategies, none of which are very analogous to binary in a digital computer."
Paul King, Computational Neuroscientist

MIT neuroscientist Guosong Liu has found that human neurons compute in trinary, using signals that are the equivalents of -1, 0 and 1.

The brain as a whole operates more like a social network than a digital computer, with neurons communicating to allow learning and the creation of memory, according to O'Reilly.
psychology Professor Randall O'Reilly

tsig
7th July 2012, 09:10 PM
I'm just so tired of having to wade through walls of text in order to make sense of your responses to what should be extremely simple questions. You insist on smothering people with words, typically when you are just wrong about something.

Piggy, you can't divorce information from physical behavior because everything is physical behavior.

You are as wrong as wrong can be. "We" don't consider the world of the simulation to be different from the physical activity of the simulator, I have no idea why you keep saying this even though we keep saying we don't mean it. How stubborn does someone need to be to repeatedly tell multiple people that they mean something other than what they explicitly say they mean?

Some seem to think that lots of words substitute for substance.

Reading his posts is like a mirage in the desert: instead of a cool drink of water you get a mouthful of sand.

rocketdodger
7th July 2012, 09:47 PM
Then I suggest you get your own damn quotes from widely published neuroscientists who say that the mechanism of consciousness is known and that it's self-referential information processing.

But I don't claim there are widely published neuroscientists who say such a thing.

That is the singular fact of the matter that you seem to not understand.

Don't come accusing me of quote-mining if you're not going to pony up with anybody saying that SRIP is the known mechanism underlying consciousness.

Again -- I don't claim there are widely published neuroscientists who say such a thing.

In fact, I have made zero claims about what any researchers "say." All my claims have been about the research itself. This is the fundamental difference between your position and mine. I say "we have research showing X," and you respond with "but so and so says Y." You never -- ever -- reference actual research showing X to be false.

Regarding SRIP, since frankly I am sick of it, that is Pixy's torch to carry: All I have claimed is that the frameworks posited by widely published neuroscientists all reduce to a form of SRIP. Whether or not they explicitly state "my framework is a form of SRIP" is as irrelevant as whether or not a mechanical engineer explicitly states that his engine follows the laws of thermodynamics, whether or not a biologist points out that cells he is studying contain carbon and oxygen, or whether a mathematician points out that the integers he is using can be arrived at by repeatedly summing the number 1.

If you disagree that any of these frameworks is a form of SRIP, then we can have that specific discussion.

But you can't simply claim that because a specific phrase isn't mentioned in an article that the stuff in the article doesn't meet some external definition of that phrase. I can show you tens of thousands of mechanical engineering research articles that make no mention of the laws of thermodynamics -- are you gonna claim that the stuff in the research doesn't satisfy those laws? I can show you tens of thousands of biology papers that never mention any chemicals at all -- are you gonna claim chemicals have nothing to do with biology? I can show you all the math major's theses in any school and I doubt even 1% of them mention the fact that you can make any integer by simply adding 1 to a number over and over -- are you going to argue that the integers used in those theses can't be made in this way?

The question you have been ignoring -- I don't know why, since it is the only question that should matter to you -- is whether or not Bernard Baars would say "yes, given the definition of SRIP that PixyMisa is using, a framework like 'Global Workspace' is an instance of SRIP."

Do you think he would say that? Or do you think he would say "no, it is not an instance of SRIP?"

Everything else just seems like beating around the bush. If the answer is "yes, I think he would" then why on Earth are you even arguing this point so relentlessly? If the answer is "no, I don't think he would" then what reason do you have for reaching that conclusion? Either way, taking the position that "since Baars never mentions SRIP, whatever he is talking about must not be an instance of SRIP" seems bizarre. That's like saying Mendel "didn't deal with genetics because he never mentioned DNA in any of his research."

Mr. Scott
7th July 2012, 10:07 PM
The question is not if a machine made above could not possibly report back whatever you want it to in terms of experience as far as emulation goes, the question is, why should you believe it? The same goes for humans and other animals. I have my own answer for that; I am sure you have your own as well.

I should have added one word, which I thought was understood:

Where do you find the certainty that a complete computer emulation of a human brain (say, with a real color camera and speech synthesis) couldn't possibly honestly report an experience of qualia?

I await your response.

rocketdodger
7th July 2012, 10:09 PM
As for any researchers working from "the position that there is something essential about the neuron 'over and above its contribution to supporting the logic'," no I don't know of anyone either.

But you just said there were, in the statement of yours that I quoted:

From a neurobio point of view, the challenge to building conscious machines is a bit different, because we assume that electro-mechanical engineering will be crucially important over and above its contribution to supporting the logic.

I interpreted that "we" to mean Piggy + the "neurobio" researchers.

Mr. Scott
7th July 2012, 10:23 PM
Video of Chalmers stating how it is as far as consciousness research goes.

http://youtu.be/r4SLOr2icnY

All he says basically is he doesn't know, and people who claim to know really don't know.

Has Chalmers' work produced any fruit at all? Or, does it just serve to make us feel secure against the progress of those heartless machines?

PixyMisa
7th July 2012, 11:01 PM
Has Chalmers' work produced any fruit at all?
He's a dualist. He produces antifruit.

PixyMisa
7th July 2012, 11:06 PM
EDIT -- it just occurred to me that perhaps you are simply a little confused, and think that the computationalists view the neurons of the brain as just the "hardware" running "software" which is merely the impulses traveling between them. This is absolutely not the case. The computationalist position is that the concepts of "hardware" and "software" do not apply to the biological brain, at least not in the way it does to silicon computers.
Right.

A neural network like the brain is different on the surface from a stored-program computer, but per the Church-Turing thesis they have mathematically identical expressive power.

Anything one can do, the other can do. Always.
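As a toy illustration of that equivalence claim (an illustration, not a proof): a hand-weighted network of threshold units and an ordinary program can compute the exact same function, XOR. The weights below are hypothetical, picked by hand for the example.

```python
# A tiny fixed-weight neural network of threshold units and an ordinary
# program computing the same function (XOR). Hand-picked weights,
# purely illustrative.

def step(x):
    return 1 if x > 0 else 0

def xor_network(a, b):
    h_or  = step(1.0 * a + 1.0 * b - 0.5)        # hidden unit: "at least one"
    h_and = step(1.0 * a + 1.0 * b - 1.5)        # hidden unit: "both"
    return step(1.0 * h_or - 1.0 * h_and - 0.5)  # output: "one but not both"

def xor_program(a, b):
    return a ^ b  # the stored-program version of the same function

for a in (0, 1):
    for b in (0, 1):
        assert xor_network(a, b) == xor_program(a, b)
```

Same input-output mapping, two very different-looking mechanisms, which is the sense in which the two architectures are claimed to be interchangeable.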

PixyMisa
7th July 2012, 11:19 PM
The topic of behavior and consciousness has come up on this forum in the recent past (that I am aware of). Consciousness is not behavior (contrary to the behaviorists, those cousins of the computationalists); it is an internal state of being. A black box could just be sitting there, experiencing a sunny day (and thus being conscious). You would not be able to tell from outside the box that this is happening, which is why consciousness is not fundamentally about behavior.
Wrong. It is always possible to open the box, which is why everything is fundamentally about behaviour.

Unless you're a dualist, in which case you're just plain wrong.

Emulate or simulate?
Same process, different levels of abstraction.

If you are simulating the brain (or anything else) there is never any reason to expect that the simulation will do anything beyond hopefully giving good predictions of what is being simulated. As far as emulation goes, you can program computers to say all sorts of things, that doesn't mean a thing.
If you simulate the brain accurately, it is impossible for the simulation not to reproduce the behaviours of the brain. That applies for anything. It's what an accurate simulation is.

If you don't simulate the brain accurately, then fix your simulation.

The question is not if a machine made above could not possibly report back whatever you want it to in terms of experience as far as emulation goes, the question is, why should you believe it?
You should believe it more than you'd believe anyone, even yourself, because you can so easily look inside the box and check.

tensordyne
7th July 2012, 11:52 PM
He's a dualist. He produces antifruit.

Antifruit? What the heck is that about?

tensordyne
7th July 2012, 11:56 PM
PixyMisa, straight up question, is there anything it is like for you to experience red?

tensordyne
8th July 2012, 12:08 AM
All he says basically is he doesn't know, and people who claim to know really don't know.

Has Chalmers' work produced any fruit at all? Or, does it just serve to make us feel secure against the progress of those heartless machines?

I agree with the assessment you gave above of what Chalmers said. I even agree with what Chalmers said as well. From an academic point of view Chalmers is pretty successful. As far as figuring out consciousness (or artificial consciousness), no.

And please stop with this "defending the human status" psycho-social motif. If you agree that we have not yet figured out what the physical basis for consciousness is, that does not mean you are a Luddite. It does not mean that you think we will never be able to replicate consciousness artificially. It just means that you think the way it is supposedly replicated now, as per some adherents, is not the way. That is all it means.

tensordyne
8th July 2012, 12:55 AM
You should believe it more than you'd believe anyone, even yourself, because you can so easily look inside the box and check.

In terms of the definitions you have given before for what you mean by consciousness and awareness, yes.

In terms of what I mean (you could say, the Real version because it is the type of consciousness most everyone else is talking about when they say consciousness), no.

PixyMisa
8th July 2012, 01:44 AM
Antifruit? What the heck is that about?
Chalmers is a dualist. We can look at this in one of two ways:

1. He believes that the Universe is not logically consistent.
2. His beliefs are not logically consistent.

Either way, nothing he does is of use to anyone except as an object lesson. (If you don't eat your vegetables, you'll grow up to be David Chalmers!)

Hence, antifruit.

PixyMisa
8th July 2012, 01:45 AM
PixyMisa, straight up question, is there anything it is like for you to experience red?
Yes. Obviously.

And it's a computational process.

PixyMisa
8th July 2012, 01:46 AM
In terms of the definitions you have given before for what you mean by consciousness and awareness, yes.
That's what everyone means by consciousness, from Descartes to, well, you. That's the whole point.

In terms of what I mean (you could say, the Real version because it is the type of consciousness most everyone else is talking about when they say consciousness), no.
Wrong.

Belz...
8th July 2012, 02:53 AM
Forgive me if I don't just take your word for that last bit about the brain.

You don't think the brain is turing complete ?

The brain doesn't get any data.

This comment boggles the mind.

The brain's input is entirely physical, and that input is only "encoded with information" in the same way that ripples in a pond are "encoded with information" about the rock that was dropped in the pond.

The computer doesn't get data. The computer's input is entirely physical, etc. After all it's electrical impulses affecting switches and transistors and memory made of small bits of metal.

You're creating an unwarranted distinction between the brain and a computer, and it's easy to see why someone would think it's because you don't want them to be alike.

Sure, my truck can do some things if you replace its guts with a computer simulation -- could still work the lights and the wipers and play the radio... but it can't do everything.

It can do everything that it has an interface to the real world with.

And what about my fleshy truck ?

Does the brain really do that?

What, exactly, is it that you think brains do ?

And if so, is that what the components that perform consciousness do?

I don't know, and irrelevant to my point.

Belz...
8th July 2012, 02:55 AM
Not according to what others are saying:

"Human brains store information using many different strategies, none of which are very analogous to binary in a digital computer."
Paul King, Computational Neuroscientist

MIT neuroscientist Guosong Liu has found that human neurons compute in trinary, using signals that are the equivalents of -1, 0 and 1.

The brain as a whole operates more like a social network than a digital computer, with neurons communicating to allow learning and the creation of memory, according to O'Reilly.
psychology Professor Randall O'Reilly

You're using a far too narrow definition of computer, whereas some people are using a far too broad one. Your PC is not the only possible type of computer.

Belz...
8th July 2012, 02:56 AM
He's a dualist. He produces antifruit.

Or, more likely, fruit and fruit spirit.

PixyMisa
8th July 2012, 03:08 AM
Not according to what others are saying:

"Human brains store information using many different strategies, none of which are very analogous to binary in a digital computer."
Paul King, Computational Neuroscientist

MIT neuroscientist Guosong Liu has found that human neurons compute in trinary, using signals that are the equivalents of -1, 0 and 1.
So, what you're saying is that the brain is a trinary pulse-coded switched digital network?

I'm fine with that (if the evidence supports it). That makes it a computer.

The brain as a whole operates more like a social network than a digital computer, with neurons communicating to allow learning and the creation of memory, according to O'Reilly.
Yes, it's a pulse-coded switched digital network. That makes it a computer.

tensordyne
8th July 2012, 05:30 AM
That's what everyone means by consciousness, from Descartes to, well, you. That's the whole point.


I do not like people putting words in my mouth. I do not mean what you mean by the word consciousness, of that, I am pretty sure.


Wrong.

That is not an argument.

Fudbucker
8th July 2012, 07:17 AM
You're using a far too narrow definition of computer, whereas some people are using a far too broad one.

I am? We are? Evidence?

However you want to define computer, the people I quoted (experts in the field) believe the brain does not operate like a computer in at least some areas.

Your PC is not the only possible type of computer.

Never said it was. The claim is that a "BRAIN IS A COMPUTER". It's still not clear what Pixy means by this (member of a set or are they one and the same?). If you could pin him down, that'd be great.

Either way, if the brain is a computer (or a kind of computer), it's working in non-computerlike ways. That presents a challenge to the claim.

Fudbucker
8th July 2012, 07:21 AM
So, what you're saying is that the brain is a trinary pulse-coded switched digital network?

I'm fine with that (if the evidence supports it). That makes it a computer.


That makes it work in trinary. Are there any trinary computers out there?

Yes, it's a pulse-coded switched digital network. That makes it a computer.

Not what O'Reilly said:
"the brain as a whole operates more like a social network than a digital computer, with neurons communicating to allow learning and the creation of memory, according to O'Reilly."

And not according to Paul King. You didn't address that quote: "Human brains store information using many different strategies, none of which are very analogous to binary in a digital computer."
Paul King, Computational Neuroscientist

steenkh
8th July 2012, 08:09 AM
That makes it work in trinary. Are there any trinary computers out there?
No. What is the point?

It has been considered many times to switch to trinary computing, but many physical storage systems are naturally digital, and it is very difficult to distinguish between trinary voltages with sufficient accuracy. The brain on the other hand is known to be error prone, but that does not make it less of a computer.

Not what O'Reilly said:
"the brain as a whole operates more like a social network than a digital computer, with neurons communicating to allow learning and the creation of memory, according to O'Reilly."
Are you suggesting that O'Reilly thinks that each neuron has intelligence in itself? I do not think so, but in any case that would not stop the brain being a computer; it would simply make the computer more complex, and that complexity can be emulated too.

And not according to Paul King. You didn't address that quote: "Human brains store information using many different strategies, none of which are very analogous to binary in a digital computer."
Paul King, Computational Neuroscientist
But computing is not limited to a certain storage system, so this is irrelevant for the discussion. Even if neurons turned out to be completely analog, it would pose no particular problem.

rocketdodger
8th July 2012, 08:48 AM
Right.

A neural network like the brain is different on the surface from a stored-program computer, but per the Church-Turing thesis they have mathematically identical expressive power.

Anything one can do, the other can do. Always.

Yes but when you say that it doesn't make it clear that the physical connectivity of the neurons is important. I honestly think piggy ( and others ) thinks the computationalist position is that the brain is like a stored program computer, that it can just be loaded up with any set of instructions and run any program because the architecture is so generalized.
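The point about connectivity can be made concrete with a toy sketch (the weights and the XOR task are my own illustrative choices, not anything from the thread): on a stored-program computer, a network's physical connectivity is just data that a general-purpose interpreter walks through.

```python
# Illustrative sketch: a stored-program machine "running" a fixed-connectivity
# network of threshold neurons. The connectivity (weights and biases) is plain
# data; the generalized architecture just interprets it.

def step(x):
    """Threshold activation: the unit fires (1) if its input exceeds 0."""
    return 1 if x > 0 else 0

def run_network(inputs, layers):
    """Propagate inputs through a list of (weights, biases) layers."""
    signal = inputs
    for weights, biases in layers:
        signal = [step(sum(w * s for w, s in zip(row, signal)) + b)
                  for row, b in zip(weights, biases)]
    return signal

# Hand-chosen connectivity implementing XOR with threshold neurons:
xor_layers = [
    ([[1, 1], [-1, -1]], [-0.5, 1.5]),   # hidden layer: OR and NAND
    ([[1, 1]], [-1.5]),                  # output: AND of the two
]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, run_network([a, b], xor_layers)[0])
```

The same interpreter runs any connectivity you hand it, which is the sense in which the two architectures have equivalent expressive power; it says nothing about whether arbitrary "programs" could be loaded into a biological network whose connectivity is physically fixed.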

tensordyne
8th July 2012, 01:20 PM
It has been considered many times to switch to trinary computing, but many physical storage systems are naturally digital, and it is very difficult to distinguish between trinary voltages with sufficient accuracy. The brain on the other hand is known to be error prone, but that does not make it less of a computer.


Just a quick recap: saying that neurons work in base 3 adds nothing to the argument that the brain is not a computer (or that, to whatever extent it is like a computer, that likeness does not help in understanding consciousness).

In the above quote the word digital was used when the correct word probably would have been binary (it is in bold for your convenience). Digital just means discontinuous and discrete. I also doubt it is hard to distinguish between voltages to set up trinary gate systems. It just involves more circuitry.

It is interesting to point out that anytime analog signals are being processed in a machine, then technically the operations of said machine are outside the paradigm of Turing Based Computation. Of course, one can use Turing Computation to approximate the results (if they are repeatable) of mechanisms that use analog processing.

Belz...
8th July 2012, 01:31 PM
I am? We are? Evidence?

Your POSTS. You are defining "computer" as a specific type of computer, unjustifiably so.

Either way, if the brain is a computer (or a kind of computer), it's working in non-computerlike ways. That presents a challenge to the claim.

No, you meant that it doesn't work like your PC, which is irrelevant. Who cares if it's binary or trinary, or if it has transistors or not? Those things are not requisites for a computer.

tensordyne
8th July 2012, 01:47 PM
Yes but when you say that it doesn't make it clear that the physical connectivity of the neurons is important. I honestly think piggy ( and others ) thinks the computationalist position is that the brain is like a stored program computer, that it can just be loaded up with any set of instructions and run any program because the architecture is so generalized.

It is interesting listening to the thoughts of someone you disagree with as they try and figure out your mindset. Since I am '( and others )' I think it is appropriate if I give a response.

Piggy has already addressed why he thinks the comp.lit position is wrong and it does not have anything to do with contentions over where the program in the brain is stored, so far as I can tell.

From my own perspective, the bone I have to pick is that the comp position does not in any way address how the experience of perception works. Perhaps because the very concept of sensory experience is missing in some essential way from the Comp position mindset? Hard to say.

I only know that if you understand what consciousness is in terms of sensation (consciousness as just a bundle of sensations for some period of time), then the idea of computation being what gives rise to it is obviously absurd. Computation can only rearrange, it does not create, destroy or sustain the existence of an aspect of the Universe. To study those kinds of questions about the Universe you need Physics (in this case Psychophysics).

And that is it really.

steenkh
8th July 2012, 02:04 PM
In the above quote the word digital was used when the correct word probably would have been binary (it is in bold for your convenience).
You are right, I meant to say "binary", but I was distracted while writing it. Thanks for understanding the meaning.

Digital just means discontinuous and discrete. I also doubt it is hard to distinguish between voltages to set up trinary gate systems. It just involves more circuitry.
Yes, obviously, but it goes without saying that trinary systems would also need to be more cost-efficient than the prevailing binary systems.

It is interesting to point out that anytime analog signals are being processed in a machine, then technically the operations of said machine are outside the paradigm of Turing Based Computation. Of course, one can use Turing Computation to approximate the results (if they are repeatable) of mechanisms that use analog processing.
In reality, all analog systems will work to a certain resolution, and a digital value can emulate this to the same resolution.
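A minimal sketch of that point (my own illustration, with an arbitrary value and resolution): an "analog" quantity that is only meaningful to some resolution can be represented digitally with no further loss at that resolution.

```python
# Illustrative sketch: digital emulation of an analog value to a given
# resolution. The value and the 0.001 resolution are arbitrary assumptions.

def quantize(x, resolution):
    """Round x to the nearest multiple of `resolution`."""
    return round(x / resolution) * resolution

analog_value = 0.7371929
resolution = 0.001   # assume the analog system is only reliable to this level

digital_value = quantize(analog_value, resolution)

# The digital representation is within half a resolution step of the
# original, i.e. indistinguishable at the analog system's own accuracy.
print(abs(digital_value - analog_value) <= resolution / 2)   # True
```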

dlorde
8th July 2012, 03:55 PM
Btw, where are the symbols that you think the computer is manipulating?

Is it really manipulating symbols when it's operating, or is the manipulation of symbols more appropriate to the programming of the machine?
There are symbols in different contexts: the symbology of programming is the conceptual reification of the computational abstractions representing patterns of interaction within the system. Programming establishes a logical architecture for processing; the symbology is for the convenience of the programmer. When operating, the system manipulates actual data, although in processes at higher levels of abstraction, e.g. conceptual consciousness (thinking about ideas), it can be considered symbolic. This is an operational, not structural, symbolism.

I was only talking about symbolic, as opposed to physical computations.
Strictly speaking, symbolic computations are physical computations, in the abstract; they embody the conceptual relationships between computations (and their data) that manipulate real data in a particular context. 'Symbolic Computation (http://en.wikipedia.org/wiki/Symbolic_computation)' as generally defined refers to manipulation of algebraic or mathematical expressions, and I wouldn't suggest that the brain explicitly does this (except at high levels of abstraction, e.g. in the meta-cognitive sense of a mathematician or algebraist at work). But I'm not sure that the manipulation of algebraic or mathematical expressions has any particular relevance to computational or biological consciousness.

That's what I mean by symbolic computation requiring a reader.
So 'reader' == interpreter, as I suggested. Interpreting the symbolic output in terms of the physical relationships it abstractly represents, in the given context.

I don't think neural work in the brain is symbolic
I'm not entirely sure what you mean by 'symbolic' in this context (i.e. why it's relevant). I'm happy to accept the brain doesn't symbolically manipulate algebraic or mathematical expressions, although its structure and connections can be viewed as instantiating certain mathematical or logical expressions, and can be modified.

I don't think the neural work in a computer running a paint sprayer is symbolic either.
:confused: 'neural work' in a computer running a paint sprayer?? I have no idea what you are talking about.

what I'm asking my pocket calculator to do is essentially symbolic. That is, from my point of view. What it actually does, without regard to me, is not symbolic at all.
I agree it is just a point of view. The physical processes do their stuff, and the symbolism is just a convenience you use to help you describe and understand what's going on as those processes do their stuff at various levels of abstraction.

From the physicalist point of view, what's important to understand in the production of consciousness are the physical computations of the brain, not the symbolic computations, so you don't need a reader.
I don't see why one might not have symbolic processing internally with a 'reader' incorporated to 'decode' the output - but I don't see that it matters here.

The result of a physical computation is simply a new shape of the physical stuff involved in the computation. That's how the brain works, by constantly reshaping itself physically.
It does that; restructuring the connectome is only part of the story, but it can be viewed as implicitly manipulating neural circuit expressions.

Feedback loops are everywhere in the brain, including areas that have no effect on consciousness, so yeah, you could say SRIP is going on here, but that's a trivial observation.
It's a question of the level of abstraction - SRIP seems to occur at every level.

dlorde
8th July 2012, 04:00 PM
... Consciousness is not behavior (contra to the cousin of the computationalists, the behaviorists), it is an internal state of being.
If there is activity, internal or otherwise, there is behaviour. It need not be motor activity. Consciousness is a process, it involves neural activity, so it is a behaviour (or aggregate of behaviours).

dlorde
8th July 2012, 04:05 PM
.. if the brain is a computer (or a kind of computer), it's working in non-computerlike ways.

For example? I'm curious to know in what ways you feel the brain doesn't function like a computer (http://en.wikipedia.org/wiki/Computer).

Mr. Scott
8th July 2012, 04:11 PM
So, what you're saying is that the brain is a trinary pulse-coded switched digital network?

I'm fine with that (if the evidence supports it). That makes it a computer.


Yes, it's a pulse-coded switched digital network. That makes it a computer.

A binary computer can emulate or simulate a trinary computer. In fact, most computers use tristate logic in their bus systems (http://en.wikipedia.org/wiki/Three-state_logic), but it's really just to simplify what would otherwise have been a network of binary OR gates.
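The emulation claim is easy to demonstrate with a sketch (my own illustration; balanced ternary with digits -1, 0, +1 is the scheme the Soviet Setun machine used). On a binary machine, each ternary digit ("trit") is simply stored as a small binary integer:

```python
# Illustrative sketch: balanced-ternary arithmetic (digits -1, 0, +1)
# emulated on an ordinary binary computer.

def to_balanced_ternary(n):
    """Return the balanced-ternary digits of n, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit of 2 becomes -1 with a carry into n
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits

def from_balanced_ternary(digits):
    """Evaluate digits (least significant first) back to an integer."""
    return sum(d * 3**i for i, d in enumerate(digits))

print(to_balanced_ternary(5))    # [-1, -1, 1], i.e. 9 - 3 - 1 = 5
print(from_balanced_ternary(to_balanced_ternary(-7)))    # -7
```

Nothing trinary is needed in the hardware: the round trip through binary integers is exact, which is the general point about one kind of computer emulating another.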

dlorde
8th July 2012, 04:13 PM
Are there any trinary computers out there?
Yes, as a matter of fact, there are (they're usually called 'ternary computers (http://en.wikipedia.org/wiki/Ternary_computer)').

You didn't address that quote: "Human brains store information using many different strategies, none of which are very analogous to binary in a digital computer."
Paul King, Computational Neuroscientist

The analogy is with the concept of digital computers in general, rather than specific common implementations. As the Blue Brain project shows, digital systems can emulate biological brains and their storage strategies.
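To illustrate the kind of digital emulation meant here, a toy leaky integrate-and-fire neuron can be stepped forward on any ordinary computer (all constants are illustrative placeholders, vastly simpler than the biophysical models projects like Blue Brain actually use):

```python
# Illustrative sketch: a leaky integrate-and-fire neuron simulated digitally.
# Constants (dt, tau, threshold, drive) are arbitrary illustrative choices.

def simulate_lif(input_current, dt=0.001, tau=0.02, threshold=1.0):
    """Integrate an input current trace; return spike times in seconds."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward zero
        # and is driven up by the input current.
        v += dt * (-v / tau + current)
        if v >= threshold:      # threshold crossing: emit a spike and reset
            spikes.append(i * dt)
            v = 0.0
    return spikes

drive = [60.0] * 1000           # one second of constant input
spike_times = simulate_lif(drive)
print(len(spike_times) > 0)     # the model neuron fires repeatedly -> True
```

With no input the potential stays at zero and no spikes occur; with sufficient constant drive the neuron fires at a regular rate, and the whole process is ordinary digital arithmetic.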

dlorde
8th July 2012, 04:22 PM
In reality, all analog systems will work to a certain resolution, and a digital value can emulate this to the same resolution.

In any case, the Nyquist–Shannon sampling theorem (http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem) says that sampling at more than twice the highest frequency present in a band-limited analogue signal is sufficient to reconstruct it exactly.
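A small sketch of the theorem in action (my own illustration: a pure 5 Hz sine sampled at 50 Hz, reconstructed over a finite window by Whittaker-Shannon sinc interpolation, so the match is close rather than mathematically exact):

```python
import math

# Illustrative sketch: sampling a band-limited signal above its Nyquist
# rate and reconstructing it between sample points. The 5 Hz signal,
# 50 Hz sample rate, and 10 s window are arbitrary assumptions.

def sinc(x):
    """Normalized sinc function, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def signal(t):
    return math.sin(2 * math.pi * 5 * t)    # highest frequency: 5 Hz

fs = 50.0                                   # well above the 10 Hz Nyquist rate
samples = [signal(n / fs) for n in range(500)]   # 10 s of samples

def reconstruct(t):
    """Whittaker-Shannon interpolation from the stored samples."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

t = 5.0123                                  # an instant between sample points
print(abs(reconstruct(t) - signal(t)))      # small truncation error
```

Because the stored samples cover only a finite window, the sum is a truncation of the exact interpolation series; mid-window the error is small, which is the practical sense in which the digital record captures the analogue signal.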