
Why should you be emotionally invested in strong AI?



rocketdodger
16th December 2009, 10:27 AM
I am aware that there are many individuals emotionally invested in the notion that strong AI is false. I can understand why -- the idea that our consciousness has some magical component that is beyond the boring and cold scientifically understood world is one that can be tempting.

But there are also very good emotional reasons to support the notion that strong AI is true.

Because if strong AI is true, the implications aren't limited to a realization that we are all just meatbags having the illusion of non-deterministic thought. There is so much more.

If strong AI is true, humans will someday be able to upload their consciousness into any suitable substrate. This is not only the closest thing to immortality that could be available in our universe, but it is also far better than mere immortality. The ability to upload our consciousness implies the ability to modify it as well -- in any way we desire. Being able to upload also implies the ability to travel at lightspeed between suitable locations. So if you are interested in living forever, or living in any way you could possibly think of, you should want strong AI to be true.

Now you might say "but religion tells us that immortality is available now -- we don't have to wait for the technology, which might not arrive for hundreds of years."

To that I reply that my own estimates put the arrival time of such capability at less than 50 years from now. In fact such a thing might be possible within 20 years or so, and economic feasibility would follow within a few decades.

But even if you can't wait that long, you can always freeze yourself. Because an implication of strong AI being true is that technology to thaw you and bring you back to life is no longer relevant -- all that is needed is the technology to scan your frozen brain and extract the topography of the neural networks contained within. After that, the upload technology takes care of everything else.

Hopefully I have shown that there are emotional reasons to support strong AI that are just as good as, if not better than, those for opposing it.

Ladewig
16th December 2009, 10:55 AM
I am not very knowledgeable in this area so forgive my question, but why are you assuming that strong AI will allow the ability to upload individuals' consciousness?

Meadmaker
16th December 2009, 11:01 AM
But even if you can't wait that long, you can always freeze yourself.

You do realize that this part is a bunch of hooey, don't you?

Third Eye Open
16th December 2009, 11:21 AM
This is all very immoral.

Richard Masters
16th December 2009, 11:29 AM
You do realize that this part is a bunch of hooey, don't you?

It's a risky form of insurance, but if we can unfreeze people while preserving their organs at some point in the future, then it's not exactly a bunch of hooey.

rocketdodger
16th December 2009, 12:44 PM
I am not very knowledgeable in this area so forgive my question, but why are you assuming that strong AI will allow the ability to upload individuals' consciousness?

Because an implication of strong AI is that there is no loss of "essence" of consciousness, whatever that may be, when the intelligence is instantiated upon a non-biological substrate.

That is, if AI can really be conscious, then there is no logical reason why our own consciousness could not be transferred to a non-biological substrate.

rocketdodger
16th December 2009, 12:47 PM
You do realize that this part is a bunch of hooey, don't you?

I think it is a bunch of hooey that those frozen people will ever be un-frozen and "fixed" or whatever they think will happen.

I do not think it is implausible that at some point in the future -- certainly within a few hundred years, much sooner if my estimates are correct -- there will be technology to extract from a frozen brain the information needed to completely reconstruct a model of the neurons and their connections.

Assuming the brain is kept at a low enough temperature (maybe they aren't currently, but that isn't a fault of my argument), there is no reason that any neural connections should degrade past the point of being able to recover the topography.

justcharlie09
16th December 2009, 12:48 PM
Now you might say "but religion tells us that immortality is available now -- we don't have to wait for the technology, which might not arrive for hundreds of years."

.

Oh yes, I think religion all too often does preach the availability of immorality. Just look at the recent sex scandals in the RCC.

No technology required.

....

Sorry, couldn't resist...

....

I'm a PKD and Asimov fan, too. AI would certainly make things more interesting, wouldn't it?

aggle-rithm
16th December 2009, 01:00 PM
Because an implication of strong AI is that there is no loss of "essence" of consciousness, whatever that may be, when the intelligence is instantiated upon a non-biological substrate.

That is, if AI can really be conscious, then there is no logical reason why our own consciousness could not be transferred to a non-biological substrate.

By "uploading" are you talking about moving consciousness from one place to another, or simply copying it?

If I uploaded my consciousness to a substrate, would I be me, or would the substrate-I be me? Would either me or the substrate-me be disposable? Is it sufficient for my personality and memory to survive for me to survive, from a subjective standpoint?

Can you envision a world in which everyone is constantly dying and being recreated, but nobody notices because the personalities and memories are continuous, and we are unable to experience our own death and resurrection subjectively?

roger
16th December 2009, 01:11 PM
By "uploading" are you talking about moving consciousness from one place to another, or simply copying it?

If I uploaded my consciousness to a substrate, would I be me, or would the substrate-I be me? Would either me or the substrate-me be disposable? Is it sufficient for my personality and memory to survive for me to survive, from a subjective standpoint?

Can you envision a world in which everyone is constantly dying and being recreated, but nobody notices because the personalities and memories are continuous, and we are unable to experience our own death and resurrection subjectively?
You are assuming a "me" that doesn't exist (according to the way of thinking espoused in this thread).

All you have is a brain, and processes running on the brain. The processes are self aware, and refer to something called 'me'. But there is no 'me' there.

If this is all true (and I think it is, I don't see any mechanism but the brain to create all this), then if you were to copy your brain state to silicon or whatever, the processes running on silicon would think "I'm still me!!!". Meanwhile, the processes running on the brain would think "hey, that silicon sure is acting a lot like me".

It is absolutely no different from the fact that you don't have my consciousness. That's not because there is a "you" and "me", but because the processes running in each of our brains are not networked in any way.

It's also no different, given these assumptions, than you falling asleep and waking up. There is no 'me' thing - it's just that the processes that are running in your brain now have access to memory of previous times when they were active, and your brain creates the fiction of a 'me'. But, if we swapped out your brain for silicon you couldn't tell. If we ran 10,000 instances of your brain on different computers, none of those 10,000 could tell - they'd all still think they were the you of the meat brain.

PixyMisa
16th December 2009, 01:32 PM
Can you envision a world in which everyone is constantly dying and being recreated, but nobody notices because the personalities and memories are continuous, and we are unable to experience our own death and resurrection subjectively?
Wil McCarthy's The Collapsium (http://www.amazon.com/Collapsium-Wil-McCarthy/dp/055358443X).

rocketdodger
16th December 2009, 01:38 PM
By "uploading" are you talking about moving consciousness from one place to another, or simply copying it?

One and the same, according to the computational model.

If I uploaded my consciousness to a substrate, would I be me, or would the substrate-I be me?

Both would be you for an instant, after which they would diverge. If it worries you, go ahead and keep a pistol near the original, like in The Prestige.

Would either me or the substrate-me be disposable?

Up to you.

Is it sufficient for my personality and memory to survive for me to survive, from a subjective standpoint?

Yes. Your subjective experience could be recreated (or restarted, as it were) from the neural map data. Kind of like what happens when you go to sleep and then wake up -- there is a break in subjective experience but you are still the same person nonetheless.

Can you envision a world in which everyone is constantly dying and being recreated, but nobody notices because the personalities and memories are continuous, and we are unable to experience our own death and resurrection subjectively?

Yes, in theory it would be identical to your experience right now.

But furthermore I can envision even a world where you can recombine with other instances of yourself and merge the memories.

Think about it -- try to remember a place you have been, and the stuff you did there. Now ask yourself this -- were you thinking about that, at all, prior to me mentioning it in the above statement? That is what it would be like to acquire memories from another copy of yourself -- you wouldn't know the difference.

Pretty cool stuff, if strong AI is true.

Frank Newgent
16th December 2009, 02:01 PM
Reminds me for some reason of The Three Stigmata of Palmer Eldritch (http://en.wikipedia.org/wiki/The_Three_Stigmata_of_Palmer_Eldritch)... and the hallucinogen to be marketed as "Chew-Z" with its slogan: God promises eternal life. We can deliver it.

Very entertaining read.

kellyb
16th December 2009, 02:09 PM
Yes. Your subjective experience could be recreated (or restarted, as it were) from the neural map data. Kind of like what happens when you go to sleep and then wake up -- there is a break in subjective experience but you are still the same person nonetheless.

That makes perfect sense but seems so intuitively incorrect at the same time.

If I were to upload my brain into a robot brain right now, there would be two of "me", but it seems like the "real me" would still be the one operating in this body. And when my body dies, "real me" would die, too, even if the duplicate lived on.

Bikewer
16th December 2009, 02:16 PM
Quite a while back, I read a discussion on the concept in OMNI magazine. The idea (according to the article) would be that at the point one was ready to make the "transfer" (or upload or whatever), one would indulge in switching back-and-forth from the body's POV to the machine POV.
Eventually, the ability to tell the difference (provided sufficient computational power) would be such that you could dispose of the biological body and simply continue on as an AI.

Fred Pohl had a nice exploration of what this might be like in one of his HeeChee novels; with the ability to "think" so much faster than a human that speaking to one would be an exercise in multi-tasking to avoid being bored....
It's an interesting notion, but one that's down the road a bit.

Gate2501
16th December 2009, 02:16 PM
That makes perfect sense but seems so intuitively incorrect at the same time.

If I were to upload my brain into a robot brain right now, there would be two of "me", but it seems like the "real me" would still be the one operating in this body. And when my body dies, "real me" would die, too, even if the duplicate lived on.

You should go read the "teleporter" thread if it still exists. It really challenged my thinking on this matter, and brought me over to RD's position after much internal debating.

Gate2501
16th December 2009, 02:19 PM
Quite a while back, I read a discussion on the concept in OMNI magazine. The idea (according to the article) would be that at the point one was ready to make the "transfer" (or upload or whatever), one would indulge in switching back-and-forth from the body's POV to the machine POV.
Eventually, the ability to tell the difference (provided sufficient computational power) would be such that you could dispose of the biological body and simply continue on as an AI.

Fred Pohl had a nice exploration of what this might be like in one of his HeeChee novels; with the ability to "think" so much faster than a human that speaking to one would be an exercise in multi-tasking to avoid being bored....
It's an interesting notion, but one that's down the road a bit.

That is a really good idea, and would make me much more comfortable. You could have the bio body and the AI share memory for the switching period so that it feels very seamless to the mind being transferred.

Maia
16th December 2009, 02:20 PM
Two questions.

1.) RD, what are you smoking?
2.) Can I have some? :)

I Ratant
16th December 2009, 02:24 PM
It's a risky form of insurance, but if we can unfreeze people while preserving their organs at some point in the future, then it's not exactly a bunch of hooey.
.
Storing a bunch of people, with the population increasing as it... is it realistic to expect the future would -want- more people?
Especially as in the typical sci-fi scenario, these corpsicles are infected with probably long-erased diseases for which the living population has no resistance.

rocketdodger
16th December 2009, 02:28 PM
That is a really good idea, and would make me much more comfortable. You could have the bio body and the AI share memory for the switching period so that it feels very seamless to the mind being transferred.

Yes.

In fact, this is the solution that I proposed in the teleporter thread. If you are hesitant about your consciousness being reduced to numbers, then you can always use this slower option.

My version was to teleport neurons one at a time, so that at any given instant neurons at the source are getting impulses from neurons at the destination and vice versa, until all the neurons have been moved to the destination.

kellyb
16th December 2009, 02:31 PM
Quite a while back, I read a discussion on the concept in OMNI magazine. The idea (according to the article) would be that at the point one was ready to make the "transfer" (or upload or whatever), one would indulge in switching back-and-forth from the body's POV to the machine POV.
Eventually, the ability to tell the difference (provided sufficient computational power) would be such that you could dispose of the biological body and simply continue on as an AI.



You'd have to be able to instantly upload the new info from the machine's "experience" back into the shutdown bio body's brain before "restarting" the bio brain, wouldn't you, to make that work?

westprog
16th December 2009, 02:35 PM
That makes perfect sense but seems so intuitively incorrect at the same time.


That doesn't mean that it's wrong, of course.


If I were to upload my brain into a robot brain right now, there would be two of "me", but it seems like the "real me" would still be the one operating in this body. And when my body dies, "real me" would die, too, even if the duplicate lived on.

All that means is that consciousness is an entirely unphysical phenomenon. It's an idea, but there's no need to take it too seriously.

westprog
16th December 2009, 02:38 PM
"essence" of consciousness, whatever that may be


Getting to the belief in an "essence" of consciousness might be that difficult first step for some people. But if you can just bring yourself to accept it -- look, eternal life!

Piscivore
16th December 2009, 03:01 PM
I am aware that there are many individuals emotionally invested in the notion that strong AI is false. I can understand why -- the idea that our consciousness has some magical component that is beyond the boring and cold scientifically understood world is one that can be tempting.

But there are also very good emotional reasons to support the notion that strong AI is true.

I'm sorry, did you just ask us to stop uncritically accepting a speculative, unevidenced belief just because it is pleasing, and instead uncritically accept your contradictory speculative, unevidenced belief because it is also pleasing?

Yeah... no.

kellyb
16th December 2009, 03:11 PM
I'm sorry, did you just ask us to stop uncritically accepting a speculative, unevidenced belief just because it is pleasing, and instead uncritically accept your contradictory speculative, unevidenced belief because it is also pleasing?

Yeah... no.

What do you see as being fundamentally illogical about "digital immortality"?

westprog
16th December 2009, 03:15 PM
What do you see as being fundamentally illogical about "digital immortality"?

He didn't say it was fundamentally illogical. It isn't. Given the premises, it makes sense.

However, it is entirely unsupported by evidence. Which, if you were asked to literally bet your life, might give one pause. Or not. I can well imagine people queuing up in a few years time to have their brain patterns transcribed. You will lose a few thousand dollars, my friends, but I am promising you eternal life!

westprog
16th December 2009, 03:20 PM
You'd have to be able to instantly upload the new info from the machine's "experience" back into the shutdown bio body's brain before "restarting" the bio brain, wouldn't you, to make that work?

Not if you put Kellyb into a coma before creating Kellyc.

rocketdodger
16th December 2009, 03:20 PM
All that means is that consciousness is an entirely unphysical phenomenon. It's an idea, but there's no need to take it too seriously.

Nor is there a need to take any of the other stuff seriously. You know, like religion and all that jazz.

Yet, billions do, and wars are fought over it.

Funny how that works.

kellyb
16th December 2009, 03:21 PM
He didn't say it was fundamentally illogical. It isn't. Given the premises, it makes sense.

However, it is entirely unsupported by evidence. Which, if you were asked to literally bet your life, might give one pause. Or not. I can well imagine people queuing up in a few years time to have their brain patterns transcribed. You will lose a few thousand dollars, my friends, but I am promising you eternal life!

Well, he said it was not only speculative but also "contradictory". I'm not seeing it.

kellyb
16th December 2009, 03:24 PM
Not if you put Kellyb into a coma before creating Kellyc.

Right...but if you're going to bounce back and forth, you've got to get the KellyC experience into the comatose KellyB before awakening KellyB again.
If you're trying to actually switch back and forth.

rocketdodger
16th December 2009, 03:24 PM
I'm sorry, did you just ask us to stop uncritically accepting a speculative, unevidenced belief just because it is pleasing, and instead uncritically accept your contradictory speculative, unevidenced belief because it is also pleasing?

Yeah... no.

No, I didn't. I simply gave some reasons why the belief is pleasing, and suggested that you become emotionally invested in the belief if you find those reasons valid.

I don't recall ever saying you should accept strong AI. Perhaps you could quote me?

Oh, wait, you can't, because I never said it.

rocketdodger
16th December 2009, 03:26 PM
Right...but if you're going to bounce back and forth, you've got to get the KellyC experience into the comatose KellyB before awakening KellyB again.
If you're trying to actually switch back and forth.

Yes, correct.

That scenario would require the ability to modify neurons via an external mechanism, so that kellyB's brain topography would match kellyC's.

rocketdodger
16th December 2009, 03:27 PM
Well, he said it was not only speculative but also "contradictory". I'm not seeing it.

He meant contradictory to the idea of consciousness being magical, not self-contradictory.

I never say anything self-contradictory ... if I can help it :)

kellyb
16th December 2009, 03:35 PM
He meant contradictory to the idea of consciousness being magical, not self-contradictory.

I never say anything self-contradictory ... if I can help it :)

Oh. lol.

So, do you think we'd have to work in the digital netherworld? To pay the living to service our computers or whatever?

roger
16th December 2009, 03:45 PM
However, it is entirely unsupported by evidence.

In what way? All available evidence indicates that our brain creates our consciousness, that the brain is wholly physical, and that there is nothing specific to the hardware of the brain that cannot be duplicated (after all, we duplicate it every time we have a baby). It is proven that memories are stored in the brain. It is proven that by stimulating neurons we can cause people to reexperience them. It is proven that destruction of neurons can lead to destruction of memory, and alteration or destruction of personality.

It's superbly supported by the evidence, so far as I can see.

Darat
16th December 2009, 03:45 PM
...snip...

If I were to upload my brain into a robot brain right now, there would be two of "me", but it seems like the "real me" would still be the one operating in this body. And when my body dies, "real me" would die, too, even if the duplicate lived on.

No, there wouldn't be, and that holds even if we follow the speculation in the opening post. What there would be is two very similar people who share an identical history.

Piscivore
16th December 2009, 03:51 PM
No, I didn't. I simply gave some reasons why the belief is pleasing, and suggested that you become emotionally invested in the belief if you find those reasons valid.

I don't recall ever saying you should accept strong AI. Perhaps you could quote me?

Oh, wait, you can't, because I never said it.

You said "But there are also very good emotional reasons to support the notion that strong AI is true." As far as I'm concerned, there are NO "good emotional reasons" to support any notion as true.

Richard Masters
16th December 2009, 03:54 PM
.
Storing a bunch of people, with the population increasing as it... is it realistic to expect the future would -want- more people?
Especially as in the typical sci-fi scenario, these corpsicles are infected with probably long-erased diseases for which the living population has no resistance.

I'm not arguing for or against it. It's actually kind of creepy in a way, especially because the typical scenario involves decapitating the individual and keeping the head. (The rest of the body, if desired, can theoretically be reconstructed from the individual's DNA).

kellyb
16th December 2009, 03:56 PM
No there wouldn't be, and that holds even if we follow the speculation in the opening post, what there would be is two very similar people who share an identical history.

I guess what I meant was, two people both having equal rights to claim "the real Kellyb" status. And if they both exist, functioning, at the same time, the biological "body-located consciousness" ("me") can't escape the non-existence of death.

It's almost like there's some sort of paradox in there, but I'm having trouble wrapping my mind around it. But it seems like having the simulated consciousness operating at the same time as the bio consciousness renders the simulated consciousness insufficient for the purpose of escaping death.

Or something. I think.:boggled:

westprog
16th December 2009, 04:12 PM
In what way? All available evidence indicates that our brain creates our consciousness, that the brain is wholly physical, and that there is nothing specific to the hardware of the brain that cannot be duplicated (after all, we duplicate it every time we have a baby). It is proven that memories are stored in the brain. It is proven that by stimulating neurons we can cause people to reexperience them. It is proven that destruction of neurons can lead to destruction of memory, and alteration or destruction of personality.

It's superbly supported by the evidence, so far as I can see.

That the brain is a physical object is fairly clear. This theory makes the claim that a different physical object is the same brain. Not similar, not likely to behave in the same way, but exactly the same thing. That's the contention which is unsupported by the evidence. What's the difference between "essence of consciousness" and "immortal soul"?

The very fact that consciousness is located in a physical object implies that it is tied to that physical object. A big, big leap of faith is required to say that it isn't.

westprog
16th December 2009, 04:14 PM
I guess what I meant was, two people both having equal rights to claim "the real Kellyb" status. And if they both exist, functioning, at the same time, the biological "body-located consciousness" ("me") can't escape the non-existence of death.

It's almost like there's some sort of paradox in there, but I'm having trouble wrapping my mind around it. But it seems like having the simulated consciousness operating at the same time as the bio consciousness renders the simulated consciousness insufficient for the purpose of escaping death.

Or something. I think.:boggled:

IMO, kellyb and kellyc would be different people, and neither would be disposable at any stage. The idea that they would have the same consciousness seems to deny any physical attribute to consciousness at all.

roger
16th December 2009, 04:25 PM
That the brain is a physical object is fairly clear. This theory makes the claim that a different physical object is the same brain. Not similar, not likely to behave in the same way, but exactly the same thing. That's the contention which is unsupported by the evidence. What's the difference between "essence of consciousness" and "immortal soul"?

The very fact that consciousness is located in a physical object implies that it is tied to that physical object. A big, big leap of faith is required to say that it isn't.
My bolding.

The assumptions say no such thing. What they say is that the same processes will run in the same way (though of course they will start to diverge as each set of processes receive different sensory data). Each set will be self conscious. Each consciousness will have the same memory and perceived continuity.

As for "essence of consciousness", I have no idea what that is or means. The claim is merely that the processes in the brain are complex enough to be self aware.

kellyb
16th December 2009, 04:25 PM
That the brain is a physical object is fairly clear. This theory makes the claim that a different physical object is the same brain. Not similar, not likely to behave in the same way, but exactly the same thing.

I think the claim is that consciousness is a synthesis of several processes that happen within and as a result of a human brain. And that a computer could, in theory, replicate those processes. (I admit I'm skeptical of that last bit myself.)

rocketdodger
16th December 2009, 04:26 PM
So, do you think we'd have to work in the digital netherworld? To pay the living to service our computers or whatever?

Yes, in some form or another.

Of course, the work will likely consist of thinking rather than any kind of physical labor, since physical labor would be pointless in a simulated world.

rocketdodger
16th December 2009, 04:33 PM
You said "But there are also very good emotional reasons to support the notion that strong AI is true." As far as I'm concerned, there are NO "good emotional reasons" to support any notion as true.

Wait, I think you are grouping the words in a way different than me.

I mean there are good emotional reasons that an individual should support an unproven notion as true, i.e. assuming the notion is true would provide emotional benefit over the contrary.

I do not mean that there are good emotional reasons that support an unproven notion as being true.

Does that clear it up?

Frank Newgent
16th December 2009, 04:34 PM
I'm not arguing for or against it. It's actually kind of creepy in a way, especially because the typical scenario involves decapitating the individual and keeping the head.


At a cost of $120,000 they throw in a can of tuna fish.


Johnson writes that in July 2002, shortly after the Red Sox slugger died at age 83, technicians with no medical certification gleefully photographed and used crude equipment to decapitate the majors' last .400 hitter.

Williams' severed head was then frozen, and even used for batting practice by a technician trying to dislodge it from a tuna fish can.

http://www.nydailynews.com/news/national/2009/10/02/2009-10-02_book_reveals_chilling_details_of_how_cryonic_lab_thumped_remains_of_baseball_imm.html

rocketdodger
16th December 2009, 04:35 PM
As for "essence of consciousness", I have no idea what that is or means.

I mean any aspect of consciousness that various individuals hold to be essential, such as the popular "qualia."

Ladewig
16th December 2009, 04:39 PM
Because an implication of strong AI is that there is no loss of "essence" of consciousness, whatever that may be, when the intelligence is instantiated upon a non-biological substrate.

That is, if AI can really be conscious, then there is no logical reason why our own consciousness could not be transferred to a non-biological substrate.

Now I see where we diverge. All of the sources I have looked at define strong AI as the ability to program a computer to have all the same traits as a conscious person. Strong AI is creating a consciousness from scratch, not simply copying every memory, every reasoning ability, every language, and every idiosyncrasy that one finds in a particular person's mind. I can imagine strong AI passing a Turing Test 20 years from now. Mapping every element in a person's mind is, I believe, at least 120 years away.

Meadmaker
16th December 2009, 04:39 PM
I do not think it is implausible that at some point in the future -- certainly within a few hundred years, much sooner if my estimates are correct -- there will be technology to extract from a frozen brain the information needed to completely reconstruct a model of the neurons and their connections.



I think you are underestimating the damage to the brain done by freezing. It might still be edible, but beyond that, I think there would be issues.

However, that isn't really the interesting part of the thought experiment. (No pun intended.) I think the interesting thing is the idea of duplicate consciousness.

The key element of this consciousness transfer is that the memories and thought patterns survive the transfer. Personally, I don't think that will ever happen. I think the information necessary is encoded in elements too small to be measured in a nondestructive manner, or even in a destructive manner in the time required. I think the bottom of the brain will be dead before you finish reading the top, and I think some of the information is only retained while the process is still running.

On the other hand, I think we will make an apparently conscious entity purely from silicon, and we will be able to duplicate those entities.

Piscivore
16th December 2009, 04:40 PM
The key element of this consciousness transfer is that the memories and thought patterns survive the transfer. Personally, I don't think that will ever happen. I think the information necessary is encoded in elements too small to be measured in a nondestructive manner, or even in a destructive manner in the time required. I think the bottom of the brain will be dead before you finish reading the top, and I think some of the information is only retained while the process is still running.

This.

rocketdodger
16th December 2009, 04:45 PM
At a cost of $120,000 they throw in a can of tuna fish.

lol, that is some shady stuff man.

roger
16th December 2009, 04:47 PM
I mean any aspect of consciousness that various individuals hold to be essential, such as the popular "qualia."
Okay, I didn't notice when that term was first introduced.

So, to elaborate to WestProg, if we accept that the brain is physical, and that consciousness arises from the processes running on the brain, then that is all there is. There is no separate 'me', 'soul', or whatever. If you were to get hit on the head, go unconscious, then reawaken, there is no connection between the processes running before you got hit and after, other than the stored brain states and memories. (that's a bit science-fictiony I know because the whole brain doesn't stop, just the conscious parts). What you have after waking up is some processes running and aware of themselves, calling themselves "me", and thinking they are the same "me" that was running 10 minutes ago.

But that identity doesn't make any real sense, does it? There is no 'me', just a process that is currently self aware, and has memories of being self aware earlier.

There is no difference that I can see, or that anyone has posited, between that shutdown/startup and what would be caused by moving the processes to a different substrate. You'd have some processes running on that substrate thinking "I'm me" with a memory of being in your body 10 minutes ago also thinking "I'm me". Processes aren't things, so it makes no sense to talk about whether they are the 'same' or not - same as in identity. They may be identical, as in their structure, but there is no identity.

Anyway, this has been argued 100,000 times on this forum. I don't have the strength/interest to go over it again. The OP is arguing more for a reason to be excited about the possibility of strong AI being true, not about whether it is true.

blobru
16th December 2009, 04:48 PM
I guess what I meant was, two people both having equal rights to claim "the real Kellyb" status. And if they both exist, functioning, at the same time, the biological "body-located consciousness" ("me") can't escape the non-existence of death.

It's almost like there's some sort of paradox in there, but I'm having trouble wrapping my mind around it. But it seems like having the simulated consciousness operating at the same time as the bio consciousness renders the simulated consciousness insufficient for the purpose of escaping death.

Or something. I think.:boggled:


Hi Kelly.

A NFB Canada cartoon (http://www.youtube.com/watch?gl=CA&hl=en&v=pdxucpPq6Lc) I uploaded to rocketdodger's transporter thread (http://forums.randi.org/showthread.php?t=133721) you might enjoy:

I'm like you on this question (I think). Rocketdodger's premises for a computer-uploaded consciousness surviving death are far sounder than any religion's to date; however, I sometimes have trouble getting my head around the notion of physically separate points-of-view, each being "me". Anyway, the above short (10 min.) addresses the question in cartoon form.

Piscivore
16th December 2009, 04:48 PM
Wait, I think you are grouping the words in a way different than me.

I mean there are good emotional reasons that an individual should support an unproven notion as true, i.e. assuming the notion is true would provide emotional benefit over the contrary.

I do not mean that there are good emotional reasons that support an unproven notion as being true.

Does that clear it up?

I do see the (minor) difference, but both are still resorting to a fallacy (http://en.wikipedia.org/wiki/Appeal_to_emotion). It's not any less an error for an individual to assume a notion is true because of the benefits one gets pretending it is, than it is for relying on emotional reasons as evidence.

ArcturusA
16th December 2009, 04:49 PM
I'm having trouble with the idea that the consciousness can be transferred, even if strong AI is true.

If we imagine brain activity (including consciousness) simply as a program running on a computer, then you can't move that instance onto another processor. You can send all the information, including the program to run and the current state of the original instance to a new processor, duplicating it then letting it diverge, but it's a different instance, not a continuation of the original.

As far as I know (and correct me if I'm wrong!), it's not possible to transfer a program onto new hardware while it's running.
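
A minimal sketch of that point in Python (the Counter class and the numbers are purely illustrative): treating a "running program" as nothing but its current state, copying that state gives you a second instance that immediately starts to diverge, rather than a continuation of the original.

import copy

class Counter:
    """Stands in for a running process: all that matters here is its state."""
    def __init__(self, state=0):
        self.state = state

    def step(self, delta):
        self.state += delta
        return self.state

original = Counter()
original.step(1)                    # the original runs for a while; state == 1

snapshot = copy.deepcopy(original)  # the "upload": duplicate the current state

original.step(10)                   # from here the two instances receive different inputs...
snapshot.step(-10)                  # ...and diverge from the moment of the copy

print(original.state)               # 11
print(snapshot.state)               # -9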

rocketdodger
16th December 2009, 04:50 PM
Now I see where we diverge. All of the sources I have looked at define strong AI as the ability to program a computer to have all the same traits as a conscious person. Strong AI is creating a consciousness from scratch, not simply copying every memory, every reasoning ability, every language, and every idiosyncrasy that one finds in a particular person's mind. I can imagine strong AI passing a Turing Test 20 years from now. Mapping every element in a person's mind is, I believe, at least 120 years away.

Yeah but one implies the other, even if the time frames to the technology are very different.

Ladewig
16th December 2009, 04:51 PM
I can imagine strong AI passing a Turing Test 20 years from now.

Apparently I am a good guesser

http://en.wikipedia.org/wiki/Turing_test#Predictions
futurist Raymond Kurzweil predicted that Turing test-capable computers would be manufactured in the near future. In 1990, he set the year around 2020.[59] By 2005, he had revised his estimate to 2029.

rocketdodger
16th December 2009, 04:53 PM
I think you are underestimating the damage to the brain done by freezing. It might still be edible, but beyond that, I think there would be issues.

However, that isn't really the interesting part of the thought experiment. (No pun intended.) I think the interesting thing is the idea of duplicate consciousness.

The key element of this consciousness transfer is that the memories and thought patterns survive the transfer. Personally, I don't think that will ever happen. I think the information necessary is encoded in elements too small to be measured in a nondestructive manner, or even in a destructive manner in the time required. I think the bottom of the brain will be dead before you finish reading the top, and I think some of the information is only retained while the process is still running.

Well, what happens when you go to sleep? I would be fine with losing 12 hours of information that hasn't been encoded in actual physical neural connections. Everything beyond that is in principle recoverable from only the topography of your neurons (and supporting glia perhaps, but we can model that as well).

roger
16th December 2009, 04:53 PM
I'm having trouble with the idea that the consciousness can be transferred, even if strong AI is true.
Consider using the term 'duplicated' instead.

If you fall asleep your consciousness largely dissipates. When you wake up, processes start up again. How is that any different from the processes running on substrate A, then on substrate B?

The only reason you think there is a 'me' is because the currently self conscious processes have access to memories of processes being self conscious (so goes the argument).

Ladewig
16th December 2009, 04:53 PM
Yeah but one implies the other, even if the time frames to the technology are very different.

I really don't see why one implies the other. Can you or some other posters explain why that implication is valid?

rocketdodger
16th December 2009, 04:56 PM
Anyway, this has been argued 100,000 times on this forum. I don't have the strength/interest to go over it again. The OP is arguing more for a reason to be excited about the possibility of strong AI being true, not about whether it is true.

Yes, I cannot stress this enough.

The formal arguments about the truth of the premise are irrelevant here, just like formal arguments for the existence of God are irrelevant in a thread about what heaven or hell might be like.

rocketdodger
16th December 2009, 05:06 PM
I do see the (minor) difference, but both are still resorting to a fallacy (http://en.wikipedia.org/wiki/Appeal_to_emotion). It's not any less an error for an individual to assume a notion is true because of the benefits one gets pretending it is, than it is for relying on emotional reasons as evidence.

It may be an error when it comes to formal logic, but that doesn't mean the behavior is a good or bad choice for the entity in question.

ArcturusA
16th December 2009, 05:11 PM
Consider using the term 'duplicated' instead.

If you fall asleep your consciousness largely dissipates. When you wake up, processes start up again. How is that any different from the processes running on substrate A, then on substrate B?

The only reason you think there is a 'me' is because the currently self conscious processes have access to memories of processes being self conscious (so goes the argument).

I'm not worried about the issue of 'me' exactly, but rather that a program cannot be transferred, only duplicated, meaning there could be no continuous stream between substrate A and B.

But you brought up a good point that I hadn't considered - that one life isn't one continuous stream anyway.
So assuming that the process is restarted every time you wake up, the experience of duplicating your consciousness would be, for the duplicate, no different to waking up in the morning.

That pretty much removes my issue/misunderstanding. Thanks!

Piscivore
16th December 2009, 05:16 PM
It may be an error when it comes to formal logic, but that doesn't mean the behavior is a good or bad choice for the entity in question.

To my way of thinking, it does. It's one of the main reasons for me for ditching religion and stop trying to please a god who just don't answer back. No matter how warm and cuddly the idea may seem of having a SuperSkyDaddie who particularly cares about little old me, it just ain't ever gonna be a good use of my time or thinking pretending it's true just 'cause I might want it to be. I don't see any difference in embracing the same fuzzy, rubbish thinking to have pie-in-the-sky dreams about some slim possibility of transhumanism in the distant future.

Third Eye Open
16th December 2009, 05:23 PM
To my way of thinking, it does. It's one of the main reasons for me for ditching religion and stop trying to please a god who just don't answer back. No matter how warm and cuddly the idea may seem of having a SuperSkyDaddie who particularly cares about little old me, it just ain't ever gonna be a good use of my time or thinking pretending it's true just 'cause I might want it to be. I don't see any difference in embracing the same fuzzy, rubbish thinking to have pie-in-the-sky dreams about some slim possibility of transhumanism in the distant future.

The major difference is that no one is modeling their entire life around the possibility of it being true. No one is desperately trying to get others to model their lives around this slim chance.

It is fun to think and talk about though.

The Kilted Yaksman
16th December 2009, 05:24 PM
But furthermore I can envision even a world where you can recombine with other instances of yourself and merge the memories.

See: Altered Carbon, by Richard K. Morgan

I Ratant
16th December 2009, 05:28 PM
IMO, kellyb and kellyc would be different people, and neither would be disposable at any stage. The idea that they would have the same consciousness seems to deny any physical attribute to consciousness at all.
Kb and Kc would be identical after the enoberation of Kc, but only immediately.
Their world experiences would then diverge as they independently experience it.. Unless there's a Wifi connection.

rocketdodger
16th December 2009, 05:44 PM
I don't see any difference in embracing the same fuzzy, rubbish thinking to have pie-in-the-sky dreams about some slim possibility of transhumanism in the distant future.

But not all unknowns are equal.

In particular, the possibility is neither "slim" -- it is the logical conclusion of a widely accepted model of consciousness -- nor is it relegated to the distant future, since groups are already modeling entire portions of animal brains and human brain models will follow suit within 10 years according to some.

Malerin
16th December 2009, 06:13 PM
I am aware that there are many individuals emotionally invested in the notion that strong AI is false. I can understand why -- the idea that our consciousness has some magical component that is beyond the boring and cold scientifically understood world is one that can be tempting.

But there are also very good emotional reasons to support the notion that strong AI is true.

Because if strong AI is true, the implications aren't limited to a realization that we are all just meatbags having the illusion of non-deterministic thought. There is so much more.

If strong AI is true, humans will someday be able to upload their consciousness into any suitable substrate. This is not only the closest thing to immortality that could be available in our universe, but it is also far better than mere immortality. The ability to upload our consciousness implies the ability to modify it as well -- in any way we desire. Being able to upload also implies the ability to travel at lightspeed between suitable locations. So if you are interested in living forever, or living in any way you could possibly think of, you should want strong AI to be true.

Now you might say "but religion tells us that immortality is available now -- we don't have to wait for the technology, which might not arrive for hundreds of years."

To that I reply that my own estimates put the arrival time of such capability at less than 50 years from now. In fact such a thing might be possible within 20 years or so, and economic feasibility would follow within a few decades.

But even if you can't wait that long, you can always freeze yourself. Because an implication of strong AI being true is that technology to thaw you and bring you back to life is no longer relevant -- all that is needed is the technology to scan your frozen brain and extract the topography of the neural networks contained within. After that, the upload technology takes care of everything else.

Hopefully I have shown that there are emotional reasons to support strong AI that are just as good as, if not better than, those for opposing it.

This is so revealing. Now we know why you (and some other people here) are so emotionally invested in the computational model of consciousness. You think you can cheat death with it.

popscythe
16th December 2009, 06:16 PM
(Pretty) Good thread!

I love these types of discussions. However, other than an applause for the participants, I have nothing unique to add.

kellyb
16th December 2009, 06:17 PM
But not all unknowns are equal.

In particular, the possibility is neither "slim" -- it is the logical conclusion of a widely accepted model of consciousness -- nor is it relegated to the distant future, since groups are already modeling entire portions of animal brains and human brain models will follow suit within 10 years according to some.

I dunno. It seems hundreds of years off, to me. I'm not entirely sure humans are smart enough to ever figure out how to do it, really.
But I do agree that at least in theory, it is possible.

kellyb
16th December 2009, 06:26 PM
This is so revealing. Now we know why you (and some other people here) are so emotionally invested in the computational model of consciousness. You think you can cheat death with it.

Actually, we generally reject happy rainbow magical explanations for things long before considering ways people in the future might be able to remedy unfortunate realities like death.

Stop projecting.

Malerin
16th December 2009, 07:57 PM
Actually, we generally reject happy rainbow magical explanations for things long before considering ways people in the future might be able to remedy unfortunate realities like death.

Stop projecting.

Projecting, LOL.

So if you are interested in living forever, or living in any way you could possibly think of, you should want strong AI to be true.

RD doesn't even try to hide it (which I give him props for). He wants strong AI to be true because he thinks you can cheat death with this upload nonsense.

rocketdodger
16th December 2009, 08:01 PM
I dunno. It seems hundreds of years off, to me. I'm not entirely sure humans are smart enough to ever figure out how to do it, really.

I would agree with you, except being at the forefront of the computing industry I know how fast our tools are progressing (I sense jokes coming on...).

If you take into consideration the possibility of bootstrapping ourselves with intermediate tools, all bets are off on how fast progress can be.

rocketdodger
16th December 2009, 08:04 PM
This is so revealing. Now we know why you (and some other people here) are so emotionally invested in the computational model of consciousness. You think you can cheat death with it.

That, and the sexual implications.

rocketdodger
16th December 2009, 08:06 PM
RD doesn't even try to hide it (which I give him props for). He wants strong AI to be true because he thinks you can cheat death with this upload nonsense.

Yes, that is why I want it to be true.

On the other hand, I believe it is true for reasons that have nothing to do with emotion.

NewtonTrino
16th December 2009, 08:35 PM
I would agree with you, except being at the forefront of the computing industry I know how fast our tools are progressing (I sense jokes coming on...).

If you take into consideration the possibility of bootstrapping ourselves with intermediate tools, all bets are off on how fast progress can be.

I agree with what rocketdodger is saying. In fact I've been thinking along similar lines for a while. Probably either the next company I start or the one after that is going to be in this field. Actually I want to start in biological simulation and move into nanotech design but a lot of that dovetails into this nicely.

For laymen I think a lot of this computing stuff can be hard to understand. However, the rate of scaling of the hardware side has been insane. Software is seriously lagging behind at this point. Just keep in mind that as the linear density of devices goes up, you get density^2 more stuff on the chip. This is what's been driving "Moore's Law". We are barely scratching the surface of what's possible software-wise as we continue to scale up the hardware. I also have a whole rant on how Von Neumann computing is the wrong thing and we have to get past that before we get to brain-level simulation (IMHO).
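
A rough illustration of that scaling, with made-up numbers rather than real process data: if the linear feature size shrinks by some factor each generation, the device count per unit of chip area grows by roughly that factor squared.

def devices_per_area(base_count, linear_shrink, generations):
    """Device count after a number of generations of linear shrink."""
    return base_count * (linear_shrink ** 2) ** generations

base = 1_000_000   # hypothetical starting transistor count
shrink = 1.4       # illustrative ~1.4x linear shrink per generation

for gen in range(5):
    print(gen, int(devices_per_area(base, shrink, gen)))
# 1.4^2 is roughly 2, so each generation roughly doubles the count --
# the familiar Moore's Law doubling.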

I agree that strong AI and copying consciousness are two separate problems. However, they are very related and I have high confidence that by the time we solve the strong AI problem that the rest of it will fall out relatively easily.

There are massive implications to all of this which we can only begin to talk about on this thread. Immortality is the least of it. Group consciousness, integrating multiple viewpoints (maybe multiple selves), AI rebellions, AI slavery, transhumanism, galactic colonization etc.

I will also agree some of this may turn out to be impossible. Parts of this may be harder than others (bringing a frozen brain back may be easy or tremendously impossible for example). Bottom line we don't understand enough about consciousness to truly rule out any of these scenarios at this point. As we learn more about how the brain works we may learn of things that make this not feasible. At this point there is no evidence to suggest this all isn't possible.

kellyb
16th December 2009, 11:27 PM
I would agree with you, except being at the forefront of the computing industry I know how fast our tools are progressing (I sense jokes coming on...).

If you take into consideration the possibility of bootstrapping ourselves with intermediate tools, all bets are off on how fast progress can be.

So...where's the evidence?
On a very immediate level, I worry about scams. The scam potential for something like this is enormous.
So, where's the evidence?

Malerin
16th December 2009, 11:33 PM
Yes, that is why I want it to be true.

On the other hand, I believe it is true for reasons that have nothing to do with emotion.

Have you read the Otherland series?

Piscivore
16th December 2009, 11:38 PM
Yes, that is why I want it to be true.

On the other hand, I believe it is true for reasons that have nothing to do with emotion.

The problem with getting emotions involved, though, is that when you want something to be true bad enough it can blind you to things that tell you it ain't. And you won't even know it.

PixyMisa
17th December 2009, 12:03 AM
All that means is that consciousness is an entirely unphysical phenomenon.
No. Not just no, but hell no. He's objecting to it on the grounds that physical continuity is not preserved (which is a perfectly healthy instinctive objection).

You couldn't be more wrong if you had set out up Mt Wrong on Wrongday morning with a dozen wrong-yaks loaded with dried wrong and six experienced Wrongmen sherpas trained in Wrong-fu.

PixyMisa
17th December 2009, 12:07 AM
That the brain is a physical object is fairly clear. This theory makes the claim that a different physical object is the same brain.
See my post immediately above.

Not similar, not likely to behave in the same way, but exactly the same thing.
And again.

The very fact that consciousness is located in a physical object implies that it is tied to that physical object.
Yes, exactly.

A big, big leap of faith is required to say that it isn't.
Yes, exactly.

You seem confused.

Meadmaker
17th December 2009, 05:04 AM
Well, what happens when you go to sleep? I would be fine with losing 12 hours of information that hasn't been encoded in actual physical neural connections. Everything beyond that is in principle recoverable from only the topography of your neurons (and supporting glia perhaps, but we can model that as well).

Not much is known for certain about how information is stored in the brain, but I think there is more to it than the existence of a physical connection. I think the concentration of chemicals within those neurons and/or synapses also plays a role, and that is what would be impossible to restore.

Consider this. I have memories right now that I didn't have when I started this response. (i.e. I remember writing the previous paragraph.) Do you think any new connections were formed in the 15 seconds since I started writing?

Belz...
17th December 2009, 05:18 AM
Because an implication of strong AI is that there is no loss of "essence" of consciousness, whatever that may be, when the intelligence is instantiated upon a non-biological substrate.

That is, if AI can really be conscious, then there is no logical reason why our own consciousness could not be transferred to a non-biological substrate.

But I doubt it will be "continuous" with the original's, though the "copy" wouldn't be able to tell the difference.

westprog
17th December 2009, 07:51 AM
My bolding.

The assumptions say no such thing. What they say is that the same processes will run in the same way (though of course they will start to diverge as each set of processes receive different sensory data). Each set will be self conscious. Each consciousness will have the same memory and perceived continuity.

As for "essence of consciousness", I have no idea what that is or means. The claim is merely that the processes in the brain are complex enough to be self aware.

Your assumptions might say no such thing. It's not the same as what the person who originated "essence of consciousness" said. That's the theory I'm referring to.

westprog
17th December 2009, 07:56 AM
I think you are underestimating the damage to the brain done by freezing. It might still be edible, but beyond that, I think there would be issues.

However, that isn't really the interesting part of the thought experiment. (No pun intended.) I think the interesting thing is the idea of duplicate consciousness.

The key element of this consciousness transfer is that the memories and thought patterns survive the transfer. Personally, I don't think that will ever happen. I think the information necessary is encoded in elements too small to be measured in a nondestructive manner, or even in a destructive manner in the time required. I think the bottom of the brain will be dead before you finish reading the top, and I think some of the information is only retained while the process is still running.

On the other hand, I think we will make an apparently conscious entity purely from silicon, and we will be able to duplicate those entities.

In any case, even if we accept the claim that consciousness is entirely computational, and the claim that it can, even in theory, be copied from a brain to a computer, the further claim that once our consciousness is online, we can safely dispose of our boring old brain and body without a moment's qualm is totally unsupported.

westprog
17th December 2009, 08:01 AM
Rocketdodger's premises for a computer-uploaded consciousness surviving death are far sounder than any religion's


Since no religion has any scientific basis for their belief in an afterlife, I don't regard that as an especially strong claim.

westprog
17th December 2009, 08:03 AM
I can imagine strong AI passing a Turing Test 20 years from now.

You could have imagined the same thing 20 or 40 years ago. I expect the standard interval before passing the Turing test will remain fairly constant.

westprog
17th December 2009, 08:09 AM
Kb and Kc would be identical after the enoberation of Kc, but only immediately.
Their world experiences would then diverge as they independently experience it.. Unless there's a Wifi connection.

They wouldn't be identical because they'd be physically distinct. Unless the physical aspect of existence is disregarded, then they would be two different entities.

MRC_Hans
17th December 2009, 08:11 AM
I am aware that there are many individuals emotionally invested in the notion that strong AI is false. I can understand why -- the idea that our consciousness has some magical component that is beyond the boring and cold scientifically understood world is one that can be tempting.

But there are also very good emotional reasons to support the notion that strong AI is true.

This is a skeptical forum. Whatever our emotional reasons, all that counts is reality. The feasibility of strong AI is totally independent of whether we would like it to be true or not.

If strong AI is true, humans will someday be able to upload their consciousness into any suitable substrate. This is not only the closest thing to immortality that could be available in our universe, but it is also far better than immorality.

Are you sure? How much of your life, identity, and happiness is inseparably tied to your bodily existence? Do you enjoy a summer's day? Sex? Good booze? A hug from a friend? A delicious meal?

The ability to upload our consciousness implies the ability to modify it as well -- in any way we desire. Being able to upload also implies the ability to travel at lightspeed between suitable locations. So if you are interested in living forever, or living in any way you could possibly think of, you should want strong AI to be true.

That may or may not be so, but why do you find our desires for strong AI interesting?

To that I reply that my own estimates put the arrival time of such capability at less that 50 years from now. In fact such a thing might be possible within 20 years or so, and economic feasibility would follow within a few decades.

On what do you base that assumption? Currently, we don't even know for sure what makes us conscious. As long as we don't know the process, how can anybody estimate when replication will be technologically feasible?

Hopefully I have shown that there are emotional reasons to support strong AI that are just as good, if not better, than those for opposing it.

I think you have misunderstood the basics: From a skeptical POV, my reasons for wanting something have nothing to do with whether I regard it as reality.

Hans

westprog
17th December 2009, 08:32 AM
Currently, we don't even know for sure what makes us conscious. As long as we don't know the process,


You might find that a glaringly obvious statement, but it will be taken as an expression of religious fundamentalism by some.

RecoveringYuppy
17th December 2009, 08:40 AM
You could have imagined the same thing 20 or 40 years ago. I expect the standard interval before passing the Turing test will remain fairly constant.

Sprint's (and others) automated telephone attendants are already indistinguishable from their human attendants in their ability to piss me off even when I wasn't calling with a complaint.

westprog
17th December 2009, 08:43 AM
Sprint's (and others) automated telephone attendants are already indistinguishable from their human attendants in their ability to piss me off even when I wasn't calling with a complaint.

Yeah, but that's the anti-Turing test - the ability of a human being to behave like an inanimate object, albeit imbued with a Cylon-like loathing of humanity.

RecoveringYuppy
17th December 2009, 08:44 AM
On what do you base that assumption? Currently, we don't even know for sure what makes us conscious. As long as we don't know the process, how can anybody estimate when replication will be technologically feasible?

It's worse than that, we don't even know how memories are encoded yet. And it's doubtful that just knowing how the brain encodes memories will allow us to extract them.

Ladewig
17th December 2009, 08:54 AM
You could have imagined the same thing 20 or 40 years ago. I expect the standard interval before passing the Turing test will remain fairly constant.

Yes. The wikipedia article on the subject includes past predictions and your assertion is supported by the record.

I still maintain that producing an accurate and complete digital record of a human's mind is scores of years beyond the Turing test.

rocketdodger
17th December 2009, 09:20 AM
Not much is known for certain about how information is stored in the brain, but I think there is more to it than the existence of a physical connection. I think the concentration of chemicals within those neurons and/or synapses also plays a role, and that is what would be impossible to restore.

Consider this. I have memories right now that I didn't have when I started this response. (i.e. I remember writing the previous paragraph.) Do you think any new connections were formed in the 15 seconds since I started writing?

I am aware of the neurobiology involved, and the difference between short term and long term memories.

That is why I said I wouldn't mind losing the last 12 hours of memory to be able to upload.

You wouldn't?

rocketdodger
17th December 2009, 09:32 AM
Are you sure? How much of your life, identity, and happiness is inseparably tied to your bodily existence? Do you enjoy a summer's day? Sex? Good booze? A hug from a friend? A delicious meal?

How is any of that relevant, given that the ability to upload is in no way mutually exclusive to the ability to experience any of the above?

On what do you base that assumption? Currently, we don't even know for sure what makes us conscious. As long as we don't know the process, how can anybody estimate when replication will be technologically feasible?

The rate of technological progress in the field of computing.

Even if we don't understand consciousness at all, we are approaching a point where the computing power will be available to just simulate an entire brain, neurons and all. I mean, we actually have that much power now; it just isn't economically feasible.
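Just to show the kind of back-of-the-envelope arithmetic I have in mind, here it is in Python. Every number below (neuron count, synapses per neuron, firing rate, operations per synaptic event) is a round assumption for illustration, not a measurement:

neurons = 1e11                # roughly 100 billion neurons
synapses_per_neuron = 1e4     # assume ~10,000 synapses each
mean_firing_rate_hz = 10      # assume ~10 spikes per second per neuron
ops_per_synaptic_event = 10   # assume ~10 arithmetic ops per synaptic update

ops_per_second = (neurons * synapses_per_neuron
                  * mean_firing_rate_hz * ops_per_synaptic_event)
print(f"~{ops_per_second:.0e} operations per second")   # ~1e+17 ops/s

You can argue any of those assumptions up or down by an order of magnitude, but the total stays finite, and hardware keeps gaining on it every year.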



I think you have misunderstood the basics: From a skeptical POV, my reasons for wanting something have nothing to do with whether I regard it as reality.

The only reason you are "skeptical" to begin with is that you have a strong emotional investment in rational thought and non-contradictory evidence based reasoning.

So actually no, I haven't misunderstood the basics.

rocketdodger
17th December 2009, 09:33 AM
In any case, even if we accept the claim that consciousness is entirely computational, and the claim that it can, even in theory, be copied from a brain to a computer, the further claim that once our consciousness is online, we can safely dispose of our boring old brain and body without a moment's qualm is totally unsupported.

Is anyone making such a claim? Because I certainly didn't, and I don't recall seeing anyone else...

Meadmaker
17th December 2009, 09:35 AM
I am aware of the neurobiology involved, and the difference between short term and long term memories.

That is why I said I wouldn't mind losing the last 12 hours of memory to be able to upload.

You wouldn't?

Whether short or long term, I think that memories, personality, and thought processes are stored as something other than the basic topology of the neural network in the brain. It has been a while since I studied this, and never in great depth, but I think that there is more required to recreating our thoughts than measuring what neurons are connected to what other neurons. It might involve chemical concentrations, and it might involve self reinforcing patterns of electrical activity which would vanish if the process were ever "turned off", i.e. by oxygen starvation that makes the neuronal activity cease. For that reason, I don't think any sort of "upload" will ever be possible.

If it could be done would I want it? It seems only marginally appealing. I would gain long life, but not immortality. The cost of this would be a change so fundamental in my nature that it's hard to argue that the new thing would still be "me". It would be a thing with my memories, but going forward our experiences would be so different, in large part due to the different nature of our bodies, that it's hard to say that somehow "I" would be still alive.

ETA: I guess I just won't be an "early adopter" of this new upload technology. If the newly uploaded beings start telling us from their silicon bodies that it's the best thing they have ever done and they are really happy about it, I might reconsider. For now, I'll just assume that there's a cremation in my future, and deal with it.

NewtonTrino
17th December 2009, 10:00 AM
Whether short or long term, I think that memories, personality, and thought processes are stored as something other than the basic topology of the neural network in the brain. It has been a while since I studied this, and never in great depth, but I think that there is more required to recreating our thoughts than measuring what neurons are connected to what other neurons. It might involve chemical concentrations, and it might involve self reinforcing patterns of electrical activity which would vanish if the process were ever "turned off", i.e. by oxygen starvation that makes the neuronal activity cease. For that reason, I don't think any sort of "upload" will ever be possible.


Quite simply we just don't know yet though. It's possible that you're right (although even if you are right about how it works that doesn't mean scanning it is impossible). The jury is simply still out. It's obviously far far beyond the technology we have right now though.


If it could be done would I want it? It seems only marginally appealing. I would gain long life, but not immortality. The cost of this would be a change so fundamental in my nature that it's hard to argue that the new thing would still be "me". It would be a thing with my memories, but going forward our experiences would be so different, in large part due to the different nature of our bodies, that it's hard to say that somehow "I" would be still alive.

It just depends on how you define "I" as well as the tech available. For example we may be able to simply clone you another body and move your consciousness into it. Although if you could have an indestructible robot body that has all of the sensations of a bio body, why wouldn't you?

Anyway nobody is saying this is definitely possible. What we are saying is that we haven't been shown anything concrete that makes this line of thinking impossible at this point. I think it's worth exploring.

I Ratant
17th December 2009, 10:06 AM
They wouldn't be identical because they'd be physically distinct. Unless the physical aspect of existence is disregarded, then they would be two different entities.
.
The essence, the part that comprehends, of the K's would be the same at the time of the copying. What happens later wouldn't be. If Kc for instance was loaded into the control system of a spacecraft, while Kb continues getting drunk with his buddies on Saturday night down at the Dew Drop Inn.....

PixyMisa
17th December 2009, 10:09 AM
Are you sure? How much of your life, identity, and happiness is inseparably tied to your bodily existence?
27%.

Do you enjoy a summer's day? Sex? Good booze? A hug from a friend? A delicious meal?

Maybe, yes, no such thing,* no, yes.

* Supertaster (http://en.wikipedia.org/wiki/Supertaster)

westprog
17th December 2009, 10:14 AM
.
The essence, the part that comprehends, of the K's would be the same at the time of the copying. What happens later wouldn't be. If Kc for instance was loaded into the control system of a spacecraft, while Kb continues getting drunk with his buddies on Saturday night down at the Dew Drop Inn.....

As soon as a physically distinct object is created, it's something different. There is no mystical linkage involved. Kb and Kc are two entirely different entities as soon as there is more than one of them.

They cannot be identical because simply by virtue of being in different locations they are not the same thing.

I Ratant
17th December 2009, 10:30 AM
As soon as a physically distinct object is created, it's something different. There is no mystical linkage involved. Kb and Kc are two entirely different entities as soon as there is more than one of them.

They cannot be identical because simply by virtue of being in different locations they are not the same thing.
.
True, but as I mentioned, until they become aware of the separation, the "thing" that makes both of them, which comes from Kb, is identical.
Eli Whitney started this by demonstrating the use of interchangeable parts.
Any part can do the job of that part installed in any machine that uses that part.
The machines don't behave differently.
The mind being moved as a part would by necessity behave differently in the new assembly, just due to the reality of the inability to duplicate the original, or due to placing the Kc mind in a different container, as with the spaceship controller. But the "good old days" would still be in the part's memory, albeit in a new situation where the human body's capabilities/limitations aren't needed or usable.

PixyMisa
17th December 2009, 10:30 AM
As soon as a physically distinct object is created, it's something different. There is no mystical linkage involved. Kb and Kc are two entirely different entities as soon as there is more than one of them.

They cannot be identical because simply by virtue of being in different locations they are not the same thing.
That's not entirely right.

All neutrons are identical. If we observe two neutrons, and then at a later point, observe two neutrons, there is no way to determine if they are the same two neutrons.

Even then, neutrons only behave identically statistically.

You are, at least in part, arguing that co-ordinate translation changes identity. That's not a useful viewpoint, because then you aren't you.

Meadmaker
17th December 2009, 10:59 AM
Quite simply we just don't know yet though. It's possible that you're right (although even if you are right about how it works that doesn't mean scanning it is impossible). The jury is simply still out. It's obviously far far beyond the technology we have right now though.


Agreed.

Although if you could have an indestructible robot body that has all of the sensations of a bio body, why wouldn't you?

Fear of the unknown? It sounds so appealing, in some sense, but, well, 'tain't natural.

Which is another way of saying that I'll stick with the status quo until I can examine all of the alternatives with a little bit more information.


But, I suppose if I could replace some of my body parts with others that had higher performance characteristics, but would otherwise be indistinguishable from "natural" parts, would I do it? Hmmm.....Viagra sells pretty well.....

Meadmaker
17th December 2009, 11:00 AM
You are, at least in part, arguing that co-ordinate translation changes identity. That's not a useful viewpoint, because then you aren't you.

I think the more important thing is that experience changes identity. Once you put two beings in two different places, their experiences, and therefore their identities, begin to diverge.

Beth
17th December 2009, 04:11 PM
To that I reply that my own estimates put the arrival time of such capability at less that 50 years from now. In fact such a thing might be possible within 20 years or so, and economic feasibility would follow within a few decades.


I don't know about the time frame - seems like it's been 20 years away my whole life. But I like to throw in an ability to go back in time and capture that information from those who have already died. Doesn't seem any more far-fetched than the original assumption. That way, once you get there technologically, you could go back and get your deceased loved ones and bring them back too.

IMO, kellyb and kellyc would be different people, and neither would be disposable at any stage. The idea that they would have the same consciousness seems to deny any physical attribute to consciousness at all.

I agree. If they are truly identical copies, then I think that would follow.

No matter how warm and cuddly the idea may seem of having a SuperSkyDaddie who particularly cares about little old me, it just ain't ever gonna be a good use of my time or thinking pretending it's true just 'cause I might want it to be. I don't see any difference in embracing the same fuzzy, rubbish thinking to have pie-in-the-sky dreams about some slim possibility of transhumanism in the distant future.
I agree that they are much the same.

I dunno. It seems hundreds of years off, to me. I'm not entirely sure humans are smart enough to ever figure out how to do it, really.
But I do agree that at least in theory, it is possible.

I'm not sure it is possible, but I don't know of any reason why it's definitely not either. Meadmaker's analysis is fairly convincing, at least to the point that I don't think we really know enough about what makes each of us our own unique individual subconscious "me" to be able to determine whether or not it is possible.

Yes, that is why I want it to be true.

On the other hand, I believe it is true for reasons that have nothing to do with emotion.

Interesting. I believe you believe it is true for reasons that have nothing to do with emotion. I'm not as sure that it's true, but then again, I'm not you. All I can really say is that everything I believe in, my emotions are involved.

ETA: I find this a fascinating concept. It seems to me that a processing pattern that adequately captured "you" that could be duplicated on one or more substrates, such as a cloned body, a robot body or a simulated electronic body, would be the equivalent of an immortal soul. How would it be different from what is typically thought of as being the "soul"?

Maia
17th December 2009, 04:33 PM
???

What IS it with all of these consciousness threads? This has turned into another one! Should we just have an entire "consciousness" section and get it over with?

rocketdodger
17th December 2009, 04:46 PM
ETA: I find this a fascinating concept. It seems to me that a processing pattern that adequately captured "you" that could be duplicated on one or more substrates, such as a cloned body, a robot body or a simulated electronic body, would be the equivalent of an immortal soul. How would it be different from what is typically thought of as being the "soul"?

Well, the fundamental difference is that the processing pattern is theoretically something we can understand and hence modify to suit our whims.

Not so with a soul.

NewtonTrino
17th December 2009, 04:56 PM
Fear of the unknown? It sounds so appealing, in some sense, but, well, 'tain't natural.

Which is another way of saying that I'll stick with the status quo until I can examine all of the alternatives with a little bit more information.


But, I suppose if I could replace some of my body parts with others that had higher performance characteristics, but would otherwise be indistinguishable from "natural" parts, would I do it? Hmmm.....Viagra sells pretty well.....

Basically if this comes to pass then the people that don't participate will simply get selected out. The universe is a "change or die" kind of place.

I agree that it's completely unnatural and I think getting the psychology aspect of it worked out is going to be challenging. Imagine if we screw up the input into your brain and it feels like you are in massive pain all the time or something along those lines.

As virtual simulation tech continues to improve this will become the proving ground for these types of concepts.

NewtonTrino
17th December 2009, 04:59 PM
???

What IS it with all of these consciousness threads? This has turned into another one! Should we just have an entire "consciousness" section and get it over with?

I think it would be more interesting to just assume that consciousness is a physical understandable process and then consider the implications of the OP. Call it a thought experiment. ;)

Meadmaker
17th December 2009, 05:20 PM
Basically if this comes to pass then the people that don't participate will simply get selected out. The universe is a "change or die" kind of place.

That was one of the things I was thinking about. It's not like I would be able to get a new superbody, and others wouldn't. I might be forced into irrelevance as everyone else gets souped up silicon replacements for their wetware.

I think I'm glad I'm likely to be dead before I have to seriously consider it as anything other than a hypothetical.


On another note, there have been various discussions in the OP and afterward about "immortality". The sun will eventually burn out, and it's a pretty safe bet that something else will happen long before that that turns off the power to your new substrate. This could make you live a lot longer, but you are still going to die.

I Ratant
17th December 2009, 05:23 PM
... Hmmm.....Viagra sells pretty well.....
.
In my experience, any benefits are iffy. Most of the time zip occurs.

rocketdodger
17th December 2009, 05:47 PM
On another note, there have been various discussions in the OP and afterward about "immortality". The sun will eventually burn out, and it's a pretty safe bet that something else will happen long before that that turns off the power to your new substrate. This could make you live a lot longer, but you are still going to die.

This is such a stupid argument.

Here, let me ask you -- why don't you just let yourself die right now, since you are going to die someday anyway? Eh?

No, seriously, give me a reason why you don't just let yourself die.

Meadmaker
17th December 2009, 06:09 PM
This is such a stupid argument.


Thank you for sharing.:rolleyes:

Meadmaker
17th December 2009, 06:16 PM
No, seriously, give me a reason why you don't just let yourself die.


Why should I "let myself die"? I'm going to die one way or another, all in good time. If I find a way to transfer my consciousness to some other medium it will take longer.

The point is that we can extend life, and we will probably find new ways to extend life in the future. We can probably also find ways to extend health, or change to some other medium where "health" isn't quite the issue that it is with our current bodies. However, if you are looking for immortality, silicon won't give it to you. Your life will be longer, but finite.

NewtonTrino
17th December 2009, 06:52 PM
True immortality is more than likely not possible. How could you ever know, since you can always die at some point in the future? Unless you are living in some sort of closed time loop or something... hmmm....

Beth
17th December 2009, 08:01 PM
Well, the fundamental difference is that the processing pattern is theoretically something we can understand and hence modify to suit our whims.

Not so with a soul.

I don't see this as any difference at all, much less a fundamental one. Why assume that a) the pattern that defines a unique human being will be understandable* to us or b) the human soul, whatever that might be, could not be understood or modified, not even theoretically? Seems to me a basic concept of Christianity is that a soul can be 'saved' and when that happens, it is modified.

*modified I don't have a problem with, although that gets into the issue of what exactly is it that defines us and if modifications would change that. But certainly assuming the technology to copy existed, the ability to modify that copy seems a minor addition. Understanding, on the other hand, may not be required in order to create a copy.

Belz...
18th December 2009, 04:41 AM
You are, at least in part, arguing that co-ordinate translation changes identity. That's not a useful viewpoint, because then you aren't you.

Well, you aren't what you were a second ago.

rocketdodger
18th December 2009, 08:22 AM
Why should I "let myself die"? I'm going to die one way or another, all in good time. If I find a way to transfer my consciousness to some other medium it will take longer.

The point is that we can extend life, and we will probably find new ways to extend life in the future. We can probably also find ways to extend health, or change to some other medium where "health" isn't quite the issue that it is with our current bodies. However, if you are looking for immortality, silicon won't give it to you. Your life will be longer, but finite.

And my point is that this is a strawman, even if you don't know it.

Because nobody "just doesn't want to die." It isn't that simple. There are reasons.

The primary reason being that death -- or rather, just an end to existence -- precludes an individual from experiencing stuff. And if that individual still has stuff they wish to experience, they will want to live longer.

So yes, actually, silicon will "give it to me," because I would be thankful for even an extra hundred years. Heck, most old people are thankful for even an extra year.

Meadmaker
18th December 2009, 08:23 AM
Here's something related to this thought experiment that I would like to bring up. Let's take it as given that this consciousness transfer to silicon is possible. I'm picturing a massively parallel system with billions of tiny processors, like our neurons, equipped with cameras and microphones and heat and touch sensors so it can experience the world in real time.

Now, I'm going to disconnect the cameras and such, and instead I'm going to create a new simulated world and feed the inputs from that simulation into the sensor inputs of siliconme. Siliconme would now be "living" in this new, simulated world. It would be aware. It would be interacting. It would be conscious, but in a completely different place. We could tap into siliconme's experience and see how he deals with the simworld if we wanted to, and we could verify that his external behavior is completely consistent with an aware, conscious, being.

So far, there's nothing outlandish about this. It's pretty standard science fiction fare.

Now, though, I'm going to do a hardware change. I'm going to port siliconme onto a powerful, but single processor computer. I'll port his simworld onto something similar. After the port is complete, the computations will be 100% identical to the computations performed by siliconme 1.0. However, they will not all be happening at the same speed or simultaneously. An observer would see a powerful computer making computations, but the observer couldn't tell immediately that there was a conscious entity involved, or if it was one of google's servers. Nevertheless, it seems to me that if the massively parallel real time version of siliconme was a conscious entity, the new serialized uniprocessor version would have to be as well.

The implication is that a complex system might have consciousness, but we might not be able to recognize that consciousness because its behavior is not what we associate with consciousness, or it might operate on scales of time and space that we cannot easily observe.

The universe is filled with highly complex, interacting, systems, only a few of which are brains.
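Here's a toy version of that porting claim, sketched in Python. The unit count, weights, and update rule are invented purely for the demonstration; the assumption doing the work is that the network updates synchronously, so every unit reads the old state, and then the order in which a processor evaluates the updates cannot change the result.

import random

# Toy "network": N units; each unit's next state is a fixed function of the
# previous states of all units.
random.seed(0)
N = 50
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-1, 1) for _ in range(N)]

def step_parallel(old):
    # conceptually "all at once": every unit's new value comes from the same
    # old state vector
    return [max(0.0, sum(w * s for w, s in zip(row, old))) for row in weights]

def step_serial(old):
    # one unit at a time, the way a single processor would do it, but still
    # reading only from the old state buffer (double buffering)
    new = [0.0] * N
    for i in range(N):
        total = 0.0
        for j in range(N):
            total += weights[i][j] * old[j]
        new[i] = max(0.0, total)
    return new

assert step_parallel(state) == step_serial(state)
print("serial and 'parallel' updates produce identical states")

If the biological version depends on genuinely asynchronous timing, the port just has to simulate that timing as well, at more cost, but the computations themselves don't change.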

rocketdodger
18th December 2009, 08:24 AM
I don't see this as any difference at all, much less a fundamental one. Why assume that a) the pattern that defines a unique human being will be understandable* to us or b) the human soul, whatever that might be, could not be understood or modified, not even theoretically? Seems to me a basic concept of Christianity is that a soul can be 'saved' and when that happens, it is modified.

*modified I don't have a problem with, although that gets into the issue of what exactly is it that defines us and if modifications would change that. But certainly assuming the technology to copy existed, the ability to modify that copy seems a minor addition. Understanding, on the other hand, may not be required in order to create a copy.

But the difference is that the Christian soul is modified according to the wishes of God, not according to the wishes of you.

rocketdodger
18th December 2009, 08:27 AM
Here's something related to this thought experiment that I would like to bring up. Let's take it as given that this consciousness transfer to silicon is possible. I'm picturing a massively parallel system with billions of tiny processors, like our neurons, equipped with cameras and microphones and heat and touch sensors so it can experience the world in real time.

Now, I'm going to disconnect the cameras and such, and instead I'm going to create a new simulated world and feed the inputs from that simulation into the sensor inputs of siliconme. Siliconme would now be "living" in this new, simulated world. It would be aware. It would be interacting. It would be conscious, but in a completely different place. We could tap into siliconme's experience and see how he deals with the simworld if we wanted to, and we could verify that his external behavior is completely consistent with an aware, conscious, being.

So far, there's nothing outlandish about this. It's pretty standard science fiction fare.

Now, though, I'm going to do a hardware change. I'm going to port siliconme onto a powerful, but single processor computer. I'll port his simworld onto something similar. After the port is complete, the computations will be 100% identical to the computations performed by siliconme 1.0. However, they will not all be happening at the same speed or simultaneously. An observer would see a powerful computer making computations, but the observer couldn't tell immediately that there was a conscious entity involved, or if it was one of google's servers. Nevertheless, it seems to me that if the massively parallel real time version of siliconme was a conscious entity, the new serialized uniprocessor version would have to be as well.

The implication is that a complex system might have consciousness, but we might not be able to recognize that consciousness because its behavior is not what we associate with consciousness, or it might operate on scales of time and space that we cannot easily observe.

The universe is filled with highly complex, interacting, systems, only a few of which are brains.

Yep.

And you can go two ways with this.

First, you can accept the logical conclusion of the premise -- that if strong AI is true then all sorts of things might be conscious.

Second, you can reject the logical conclusion, and thereby reject the premise -- this is what westprog has been harping about for literally the past two years on these forums. If strong AI is true, then all sorts of things might be conscious (and he/she doesn't like that idea, apparently).

Beth
18th December 2009, 09:23 AM
But the difference is that the Christian soul is modified according to the wishes of God, not according to the wishes of you.

Well, no. First of all, neither you nor I believe that to be the case, and Christians would explain that the change in your soul that comes about with acceptance of Jesus as your savior is one that is requested of god, not imposed by god, thus the modification is according to your wishes.

Second of all, I don't think there could be any guarantee that under your system, that such modifications would be only according to the wishes of the individual and not some other intelligence (I won't say human since we are no longer discussing humans at this point). After all, if you were reinstantiated in a changed form that was not per your request, how would you know? And what could you do about it?

Third, it was merely an illustration of the fact that there is no reason to believe that the soul cannot be modified while your pattern representing the "essence" of consciousness can. Getting back to my previous question and assuming you are correct and such a pattern of the "essence" of consciousness of a person could be extracted and reproduced, why wouldn't that pattern be considered the "soul" of that person?

It seems to me to fit the basic idea of what a "soul" is. It's eternal - after all patterns, like numbers, don't degrade and die. We all would possess such a pattern that is unique to each of us.

Frank Newgent
18th December 2009, 10:26 AM
Any junior stocks you recommend?

rocketdodger
18th December 2009, 10:58 AM
Well, no. First of all, neither you nor I believe that to be the case, and Christians would explain that the change in your soul that comes about with acceptance of Jesus as your savior is one that is requested of god, not imposed by god, thus the modification is according to your wishes.

Second of all, I don't think there could be any guarantee that under your system, that such modifications would be only according to the wishes of the individual and not some other intelligence (I won't say human since we are no longer discussing humans at this point). After all, if you were reinstantiated in a changed form that was not per your request, how would you know? And what could you do about it?

Third, it was merely an illustration of the fact that there is no reason to believe that the soul cannot be modified while your pattern representing the "essence" of consciousness can. Getting back to my previous question and assuming you are correct and such a pattern of the "essence" of consciousness of a person could be extracted and reproduced, why wouldn't that pattern be considered the "soul" of that person?

It seems to me to fit the basic idea of what a "soul" is. It's eternal - after all patterns, like numbers, don't degrade and die. We all would possess such a pattern that is unique to each of us.

Then yes, I agree with you, it fits the basic idea of a "soul."

Beth
18th December 2009, 11:06 AM
Then yes, I agree with you, it fits the basic idea of a "soul."


Cool. Nice to be in agreement about something.

westprog
18th December 2009, 11:25 AM
???

What IS it with all of these consciousness threads? This has turned into another one! Should we just have an entire "consciousness" section and get it over with?

This is covering a slightly different topic, though there's bound to be some merging. The basic concept is that if your consciousness is copied, and run on a computer, then that is you - and hence you can stand and look at the computer and no longer have to fear death because you will exist, potentially forever.

This assumes that Strong AI is true, and that it will be possible to create consciousness on a computer. It also assumes that it will be possible to create consciousness on a computer, the experience of which will be identical to that of a human being. That's a stronger version of Strong AI. Then it's assumed that if the "essence of consciousness" is copied from the person to a computer, then you effectively exist in two places at once, and it doesn't matter if you die, because you will live forever. That's the strongest version of Strong AI so far. However, Rocketdodger objected to my characterisation of Strongest AI, so whether he really thinks that we will not have to fear death or not is yet to be clarified.

So, while I don't accept Strong AI as proven or likely, I'm accepting it as the premise for this thread. I also don't accept Stronger AI, but again, I'm willing to hypothesise that it's true. In the context of this hypothetical situation, I would still regard my existence as being something entirely different from a computer running my consciousness, even if that consciousness were indistinguishable from my own.

Piscivore
18th December 2009, 12:19 PM
It's eternal - after all patterns, like numbers, don't degrade and die.

You've never shaken an Etch-a-Sketch?

rocketdodger
18th December 2009, 12:43 PM
Rocketdodger objected to my characterisation of Strongest AI, so whether he really thinks that we will not have to fear death or not is yet to be clarified.


Well, I objected to the suggestion that if you can upload, then the original biological copy is useless and might as well be destroyed. Because the original copy is still you, in every meaning of the term, and I don't see why the fact that another version of you exists should have any bearing on whether the original version should get to continue existing.

But as for death -- of course we will always have to fear "an end to existence."

If upload technology is sufficient, though, we wouldn't have to fear death from damage to physical bodies, or things like that. At the very least, you could upload a copy of yourself every night, so if you got run over on the way home from work (or killed in a battle, whatever), you could "roll back" to that previous version. Yeah, you would lose some hours of experience, but that is much better than never existing again at all. But someday it would also be possible to just have a continual link between your physical body (if you even have one) and a "mirror" database somewhere, so that if your physical body dies there would be no loss of experience. That might not be a good thing, though, since I bet some deaths are very painful.

But since I agree with you that all information exists on a physical substrate of some sort, even if strongest AI is true there is always the worry that the last copy of the information might get damaged in some way. That is unavoidable. Plus there is that whole heat death of the universe thing, etc. So meadmaker is right about true immortality being a logically incoherent concept.

westprog
18th December 2009, 02:15 PM
Well, I objected to the suggestion that if you can upload, then the original biological copy is useless and might as well be destroyed. Because the original copy is still you, in every meaning of the term, and I don't see why the fact that another version of you exists should have any bearing on whether the original version should get to continue existing.

But as for death -- of course we will always have to fear "an end to existence."

If upload technology is sufficient, though, we wouldn't have to fear death from damage to physical bodies, or things like that. At the very least, you could upload a copy of yourself every night, so if you got run over on the way home from work (or killed in a battle, whatever), you could "roll back" to that previous version. Yeah, you would lose some hours of experience, but that is much better than never existing again at all. But someday it would also be possible to just have a continual link between your physical body (if you even have one) and a "mirror" database somewhere, so that if your physical body dies there would be no loss of experience. That might not be a good thing, though, since I bet some deaths are very painful.

But since I agree with you that all information exists on a physical substrate of some sort, even if strongest AI is true there is always the worry that the last copy of the information might get damaged in some way. That is unavoidable. Plus there is that whole heat death of the universe thing, etc. So meadmaker is right about true immortality being a logically incoherent concept.

As far as I can tell, the first paragraph contradicts the third and fourth.

Let me make it plain what my point of view is. The existence of any number of backups, no matter how up to date, which can be brought into existence if I should die, would have nothing to do with my own personal mortality. If I get killed, the existence of some other version of me might be pleasant to contemplate, but it wouldn't be me any more than offspring or created work. It might console me as I died, but I'd still be dead, and copies of me existing to the heat-death of the universe wouldn't change that.

rocketdodger
18th December 2009, 02:19 PM
As far as I can tell, the first paragraph contradicts the third and fourth.

Let me make it plain what my point of view is. The existence of any number of backups, no matter how up to date, which can be brought into existence if I should die, would have nothing to do with my own personal mortality. If I get killed, the existence of some other version of me might be pleasant to contemplate, but it wouldn't be me any more than offspring or created work. It might console me as I died, but I'd still be dead, and copies of me existing to the heat-death of the universe wouldn't change that.

Well, do you feel that way about going to sleep?

Ladewig
18th December 2009, 02:29 PM
I'm still not onboard with the issue being discussed, but I'd like to ask a question.

1) Does destroying all existing silicon versions of a person rise to the level of murder?

westprog
18th December 2009, 02:53 PM
Well, do you feel that way about going to sleep?

I don't, because I have to go to sleep. I'd sooner not go into a coma if I could avoid it.

If I effectively die every time I go to sleep, then so be it. However, that seems unlikely, as the brain and mind continue to be active throughout.

kuroyume0161
18th December 2009, 02:59 PM
1. With up-to-current information on the complexity of the brain and neuron it is generally accepted that it would require many QB to store all of the intricate connections and states. 100 billion neurons with 1000-10000 connections each with details of millions per neuron. Do the math (rough sketch at the end of this post). ~10^20 bytes or more. I'll speculate more like 10^30 or 40.

2. Forget a couple of important factors did we? What about senses? Are they important? (YES!) As crude as it is, COG says much on this. What about all of the chemicals that the brain uses (dopamine and so on)? Do we just ignore them or do we need to recreate them digitally? What about a nervous system attached to the sensory equipment (skin, eyes, nose, tongue, ears)?

3. There are still major issues with whether or not a 'snapshot' of brain activity would be able to continue on as a dynamical system. It may require a certain amount of mapping states over a period of time to enforce an engram - if even 1. and 2. are possible or reasonable.

I love '2050'. More like 20500. We'll be extinct and no one will care. ;)
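For point 1, here is one way to get to the ~10^20 figure, in Python. The per-connection detail number is pure speculation on my part, so treat this as an illustration of the arithmetic rather than a claim:

neurons = 1e11               # roughly 100 billion neurons
synapses_per_neuron = 1e4    # the upper end of the 1,000-10,000 range
bytes_per_synapse = 1e5      # speculative: ~100 KB of state/detail per connection

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"~{total_bytes:.0e} bytes")   # ~1e+20 bytes

The total tracks the per-connection assumption directly, which is how you can end up anywhere from my figure to something much larger.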

NewtonTrino
18th December 2009, 03:13 PM
1. With up-to-current information on the complexity of the brain and neuron it is generally accepted that it would require many QB to store all of the intricate connections and states. 100 billion neurons with 1000-10000 connections each with details of millions per neuron. Do the math. ~10^20 bytes or more. I'll speculate more like 10^30 or 40.


So about 100 billion gigabytes for your low quote. Doesn't sound too bad actually. Keep in mind that evolution has managed to pack this into the space in our skulls.



2. Forget a couple of important factors did we? What about senses? Are they important? (YES!) As crude as it is, COG says much on this. What about all of the chemicals that the brain uses (dopamine and so on)? Do we just ignore them or do we need to recreate them digitally? What about a nervous system attached to the sensory equipment (skin, eyes, nose, tongue, ears)?


The senses are going to be really hard to get right. We'll probably have to slowly co-evolve them as we evolve the AI technology. We don't necessarily need to simulate things like dopamine if we can figure out equivalent algorithms. This should also help with the data storage issue.



3. There are still major issues with whether or not a 'snapshot' of brain activity would be able to continue on as a dynamical system. It may require a certain amount of mapping states over a period of time to enforce an engram - if even 1. and 2. are possible or reasonable.

I love '2050'. More like 20500. We'll be extinct and no one will care. ;)


2050 seems ambitious but I wouldn't rule it out completely.

westprog
18th December 2009, 03:18 PM
So about 100 billion gigabytes for your low quote. Doesn't sound too bad actually. Keep in mind that evolution has managed to pack this into the space in our skulls.


I don't think this is strictly true. We can throw a handful of sand in the air, and it would take a vast amount of computer effort to emulate the path of every grain. The fact that a system has a vast number of states doesn't mean that it's a device capable of storing and retrieving any of those states. How many states do the molecules in a thimble full of air have? Nature didn't need evolution to produce them.

AlBell
18th December 2009, 03:25 PM
Any junior stocks you recommend?
Those who plan to live forever better pick some damn good ones.

Beth
18th December 2009, 03:37 PM
It's eternal - after all patterns, like numbers, don't degrade and die.

You've never shaken an Etch-a-Sketch?

Many times. :D If I draw the number '1' on an etch-a-Sketch and then shake it into oblivion, has '1' been obliterated? No, only that instantiation of the concept has been destroyed. The concept itself is unaltered.

I think numbers and patterns can be considered eternal in a way that even galaxies are not. All physical matter changes, but the concept of '1' is an unchanging ideal. If, in fact, the 'essence' of our consciousness could be expressed as a pattern in different materials, as Rocketdodger is proposing, then I think that pattern is what is meant by the word 'soul'.

I think of the pattern as being both fixed and yet constantly fluctuating. I like the analogy and can think of living as the unfolding of that pattern. Free will could be thought of as the choices we make in living our lives - i.e. unfolding the pattern that we are born with. We are not completely free; there are constraints inherent in both the pattern that is us and the environment we live in. Yet it is not completely deterministic either. There would be many possible unfoldings in any pattern as complex as one capable of encoding the essence of a human being.

There's nothing supernatural about such speculations. Yet, it has a very definite correspondence with more mystical philosophies.

kuroyume0161
18th December 2009, 03:38 PM
To me, at least, we would be better served focusing our energies on 'brain transfer' to a clone of the donor. Ethical issues aside, it has the best chance of actually 'taking' (in the same sense as organ donation) and being practical (in a very sophisticated medical sense). Whether or not this involves a brain transplant (which I doubt would be the road to take) or some form of engram mapping from the donor to the clone (which is a better road if not completely sci-fi at the moment) is up to the complexities and technologies involved.

Yeah, you have to keep the fragile and mortal biological organism but you get a new one even if it is based upon the current model. By that time it may be possible to make genetic changes which remove certain flaws - again, yet to be realized.

rocketdodger
18th December 2009, 03:46 PM
I'm still not onboard with the issue being discussed, but I'd like to ask a question.

1) Does destroying all existing silicon versions of a person rise to the level of murder?

That is up to the other silicon people.

But if you want to know "would you kill a real person to protect your silicon versions" the answer is a most definitive yes.

rocketdodger
18th December 2009, 03:48 PM
I don't, because I have to go to sleep. I'd sooner not go into a coma if I could avoid it.

If I effectively die every time I go to sleep, then so be it. However, that seems unlikely, as the brain and mind continue to be active throughout.

Yeah but your consciousness is not continuous.

The only difference here is that you have been programmed by evolution and your upbringing to feel comfortable with this loss of continuity.

Piscivore
18th December 2009, 04:04 PM
Many times. :D If I draw the number '1' on an etch-a-Sketch and then shake it into oblivion, has '1' been obliterated? No, only that instantiation of the concept has been destroyed. The concept itself is unaltered.
"1" is the concept. The design on the screen is the instantiation.
"Road trip" is a concept. The specific trip I took with my brother from Minnesota to Arizona is an instantiation.
"Human" is the concept. You are the instantiation.

I think numbers and patterns can be considered eternal in a way that even galaxies are not. All physical matter changes, but the concept of '1' is an unchanging ideal. If, in fact, the 'essence' of our consciousness could be expressed as a pattern in different materials, as Rocketdodger is proposing, then I think that pattern is what is meant by the word 'soul'.
You're confusing the concept of a pattern with "instantiations". The "idea" of a christmas tree does not go away in late January, but unless one is terribly lazy the particular instantiation of that tree goes to the curb, and will never ever be back.

You're trying to equivocate the concept of "pattern" with particular, specific patterns in order to support the conclusion you want to be true- that your "pattern" is not unique, fragile, and finite.

I think of the pattern as being both fixed and yet constantly fluctuating.
Well, they can't be both.

I like the analogy and can think of living as the unfolding of that pattern. Free will could be thought of as the choices we make in living our lives - i.e. unfolding the pattern that we are born with.
There are many things free will "could be thought of as". Imagination is a wonderful thing. All you're doing here is fuzzying up definitions to get to the answer you like. If there is a "fixed" pattern "unfolding", then it cannot be changed. If it can be changed, it is not fixed.

We are not completely free; there are constraints inherent in both the pattern that is us and the environment we live in. Yet it is not completely deterministic either. There would be many possible unfoldings in any pattern as complex as one capable of encoding the essence of a human being.

There's nothing supernatural about such speculations. Yet, it has a very definite correspondence with more mystical philosophies.
Yes; they both require one to egregiously obfuscate so they may appear plausible.

Igor, you fool!
18th December 2009, 04:28 PM
"Permutation City" by Greg Egan. Also, his short story collection "Axiomatic". The quantum physics stuff is over my head but the consciousness stuff is fascinating. Egan is very much a proponent of strong AI and explores the concept of copying a human consciousness onto a computer and shaping, not only the virtual environment, but the consciousness itself. For example, one copy decides to become obsessed with carpentry. He alters the simulation of his brain and spends centuries happily carving chair legs. He retains enough "free will" (a whole other topic) to halt his obsession when he wants to try something else.

westprog
18th December 2009, 05:02 PM
Yeah but your consciousness is not continuous.

The only difference here is that you have been programmed by evolution and your upbringing to feel comfortable with this loss of continuity.

Aspects of consciousness are not continuous. It's not clear whether other aspects persist. That's what memory is for.

Beth
18th December 2009, 05:06 PM
Yeah but your consciousness is not continuous.

The only difference here is that you have been programmed by evolution and your upbringing to feel comfortable with this loss of continuity.

I don't know that explains it completely. The few times I've been placed under anesthesia for surgery, the discontinuity was very different from that of sleep. What you are talking about, I think it would be more like anesthesia than sleep.

"1" is the concept. The design on the screen is the instantiation. "Road trip" is a concept. The specific trip I took with my brother from Minnesota to Arizona is an instantiation. "Human" is the concept. You are the instantiation. Right.

You're confusing the concept of a pattern with "instantiations". The "idea" of a christmas tree does not go away in late January, but unless one is terribly lazy the particular instantiation of that tree goes to the curb, and will never ever be back.

You're trying to equivocate the concept of "pattern" with particular, specific patterns in order to support the conclusion you want to be true- that your "pattern" is not unique, fragile, and finite.

No, I understand the distinction you are making, but if we could replicate the pattern that constitutes a human, we could easily do the same for the specific Christmas tree in my living room this year too.


Well, they can't be both.
Sure they can. :) Just like that famous picture can be both the vase and the profiles. Both interpretations are inherent to the whole. There is a part that is unchanging - a core that remains the same throughout all instantiations - and a part that is changing that reflects the uniqueness of any particular instantiation.

There are many things free will "could be thought of as". Imagination is a wonderful thing. All you're doing here is fuzzying up definitions to get to the answer you like. If there is a "fixed" pattern "unfolding", then it cannot be changed. If it can be changed, it is not fixed.

Yes; they both require one to egregiously obfuscate so they may appear plausible.

My, but you seem grumpy. :(

If we don't first imagine such futures, how will they ever come to pass?

kuroyume0161
18th December 2009, 05:30 PM
If we don't first imagine such futures, how will they ever come to pass?

I don't think that imagining 'star gates', teleportation devices, or warp drives will make them any more likely to 'come to pass'. Sometimes our imaginations go beyond what is physically possible in this universe.

rocketdodger
18th December 2009, 05:35 PM
You're confusing the concept of a pattern with "instantiations". The "idea" of a christmas tree does not go away in late January, but unless one is terribly lazy the particular instantiation of that tree goes to the curb, and will never ever be back.

Yeah, but the question is whether an instantiation of the pattern that represents your consciousness is the important part.

Many of us here think it is the actual pattern, not the instantiation, that matters when it comes to consciousness. That is, different instances are still the same consciousness.

rocketdodger
18th December 2009, 05:38 PM
I don't think that imagining 'star gates', teleportation devices, or warp drives will make them any more likely to 'come to pass'. Sometimes our imaginations go beyond what is physically possible in this universe.

Well, you are trivially wrong.

All devices must be imagined prior to their realization.

What you probably mean to say is that mental masturbation by people who will never contribute to the fields required, like most of us, won't make such devices any more likely to come to pass. And that is probably a true statement.

kuroyume0161
18th December 2009, 06:03 PM
Well, you are trivially wrong.

All devices must be imagined prior to their realization.

What you probably mean to say is that mental masturbation by people who will never contribute to the fields required, like most of us, won't make such devices any more likely to come to pass. And that is probably a true statement.

Show me the plans for that warp drive. ;)

All devices must be imagined, yes. Not all devices that are imagined will ever come to pass. Most are just silly. Think perpetual motion machines, for instance. I can imagine one, but the laws of the universe forbid it (the Law of Conservation of Energy, which has never been observed to be violated). End of imagining that. Same for FTL and recreating living organisms by turning them into energy fields and reconstructing them elsewhere. FTL is a violation of all known physical laws. Teleportation, at best, would require several stars' worth of direct power and computers capable of capturing innumerable (the number is so big that I can't even take a guess) quantum states. No number of contributions will assist in breaking the laws of physics (!).

Piscivore
18th December 2009, 10:15 PM
No, I understand the distinction you are making, but if we could replicate the pattern that constitutes a human, we could easily do the same for the specific Christmas tree in my living room this year too.
"replicate" =\= "the same specific X" It means you have two similar things. Like twins. Identical DNA, separate individuals.

Sure they can. :) Just like that famous picture can be both the vase and the profiles. Both interpretations are inherent to the whole. There is a part that is unchanging - a core that remains the same throughout all instantiations - and a part that is changing that reflects the uniqueness of any particular instantiation.
Except that picture is neither a vase nor two faces. It is an inherent property of our perceptual equipment that lets us perceive a vase or two profiles- not something inherent in the design on the paper. You're just describing an analogy for equivocation.

My, but you seem grumpy. :(
I'm sorry, I'm really not.

Piscivore
18th December 2009, 10:17 PM
Yeah, but the question is whether an instantiation of the pattern that represents your consciousness is the important part.

Many of us here think it is the actual pattern, not the instantiation, that matters when it comes to consciousness. That is, different instances are still the same consciousness.

And that notion is ridiculous. Identical twins disprove it right out of the gate.

Meadmaker
18th December 2009, 10:44 PM
1. With up-to-current information on the complexity of the brain and neuron it is generally accepted that it would require many QB to store all of the intricate connections and states. 100 billion neurons with 1000-10000 connections each with details of millions per neuron. Do the math. ~10^20 bytes or more. I'll speculate more like 10^30 or 40.


I think the math is wrong. Let us pretend that the existence of a connection is a defining characteristic of storing information in the brain. In reality, there is probably more to it than that, but we'll get to that later.

Each neuron can only connect to nearby neurons. Let us suppose that there are 100,000 neurons to which one neuron might connect. (That estimate seems high, but we'll go with it.)

If that's the case, then a given neuron can be either connected or not connected to another neuron. Two possibilities = 1 bit. 100,000 bits per neuron. With 100 billion neurons, that works out to about 10^16 total bits. I can buy a hard drive with 10^13 bits for 99 bucks. Total cost for enough memory to make a brain? Around a hundred thousand bucks.

Silicon is smaller and faster than networks of brain cells. (And hard drives. What are they made of? It occurs to me I don't know. Must be iron and plastic, I guess.) We don't know how to make one, but we have the processing and memory power required. We just don't know how to put them together.
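
As a rough sketch of the arithmetic behind the two estimates above, here is a back-of-the-envelope calculation in Python. The neuron, synapse, and per-synapse byte counts are simply the figures quoted in this exchange (the per-synapse megabyte in particular is speculation, not an established number):

# Back-of-the-envelope storage estimates for the two posts above.
# All figures are taken from the thread and are assumptions, not data.

NEURONS = 100e9                     # ~10^11 neurons

# Connectivity-only estimate: 1 bit per potential connection,
# with each neuron able to reach ~100,000 neighbours.
potential_partners = 100_000
connection_bits = NEURONS * potential_partners            # ~10^16 bits

# Richer estimate: ~10,000 actual synapses per neuron, each
# carrying on the order of a megabyte of analog state.
synapses_per_neuron = 10_000
bytes_per_synapse = 1e6
rich_state_bytes = NEURONS * synapses_per_neuron * bytes_per_synapse   # ~10^21 bytes

# Cost at the thread's 2009 price point: 10^13 bits for 99 dollars.
drives_needed = connection_bits / 1e13
print(f"connectivity-only: {connection_bits:.1e} bits, ~${drives_needed * 99:,.0f} in drives")
print(f"rich per-synapse state: {rich_state_bytes:.1e} bytes")

The two estimates differ by roughly five orders of magnitude, which is the real disagreement in this exchange: whether a connection is one bit or closer to a megabyte of analog state.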

SezMe
18th December 2009, 11:03 PM
I don't know that explains it completely. The few times I've been placed under anesthesia for surgery, the discontinuity was very different from that of sleep. What you are talking about, I think it would be more like anesthesia than sleep.
How was it different? The times I've gone into and out of anesthesia don't seem, in retrospect, different than the sleeping/waking cycle.

Ladewig
18th December 2009, 11:05 PM
The topic does make for interesting gedanken experiments.

I wonder if someone's lost memories might be recovered by the scanning process.

Also, the silicon version of yourself may not earn enough money to keep all your memories and reasoning abilities at the current level. How difficult would it be to decide which of your memories would be deleted?

kellyb
19th December 2009, 12:12 AM
At the very least, you could upload a copy of yourself every night, so if you got run over on the way home from work (or killed in a battle, whatever), you could "roll back" to that previous version. Yeah, you would lose some hours of experience, but that is much better than never existing again at all.

Thinking about this, I'm starting to have plausibility issues with all this. It's difficult to articulate, so I can only express this through example.

Suppose you upload a copy one morning before going to work, and the copy is left "active" and going about its business in the digital world or whatever while "you" go to work, and die in a traffic accident on the way.
"You" will never re-awaken ever again, even though your copy is still conscious. Right?

kuroyume0161
19th December 2009, 12:19 AM
I think the math is wrong. Let us pretend that the existence of a connection is a defining characteristic of storing information in the brain. In reality, there is probably more to it than that, but we'll get to that later.

Each neuron can only connect to nearby neurons. Let us suppose that there are 100,000 neurons to which one neuron might connect. (That estimate seems high, but we'll go with it.)

If that's the case, then a given neuron can be either connected or not connected to another neuron. Two possibilities = 1 bit. 100,000 bits per neuron. With 100 billion neurons, that works out to about 10^16 total bits. I can buy a hard drive with 10^13 bits for 99 bucks. Total cost for enough memory to make a brain? Around a hundred thousand bucks.

Silicon is smaller and faster than networks of brain cells. (And hard drives. What are they made of? It occurs to me I don't know. Must be iron and plastic, I guess.) We don't know how to make one, but we have the processing and memory power required. We just don't know how to put them together.

You are incorrect about the current understanding of neurons. As I stated, current observations have neurons carrying far more distinct information (in the millions) than was previously (naively) ASSUMED. Connections between neurons aren't the only constituents of neural 'state' (there is retained state information beyond them). And the states are not digital binary values - they are analog triggers. As far as I am aware, this puts the entire 'science' of artificial intelligence back another century.

To put it another way: if your estimates were true we would have a silicon emulated brain of at least a mouse by now that could be embodied in a robot to do 'mouse things'. At my last check, we were closer to a cockroach (many, many, many times below the brain power of homo sapiens sapiens). Why is that?

I don't want to hear about IBM's cat. It is only 1 billion neurons with 10 trillion synapses. Can they show that it has the hunting skills of a cat? I doubt it (actually, I challenge it!). Probably licks itself and responds to fuzzy stimuli - not a real cat. And that is one friggin super major computer at nearly 150,000 processors and 144 TB of memory. Piddling compared to what it will take to emulate a real complex organism. When IBM can download their supposed success into a robotic system that then has the actuation of the lifeform being emulated, I'll be impressed. So far, like all house cats, I remain unimpressed. ;P

GreyICE
19th December 2009, 12:43 AM
Here's something related to this thought experiment that I would like to bring up. Let's take it as given that this consciousness transfer to silicon is possible. I'm picturing a massively parallel system with billions of tiny processors, like our neurons, equipped with cameras and microphones and heat and touch sensors so it can experience the world in real time.

Now, I'm going to disconnect the cameras and such, and instead I'm going to create a new simulated world and feed the inputs from that simulation into the sensor inputs of siliconme. Siliconme would now be "living" in this new, simulated world. It would be aware. It would be interacting. It would be conscious, but in a completely different place. We could tap into siliconme's experience and see how he deals with the simworld if we wanted to, and we could verify that his external behavior is completely consistent with an aware, conscious, being.

So far, there's nothing outlandish about this. It's pretty standard science fiction fare.

Now, though, I'm going to do a hardware change. I'm going to port siliconme onto a powerful, but single-processor, computer. I'll port his simworld onto something similar. After the port is complete, the computations will be 100% identical to the computations performed by siliconme 1.0. However, they will not all be happening at the same speed or simultaneously. An observer would see a powerful computer making computations, but the observer couldn't immediately tell whether there was a conscious entity involved or whether it was just one of google's servers. Nevertheless, it seems to me that if the massively parallel real-time version of siliconme was a conscious entity, the new serialized uniprocessor version would have to be as well.

The implication is that a complex system might have consciousness, but we might not be able to recognize that consciousness because its behavior is not what we associate with consciousness, or it might operate on scales of time and space that we cannot easily observe.

The universe is filled with highly complex, interacting, systems, only a few of which are brains.

Heh, you really have no idea.

Iterative processes do not look super complicated. They tend to be the computer doing many similar things many times. Google's servers are doing query routines, or website spidering, or perhaps context matching. It's really easy to recognize those abilities as simple (non-sentient) systems.

Sentient behavior is never so simple.

Complexity is not the only benchmark, but it's a good one. It's the same way you don't need to be able to read a computer file to tell it's encrypted, or don't need to know how aliens communicate to know if they're using EM waves to do so.
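
For what it's worth, the serialization step in the quoted thought experiment is easy to demonstrate at toy scale. A minimal sketch, using a made-up eight-unit network: the "simultaneous" update and the one-unit-at-a-time update produce exactly the same result, as long as the serial version only reads the old state.

# Toy illustration of the serialization step in the thought experiment
# quoted above. The network and its weights are invented for the example.

import random

random.seed(0)
N = 8
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(0, 1) for _ in range(N)]

def step_parallel(s):
    # Conceptually simultaneous: every unit reads the same old state.
    return [sum(weights[i][j] * s[j] for j in range(N)) for i in range(N)]

def step_serial(s):
    # One unit at a time, double-buffered so no unit sees a partially
    # updated state; the result is identical to the "parallel" version.
    new = [0.0] * N
    for i in range(N):
        acc = 0.0
        for j in range(N):
            acc += weights[i][j] * s[j]
        new[i] = acc
    return new

assert step_parallel(state) == step_serial(state)
print("serialized update matches the 'simultaneous' one exactly")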

rdaneel
19th December 2009, 01:57 AM
Thinking about this, I'm starting to have plausibility issues with all this. It's difficult to articulate, so I can only express this through example.

Suppose you upload a copy one morning before going to work, and the copy is left "active" and going about its business in the digital world or whatever while "you" go to work, and die in a traffic accident on the way.
"You" will never re-awaken ever again, even though your copy is still conscious. Right?
This is why I find the whole idea of copying myself unsatisfying. I do not believe for an instant that a copy of me is me. As soon as the copy is separated, it becomes somebody else no matter how perfect it is.

However, I do think there is a way to sidestep this issue, just never disconnect.
Set up the hardware simulation as an expansion to your brain, extending your current consciousness into it and maintaining your persistence of self. The hardware would have the same kind of redundancy (or better) as your current wetware and if set up right, the inevitable end of your biological brain will cause you no more distress than the loss of thousands of braincells you already experience every day.

This is, of course, assuming that when we've got the technology to simulate human brains, we'll also have the technology to create such persistent connections.

kellyb
19th December 2009, 02:19 AM
This is why I find the whole idea of copying myself unsatisfying. I do not believe for an instant that a copy of me is me. As soon as the copy is separated, it becomes somebody else no matter how perfect it is.

However, I do think there is a way to sidestep this issue, just never disconnect.
Set up the hardware simulation as an expansion to your brain, extending your current consciousness into it and maintaining your persistence of self. The hardware would have the same kind of redundancy (or better) as your current wetware and if set up right, the inevitable end of your biological brain will cause you no more distress than the loss of thousands of braincells you already experience every day.

This is, of course, assuming that when we've got the technology to simulate human brains, we'll also have the technology to create such persistent connections.

Oooo...
I like it!
That sort of solves the paradox-like issue that was bugging me.
I feel like a total woo saying this, but it seems like if there's a divergence in continuity (or something) or simultaneous duplication of consciousness, it would...like, violate some principle of space-time or something and necessitate the existence of two, actually separate "individuals", both of whom are at risk of non-existence, and the threat of "death" not at all ameliorated by the existence of the other.

Beth
19th December 2009, 07:02 AM
"replicate" =\= "the same specific X" It means you have two similar things. Like twins. Identical DNA, separate individuals. A valid point, although even newborn identical twins would have unique consciousness patterns. Still, it's a good argument against the 'eternal life' claim. It depends on what is meant by 'you'. Is your consciousness capable of being separated from the physical body it inhabits? If yes, then that extraction, the pattern that encodes it, is what I would consider the soul. If no, then well, such a thing as a soul would not exist.

Except that picture is neither a vase nor two faces. It is an inherent property of our perceptual equipment that lets us perceive a vase or two profiles - not something inherent in the design on the paper. You're just describing an analogy for equivocation.
An analogy, yes; equivocation - maybe (pun intended :p). My point is that different POVs can lead to different answers to a seemingly black-and-white question.

I'm sorry, I'm really not.
No problem. I have an inordinate fondness for grumpy old men, particularly if they are short, plump and balding.

I don't know that explains it completely. The few times I've been placed under anesthesia for surgery, the discontinuity was very different from that of sleep. What you are talking about, I think it would be more like anesthesia than sleep.
How was it different? The times I've gone into and out of anesthesia don't seem, in retrospect, different than the sleeping/waking cycle.

For me, the experience of anesthesia is very disorienting. One moment I'm in pre-op with the anesthetist doing their thing. The next, I'm in a different place. There is no sensation of either falling asleep, waking up, or any time at all having passed. One blink I'm waiting for surgery, the next I'm in the recovery room, with no awareness of anything having happened or any time having passed. When I wake up from sleep, I'm always aware that I've been asleep.

I suppose that 'waking' up a silicon copy or cloned body might be like either experience though. Since it's never been done, who can say.

GreyICE
19th December 2009, 07:23 AM
Thinking about this, I'm starting to have plausibility issues with all this. It's difficult to articulate, so I can only express this through example.

Suppose you upload a copy one morning before going to work, and the copy is left "active" and going about its business in the digital world or whatever while "you" go to work, and die in a traffic accident on the way.
"You" will never re-awaken ever again, even though your copy is still conscious. Right?

Okay, lets reverse the process, and download your brain back into a new body that's functionally identical to your old one.

Is it functionally any different than you going to sleep for 24 hours instead of heading off to work and getting hit by a car?

The question is if there's anything particularly you about you. The answer is, probably not.

Meadmaker
19th December 2009, 07:53 AM
You are incorrect about the current understanding of neurons. As I stated, current observations have neurons carrying far more distinct information (in the millions) than was previously (naively) ASSUMED. Connections between neurons aren't the only constituents of neural 'state' (there is retained state information beyond them). And the states are not digital binary values - they are analog triggers. As far as I am aware, this puts the entire 'science' of artificial intelligence back another century.

A memory cell in a computer is a complex analog circuit with many states, but only two of them matter for the purpose of computation. We really know very little about the equivalent information capacity of a neuron and/or its synapses.


I don't want to hear about IBM's cat. It is only 1 billion neurons with 10 trillion synapses. Can they show that it has the hunting skills of a cat? I doubt it (actually, I challenge it!).

Precisely. It may have processing power equivalent to a cat's brain, but we don't know how to program it so that it behaves in any way like a cat.

And, we aren't sure if it really has equivalent processing power, because we don't truly know what a neuron does or how it stores information.

My guess is that dedicated, purpose built silicon circuits will be more efficient than their protein based counterparts. I think that some day we will be able to mimic the function of a neuron using processors and memory, and when we do, those processors and memory will be smaller and use less energy than brain cells. However, right now, we don't actually know what an individual neuron really does, and we have no clue how to hook together a group of them to make emergent, intelligent seeming, behavior.

Ichneumonwasp
19th December 2009, 07:59 AM
Okay, lets reverse the process, and download your brain back into a new body that's functionally identical to your old one.

Is it functionally any different than you going to sleep for 24 hours instead of heading off to work and getting hit by a car?

The question is if there's anything particularly you about you. The answer is, probably not.


All very true, but what bothers people about this sort of issue is not that the other is identical, but that we have a localized feeling of "me" in this body. Any identical version of "me" in another body wouldn't exactly be me as far as my feelings go because of the way I value my current existence.

It isn't rational, but we aren't speaking rationality with this -- this is about one's feelings, which don't follow the same rules.

NewtonTrino
19th December 2009, 10:52 AM
Personally I think we are going to need several things before we get something like this working.

1st we are going to need a revolution in computing hardware. This could happen incrementally or it could happen quickly if someone makes a breakthrough in nano fabrication technology. I'm picturing a processor that looks like a solid chunk of diamond about a cubic inch in volume that contains a combination of processors and memory. I don't know if we try to cool it, if it runs on light or something or if we just let it get hot.

2nd we are going to need a revolution in software. This means new languages for better expressing parallel concepts as well as a true understanding of how the software of the human brain works. Not only do we have to understand how to simulate the brain but we need a good environment to figure out things like inputs. I expect virtual reality to be one of the routes to figuring this stuff out (and I expect it to mostly come out of games). We also need to figure out how to do things like instill instincts. Imagine being able to reprogram your brain in that way.

Again let me state that evolution has managed to pack this processing power into our heads. If the brain is simply a physical process I don't see any reason we can't simulate it. In fact once we understand how it works it's likely we will be able to improve on it.

Of course there are other even more fringe theories on how strong AI could come about. For example some people think it's theoretically possible that the network itself could somehow become conscious. Personally I think that's a crock but it's hard to rule anything out definitively given how little we understand the software of the brain.

NewtonTrino
19th December 2009, 10:57 AM
And yes, even if you make a copy of your brain YOU STILL DIE. The "new you" can certainly attend your funeral, and he/she will feel like a continuation of you. In reality though it's just a copy and you still die. Your thoughts go on in some form though and it will certainly feel like immortality to the other you.

Robert J. Sawyer wrote a sci-fi book called "Mindscan" about this very topic. It was well done and worth the read if you are interested in this topic.

rocketdodger
19th December 2009, 01:10 PM
And that notion is ridiculous. Identical twins disprove it right out of the gate.

I fail to see how the fact that individuals with the same DNA pattern turn out different, after different lives have shaped what emerges from that pattern, disproves it "right out of the gate."

rocketdodger
19th December 2009, 01:11 PM
Thinking about this, I'm starting to have plausibility issues with all this. It's difficult to articulate, so I can only express this through example.

Suppose you upload a copy one morning before going to work, and the copy is left "active" and going about its business in the digital world or whatever while "you" go to work, and die in a traffic accident on the way.
"You" will never re-awaken ever again, even though your copy is still conscious. Right?

Yep.

Sucks, but there is really nothing you can do about it.

kellyb
19th December 2009, 01:52 PM
Okay, lets reverse the process, and download your brain back into a new body that's functionally identical to your old one.

Is it functionally any different than you going to sleep for 24 hours instead of heading off to work and getting hit by a car?



That just seems different, though. That makes it a continuation of a pre-existing consciousness as opposed to the creation of an additional, separate consciousness.

It seems like a physicist familiar with mathematical articulations of time and space might be able to explain the differences easily. Maybe.

GreyICE
19th December 2009, 02:04 PM
All very true, but what bothers people about this sort of issue is not that the other is identical, but that we have a localized feeling of "me" in this body. Any identical version of "me" in another body wouldn't exactly be me as far as my feelings go because of the way I value my current existence.

It isn't rational, but we aren't speaking rationality with this -- this is about one's feelings, which don't follow the same rules.

Of course not. Our survival instinct would be highly broken if we allowed ourselves to die because something that 'wasn't us' was going on, even if that 'not me' is in every way identical to 'me.'

That being said, I'd rather a 'not me' with all my memories wake up 24 hours from now if I'm hit by a car. And I'd be a lot more willing to do life-threatening things if I knew that a 'not me' would wake up with my memories -24 hours.

RecoveringYuppy
19th December 2009, 02:16 PM
And I'd be a lot more willing to do life-threatening things if I knew that a 'not me' would wake up with my memories -24 hours.

Visions of the world ending with the entire human race trapped on Groundhog day.

GreyICE
19th December 2009, 02:41 PM
That just seems different, though. That makes it a continuation of a pre-existing consciousness as opposed to the creation of an additional, separate consciousness.

It seems like a physicist familiar with mathematical articulations of time and space might be able to explain the differences easily. Maybe.
Let's say that the reports of your death were in error, and the personality backup created 24 hours ago existed simultaneously with your current existence, who miraculously survived that fire by diving out a window and falling 5 stories onto a passing hay truck.

From any perspective, this makes no difference to the -24 you, who is absolutely unchanged by whether or not you actually survived.

If the -24 you is the continuation of your existence when the -0 you is dead, what are they if the -0 you is still alive? What if the -0 you was put on a spaceship to Alpha Centauri, and would be out of contact for at least 2-3 centuries, would that be functionally different from the -0 you being dead?

NewtonTrino
19th December 2009, 03:08 PM
I wonder if we'll be able to pause consciousness or not. Is there something implicitly realtime about the process, or can you slow it down?

westprog
19th December 2009, 03:27 PM
Yeah, but the question is whether an instantiation of the pattern that represents your consciousness is the important part.

Many of us here think it is the actual pattern, not the instantiation, that matters when it comes to consciousness. That is, different instances are still the same consciousness.

Why do we need any instantiations then?

Ichneumonwasp
19th December 2009, 03:50 PM
Of course not. Our survival instinct would be highly broken if we allowed ourselves to die because something that 'wasn't us' was going on, even if that 'not me' is in every way identical to 'me.'

That being said, I'd rather a 'not me' with all my memories wake up 24 hours from now if I'm hit by a car. And I'd be a lot more willing to do life-threatening things if I knew that a 'not me' would wake up with my memories -24 hours.


Which assumes that our survival instinct is a rational enterprise, which it is not.

Nature only has what it has to work with. We are not structured to deal emotionally with the idea that we could upload our memories into silicon and still exist. 'We" wouldn't exist any longer as far as 'we' are concerned emotionally.

Rationally, yes, it would make more sense for us to upload our memories; but that would be meaningless in an evolutionary sense also.

Remember that our existence is subjective; our consciousness is subjective. In fact, consciousness has only a subjective ontology; so 'I' would cease to exist at death from my perspective whether or not I upload my memories to another system. Doesn't mean I wouldn't do it, but I would still die.

westprog
19th December 2009, 04:13 PM
I don't want to hear about IBM's cat. It is only 1 billion neurons with 10 trillion synapses. Can they show that it has the hunting skills of a cat? I doubt it (actually, I challenge it!). Probably licks itself and responds to fuzzy stimuli - not a real cat. And that is one friggin super major computer at nearly 150,000 processors and 144 TB of memory. Piddling compared to what it will take to emulate a real complex organism. When IBM can download their supposed success into a robotic system that then has the actuation of the lifeform being emulated, I'll be impressed. So far, like all house cats, I remain unimpressed. ;P

If it's not cooperating and doing what they want it to, isn't it behaving exactly like a cat?

westprog
19th December 2009, 04:17 PM
All very true, but what bothers people about this sort of issue is not that the other is identical, but that we have a localized feeling of "me" in this body. Any identical version of "me" in another body wouldn't exactly be me as far as my feelings go because of the way I value my current existence.

It isn't rational, but we aren't speaking rationality with this -- this is about one's feelings, which don't follow the same rules.

One might as well sleep on the edge of a cliff, because falling asleep is the same as dying anyway.

Ichneumonwasp
19th December 2009, 04:49 PM
One might as well sleep on the edge of a cliff, because falling asleep is the same as dying anyway.


Non-sequiturs are always fun.

First, sleep is not non-consciousness in all senses, but that is beside the point. You could just as easily invoke anesthesia. If I experience death, I still experience death. It isn't as though I wouldn't experience it if my memories were uploaded to another spot. The death would still be real and I would still have an emotional reaction to it.

If I created a new 'me' artificially, the new person would not be 'me'. I would still be here experiencing the world subjectively. The other 'sort-of-me' would be over there experiencing the world subjectively over there.

As I said earlier, it isn't as though I wouldn't want to do it, or I would reject the chance to upload my memories to continue life in some sense, but the emotional reaction one has to this scenario is not about rationality; it is emotional.


ETA:

But, if you want to equivocate over the various meanings of the word 'conscious', then that is fine.

rocketdodger
19th December 2009, 07:46 PM
Why do we need any instantiations then?

Because a pattern can't exist without any instances.

You are basically asking "why can't we exist as numbers in the void?" Well, because numbers don't exist in the void.

rocketdodger
19th December 2009, 07:55 PM
One might as well sleep on the edge of a cliff, because falling asleep is the same as dying anyway.

Well, what if a creature evolved to be implicitly comfortable with the whole "falling asleep is losing continuity" thing, because from the standpoint of natural selection it is irrelevant whether there is actual continuity?

And what if such a creature also evolved to be implicitly uncomfortable with the whole "actual death" thing, because from the standpoint of natural selection actual death is a very bad event?

Then I would say that such a creature would treat falling asleep and actual death very differently, even though those events are equivalent for the moment to moment consciousness of the creature.

westprog
20th December 2009, 07:30 AM
Non-sequiturs are always fun.

First, sleep is not non-consciousness in all senses, but that is beside the point. You could just as easily invoke anesthesia. If I experience death, I still experience death. It isn't as though I wouldn't experience it if my memories were uploaded to another spot. The death would still be real and I would still have an emotional reaction to it.

If I created a new 'me' artificially, the new person would not be 'me'. I would still be here experiencing the world subjectively. The other 'sort-of-me' would be over there experiencing the world subjectively over there.


It would be in no sense immortality.

As to the question whether or not one is still the same person after sleeping, or going into a coma, or exchanging however many molecules - it feels like I'm the same person. Since being a person is about how it feels, it would be silly to not act on that basis.


As I said earlier, it isn't as though I wouldn't want to do it, or I would reject the chance to upload my memories to continue life in some sense, but the emotional reaction one has to this scenario is not about rationality; it is emotional.


ETA:

But, if you want to equivocate over the various meanings of the word 'conscious', then that is fine.

Ichneumonwasp
20th December 2009, 01:11 PM
It would be in no sense immortality.

As to the question whether or not one is still the same person after sleeping, or going into a coma, or exchanging however many molecules - it feels like I'm the same person. Since being a person is about how it feels, it would be silly to not act on that basis.


Of course. But the feeling happens in a particular location -- that is what subjectivity means (the feeling is available to me and not to you).

If 'I' were to upload my memories into another body, that other body wouldn't be 'me' from my perspective. 'I' cannot engage in subjectivity over there.

I would still feel as though I died and that was it, however much I know rationally that "I" now exist over there, for all intents and purposes.

It is precisely because being a person is based on feeling that the whole idea feels weird and somehow wrong even though I know intellectually that it is not (which is why I would still do it).
But my doing it is not an emotional response per se, so much as a rational one.

rocketdodger
20th December 2009, 06:43 PM
Of course. But the feeling happens in a particular location -- that is what subjectivity means (the feeling is available to me and not to you).

If 'I' were to upload my memories into another body, that other body wouldn't be 'me' from my perspective. 'I' cannot engage in subjectivity over there.

I would still feel as though I died and that was it, however much I know rationally that "I" now exist over there, for all intents and purposes.

It is precisely because being a person is based on feeling that the whole idea feels weird and somehow wrong even though I know intellectually that it is not (which is why I would still do it).
But my doing it is not an emotional response per se, so much as a rational one.

I really think people would just get used to it.

I mean, if you suggested sleep to an entity that previously had a continuous consciousness, they would react the same way we are reacting.

Yet, since every night we go to sleep and every morning we wake up with a sense of connection to that previous self, we have gotten used to it. It doesn't feel like "we" are ending each night, because we exist the next day. But it isn't really "us", it is a brand new instance created from the memories of the prior instance.

So I think we would get used to it.

Beerina
21st December 2009, 10:13 AM
I am aware that there are many individuals emotionally invested in the notion that strong AI is false. I can understand why -- the idea that our consciousness has some magical component that is beyond the boring and cold scientifically understood world is one that can be tempting.

But there are also very good emotional reasons to support the notion that strong AI is true.

Because if strong AI is true, the implications aren't limited to a realization that we are all just meatbags having the illusion of non-deterministic thought. There is so much more.

If strong AI is true, humans will someday be able to upload their consciousness into any suitable substrate. This is not only the closest thing to immortality that could be available in our universe, but it is also far better than immorality. The ability to upload our consciousness implies the ability to modify it as well -- in any way we desire. Being able to upload also implies the ability to travel at lightspeed between suitable locations. So if you are interested in living forever, or living in any way you could possibly think of, you should want strong AI to be true.

If it makes you feel any better, Searle makes a very strong point that consciousness, whatever it is, is nevertheless a real phenomenon, and therefore arises out of physics somehow.

And because of that, it cannot be a purely informational process. Information is really just atoms and electrons and energy being pushed around -- or, more accurately, an interpretation of that pushing. Our consciousness is not an interpretation (these electrons and ions in the brain represent a "box with a ball in it").

So you won't be able to just "upload" your mind into a computer unless the computer is specifically designed to have hardware that lets a conscious mind (a real, physical phenomenon) arise out of it.


Now having said that, I don't think there is necessarily anything special that the conscious mind does that can't be adequately, and perhaps easily, simulated on the computer. But said simulation will not be conscious, while a computer with the special hardware would be.

In other words, your conscious mind is, evolutionarily, just another part of the data processing your brain does, and thus could be swapped out for an adequately programmed but non-conscious device. Keep in mind, though, that much of your thinking may actually be unconscious. How many times a day do you have a conscious thought of a sentence? Yet you did not ever consciously construct that sentence. I think people might be surprised at how much of intelligence actually derives unconsciously and is just passed in front of the conscious processor for brief evaluation.


As for immortality, yeah, I would want one of those computers with the special hardware. And I'd want the transfer to be my mind co-sharing consciousness with the hardware, then shutting down the brain while still attached. Not just "copying" my memories then shutting off the body, because then I'm dead and a copy of me goes on to live forever.

Better yet, slowly replace neurons or whatever with manufactured parts that also generate the conscious experience. And without knowing it, your actual mind has transferred to hardware.



Hehe, I often joked to myself that if I were frozen, I would want an etched granite plate that said, "Do not resurrect until" and then a description of the stuff above. "You are not permitted to copy-and-destroy this physical brain."

Malerin
21st December 2009, 12:56 PM
If you believe you can upload "yourself", and if you believe in a cyclical universe (or very large multiverse), then you've already achieved a sort of immortality. Given enough cycles of Big Bang-Big Crunch, this particular universe will eventually repeat itself. Also, given a sufficiently large multiverse, there already exists another universe exactly like this one was 20/30/100 years ago, so that if you die in this one, the "you" in the other universe continues on.

I don't think personal continuity works with a multiverse because there are multiple simultaneous instances of you, but if the universe is cyclical, that problem doesn't arise. If I'm still "me" ten years from now (after all my atoms have been replaced by other atoms), would I still be "me" a quadrillion years from now, if a universe arises that is identical to this one?

Rairun
21st December 2009, 01:23 PM
I think people have issues with this because they think about the problem in these terms:


A
| \
A B


A = me
B = copy

Whereas in reality it works more or less like this:


A
/ \
B C


A = current me
B = future me
C = future me

That means both B and C are future versions of A. B can't claim to be A any more than C can. Conversely, when A decides to make a copy of himself, he knows that there will be two instances of him in the future--not just the "real" person and a copy. There will be effectively two of him.

This is exactly what happens with identical twins. They are two separate people, but from the point of view of the undivided zygote, they are both future versions of it. And if we kill B, person A will still survive in the form of C.

roger
21st December 2009, 01:42 PM
I'm no more excited by uploading my brain than I'm excited by the fact that Joe Blow will survive my death. Neither experience is accessible to *this* consciousness. Of course, the consciousness that develops on the computer based on the copy of my brain will be going "yay! I'm immortal (relatively)!"

OTOH, it would be extremely cool if I could copy myself out, and then remerge later, I would assume. Make 10 copies and you have effectively multiplied your remaining lifespan by 10, and it will be as if you experienced all 11 lives. Let some copies work, others bum around the world, and merge and create new copies every month or so. Of course, with the population density we already have, that's not going to be happening even if we solve the problems of not only copying, but of creating sleeves for the copied minds.

rocketdodger
21st December 2009, 02:59 PM
If you believe you can upload "yourself", and if you believe in a cyclical universe (or very large multiverse), then you've already achieved a sort of immortality. Given enough cycles of Big Bang-Big Crunch, this particular universe will eventually repeat itself. Also, given a sufficiently large multiverse, there already exists another universe exactly like this one was 20/30/100 years ago, so that if you die in this one, the "you" in the other universe continues on.

I don't think personal continuity works with a multiverse because there are multiple simultaneous instances of you, but if the universe is cyclical, that problem doesn't arise. If I'm still "me" ten years from now (after all my atoms have been replaced by other atoms), would I still be "me" a quadrillion years from now, if a universe arises that is identical to this one?

Yeah, I have thought about that before.

The sword cuts both ways, though.

For example, in an extremely large multiverse, there might be a version of me at some point that suddenly dies in an extremely statistically unlikely, and rather painful, way. That would suck.

NewtonTrino
21st December 2009, 03:03 PM
Another interesting take on this is "The Metamorphosis of Prime Intellect"
http://www.kuro5hin.org/prime-intellect/

Ichneumonwasp
21st December 2009, 05:07 PM
I really think people would just get used to it.

I mean, if you suggested sleep to an entity that previously had a continuous consciousness, they would react the same way we are reacting.

Yet, since every night we go to sleep and every morning we wake up with a sense of connection to that previous self, we have gotten used to it. It doesn't feel like "we" are ending each night, because we exist the next day. But it isn't really "us", it is a brand new instance created from the memories of the prior instance.

So I think we would get used to it.


Let me make sure that we are both clear about what we are talking about.

The person who *is* the uploaded memories has nothing to get used to. That person would simply be "me" in another location.

It is the person who is dying who would not have the experience of actually being the "new me"; that is the person who would have difficulty with the process. I'm not sure there is anything to get used to. That person is going to be dead soon.

I am not arguing that anyone would not want to do this. But the process is emotionally more problematical than I think many want to admit. But that is because we are pretty screwed up creatures to begin with.

Third Eye Open
21st December 2009, 05:20 PM
I think people have issues with this because they think about the problem in these terms:


A
| \
A B


A = me
B = copy

Whereas in reality it works more or less like this:


A
/ \
B C


A = current me
B = future me
C = future me

That means both B and C are future versions of A. B can't claim to be A any more than C can. Conversely, when A decides to make a copy of himself, he knows that there will be two instances of him in the future--not just the "real" person and a copy. There will be effectively two of him.

This is exactly what happens with identical twins. They are two separate people, but from the point of view of the undivided zygote, they are both future versions of it. And if we kill B, person A will still survive in the form of C.

This.

rocketdodger
21st December 2009, 05:36 PM
Let me make sure that we are both clear about what we are talking about.

The person who *is* the uploaded memories has nothing to get used to. That person would simply be "me" in another location.

It is the person who is dying who would not have the experience of actually being the "new me"; that is the person who would have difficulty with the process. I'm not sure there is anything to get used to. That person is going to be dead soon.

I am not arguing that anyone would not want to do this. But the process is emotionally more problematical than I think many want to admit. But that is because we are pretty screwed up creatures to begin with.

Right.

I am saying that after doing it a few times, you would be saying to yourself hey, I know that this instance of me won't continue after the process, but I remember thinking that before I did it the last time, and as far as I could tell nothing happened, so why worry about it...

That is why I brought up sleep. It is the same thing -- if you talked to a creature that never slept, it would probably be terrified to try it -- the loss of continuity would be very different than what it is used to. It might feel like losing that continuity would be the end of itself, or whatever. But after it tried it, and woke up the next day like the rest of us, it would realize that there wasn't anything to get worried about. Yeah, the previous instance is now gone for all time, but who cares? And after doing that a few times, it would forget all about the fact that it was actually dying each time.

Ichneumonwasp
21st December 2009, 05:50 PM
Right.

I am saying that after doing it a few times, you would be saying to yourself hey, I know that this instance of me won't continue after the process, but I remember thinking that before I did it the last time, and as far as I could tell nothing happened, so why worry about it...

That is why I brought up sleep. It is the same thing -- if you talked to a creature that never slept, it would probably be terrified to try it -- the loss of continuity would be very different than what it is used to. It might feel like losing that continuity would be the end of itself, or whatever. But after it tried it, and woke up the next day like the rest of us, it would realize that there wasn't anything to get worried about. Yeah, the previous instance is now gone for all time, but who cares? And after doing that a few times, it would forget all about the fact that it was actually dying each time.


Unless you question -- but am I really the same person? How would I know? I know who I am now, but is it really the same?

It's an emotional thing. I'm not sure that we would ever get used to it.

But, again, that is no argument against doing it. I certainly would. I'm just not sure I would ever get used to it or fully trust it. I have no way to test it. I would have to trust the process.

NewtonTrino
21st December 2009, 08:04 PM
It's an emotional thing. I'm not sure that we would ever get used to it.


If we know enough to do this then it's likely that we will have the ability to tweak your emotions as well. If it becomes too uncomfortable just turn up the "disembodied persona" knob in the UI. ;)

I certainly wouldn't call us human at this point. Really, though, being human does have its downsides.

What are the long term implications of us being able to overclock our brain programs?

Malerin
21st December 2009, 08:21 PM
Yeah, I have thought about that before.

The sword cuts both ways, though.

For example, in an extremely large multiverse, there might be a version of me at some point that suddenly dies in an extremely statistically unlikely, and rather painful, way. That would suck.

And there might be a future version of you also dying a very painful death. Of course, knowledge that you will be tortured in the immediate future would be very unsettling. Knowledge that a multiverse duplicate of you is being tortured doesn't even come close. Why does it make a difference?

IMO, personal identity is the hardest problem in philosophy. Consciousness pales in comparison.

PixyMisa
21st December 2009, 09:31 PM
And there might be a future version of you also dying a very painful death. Of course, knowledge that you will be tortured in the immediate future would be very unsettling. Knowledge that a multiverse duplicate of you is being tortured doesn't even come close. Why does it make a difference?

IMO, personal identity is the hardest problem in philosophy. Consciousness pales in comparison.
Personal identity is, like consciousness, a solved problem in philosophy - though much work remains to be done scientifically.

Of course, solved problems don't bring tenure, so most philosophers are reluctant to grant that any problem is solved.

That joke I keep referring to:

The mathematics department is the second cheapest department in the university - they only require pencils, paper, and waste-paper baskets.

The philosophy department is the cheapest - all they need are pencils and paper.

Beth
21st December 2009, 09:35 PM
Unless you question -- but am I really the same person? How would I know? I know who I am now, but is it really the same?

It's an emotional thing. I'm not sure that we would ever get used to it.

But, again, that is no argument against doing it. I certainly would. I'm just not sure I would ever get used to it or fully trust it. I have no way to test it. I would have to trust the process.

This sounds an awful lot like having faith. Do you believe your eternal soul can continue on after you have died?

westprog
22nd December 2009, 07:37 AM
This.

It's actually more like:

A
/\
B C
! !
D E

As far as D is concerned, the fact that at some point in the past, he "was" A has nothing to do with it. Right now, he isn't A or E.

There's no more reason why he should feel that he is E than he should feel that he is his children, or his paintings. He'd like them to live on after him, sure, but they wouldn't be him.

Robin
22nd December 2009, 07:49 AM
I am aware that there are many individuals emotionally invested in the notion that strong AI is false.
I am not aware of any who are emotionally invested in the notion that strong AI is false.

I would say it is quite the opposite: the emotional investment comes from supporters of a particular interpretation of Strong AI.
I can understand why -- the idea that our consciousness has some magical component that is beyond the boring and cold scientifically understood world is one that can be tempting.
No, on the contrary, most people who suggest that Strong AI may be wrong do so because they don't think it has a magical component.
But there are also very good emotional reasons to support the notion that strong AI is true.
Indeed there are, but that does not make it true.
To that I reply that my own estimates put the arrival time of such capability at less that 50 years from now. In fact such a thing might be possible within 20 years or so, and economic feasibility would follow within a few decades.
I think you are dreaming, even if the particular interpretation of Strong AI you are talking about is true the technology you talk of is a good deal more than 20 years away - it will take more than just computing power.
Hopefully I have shown that there are emotional reasons to support strong AI that are just as good, if not better, than those for opposing it.
As if that was ever in doubt.

rocketdodger
22nd December 2009, 09:20 AM
I am not aware of any who are emotionally invested in the notion that strong AI is false.

You have got to be kidding.

Rairun
22nd December 2009, 01:21 PM
It's actually more like:

A
/\
B C
! !
D E

As far as D is concerned, the fact that at some point in the past, he "was" A has nothing to do with it. Right now, he isn't A or E.

There's no more reason why he should feel that he is E than he should feel that he is his children, or his paintings. He'd like them to live on after him, sure, but they wouldn't be him.

Yes, that's true, but the issue is that none of those instances has any more reason to feel that it is any of the others. Yet... currently, A doesn't break down crying because he's going to die (i.e. become B). And B would fight for his right to die in the way he wants (i.e. become D).

The kind of technology we are discussing here won't just make us immortal. It will radically change the fiction of the self that we've created for ourselves. Unless we regulate the use of that technology very strictly so as not to disturb that notion, we will have to review the idea of what it is to be an individual.

No one wants to experience a painful death, but think about dying peacefully in your sleep. I don't think anyone will care about dying that way if they have a backup. When we sleep, we are replaced by another copy anyway. Given enough time, we'll see concurrent copies who weren't properly destroyed agreeing to be put to death, because they were already going to die a few hours later when they decided to go to bed.

If our culture evolved that way, I definitely don't think C would refuse to be destroyed if B existed, for example. No, they are not exactly the same, but that doesn't matter. When we have an accident and hit our heads, we often forget what we experienced a few minutes before the accident... but that small loss hardly troubles us. That means we already don't care if the person who has an accident survives, as long as there is someone who's similar enough to take their place.

Basically, I think that the people who say that B and C should see themselves as separate individuals are stopping themselves from asking the hard questions. Yes, technically they are separate people--but that's in no way different from how I'm different from who I was 5 minutes ago. We cannot function if we think of ourselves only as who we are right at this moment. Even if we consider selfhood an illusion, we have to think our past and future selves are "us" in order to live.

And if we are to think of other people as "I" in order to live, then there's no reason why we shouldn't think of our copies as us. Honestly, I don't think it would be a big deal to have A copied into B and C, then allow both to live for a day, and then kill B in their sleep. You have to remember that C will also naturally die in their sleep, and that E could be considered both B and C. Sure, E will be a tiny bit closer to C than B, but as shown in the accident scenario, we don't really care about short episodes of memory loss. It'd be the same thing.

Robin
22nd December 2009, 02:01 PM
I am not aware of any who are emotionally invested in the notion that strong AI is false.
You have got to be kidding.
Why would I be kidding?

Who is it who I am aware of who is emotionally invested in the notion that strong AI is false?

rocketdodger
22nd December 2009, 02:11 PM
Why would I be kidding?

Who is it who I am aware of who is emotionally invested in the notion that strong AI is false?

Well, I would say that anyone who has ever said anything along the lines of "it is nonsense/poppycock that a machine could ever be conscious" is emotionally invested.

You haven't ever run across someone who sounds like their notion of self is threatened by the idea that machines could have the same notion?

Off the top of my head I would put UndercoverElephant and Frank Newgent squarely in that category.

rocketdodger
22nd December 2009, 02:21 PM
As if that was ever in doubt.

I think it is for many people.

They think to themselves "hey, if machines can be conscious like I can, then that aspect of me is no longer as special in this universe," or something along those lines.

I just want to point out that there is a flipside. Yeah, you won't be as "special" in a metaphysical sense, but you now have many more options on the table, and a bunch of those options are far better than their religious analogs.

Zalbik
22nd December 2009, 02:29 PM
When we sleep, we are replaced by another copy anyway.
Personally, I think this is doubtful. It seems much more likely that our identity is "kept running" in some sort of low-level fashion that prevents the formation of memory.


If our culture evolved that way, I definitely don't think C would refuse to be destroyed if B existed, for example. No, they are not exactly the same, but that doesn't matter. When we have an accident and hit our heads, we often forget what we experienced a few minutes before the accident... but that small loss hardly troubles us.

Of course they would. There is an obvious distinction between C and B. C is going to be destroyed. The small loss does not bother us because it is a small loss. C's destruction is not the loss of a few minutes due to an accident, this is the loss of many potential years of experience that C would have, but now will not. The fact that B will have a separate set of experiences is irrelevant. Are you suggesting that long-term coma patients who awaken do not grieve over the loss of many years of experience?


Basically, I think that the people who say that B and C should see themselves as separate individuals are stopping themselves from asking the hard questions. Yes, technically they are separate people--but that's in no way different from how I'm different from who I was 5 minutes ago. We cannot function if we think of ourselves only as who we are right at this moment. Even if we consider selfhood an illusion, we have to think our past and future selves are "us" in order to live.

And again you are missing the point. B and C are physically different, and as such will not have exactly the same experiences simultaneously. C and B are forced by their physical independence into different choices in their lives...that is why C does not think of B as himself.


Honestly, I don't think it would be a big deal to have A copied into B and C, then allow both to live for a day, and then kill B in their sleep.
You don't believe there would be a huge moral issue with this?!?!

A thought experiment for you:
Take a one year old child A, and create copy B. Send B to a different country/culture to be raised with different experiences. Wait 20 years.

Should B be worried about his death? After all, there is copy A...
I would say of course he should be. Having a different background and upbringing reduces the identification between B and A.

But if B is worried at the 20 year point, should he not be more worried at the 0 year point? After all, he is losing an additional 20 years of experiences...

rocketdodger
22nd December 2009, 02:50 PM
Personally, I think this is doubtful. It seems much more likely that our identity is "kept running" in some sort of low-level fashion that prevents the formation of memory.

To me, that is self-contradictory.

How can self identity be maintained without memory?

Frank Newgent
22nd December 2009, 02:54 PM
You haven't ever run across someone who sounds like their notion of self is threatened by the idea that machines could have the same notion?


Who wouldn't be threatened by the notion of a giant robot on a shooting spree?

One intelligent form of life is probably already too many.

Zalbik
22nd December 2009, 03:05 PM
To me, that is self-contradictory.

How can self identity be maintained without memory?

The same way it is from 5 minutes ago until now. "Me" of 5 minutes ago did not have the last 5 minutes of memory that "Me now" has. I'm pretty sure I'm the same person however.

Paul C. Anagnostopoulos
22nd December 2009, 03:17 PM
The same way it is from 5 minutes ago until now. "Me" of 5 minutes ago did not have the last 5 minutes of memory that "Me now" has. I'm pretty sure I'm the same person however.
That's because you have the previous n - 5 minutes of memory in common with yourself-of-5-minutes-ago. If you didn't have that memory, then all you would have is a 5-minute-self.

Imagine this fellow if the retrograde amnesia went back 27 years instead of 11:

http://home.ku.edu.tr/~dyuret/pub/aiwp343/node5.html

~~ Paul

Zalbik
22nd December 2009, 03:33 PM
That's because you have the previous n - 5 minutes of memory in common with yourself-of-5-minutes-ago. If you didn't have that memory, then all you would have is a 5-minute-self.

~~ Paul

Yes, this is a huge part of what defines my identity. This is exactly why I would still fear death even if a copy of me lives on. I know that the copy's present/future experiences differ from my own.

As such, the "immortality" suggested by rocketdodger falls apart. The copy cannot have the same experiences as me. As well, the potential experiences "I" could have had independently of the copy eventually end.

rocketdodger
22nd December 2009, 05:00 PM
Yes, this is a huge part of what defines my identity. This is exactly why I would still fear death even if a copy of me lives on. I know that the copy's present/future experiences differ from my own.

As such, the "immortality" suggested by rocketdodger falls apart. The copy cannot have the same experiences as me. As well, the potential experiences "I" could have had independently of the copy eventually end.

Yes -- if a version of you ceases to exist, you will always lose potential experience.

But the point in the OP is that religious people just want the continued experience of a single person. They don't want 2x, or 3x, or whatever, which is what you would get if you made a copy of yourself.

So if you made a copy, and either of you died, you would go from having 2x potential experience to 1x potential experience -- which still leaves you with as much potential experience as if you had never made the copy at all.

In other words, from a potential experience standpoint, it is at least as good as the afterlife.

Rairun
22nd December 2009, 06:01 PM
Personally, I think this is doubtful. It seems much more likely that our identity is "kept running" in some sort of low-level fashion that prevents the formation of memory.

Why exactly is it more likely? If there is no memory formation, then there's no consciousness nor identity. If you don't remember what happened one second ago, then there's no room for any sort of personality to arise.


Of course they would. There is an obvious distinction between C and B. C is going to be destroyed. The small loss does not bother us because it is a small loss. C's destruction is not the loss of a few minutes due to an accident, this is the loss of many potential years of experience that C would have, but now will not. The fact that B will have a separate set of experiences is irrelevant. Are you suggesting that long-term coma patients who awaken do not grieve over the loss of many years of experience?

Nope. C doesn't have "years of potential experiences". It only has the present moment, after which he will die and be spontaneously replaced by E.

And again you are missing the point. B and C are physically different, and as such will not have exactly the same experiences simultaneously. C and B are forced by their physical independence into different choices in their lives...that is why C does not think of B as himself.

I completely agree that B and C are physically different (if only slightly) and will have different experiences. But B is also slightly different from D, and C is slightly different from E, and each pair is considered to be the same person. My question is: why?

The problem here is that you're confusing conscious experience (the "I" in "I feel pain", for instance) with individual identity (the "I" in "I am his son", for example). The former denotes whatever is being currently experienced, and the latter is a piece of fiction that we use to link different experiences together. But there isn't actually anything there. It's just a convenient concept to make life easier.

My point is that according to the first sense of the word "I", person B isn't person D, and person C isn't person E. They are all different. But according to the second sense of the word, they can all be considered the same individual. In short, you either have to think that we die again and again, second after second, or you have to face the fact that individual identity is all about semantics. It doesn't refer to any sort of real boundary. There's no individual essence of any kind.

You don't believe there would be a huge moral issue with this?!?!

Not really, no. There would be a huge problem concerning our legal system, though. We still have problems with stuff as simple as abortions.

A thought experiment for you:
Take a one year old child A, and create copy B. Send B to a different country/culture to be raised with different experiences. Wait 20 years.

Should B be worried about his death? After all, there is copy A...
I would say of course he should be. Having a different background and upbringing reduces the identification between B and A.

But if B is worried at the 20 year point, should he not be more worried at the 0 year point? After all, he is losing an additional 20 years of experiences...

After 20 years, I would consider both copies to be completely different people in terms of identity. Depending on the circumstances, people can experience substantial personality changes in a single week. But I wouldn't consider a few hours (or even a few days) of perfectly mundane tasks to be very important. If I had the past 4 days erased from my brain, I'd be a little bummed out, but I wouldn't be overly concerned. All I did was work. But if I had traveled to a new country or made a new friend, then yeah, I'd be more upset.

About your last question: no, I don't think he should be more worried. As I said before, if you take the first meaning of the word "I" into consideration, then it makes no sense to say that person B has 20 years of experiences ahead of him. Person B at the 0 year point only has the moment here and now. That's all. He will literally die in a couple of seconds and a new person will take his place.

But if you're using the word in its second sense, then there's no reason why person B couldn't think of the future version of person A in the same way he thinks about his own future version. After all, their starting point is the same, and there is no individual essence that connects each instance of "I" (in the first sense) over time. There really is no soul. The "I" (in the second sense) is an invention, and we get to decide what it is.

Zalbik
22nd December 2009, 06:26 PM
But the point in the OP is that religious people just want the continued experience of a single person. They don't want 2x, or 3x, or whatever, which is what you would get if you made a copy of yourself.

So if you made a copy, and either of you died, you would go from having 2x potential experience to 1x potential experience -- a net loss of zero potential experience.


But even with a copy, I don't get 2x or 3x or whatever potential experience. My copies get their own experiences, which doesn't help me much.

Zalbik
22nd December 2009, 06:55 PM
Why exactly is it more likely? If there is no memory formation, then there's no consciousness nor identity. If you don't remember what happened one second ago, then there's no room for any sort of personality to arise.
It's more likely based on the fact that subjects woken from sleep can often remember their immediate previous experiences, but these memories fade quickly with time. I should have more specifically said that there is little long-term memory formed during sleep.




Nope. C doesn't have "years of potential experiences". It only has the present moment, after which he will die and be spontaneously replaced by E.

So you are equating the word "die" with "change"?


I completely agree that B and C are physically different (if only slightly) and will have different experiences. But B is also slightly different from D, and C is slightly different from E, and each pair is considered to be the same person. My question is: why?

Because B and D share identical past experiences. Similarly with C and E. None of these share identical past experiences with A. B,C,D, and E all have the experience of coming into existence as a copy. A does not.


The problem here is that you're confusing conscious experience (the "I" in "I feel pain", for instance) with individual identity (the "I" in "I am his son", for example). The former denotes whatever is being currently experienced, and the latter is a piece of fiction that we use to link different experiences together. But there isn't actually anything there. It's just a convenient concept to make life easier.

No, I am not. I am speaking only of the conscious experience. I disagree that the conscious experience continually dies and is replaced. I can change a painting without destroying it. My identity can change without dying.


In short, you either have to think that we die again and again, second after second, or you have to face the fact that individual identity is all about semantics. It doesn't refer to any sort of real boundary. There's no individual essence of any kind.

No, there is a boundary. The boundary of shared experience.


Not really, no. There would a huge problem concerning our legal system, though. We still have problems with stuff as simple as abortions.

Really, you see no difficulty in destroying a sentient being merely because a copy exists?



About your last question: no, I don't think he should be more worried. As I said before, if you take the first meaning of the word "I" into consideration, then it makes no sense to say that person B has 20 years of experiences ahead of him. Person B at the 0 year point only has the moment here and now. That's all. He will literally die in a couple of seconds and a new person will take his place.

I'll play the "we die every second" game for a bit. Should we not be concerned that we have terminated not only B but all of the potential successors to B, i.e. the 20 years of B1, B2, B3...?


After all, their starting point is the same, and there is no individual essence that connects each instance of "I" (in the first sense) over time. There really is no soul. The "I" (in the second sense) is an invention, and we get to decide what it is.
I agree, there is no soul. I also believe however that there is a continuity of experience that defines personal identity. The moment A is copied into B and C, this continuity is broken. B and C start forming their own personal experiences as separate identities from A. More importantly, they have the same cognitive ability and desire to live that A had. To assume B would be willing to give up his own continuation of these experiences merely because C and A exist seems strange.

RecoveringYuppy
22nd December 2009, 07:02 PM
In other words, from a potential experience standpoint, it is at least as good as the afterlife.

Current average life span is already as good as two or three afterlives.

westprog
23rd December 2009, 07:24 AM
When we sleep, we are replaced by another copy anyway. Given enough time, we'll see concurrent copies who weren't properly destroyed agreeing to be put to death, because they were already going to die a few hours later when they decided to go to bed.



We don't know this. In any case, it doesn't feel like a change. The vertical transition appears to preserve identity. Where the split happens, identity is not preserved. As has been noted, the question of identity is philosophically open. There is no definitive answer. The sensible approach is the one where we assume our feelings are correct. Assuming we die when we go to sleep, or from moment to moment, might be true but doesn't get us anywhere. Assuming that some hypothetical other person is us really is almost certainly not true, and doesn't match up with how we feel about it. So acting as if it were true is entirely pointless. Of course, since it's entirely hypothetical anyway, it's hardly an urgent question in practice.

GreyICE
23rd December 2009, 08:33 AM
We don't know this. In any case, it doesn't feel like a change. The vertical transition appears to preserve identity. Where the split happens, identity is not preserved. As has been noted, the question of identity is philosophically open. There is no definitive answer. The sensible approach is the one where we assume our feelings are correct. Assuming we die when we go to sleep, or from moment to moment, might be true but doesn't get us anywhere. Assuming that some hypothetical other person is us really is almost certainly not true, and doesn't match up with how we feel about it. So acting as if it were true is entirely pointless. Of course, since it's entirely hypothetical anyway, it's hardly an urgent question in practice.
But why would the perfect copy feel any different? There's no way he'd 'know' he's a copy immediately. In fact, if we were to make a perfect copy of you while you were sleeping, the only way the copy would know he's a copy is when he woke up in a different bedroom. There'd be no way for him to tell the difference between being copied, and being abducted.

rocketdodger
23rd December 2009, 10:03 AM
But why would the perfect copy feel any different? There's no way he'd 'know' he's a copy immediately. In fact, if we were to make a perfect copy of you while you were sleeping, the only way the copy would know he's a copy is when he woke up in a different bedroom. There'd be no way for him to tell the difference between being copied, and being abducted.

Yeah, this is the point that these people seem to be missing.

You could be killed in your sleep and replaced with a copy every night of your life and you would never know the difference.

But then if I told you what was actually happening while you are asleep you would throw a fit about not being the same person anymore? Huh?

This sounds suspiciously like the nonsense westprog spouts about how if we are in a simulation right now, then we are actually not real.

How can you accept something as an axiom and then not accept it once you learn more information? That isn't how axioms work!

If it is mathematically impossible to tell the difference, then it is .... mathematically impossible to tell the difference. Any aversion one has to such a process is simply not founded on reason.

If you have to die in your sleep to generate a copy (although nobody has said this is a requirement), and that copy will live 50 years longer than the original, and then after another 40 years you do it again, and so on, lemme tell you what is going to happen: you are going to do it. Everyone wants to live longer no matter what they say or believe (yes, suicide bombers want to continue existing, they just think it will actually happen in an afterlife). And after you do it, the copy of you -- which is actually you, because as the copy you remember everything from the old you -- will look back and say "hey, that wasn't so bad at all, I feel like the same person I was yesterday ..."

rocketdodger
23rd December 2009, 10:16 AM
But even with a copy, I don't get 2x or 3x or whatever potential experience. My copies get their own experiences, which doesn't help me much.

No. You are trying to accept only part of the OP without accepting it all.

The premise is that strong A.I. is true. This doesn't just imply that you can be copied -- in fact it has nothing to do with it. We can copy you into an identical biological body without strong AI being true.

It implies that at the moment of upload -- or copy, whatever you prefer -- "you" are genuinely in two places at once. Because at that moment the patterns of information processing that are you are being instantiated upon two substrates rather than just one.

Because of that, we can destroy as many instances as we wish and as long as at least one instance remains you still remain. In fact, we can destroy all the instances and as long as at least one representation that we can reconstruct an instance from remains, you remain.

Think of it this way -- strong AI implies that you can be the information being processed in a computer. If we pause the process, and copy all the data to other computers, we can restart the process on a different computer and you would continue there. If, before the un-pause, we destroyed the first computer, it wouldn't matter a bit to you because you were "spread" to other substrates before it happened.

Then, suppose the computer you are on printed out its state and all the information necessary to regenerate the process of you. Then we could make a book that is the representation of you. It would be a very long book, but it would still be possible. And if that last computer of you was destroyed after the printing but before restarting you, you would exist in a state of stasis as the book. If we later input all the data stored in the book into another computer, along with the necessary algorithms (which would be included in the book), you would leave stasis and begin changing state again -- you would regain consciousness.

Now all of that might sound absurd, but it really is the implication of strong AI. So if you go with the premise of the OP, you pretty much gotta accept it.
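(A toy sketch of the pause/copy/resume analogy above, for readers who think in code. Everything here is invented for illustration -- the state dictionary stands in for "the patterns of information processing", and serializing it to text plays the role of the book. It is not a claim about how brains actually store information.)

import json

def print_to_book(state):
    # "Pause" the process and write its entire state out as text -- the book.
    return json.dumps(state)

def restore_from_book(book):
    # Read the book back in on any other machine and rebuild the same state.
    return json.loads(book)

# Stand-in for the information being processed.
state = {"memories": ["walked the dog", "read this thread"], "step": 42}

book = print_to_book(state)        # the original substrate could now be destroyed
revived = restore_from_book(book)  # the process resumes on a different substrate

print(revived == state)            # True: same information, different substrate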

gregthehammer
23rd December 2009, 11:09 AM
I've been following this thread for 6 pages now, trying to wrap my head around what Rocket Dodger is saying. If I understand correctly, his argument is that our sense of self is tied to the software in the brain, therefore if we copy that software to other mediums, our sense of self lives on. (correct me if I am wrong).

My thoughts however tend to be that our identity is tied to the hardware. Oh sure, we can change out peripherals, we can swap monitors, we can even have our harddrive erased (as in the case of amnesia). However, once that CPU goes, we truly cease to exist. I suppose if we could figure out how to keep the cpu going indefinitely we might be able to live on, I just don't agree that putting my memories and thoughts on another machine turns *me* from a pc to a mac.

Zalbik
23rd December 2009, 11:38 AM
No. You are trying to accept only part of the OP without accepting it all.

The premise is that strong A.I. is true. This doesn't just imply that you can be copied -- in fact it has nothing to do with it. We can copy you into an identical biological body without strong AI being true.

It implies that at the moment of upload -- or copy, whatever you prefer -- "you" are genuinely in two places at once. Because at that moment the patterns of information processing that are you are being instantiated upon two substrates rather than just one.

Because of that, we can destroy as many instances as we wish and as long as at least one instance remains you still remain. In fact, we can destroy all the instances and as long as at least one representation that we can reconstruct an instance from remains, you remain.

Think of it this way -- strong AI implies that you can be the information being processed in a computer. If we pause the process, and copy all the data to other computers, we can restart the process on a different computer and you would continue there. If, before the un-pause, we destroyed the first computer, it wouldn't matter a bit to you because you were "spread" to other substrates before it happened.

Then, suppose the computer you are on printed out its state and all the information necessary to regenerate the process of you. Then we could make a book that is the representation of you. It would be a very long book, but it would still be possible. And if that last computer of you was destroyed after the printing but before restarting you, you would exist in a state of stasis as the book. If we later input all the data stored in the book into another computer, along with the necessary algorithms (which would be included in the book), you would leave stasis and begin changing state again -- you would regain consciousness.

Now all of that might sound absurd, but it really is the implication of strong AI. So if you go with the premise of the OP, you pretty much gotta accept it.

Nope, strong AI does not imply this at all. All it states is that machines can perform tasks of intelligence equivalent to a human. It says nothing about the nature of that intelligence's identity.

There are basically two reasons I object to the idea that identity is maintained:

1) Objectively, there is a difference between destroying me and leaving a copy, and not destroying me in the first place. Regardless of how good the copy is, this copy has not undergone the same objective reality that would have occurred had "I" not been destroyed.

2) Self is a subjective experience. Upon self-reflection I don't feel as though the copy would be the same as me. Obviously the copy wouldn't feel that way either. Therefore it isn't me, because the sense of self is purely subjective.

You cannot reasonably claim: No, you are the same person, even though you don't think you are.

As to whether "I" remain after destroying the copies, that is entirely a confusion between types and tokens (http://plato.stanford.edu/entries/types-tokens/#DisBetTypTok).

Yes, the "type" of me remains, but the "token" has been destroyed. Given that each token experiences a different subjective reality, I do not believe they can by any rational definition be called "me".

rocketdodger
23rd December 2009, 11:43 AM
I've been following this thread for 6 pages now, trying to wrap my head around what Rocket Dodger is saying. If I understand correctly, his argument is that our sense of self is tied to the software in the brain, therefore if we copy that software to other mediums, our sense of self lives on. (correct me if I am wrong).

My thoughts however tend to be that our identity is tied to the hardware. Oh sure, we can change out peripherals, we can swap monitors, we can even have our harddrive erased (as in the case of amnesia). However, once that CPU goes, we truly cease to exist. I suppose if we could figure out how to keep the cpu going indefinitely we might be able to live on, I just don't agree that putting my memories and thoughts on another machine turns *me* from a pc to a mac.

If you define yourself in a physical sense -- such as a pc vs a mac -- then you make a fair point.

But in my opinion that is the old way of looking at things. Already in my own generation there are individuals who aren't attached to their physical self and would be more than happy to change physical identities (myself included).

And the newer generations, born into the computing age, are even more prone to such sentiments. How many young people spend just as much time, if not more, managing their various online identities as they spend managing their physical identity?

roger
23rd December 2009, 11:43 AM
I've been following this thread for 6 pages now, trying to wrap my head around what Rocket Dodger is saying. If I understand correctly, his argument is that our sense of self is tied to the software in the brain, therefore if we copy that software to other mediums, our sense of self lives on. (correct me if I am wrong).

My thoughts however tend to be that our identity is tied to the hardware. Oh sure, we can change out peripherals, we can swap monitors, we can even have our harddrive erased (as in the case of amnesia). However, once that CPU goes, we truly cease to exist. I suppose if we could figure out how to keep the cpu going indefinitely we might be able to live on, I just don't agree that putting my memories and thoughts on another machine turns *me* from a pc to a mac.

Not exactly (to first paragraph).

The claim is simply that an organized system like our brain is self aware.

From that everything else follows.

1) there is no 'me' as such. Just a system that is self aware.

2) the sense of 'me' is the result of the self aware system having memory - it remembers previous states of the self aware system.

3) Any duplication of the system will feel like a 'me' - as it will necessarily remember the previous states of the system.

4) we are biologically evolved to preserve our bodies - thus we have an emotional attachment to preserving the self aware system we call 'me'

5) However, there is no 'me' to preserve as such. If you were rendered brain dead, then revived (not currently medically possible, I believe, but it seems possible in theory), the self aware system you call 'me' would leap up in great joy yelling "I'm alive!!!!". That system would not mourn the death of 'me'. It might mourn the loss of time if the period of brain death was prolonged (didn't get to see your kids grow up, lost 20 years worth of possible experiences, etc). But it would never feel that it had 'died'.

6) so, if you copied your brain into another system, that system would think it was 'me' just like you think you are 'me' (boy, the language gets clumsy). There is nothing in either system that is privileged - the self aware system in the biological body will still think "I'm me", the self aware system in the computer brain will also think it's 'me'. There's no difference other than they are running on different substrates.

The easiest way to get your head wrapped around this is to forget about yourself for a second, and think about two random people. Does 'Steve' feel like he died when 'Ted' buys it? No. If you suddenly transferred Ted's brain state and memories into Steve's head, would that system think "I'm Ted"? Yes. If you killed that Steve, would the body called Ted feel like he was killed? No. If the Steve body was aware of his impending death, would he be afraid and sad? Yes. Would he be comforted by the fact that the Ted body is continuing? No. Does the Steve body perceive what the Ted body is thinking? No.

Just self aware systems doing what they do. There is no actual difference between a self being copied into a computer, and a self being copied 'in time', except we are biologically evolved to accept the latter as 'me' and the former as not 'me'. The only thing that makes my self aware system think of itself as 'me' is the continuity of memory. I remember starting this post, I remember X, Y, and Z. But the self aware system that started this post no longer exists (for reasonable definitions of 'exists'). It just happens that the system that remembers starting the post is still running on my hardware because of our biology.

However, none of that means that we are suddenly going to accept non-existence. I'm not going to allow my brain to be uploaded to a computer, and then have this body shut off. Nope. Cause this self aware system will continue to run, and it will most definitely prefer to keep running, having no access to the self aware process running on a different substrate.

Piscivore
23rd December 2009, 11:48 AM
But in my opinion that is the old way of looking at things. Already in my own generation there are individuals who aren't attached to their physical self and would be more than happy to change physical identities (myself included).
"want to" =\= possible. Wishful thinking.

And the newer generations, born into the computing age, are even more prone to such sentiments. How many young people spend just as much time, if not more, managing their various online identities as they spend managing their physical identity?
Equivocation.

GreyICE
23rd December 2009, 11:58 AM
"want to" =\= possible. Wishful thinking. Wishful thinking? But why? It's not like we're speculating about hyperspace. We have robotic limbs. We have humanoid robots. We have computers. We have data storage devices. We have good evidence our capabilities in every field will continue to grow.

Why, exactly, is digitalizing intelligence and using that to alter our physicality 'wishful thinking?'

Piscivore
23rd December 2009, 12:01 PM
Wishful thinking? But why? It's not like we're speculating about hyperspace. We have robotic limbs. We have humanoid robots. We have computers. We have data storage devices. We have good evidence our capabilities in every field will continue to grow.

Why, exactly, is digitalizing intelligence and using that to alter our physicality 'wishful thinking?'

The wishful thinking is in considering "would be more than happy to change physical identities" as evidence that it is possible.

rocketdodger
23rd December 2009, 12:01 PM
Nope, strong AI does not imply this at all. All it states is that machines can perform tasks of intelligence equivalent to a human. It says nothing about the nature of that intelligence's identity.

Being able to perform tasks of intelligence equivalent to a human implies subjective experience equivalent to a human.


There are basically two reasons I object to the idea that identity is maintained:

1) Objectively, there is a difference between destroying me and leaving a copy, and not destroying me in the first place. Regardless of how good the copy is, this copy has not undergone the same objective reality that would have occurred had "I" not been destroyed.

Yes I agree. But objectivity doesn't mean squat when it comes to subjective experience. If it seems like you are the same, then you are subjectively the same, because that is the definition of subjective -- how things seem.

2) Self is a subjective experience. Upon self-reflection I don't feel as though the copy would be the same as me. Obviously the copy wouldn't feel that way either. Therefore it isn't me, because the sense of self is purely subjective.

Only if you are awake and aware at the time it is done and you have a frame of reference.

The spacewalking astronaut scenario shows that you are wrong outside of that constraint:

Suppose you are an astronaut and you went on a spacewalk from your starship in the blackest, emptiest portion of space. There are no points of reference, and since there is no light, you also cannot see the starship. The only light comes from small beacons on your spacesuit.

Now suppose the starship tries to beam you back aboard, but the transporter malfunctions and instead of moving you to the starship it instead places an absolutely perfect copy of you right in front of you, facing you.

Can you devise a way to determine whether you are the original astronaut or the copy?

Likewise, if you were copied when you are asleep, and either you or the copy is randomly destroyed before waking, can you devise a way to determine whether you are the original or the copy?

You cannot reasonably claim: No, you are the same person, even though you don't think you are.

That is not what I am claiming. I am claiming that after doing so, copy you wouldn't give a hoot that you aren't the original because according to copy you existence goes on uninterrupted.

And after enough times being put through the experience, you would start to think it was just as trivial as sleep, because every single time the instance that survives has an uninterrupted view of the past.

As to whether "I" remain after destroying the copies, that is entirely a confusion between types and tokens (http://plato.stanford.edu/entries/types-tokens/#DisBetTypTok).

Yes, the "type" of me remains, but the "token" has been destroyed. Given that each token experiences a different subjective reality, I do not believe they can by any rational definition be called "me".

The idea of type vs. token breaks down since we are talking about persistent singletons and deep copies.

Is a token still a token if there is only one, and each time the token changes the type is changed accordingly?

Is one token still different from another token if they are identical on all relevant levels?
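(For what it's worth, the singleton/deep-copy point can be shown in a few lines of toy code -- the names and contents below are invented purely for illustration. Value equality and object identity come apart, which is the type/token question in miniature.)

import copy

original = {"name": "X", "memories": ["childhood", "this thread"]}
duplicate = copy.deepcopy(original)

print(original == duplicate)   # True  -- identical on every level we can inspect
print(original is duplicate)   # False -- still two distinct tokens, not one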

rocketdodger
23rd December 2009, 12:03 PM
The wishful thinking is in considering "would be more than happy to change physical identities" as evidence that it is possible.

Which is exactly what this thread is not about.

And that has already been explained to you.

But you know what, if you enjoy dragging red herrings around, don't let me stop you.

EDIT: Oh wait, I just noticed your title. It fits.

Zalbik
23rd December 2009, 12:04 PM
I've been following this thread for 6 pages now, trying to wrap my head around what Rocket Dodger is saying. If I understand correctly, his argument is that our sense of self is tied to the software in the brain, therefore if we copy that software to other mediums, our sense of self lives on. (correct me if I am wrong).

My thoughts however tend to be that our identity is tied to the hardware. Oh sure, we can change out peripherals, we can swap monitors, we can even have our harddrive erased (as in the case of amnesia). However, once that CPU goes, we truly cease to exist. I suppose if we could figure out how to keep the cpu going indefinitely we might be able to live on, I just don't agree that putting my memories and thoughts on another machine turns *me* from a pc to a mac.

I agree actually. I believe his entire idea of moving the software onto a different platform denies the fact that there is no difference between the brain and the mind.

To attempt to move the mind without duplicating its brain is doomed to failure. The idea that a machine will be able to duplicate a brain in the next 100 years is ludicrous.

gregthehammer
23rd December 2009, 12:11 PM
Not exactly (to first paragraph).

However, none of that means that we are suddenly going to accept non-existence. I'm not going to allow my brain to be uploaded to a computer, and then have this body shut off. Nope. Cause this self aware system will continue to run, and it will most definitely prefer to keep running, having no access to the self aware process running on a different substrate.

I think that is kinda where my mind is at with this, since I find the notion of non-existence so ...suffocating... I think I'll prefer to wait in line and see how the *upload process* goes before committing myself to it. ;)

rocketdodger
23rd December 2009, 12:12 PM
However, none of that means that we are suddenly going to accept non-existence. I'm not going to allow my brain to be uploaded to a computer, and then have this body shut off. Nope. Cause this self aware system will continue to run, and it will most definitely prefer to keep running, having no access to the self aware process running on a different substrate.

But over time the version of you that continues will almost trick itself into not worrying about the shutoff of the other versions.

I mean, why would you worry, since every time you went through the process before you were the version that survived?

Zalbik
23rd December 2009, 12:25 PM
Being able to perform tasks of intelligence equivalent to a human implies subjective experience equivalent to a human.


There are basically two reasons I object to the idea that identity is maintained:



Yes I agree. But objectivity doesn't mean squat when it comes to subjective experience. If it seems like you are the same, then you are subjectively the same, because that is the definition of subjective -- how things seem.



Only if you are awake and aware at the time it is done and you have a frame of reference.

The spacewalking astronaut scenario shows that you are wrong outside of that constraint:

Suppose you are an astronaut and you went on a spacewalk from your starship in the blackest, emptiest portion of space. There are no points of reference, and since there is no light, you also cannot see the starship. The only light comes from small beacons on your spacesuit.

Now suppose the starship tries to beam you back aboard, but the transporter malfunctions and instead of moving you to the starship it instead places an absolutely perfect copy of you right in front of you, facing you.

Can you devise a way to determine whether you are the original astronaut or the copy?

Likewise, if you were copied when you are asleep, and either you or the copy is randomly destroyed before waking, can you devise a way to determine whether you are the original or the copy?



That is not what I am claiming. I am claiming that after doing so, copy you wouldn't give a hoot that you aren't the original because according to copy you existence goes on uninterrupted.

And after enough times being put through the experience, you would start to think it was just as trivial as sleep, because every single time the instance that survives has an uninterrupted view of the past.



The idea of type vs. token breaks down since we are talking about persistent singletons and deep copies.

Is a token still a token if there is only one, and each time the token changes the type is changed accordingly?

Is one token still different from another token if they are identical on all relevant levels?

Ahh...maybe I have just been confused by your use of the word "me" (yes, it is annoyingly confusing).

Let's refer to an individual as X. Future copies of that individual are X2, X3, X4, etc.

All I am claiming is that X will not suddenly stop caring about his own existence simply because he knows X2, X3, X4 will be created. Sure, X2, X3, X4 will see themselves as a continuation of X, but X knows that his personal experiences will eventually end. There will be a point during X's life where he will stop being.

X2, X3, etc. will similarly feel this same fear.

It seems what you are advocating is as long as X2 is creatable, X would suddenly not care about whether or not he continues to live. If he does care, then the creation of X2, X3, etc is not immortality.

Yes X2, X3, etc have an uninterrupted view of the past, but they would still have the knowledge that for every single one of them, there would be an impending interruption in their future.

In the astronaut example, Astronaut A1 and A2 have no way of telling "who is real". Obviously both are real, and both have the same "right" to claim to be the real Astronaut. This doesn't mean one of them is suddenly going to say "Well, since we're both the same anyways, I guess I'm redundant" and poke a hole in his spacesuit.

You seem to think X2, X3, etc are going to do just that however. That they are willing to go through life, sleep, be destroyed, happy in the knowledge that their copies are going to be created.

GreyICE
23rd December 2009, 12:32 PM
The wishful thinking is in considering "would be more than happy to change physical identities" as evidence that it is possible.

No, it is evidence that once we have the technology, people will use it.

The evidence that it will occur is a logical progression of our current technology. Could some unforeseen problem stop it? Maybe. But at the moment it is mostly an engineering problem.

roger
23rd December 2009, 12:48 PM
But over time the version of you that continues will almost trick itself into not worrying about the shutoff of the other versions.

I mean, why would you worry, since every time you went through the process before you were the version that survived?

That version of me ain't ever going to get the chance to exist if it involves turning off this self aware system.

Malerin
23rd December 2009, 12:56 PM
You could be killed in your sleep and replaced with a copy every night of your life and you would never know the difference.


Well, I think you would know it because I believe in an afterlife.

But if atheism is true, and I killed you, YOU ROCKETDODGER, in your sleep, you would never know it.

rocketdodger
23rd December 2009, 12:56 PM
Ahh...maybe I have just been confused by your use of the word "me" (yes, it is annoyingly confusing).

Let's refer to an individual as X. Future copies of that individual are X2, X3, X4, etc.

All I am claiming is that X will not suddenly stop caring about his own existence simply because he knows X2, X3, X4 will be created. Sure, X2, X3, X4 will see themselves as a continuation of X, but X knows that his personal experiences will eventually end. There will be a point during X's life where he will stop being.

X2, X3, etc. will similarly feel this same fear.

It seems what you are advocating is as long as X2 is creatable, X would suddenly not care about whether or not he continues to live. If he does care, then the creation of X2, X3, etc is not immortality.

Yes X2, X3, etc have an uninterrupted view of the past, but they would still have the knowledge that for every single one of them, there would be an impending interruption in their future.

In the astronaut example, Astronaut A1 and A2 have no way of telling "who is real". Obviously both are real, and both have the same "right" to claim to be the real Astronaut. This doesn't mean one of them is suddenly going to say "Well, since we're both the same anyways, I guess I'm redundant" and poke a hole in his spacesuit.

You seem to think X2, X3, etc are going to do just that however. That they are willing to go through life, sleep, be destroyed, happy in the knowledge that their copies are going to be created.

No. I think that the transition from X -> X2 will be very scary, and probably very sad. Because yes, X is going to die (with one exception, explained below). So up until the time of copy, X will be somewhat more depressed than if it was going to genuinely continue forever in some magical way.

But once that has been done, X2 is slightly acclimated to the process because from its point of view, its existence has been continued. And prior to X2->X3, it might feel a little sad, but it will also remember that X->X2 wasn't nearly as bad as it thought. Because it thought of X->X2 as an end when it was X, but then realized X->X2 is trivial when it was X2. And that memory will make it a little more bearable.

Continue with that pattern over and over -- eventually, X2235266 is going to approach X2235266->X2235267 and think "whatever, I have been through 2235265 copy processes and each one has been a trivial continuation of my existence, so I no longer imagine the process as my death at all."

And the exception is if the destruction of X occurs before any relevant state of X diverges from X2. At the point of copy, X == X2. If you destroy X while X == X2, then X really does continue on rather than just being copied.
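(A toy illustration of that exception, with invented dictionaries standing in for X and X2: at the instant of copying the two states compare equal, and they stop being equal as soon as either accumulates an experience the other lacks.)

import copy

X = {"memories": ["everything up to the copy"]}
X2 = copy.deepcopy(X)

print(X == X2)    # True -- destroy either one now and no state is lost

X2["memories"].append("woke up after the copy")   # X2 diverges from X
print(X == X2)    # False -- from here on they are different people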

rocketdodger
23rd December 2009, 12:57 PM
That version of me ain't ever going to get the chance to exist if it involves turning off this self aware system.

So you won't do what Hugh Jackman did in "The Prestige," then.

rocketdodger
23rd December 2009, 12:59 PM
Well, I think you would know it because I believe in an afterlife.

But if atheism is true, and I killed you, YOU ROCKETDODGER, in your sleep, you would never know it.

Yes.

I should have worded it differently.

If you exist today, and you existed yesterday, there is no way to know whether you were killed in your sleep last night and replaced with a perfect copy.

Third Eye Open
23rd December 2009, 01:04 PM
Well, I think you would know it because I believe in an afterlife.

But if atheism is true, and I killed you, YOU ROCKETDODGER, in your sleep, you would never know it.

There could be an afterlife without a god or gods.

Malerin
23rd December 2009, 01:12 PM
Yes.

I should have worded it differently.

If you exist today, and you existed yesterday, there is no way to know whether you were killed in your sleep last night and replaced with a perfect copy.

Ah, but there's a huge assumption there. That I can be killed in my sleep, replaced by a duplicate, and I would continue to exist. If you word it differently:

If your duplicate (rather than you) exists today, and you existed yesterday, there is no way to know whether you were killed in your sleep last night and replaced with a perfect copy.

In that case, we're back to killing you in your sleep and you not knowing the difference.

roger
23rd December 2009, 01:15 PM
No. I think that the transition from X -> X2 will be very scary, and probably very sad. Because yes, X is going to die (with one exception, explained below). So up until the time of copy, X will be somewhat more depressed than if it was going to genuinely continue forever in some magical way.

But once that has been done, X2 is slightly acclimated to the process because from its point of view, its existence has been continued. And prior to X2->X3, it might feel a little sad, but it will also remember that X->X2 wasn't nearly as bad as it thought. Because it thought of X->X2 as an end when it was X, but then realized X->X2 is trivial when it was X2. And that memory will make it a little more bearable.

Continue with that pattern over and over -- eventually, X2235266 is going to approach X2235266->X2235267 and think "whatever, I have been through 2235265 copy processes and each one has been a trivial continuation of my existence, so I no longer imagine the process as my death at all."

And the exception is if the destruction of X occurs before any relevant state of X diverges from X2. At the point of copy, X == X2. If you destroy X while X == X2, then X really does continue on rather than just being copied.

I think we can fairly claim you are just making that up.

Fact is, we have no idea how this would play out. Memory may be able to override a billion or so years of evolutionary conditioning, but then again, it may not.

Zalbik
23rd December 2009, 01:19 PM
No. I think that the transition from X -> X2 will be very scary, and probably very sad. Because yes, X is going to die (with one exception, explained below). So up until the time of copy, X will be somewhat more depressed than if it was going to genuinely continue forever in some magical way.
...
And the exception is if the destruction of X occurs before any relevant state of X diverges from X2. At the point of copy, X == X2. If you destroy X while X == X2, then X really does continue on rather than just being copied.

Dunno what to say other than I disagree that X# would ever become acclimatized as long as he knew what the process entailed. I believe X5000 would always be aware that what he goes through in becoming X5001 is not the same as what he remembers of X4999 -> X5000.

I think he would always know that X4999 actually died at some point...something which X5000 has never experienced.

Similarly with the exception. It makes no difference when the copy is performed.

If it did, X should have no fear of having the copy created, waiting 10 minutes, then dying. After all, he's only really losing 10 minutes of his life, isn't he?

In the exception case, X != X2. X2 is a copy by definition. This presumes the possibility of X continuing without destruction. This destruction is what X objects to.

rocketdodger
23rd December 2009, 02:21 PM
Ah, but there's a huge assumption there. That I can be killed in my sleep, replaced by a duplicate, and I would continue to exist.

Alright, let's try version #3: If you exist today, and you existed yesterday as far as you can subjectively tell, there is no way for you to know whether you are a perfect duplicate of the entity that existed yesterday or actually the entity that existed yesterday.

rocketdodger
23rd December 2009, 02:25 PM
I think we can fairly claim you are just making that up.

Fact is, we have no idea how this would play out. Memory may be able to override a billion or so years of evolutionary conditioning, but then again, it may not.

Well, it does in cases much more extreme than this, so I don't see why this would be a problem.

People try all sorts of things that their evolved monkey brain is yelling at them "dangerous! not good! not safe!" and as soon as they learn their monkey brain is wrong they pretty much ignore it from then on.

Robin
23rd December 2009, 02:25 PM
Wishful thinking? But why? It's not like we're speculating about hyperspace. We have robotic limbs. We have humanoid robots. We have computers. We have data storage devices. We have good evidence our capabilities in every field will continue to grow.

Why, exactly, is digitalizing intelligence and using that to alter our physicality 'wishful thinking?'
Well, give me a rough outline of the technology that will do this (even supposing that this interpretation of Strong AI is in fact valid) and when it might be available.

rocketdodger
23rd December 2009, 02:36 PM
Dunno what to say other than I disagree that X# would ever become acclimatized as long as he knew what the process entailed. I believe X5000 would always be aware that what he goes through in becoming X5001 is not the same as what he remembers of X4999 -> X5000.

I think he would always know that X4999 actually died at some point...something which X5000 has never experienced.

Similarly with the exception. It makes no difference when the copy is performed.

If it did, X should have no fear of having the copy created, waiting 10 minutes, then dying. After all, he's only really losing 10 minutes of his life, isn't he?

It doesn't matter what happens to the original, because that memory isn't going to survive.

I don't know how else to put this. If your rational brain knows that every time you walk into a certain room you die, but you have walked into it many many times and you don't die, you walk out the other side, your emotional brain is going to stop listening to your rational brain. It is just that simple. You know you die, but it certainly doesn't feel like dying, so the fear of the process disappears.

In the exception case, X != X2. X2 is a copy by definition. This presumes the possibility of X continuing without destruction. This destruction is what X objects to.

Is a copy of a novel a different story than all the rest of the copies? Is a copy of an image a different image than all the rest of the copies?

porch
23rd December 2009, 02:46 PM
I'm going viral ASAP. I shall not be contained.

westprog
23rd December 2009, 03:19 PM
I've been following this thread for 6 pages now, trying to wrap my head around what Rocket Dodger is saying. If I understand correctly, his argument is that our sense of self is tied to the software in the brain, therefore if we copy that software to other mediums, our sense of self lives on. (correct me if I am wrong).

My thoughts however tend to be that our identity is tied to the hardware. Oh sure, we can change out peripherals, we can swap monitors, we can even have our harddrive erased (as in the case of amnesia). However, once that CPU goes, we truly cease to exist. I suppose if we could figure out how to keep the cpu going indefinitely we might be able to live on, I just don't agree that putting my memories and thoughts on another machine turns *me* from a pc to a mac.

If we are actually the process of the brain/body, then that is a different thing to a pattern of information.

In any case, all we can say for the moment is that we simply don't know.

Rairun
27th December 2009, 02:11 AM
Okay, I haven't been online for a few days, and I have no time to answer the responses to my posts right now. But hopefully this will address what I think is the main problem with the position I argue against:

Ah, but there's a huge assumption there. That I can be killed in my sleep, replaced by a duplicate, and I would continue to exist.

[...]

In that case, we're back to killing you in your sleep and you not knowing the difference.

This position doesn't seem to question what "I" means at all. As far as I can tell, the reasoning goes more or less like this:

1. Yes, A (original) and B (copy) display the same properties.
2. BUT it doesn't feel right for me to say that A is B because of the break in continuity.
3. Therefore I adopt continuity as the defining characteristic of personhood.

The issue I have with this is that continuity (as the defining characteristic of personhood) is completely arbitrary. It's so arbitrary that we can't even tell if we are continuous or not.

We feel we are continuous because we are self-referential systems that can hold memories. That's a fact, and it's made obvious when we consider that B (the copy) feels as though she is continuous, even though she isn't. So we have this bizarre situation in which B argues that personhood is defined by continuity even though she isn't continuous at all! There is only the illusion that she is.

So yeah, this is what I see here: a bunch of people who think personhood is defined by continuity just because they have an incredibly strong feeling that they are continuous. It's exactly the same problem we have with the Ship of Theseus, and the answer is incredibly simple. The Ship of Theseus is the same ship, or a completely different ship, depending on your definitions. If you think the most important features of the ship are its parts, then it's not the same ship. If you think the most important feature is its design, then it's the same ship.

And that kind of disagreement is fine, because it's not a disagreement at all. If you think your copy wouldn't be you because you define yourself in terms of real continuity, I wouldn't disagree with that. There's nothing to disagree with. It's a personal choice. What I really have a problem with is that you only define yourself in terms of real continuity because of your strong sense of psychological continuity.

It just makes you sound like a victim of social indoctrination, which teaches us that the "I" is actually a real, distinct thing. When it's shown not to exist, you try to place it somewhere where it can't be dispelled, and the place of choice seems to be continuity. But the fact still remains that real continuity plays absolutely no role in the way we feel about ourselves and interact with the world. Psychological continuity, sure, I'll give you that. But not real continuity. That's just a random attribute that people are attached to for no other reason than it reinforces their preconceived notions about the self.