At what point does something become sentient or conscious? Is an animal sentient? What about an insect? A microscopic organism? A single-celled organism? This leads me to my main idea. Could a computer ever be sentient? The Blue Brain project aims to simulate a human brain by 2020. And I don't just mean an artificial neural network; they are simulating biological neurons. Could there ever be a test for sentience? And I don't mean something weak like the Turing test. If the Blue Brain project were to succeed and simulate a human brain, what right would we have to turn it on and off at will?
That's a huge topic to cover, and I do not think everyone will ever fully agree on any of the questions you ask. Personally, I fully believe it is possible for a simulated brain to equal or surpass our intelligence. Many people have a problem with this belief because they believe in the necessity of a soul for true sentience/intelligence. Since I do not believe in a soul, I don't really have an issue with it.

I also do not have a problem with turning off a simulated human brain. If it is simply switched off and erased, then it does not suffer; it simply loses intelligence that would (presumably) have built up from interactions. If the system were switched off but not erased, then it would really be no different from sleeping and waking up, since it could probably resume from its prior state.

If this is a bit nonsensical, my excuse is that I'm watching Lost in my other window Very Happy
Cybermen Razz Although they are part human.
Harq wrote:
That's a huge topic to cover, and I do not think everyone will ever fully agree on any of the questions you ask. Personally, I fully believe it is possible for a simulated brain to equal or surpass our intelligence. Many people have a problem with this belief because they believe in the necessity of a soul for true sentience/intelligence. Since I do not believe in a soul, I don't really have an issue with it.

I also do not have a problem with turning off a simulated human brain. If it is simply switched off and erased, then it does not suffer; it simply loses intelligence that would (presumably) have built up from interactions. If the system were switched off but not erased, then it would really be no different from sleeping and waking up, since it could probably resume from its prior state.

If this is a bit nonsensical, my excuse is that I'm watching Lost in my other window Very Happy


Brain-death =/= sleep.
I think that Skynet could happen.
Pseudoprogrammer wrote:
At what point does something become sentient or conscious?


Those are two different questions:
Being conscious means an organism is awake and responsive. The machine equivalent would basically be having active I/O, although that is a pretty loose equivalent.

Sentience is the ability to feel or perceive - to experience things, more or less.

Quote:
Is an animal sentient? What about an insect? A microscopic organism? A single-celled organism?


Yes, yes, yes, and yes.

Quote:
This leads me to my main idea. Could a computer ever be sentient? The Blue Brain project aims to simulate a human brain by 2020. And I don't just mean an artificial neural network; they are simulating biological neurons. Could there ever be a test for sentience? And I don't mean something weak like the Turing test. If the Blue Brain project were to succeed and simulate a human brain, what right would we have to turn it on and off at will?


Sure, although I expect what you really want to discuss is sapience, intentionality, and self-awareness. Being sentient isn't difficult or meaningful in the context you are talking about.

Reapex wrote:
Brain-death =/= sleep.


Suspending an AI is also not brain death, as the state would be preserved. There is no biological equivalent of a suspended program.
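
To make the suspend-versus-erase distinction concrete, here is a minimal sketch in Python (toy names and structures of my own invention - no real brain simulator works like this) of suspending a program by serializing its state and resuming it later:

Code:
import pickle

# Toy stand-in for a simulated mind: a dict of state that
# accumulates "experience" over time.
state = {"ticks": 0, "memories": []}

def step(state, observation):
    """Advance the simulation one tick and record the input."""
    state["ticks"] += 1
    state["memories"].append(observation)

step(state, "hello")
step(state, "world")

# "Suspend": persist the full state to disk, then stop the process.
with open("mind.pkl", "wb") as f:
    pickle.dump(state, f)

# ...arbitrarily later, possibly in a brand-new process...

# "Resume": the restored state is exactly what was saved, so
# execution continues where it left off. Nothing was lost.
with open("mind.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == state
step(restored, "back again")

Deleting mind.pkl instead of reloading it would be the other case: the accumulated state is simply gone.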

Quote:
I think that Skynet could happen.


Heh, no, it can't.
Reapex wrote:
Brain-death =/= sleep.


You'll note that I compared saving a state and switching it off to sleep, not turning it off and erasing everything. I guess the comparison isn't quite valid anyway, as when we sleep we actually process information.
A low-power mode would be sleeping.
Harq wrote:
Many people have a problem with this belief because they believe in the necessity of a soul for true sentience/intelligence. Since I do not believe in a soul, I don't really have an issue with it.


Why can't a machine have a soul? Why can't every spark of sapience be part of the universe as a whole, and thus be as eternal as the universe itself?
Pseudoprogrammer wrote:
At what point does something become sentient or conscious? Is an animal sentient? What about an insect? A microscopic organism?
Generally considered no, definitely not, and definitely not, respectively.
Pseudoprogrammer wrote:
A single-celled organism? This leads me to my main idea. Could a computer ever be sentient?
Absolutely.
Pseudoprogrammer wrote:
The Blue Brain project aims to simulate a human brain by 2020. And I don't just mean an artificial neural network; they are simulating biological neurons. Could there ever be a test for sentience?
I think to an extent, but I believe that a perfect reproduction of self-awareness would be functionally indistinguishable from actual self-awareness, so it becomes impossible to differentiate between the two.
Pseudoprogrammer wrote:
And I don't mean something weak like the Turing test. If the Blue Brain project were to succeed and simulate a human brain, what right would we have to turn it on and off at will?


DShiznit wrote:
Why can't a machine have a soul? Why can't every spark of sapience be part of the universe as a whole, and thus be as eternal as the universe itself?
Because we don't know what defines a soul. Is it, as some religions believe, something intangible within the physical frameworks of our minds, a ghost in the machine? Or is it simply an evolutionary mechanism, retained by survival of the fittest, to keep us from harming ourselves and to give us motivation to defend ourselves against others?
KermMartian wrote:
Pseudoprogrammer wrote:
At what point does something become sentient or conscious? Is an animal sentient? What about an insect? A microscopic organism?
Generally considered no, definitely not, and definitely not, respectively.
Pseudoprogrammer wrote:
A single-celled organism? This leads me to my main idea. Could a computer ever be sentient?
Absolutely.


How is a single-celled organism sentient but an insect is not?

Also, it would seem that whether or not a specific living organism is sentient is a matter of opinion, rather than concrete fact.
Ultimate Dev'r wrote:
KermMartian wrote:
Pseudoprogrammer wrote:
At what point does something become sentient or conscious? Is an animal sentient? What about an insect? A microscopic organism?
Generally considered no, definitely not, and definitely not, respectively.
Pseudoprogrammer wrote:
A single-celled organism? This leads me to my main idea. Could a computer ever be sentient?
Absolutely.


How is a single-celled organism sentient but an insect is not?
I believe that's a quote error on Kerm's part.
That was indeed a quote error. I think you knew that I was agreeing that a computer could be sentient, not a single-celled organism. Ultimate Dev'r, ComicIDIOT, what are your views on the subject?
KermMartian wrote:
Because we don't know what defines a soul. Is it, as some religions believe, something intangible within the physical frameworks of our minds, a ghost in the machine? Or is it simply an evolutionary mechanism, retained by survival of the fittest, to keep us from harming ourselves and to give us motivation to defend ourselves against others?

Or, additionally, the belief that you can have a physical brain and a spiritual soul without resorting to ghost-in-the-machine dualism, where the two are fundamentally separate entities.

Grant, I'll send you a copy of my dad's book "The Mind and The Machine" when it goes to print next year.
KermMartian wrote:
That was indeed a quote error. I think you knew that I was agreeing that a computer could be sentient, not a single-celled organism. Ultimate Dev'r, ComicIDIOT, what are your views on the subject?


Sentience (and what is/isn't sentient) seems to be as much a philosophical debate as it is a scientific one.

Currently I am of the opinion that anything living is sentient to some extent.
I'm sure it will be feasible at some point. I doubt it will happen as early as 2020, though. We have neither the computational ability nor an adequate understanding of neuroscience.

It would probably be a matter of understanding the seed / algorithm that causes a human mind to develop the way it does, then representing this process in computational form as a basis for "growing" an artificial intelligence. The outcome should somehow be applied to a body composed of synthetic materials and nanotechnologies that perform the same functions as an organic body. This would produce humans from synthetic components, rather than robots in any traditional sense. There would be a few obvious advantages: no need for nourishment, oxygen, or other such dependencies, though a synthetic body would require some source of energy and occasional maintenance. Synthetic bodies would also be immune to organic infections - bacteria, disease, etc. No mitochondria - no aging. Sufficient nanotechnology could also mean synthetic bodies are flexible enough to undergo radical alterations in form (e.g., changing facial appearance on a whim) as well as regenerate damage.
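
To illustrate what I mean by "growing" from a seed - purely a toy sketch with made-up sizes, nothing like a real developmental encoding:

Code:
import numpy as np

def grow_network(seed, layer_sizes):
    """Deterministically 'grow' a network's weights from a tiny seed.

    The seed plays the role of a compact genome: the same seed
    always develops into the same, much larger, set of weights.
    """
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((m, n))
            for m, n in zip(layer_sizes, layer_sizes[1:])]

# A few bytes of "genome" expand into thousands of parameters.
weights = grow_network(seed=42, layer_sizes=[64, 128, 32, 8])
print(sum(w.size for w in weights))  # 12544 parameters from one integer

# Development is reproducible: the same seed grows the same "mind".
again = grow_network(seed=42, layer_sizes=[64, 128, 32, 8])
assert all(np.array_equal(a, b) for a, b in zip(weights, again))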

I don't think trying to reproduce a human mind inside of a computer based on observation of human behavior is even a worthwhile endeavor. It would be like trying to emulate humanity. That will never practically work. If we're going to produce something human-like, we may as well use actual, organic blueprints.

I also think there's an important philosophical issue here. If we produce artificial intelligence in some form that has no needs - other than maybe computer maintenance - then its sentient existence is entirely pointless. An A.I. based on human-like intelligence would need some sort of primitive drive in accordance with our own goals. This creates another potential problem, though. If we develop seed A.I. (that is, A.I. that can improve upon itself), it may create a point of technological singularity, progressing so quickly that we can no longer understand it. It may then decide humanity is standing in the way of further progress, or that there is no need to share available resources with humans, and thus that we should be wiped out.

I'm more in favor of transitioning humans into synthetic lifeforms. It's much safer. If our computerized brains allow us to see radical advancements in a short amount of time, at least these advancements are based on the best interests of humanity.
Zera wrote:
I'm sure it will be feasible at some point. I doubt it will happen as early as 2020, though. We have neither the computational ability nor an adequate understanding of neuroscience.

It would probably be a matter of understanding the seed / algorithm that causes a human mind to develop the way it does, then representing this process in computational form as a basis for "growing" an artificial intelligence. The outcome should somehow be applied to a body composed of synthetic materials and nanotechnologies that perform the same functions as an organic body. This would produce humans from synthetic components, rather than robots in any traditional sense. There would be a few obvious advantages: no need for nourishment, oxygen, or other such dependencies, though a synthetic body would require some source of energy and occasional maintenance. Synthetic bodies would also be immune to organic infections - bacteria, disease, etc. No mitochondria - no aging. Sufficient nanotechnology could also mean synthetic bodies are flexible enough to undergo radical alterations in form (e.g., changing facial appearance on a whim) as well as regenerate damage.

I don't think trying to reproduce a human mind inside of a computer based on observation of human behavior is even a worthwhile endeavor. It would be like trying to emulate humanity. That will never practically work. If we're going to produce something human-like, we may as well use actual, organic blueprints.

I also think there's an important philosophical issue here. If we produce artificial intelligence in some form that has no needs - other than maybe computer maintenance - then its sentient existence is entirely pointless. An A.I. based on human-like intelligence would need some sort of primitive drive in accordance with our own goals. This creates another potential problem, though. If we develop seed A.I. (that is, A.I. that can improve upon itself), it may create a point of technological singularity, progressing so quickly that we can no longer understand it. It may then decide humanity is standing in the way of further progress, or that there is no need to share available resources with humans, and thus that we should be wiped out.
Indeed, that's a classic situation in dystopian science fiction about the singularity, and (I believe) one of the motivations behind the idea of the Three Laws of Robotics: to keep any sentient AI subservient to humans.

Zera wrote:
I'm more in favor of transitioning humans into synthetic lifeforms. It's much safer. If our computerized brains allow us to see radical advancements in a short amount of time, at least these advancements are based on the best interests of humanity.
I agree that besides being safer, that's probably also a more immediately realizable goal; as insanely complex as it is, duplicating something extant is generally easier than synthesizing a completely new version of the same.

Edit: And by the way, very nicely worded and very well-thought-out post.
Comment censored for content.
DShiznit wrote:
Comment censored for content.
Oh grow up. Rolling Eyes
KermMartian wrote:
... ComicIDIOT, what are your views on the subject?
A late response on my part.

With the advances we have had, computers are becoming more and more capable. Now we're starting to see the major game companies (e.g., Sony & Microsoft) adding video inputs, while the Nintendo Wii uses a peculiar form of video input (in the form of 3D motion tracking).

While video input is nothing completely new (see SixthSense Technology), the implementation in PlayStation & Xbox consoles is really going to propel gesture-driven interfaces (GDIs? Very Happy)

And as netbook and laptop computers evolve with more processing power, I believe we will start seeing wider-angle, higher-quality webcams coupled with them.

Computers are sentient. They perceive and they react. They make decisions, though restricted ones. Computer sentience is only restricted by the software and the hardware.
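
In the loosest sense, even a trivial program fits that perceive/decide/react description. A toy sketch (the sensor, thresholds, and actions are all made up for illustration):

Code:
def perceive(sensor_reading):
    """Perception: turn raw input into a symbol the program understands."""
    return "hot" if sensor_reading > 30.0 else "cold"

def decide(percept):
    """Decision: restricted to whatever branches the software allows."""
    return {"hot": "fan_on", "cold": "fan_off"}[percept]

def act(action):
    """Reaction: the only outputs are the ones the hardware exposes."""
    print(f"actuator -> {action}")

# The "sentience" of this system is bounded by these three functions
# (the software) and by the sensor and actuator (the hardware).
for reading in [22.5, 31.0, 28.4]:
    act(decide(perceive(reading)))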

    User wrote:
    comicIDIOT wrote:
    Computer sentience is only restricted by the software and hardware.
    That makes no sense! FAIL
    Sure, if you don't think about it. Computer sentience is only as complex as the program/operating system allows, which in turn is limited by the hardware it runs on.

    User wrote:
    comicIDIOT wrote:
    ... while the Nintendo Wii uses a peculiar form of video input (in the form of 3D motion tracking).
    The Wii doesn't see in 3D.
    Before you make such an accusation, stop and think about why the sensor bar is about 9 inches long.

    In the linked YouTube video, you'll hear that the Wii remote has a camera. The 3D effect is the same, though, because the parallax between the two IR LED sources will differ just as the views from two cameras watching one source would.
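
    Roughly how distance falls out of that parallax, as a toy calculation (the focal length is an assumed number, not the Wii remote's actual spec; the baseline is the roughly 9-inch bar length mentioned above):

    Code:
    def distance_from_parallax(baseline_m, focal_px, separation_px):
        """Pinhole-camera geometry: two lights a fixed baseline apart
        project to an image separation of f * B / Z pixels, so their
        distance Z is recoverable from how far apart they appear."""
        return focal_px * baseline_m / separation_px

    # Assumed values for illustration only.
    BASELINE_M = 0.23   # ~9 inches between the bar's IR LED clusters
    FOCAL_PX = 1300     # made-up camera focal length in pixels

    for sep_px in (130, 87, 65):
        z = distance_from_parallax(BASELINE_M, FOCAL_PX, sep_px)
        print(f"LEDs {sep_px}px apart in the image -> ~{z:.1f} m away")

    The farther away the remote is, the closer together the two LED clusters appear; one camera plus two known sources gives you depth the same way two cameras plus one source would.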
I think the real question being asked here is whether or not computers will become sapient, that is, self-aware; e.g., "Does this unit have a soul?"
  