On Feb 29, 11:58 am, Short Wave Sportfishing
wrote:
On Fri, 29 Feb 2008 08:19:39 -0500, "jamesgangnc"
wrote:
"Wayne.B" wrote in message
.. .
On Thu, 28 Feb 2008 11:43:56 -0800 (PST), jamesgangnc
wrote:
Back to computers: Read up on "the Turing Test" for some fresh
insights:
http://en.wikipedia.org/wiki/Turing_test
As you can see, this discussion has been going on for a long time. I
would postulate that Kasparov's automated opponent has already passed
the test within its limited realm. At some point, and it may have
already started, computers will be expertly programmed to simulate
feelings, emotion and creative thought. When the simulations become
so well done that world class experts can't tell the difference, what
do you have then?
Simulating human behavior is far from possessing human characteristics.
Agreed but the point of the Turing test is that if the simulation is
so well done that an expert can not reliably tell the difference, then
intelligence exists.
That was what they believed at the time. I don't think anyone seriously
buys that anymore, and no significant efforts in the AI world today are
trying to pass the Turing test. The Turing test is a pretty old definition
of intelligence. And it all depends on your definition of intelligence.
The original topic was Skynet, the fictional suggestion that once a certain
level of computational capability is passed, the machine becomes self aware
and decides to destroy mankind. Is self awareness a quality of intelligence?
What exactly is self awareness? Is a program that could pass the Turing
test also self aware? Is your pet intelligent but just not as intelligent
as us?
Must. Not. Reply. Must. Resist. Replying.
Ah hell... :)
I still think it's a question of definition. If humankind can wrap
its collective brain around a concept that will accept intelligence
or a form of consciousness without those features that define us (even
as we struggle to define it ourselves, as you said), then that will be
the definition.
Consider this - myth brought us Golems, Afreets and Frankensteins, all
visions of life other than ours. In a sense, Golems, Afreets and
Frankensteins are extensions of the human fear of being duplicated (or
reanimated, in the case of Frankenstein). Zombies, ghouls and in
general the undead are also part of these fears.
In short, humans don't wish to be duplicated in any form, even if it is
amoral "life" - which means that other forms of intelligence and
consciousness won't be recognized unless and until humans can accept
that they exist. With respect to the examples I provided, all are
humanoid in some fashion and operate on a logic system that is foreign.
However, who is to say that the thought process of an Afreet isn't just
a different order of morality and consciousness?
I'll give you an example of what I mean. We extend the definition of
"life" to single celled organisms. A single celled organism can't
make decisions based on a logic tree and simply exists. On the other
hand, a computer can, and does, make decisions based on an ordered
logic system modeled largely on what we believe human thought does.
How can one non-functional low order form be considered "life" and the
other fully functional higher order form not? I would posit that it's
a bias by organic creatures against those that are not organic - that
even at this early stage of computational "intelligence", computers
are life.
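Just to make the "logic tree" point concrete - a minimal sketch in Python, with invented rules and stimuli purely for illustration (nothing here claims to model a real organism or AI system):

```python
# A toy "logic tree": an ordered series of condition checks that
# maps a stimulus to an action - the kind of rule-following
# decision process a computer performs and a single cell cannot.
# The rules, keys, and thresholds are made up for this example.

def decide(stimulus: dict) -> str:
    """Walk an ordered logic tree and return an action."""
    if stimulus.get("threat", False):
        if stimulus.get("escape_route", False):
            return "flee"
        return "fight"
    if stimulus.get("food", False):
        return "eat"
    return "wait"

print(decide({"threat": True, "escape_route": True}))  # flee
print(decide({"food": True}))                          # eat
print(decide({}))                                      # wait
```

Whether executing a tree like this counts as intelligence, let alone consciousness, is exactly the question under debate.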
With respect to my dogs, one of them is smarter than I am. :)
I can expand my definition of life to include computers. And your
dog.
I'm also ok with computers being intelligent. And your dog being
intelligent. And most of the posters in this news group being
intelligent.
I'm stalled at self aware and consciousness. I believe those are
qualities outside being able to make logical decisions. I think your
dog is self aware. I think most people that have spent time
interacting with animals recognize that they do have some self
awareness. Humans did not cross some mythical boundary to become self
aware. Your computer is not self aware. I do not deny the
possibility of mechanical consciousness. But I do not believe that
the present direction of modern computers is leading to such a being.
While it is difficult to agree on the definition of consciousness, I
think the majority of people will want to include characteristics that
are outside simply functioning and making logic tree decisions.
Characteristics that, for lack of a better word, we call human. You
have to work inside the range of reasonable definitions.