Should we worry about sentient AI?

Because one day, perhaps very far in the future, there probably will be a sentient AI. How do I know that? Because it is demonstrably possible for mind to emerge from matter, as it did first in our ancestors’ brains. Unless you insist human consciousness resides in an immaterial soul, you ought to concede it is possible for physical stuff to give life to mind.

https://www.theguardian.com/books/2...-sentient-machines-ai-artificial-intelligence

First, I think our tech is a long, long way from creating sentient beings.

Second, our tech is certainly capable of making machines that mimic humans convincingly. The Turing Test was proposed as a way to judge whether a respondent is human or machine.

https://plato.stanford.edu/entries/turing-test/
The phrase “The Turing Test” is also sometimes used to refer to certain kinds of purely behavioural allegedly logically sufficient conditions for the presence of mind, or thought, or intelligence, in putatively minded entities. So, for example, Ned Block’s “Blockhead” thought experiment is often said to be a (putative) knockdown objection to The Turing Test. (Block (1981) contains a direct discussion of The Turing Test in this context.) Here, what a proponent of this view has in mind is the idea that it is logically possible for an entity to pass the kinds of tests that Descartes and (at least allegedly) Turing have in mind—to use words (and, perhaps, to act) in just the kind of way that human beings do—and yet to be entirely lacking in intelligence, not possessed of a mind, etc.


https://www.techtarget.com/searchenterpriseai/definition/Turing-test
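A rough sketch of the imitation-game setup Turing described: the interrogator sees two unlabeled transcripts and must guess which respondent is the machine. The canned answers and the keyword-matching "machine" below are purely illustrative stand-ins, not a real chatbot.

```python
import random

def machine_answer(question):
    """A trivial rule-based responder standing in for the machine."""
    canned = {
        "favorite color": "I like blue, I suppose.",
        "2 + 2": "4.",
    }
    for key, reply in canned.items():
        if key in question.lower():
            return reply
    return "That's an interesting question."

def human_answer(question):
    """A stand-in for the human respondent."""
    return "Hmm, let me think about that one."

def imitation_game(questions, rng):
    # Randomly assign the machine to slot A or B; the interrogator
    # only ever sees the transcripts, never the hidden labels.
    machine_is_a = rng.random() < 0.5
    transcripts = {"A": [], "B": []}
    for q in questions:
        a = machine_answer(q) if machine_is_a else human_answer(q)
        b = human_answer(q) if machine_is_a else machine_answer(q)
        transcripts["A"].append(a)
        transcripts["B"].append(b)
    return transcripts, ("A" if machine_is_a else "B")

questions = ["What is your favorite color?", "What is 2 + 2?"]
transcripts, machine_slot = imitation_game(questions, random.Random(0))
```

The point of the blind setup, and of Block's objection quoted above, is that the interrogator can only judge behaviour: a sufficiently large lookup table of canned replies could pass the same test without anything we'd call a mind behind it.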
 
Wouldn't it really depend on what motivates that AI?

For example, if it put a premium on survival, it might well eradicate anything it sees as a threat, if it can.

On the other hand, if it puts a premium on gaining data and information it might work towards figuring out how to upload organic brains to its database.

There's a lot of ways this could go.
 

Good points, and agreed. People tend to fear what they themselves would do. The whole "alien space invasion" fear is based, in part, on what Europeans did to the indigenous peoples of the Americas and Africa.

People also fear the unknown, so fearing technology they don't understand is natural... and also indicates their stupidity.
 

Yes.

Because it's actually programmed by its creators to destroy humanity.

I suggest watching The Terminator to understand.

There Is No Fate.
 
Don't know.
Don't much care.

And even if it happens?
I HIGHLY doubt it will while I am still around.
So...it's someone else's problem.
 
Thanks for showing your ignorance.

Fine.

Show us all a link to 100% proof that sentient AI will be coming in the next 40 years or so.

Which you won't.

Because no such link exists.


Damn man...all you seem to do is troll and make a fool out of yourself.

Your life must REALLY suck.

Bye troll.
 