Can Artificial Intelligence have free will?

Puhleeeeze. How lame.

No. ChatGPT sees everything Google sees.

That's your best? I stand by my statement...you have no IT training.

ChatGPT does not see everything Google sees. In fact, it doesn't see anything at all unless a human puts a pointer on data for it...it sees that and that's all it sees.
 
They are not web browsers at all.
AI is a programming technique. It involves setting up a 'neural net' (or matrix) that can be used to influence results of the program, and the net itself is modified via an automated feedback loop, designed by the programmers.
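For what it's worth, here is a minimal sketch of that idea in Python (purely illustrative, not tied to any real product or framework): the 'net' is reduced to a single row of weights, and the 'automated feedback loop' is the training step that nudges those weights to shrink the error on each example.

```python
# Minimal sketch: a 'neural net' here is just a weight matrix (one row),
# and the 'feedback loop' is the update that reduces the prediction error.
import random

def train(samples, lr=0.1, epochs=200):
    """samples: list of (inputs, target) pairs; inputs is a list of floats."""
    n = len(samples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n)]   # the 'matrix' the programmers set up
    for _ in range(epochs):
        for inputs, target in samples:
            output = sum(w * x for w, x in zip(weights, inputs))          # forward pass
            error = target - output                                       # feedback signal
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]  # adjust the net
    return weights

# Learn a simple rule (output = 2*a + 1*b) from examples.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]
print(train(data))  # weights converge toward roughly [2.0, 1.0]
```

No sentience anywhere in that loop, just arithmetic being repeated until the numbers stop changing much.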

AI has no sentience, and is not capable of sentience, any more than your average brick.

There is no such fucking thing as a "neural net" and all a matrix is is a dBase thing.

Tell me you don't know jack shit about IT without saying it.
 
While what you say is theoretically possible, it would mean Amazon itself would have to be involved in such a break-in. That would be financial suicide. People would leave AWS and no longer use Alexa. Such a tyrannical government taking over Amazon in that way would be pretty damn obvious.
Yes, absolutely correct. I am not suggesting anyone FEAR and PANIC even though it might already be too late! I'm suggesting merely that JesusAI has a point in that Alexa could be used for that purpose and that it would be "totalitarian" (although that isn't exactly the word I would use). Everything you wrote on the matter was totally spot on. I didn't see how you two were disagreeing.
 
Totally correct, within your context.


I think you are both talking past each other to a certain extent. Alexa swaps convenience for control with the user: Alexa gains control while the user gains convenience. The user, however, never sees who is controlling Alexa, nor who is listening to everything Alexa hears, nor who is receiving and processing all the data Alexa sends out onto the internet. The user can see the settings he has chosen, but cannot see into Alexa to verify that those settings are what are actually in effect. Alexa can also be hacked, allowing the hacker to then control everything controlled by Alexa.

A totalitarian regime could use Alexa to advance a totalitarian agenda. A totalitarian regime could also simply use Alexa to bring convenience into their homes. As Into the Night mentioned, Alexa is just a voice recognition interface tied to a controller application ... with an internet connection.
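To make that last point concrete, here is a rough sketch of that plumbing in Python. Every name in it is hypothetical; this is not Amazon's code, just the shape of the data flow the user never gets to inspect.

```python
# Rough sketch: a voice front end feeding a controller application that also
# ships data out over an internet connection. All names are made up for illustration.

def handle_utterance(audio, speech_to_text, parse_command, run_locally, send_to_cloud):
    text = speech_to_text(audio)         # voice recognition interface
    command = parse_command(text)        # controller application decides what to do
    send_to_cloud(audio, text, command)  # the hop the user cannot see into
    return run_locally(command)          # lights, locks, purchases, etc.

# Stubbed-out example run, standing in for the real components:
result = handle_utterance(
    audio=b"raw microphone samples",
    speech_to_text=lambda a: "turn on the lights",
    parse_command=lambda t: {"device": "lights", "action": "on"},
    run_locally=lambda c: f"{c['device']} -> {c['action']}",
    send_to_cloud=lambda *args: None,    # in reality, logged and processed somewhere else
)
print(result)  # lights -> on
```

Whoever supplies the send_to_cloud step decides what happens to the audio afterward, which is exactly the part the user cannot verify from the settings screen.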

my context is supreme.

the current instances of AI ARE being put to nefarious purpose, making them evil.
 
Or:
* the fans
* the USB connectors
* the audio connectors
* the LEDs on the front of the case
* the reset button
* the power button
* the Bluetooth network (if used)
* the web
* email
* SSL
* a programming language
* a database language
* a protocol
* the cloud
* ChatGPT
* the master oscillator
* the PCI bus
* a charger cable
* an error message

Just thought I'd add to the pile of stuff that is not the 'three parts of a computer', yet is all part of a computer system.

Obviously, the Wannabe Creep just has no idea what he's talking about (again). He found something on Google or Wikipedia and tried to 'talk smart' while looking dumb.

yet i make you look foolish daily.
 
Yes, absolutely correct. I am not suggesting anyone FEAR and PANIC even though it might already be too late! I'm suggesting merely that JesusAI has a point in that Alexa could be used for that purpose and that it would be "totalitarian" (although that isn't exactly the word I would use). Everything you wrote on the matter was totally spot on. I didn't see how you two were disagreeing.

His trouble was that he was blaming AI alone for supporting tyranny, and even arguing that all AI does so, when ANY program can be used to support tyranny. That's a compositional error.
 
im not arguing both sides of a paradox.

im saying things used as evil are de facto evil as well during that brief time.

You made a paradox in #468:
1) AI is not evil.
2) All instances of AI are evil.

Which is it, dude?

The only way to clear any paradox is to choose ONE and only ONE argument, and utterly discard the other and never use it again.
Be aware that choosing argument 2) is a compositional error fallacy.
 