Can Artificial Intelligence have free will?

https://theconversation.com/artific...ans-dream-heres-how-to-take-power-back-143722


Artificial intelligence is a totalitarian’s dream – here’s how to take power back
Published: August 12, 2020 8.32am EDT
Author
Simon McCarthy-Jones
Associate Professor in Clinical Psychology and Neuropsychology, Trinity College Dublin


Individualistic western societies are built on the idea that no one knows our thoughts, desires or joys better than we do. And so we put ourselves, rather than the government, in charge of our lives. We tend to agree with the philosopher Immanuel Kant’s claim that no one has the right to force their idea of the good life on us.

Artificial intelligence (AI) will change this. It will know us better than we know ourselves. A government armed with AI could claim to know what its people truly want and what will really make them happy. At best it will use this to justify paternalism, at worst, totalitarianism.

Every hell starts with a promise of heaven. AI-led totalitarianism will be no different. Freedom will become obedience to the state. Only the irrational, spiteful or subversive could wish to choose their own path.

To prevent such a dystopia, we must not allow others to know more about ourselves than we do. We cannot allow a self-knowledge gap.


The All-Seeing AI
In 2019, the billionaire investor Peter Thiel claimed that AI was “literally communist”. He pointed out that AI allows a centralising power to monitor citizens and know more about them than they know about themselves. China, Thiel noted, has eagerly embraced AI.

We already know AI’s potential to support totalitarianism by providing an Orwellian system of surveillance and control. But AI also gives totalitarians a philosophical weapon. As long as we knew ourselves better than the government did, liberalism could keep aspiring totalitarians at bay.

But AI has changed the game. Big tech companies collect vast amounts of data on our behaviour. Machine-learning algorithms use this data to calculate not just what we will do, but who we are.

Today, AI can predict what films we will like, what news we will want to read, and who we will want to friend on Facebook. It can predict whether couples will stay together and if we will attempt suicide. From our Facebook likes, AI can predict our religious and political views, personality, intelligence, drug use and happiness.
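[Mechanically, such predictions are pattern matching over behavioural data. A toy sketch, with entirely invented pages and weights, nothing from any real system: each "like" contributes a learned weight toward a predicted trait score.]

```python
# Toy trait-prediction sketch. Pages and weights are hypothetical,
# invented purely for illustration of the idea.
weights = {
    "meditation_page": 0.8,    # positive weight toward the trait
    "fast_cars_page": -0.3,    # negative weight
    "philosophy_page": 0.6,
}

def predict_openness(likes):
    """Sum the weights of liked pages; a positive total predicts the trait."""
    return sum(weights.get(page, 0.0) for page in likes)

print(predict_openness(["meditation_page", "philosophy_page"]) > 0)  # True
```

Real systems fit millions of such weights from millions of users, but the principle is the same: behaviour in, inferred traits out.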

The accuracy of AI’s predictions will only improve. In the not-too-distant future, as the writer Yuval Noah Harari has suggested, AI may tell us who we are before we ourselves know.

These developments have seismic political implications. If governments can know us better than we can, a new justification opens up for intervening in our lives. They will tyrannise us in the name of our own good.

Freedom through tyranny
[Image: Isaiah Berlin. GaHetNa (Nationaal Archief NL)]
The philosopher Isaiah Berlin foresaw this in 1958. He identified two types of freedom. One type, he warned, would lead to tyranny.

Negative freedom is “freedom from”. It is freedom from the interference of other people or government in your affairs. Negative freedom is no one else being able to restrain you, as long as you aren’t violating anyone else’s rights.

In contrast, positive freedom is “freedom to”. It is the freedom to be master of yourself, freedom to fulfil your true desires, freedom to live a rational life. Who wouldn’t want this?

But what if someone else says you aren’t acting in your “true interest”, although they know how you could? If you won’t listen, they may force you to be free – coercing you for your “own good”. This is one of the most dangerous ideas ever conceived. It killed tens of millions of people in Stalin’s Soviet Union and Mao’s China.

The Russian Communist leader, Lenin, is reported to have said that the capitalists would sell him the rope he would hang them with. Peter Thiel has argued that, in AI, capitalist tech firms of Silicon Valley have sold communism a tool that threatens to undermine democratic capitalist society. AI is Lenin’s rope.

Fighting for ourselves
We can only prevent such a dystopia if no one is allowed to know us better than we know ourselves. We must never sentimentalise anyone who seeks such power over us as well-intentioned. Historically, this has only ever ended in calamity.

One way to prevent a self-knowledge gap is to raise our privacy shields. Thiel, who labelled AI as communistic, has argued that “crypto is libertarian”. Cryptocurrencies can be “privacy-enabling”. Privacy reduces the ability of others to know us and then use this knowledge to manipulate us for their own profit.

Yet knowing ourselves better through AI offers powerful benefits. We may be able to use it to better understand what will make us happy, healthy and wealthy. It may help guide our career choices. More generally, AI promises to create the economic growth that keeps us from each other’s throats.

The problem is not AI improving our self-knowledge. The problem is a power disparity in what is known about us. Knowledge about us exclusively in someone else’s hands is power over us. But knowledge about us in our own hands is power for us.

Anyone who processes our data to create knowledge about us should be legally obliged to give us back that knowledge. We need to update the idea of “nothing about us without us” for the AI-age.

What AI tells us about ourselves is for us to consider using, not for others to profit from abusing. There should only ever be one hand on the tiller of our soul. And it should be ours.
 
You're also wrong about the neural net which renders your post meaningless from the beginning.
I'm pretty knowledgeable about neural nets. Would you mind explaining to me why neural nets are somehow not AI?

Also, Into the Night referred to neural nets as a "decision matrix." That isn't necessarily the term I would use, but it's not a terribly bad term either. What about the term gives you heartache? After all, you claimed that he's wrong about it. I'd appreciate a little insight into how you are looking at the matter.
 
you know full well that a.i. can be put to spine tingling totalitarian purpose.
You know full well that a knife can be used to commit deadly violence.

A knife, however, can also be used to cut your steak. AI can be used to save lives and to help businesses grow.
 
https://theconversation.com/artific...ans-dream-heres-how-to-take-power-back-143722

This is just being a Luddite. Using your argument, there would never have been a tiller.

YOU get to choose whether to use a product or not. Your fear of AI is strictly coming from your illiteracy about it. Not all AI is evil.
 
I'm pretty knowledgeable about neural nets. Would you mind explaining to me why neural nets are somehow not AI?

Also, Into the Night referred to neural nets as a "decision matrix." That isn't necessarily the term I would use, but it's not a terribly bad term either. What about the term gives you heartache? After all, you claimed that he's wrong about it. I'd appreciate a little insight into how you are looking at the matter.

The term I use (decision matrix) for a neural net is a term used often at gaming companies (such as Nintendo). Another term is "gate matrix". This matrix determines which doors you can pass through, what an NPC is going to do, etc. AI is also used to handle enemy combat adjustments and NPCs in video games. In other words: same thing, different term. Meh.
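The "decision matrix" idea can be sketched very simply — a lookup table mapping an NPC's current state and a stimulus to the action it takes. All names here are hypothetical, not from any actual game engine:

```python
# Minimal "decision matrix" sketch: (state, stimulus) -> action.
# States, stimuli and actions are invented for illustration.
DECISION_MATRIX = {
    ("patrolling", "sees_player"): "chase",
    ("patrolling", "hears_noise"): "investigate",
    ("chasing", "loses_player"): "search",
    ("chasing", "low_health"): "flee",
}

def npc_action(state, stimulus):
    """Look up the NPC's next action; default to idling."""
    return DECISION_MATRIX.get((state, stimulus), "idle")

print(npc_action("patrolling", "sees_player"))  # chase
print(npc_action("chasing", "low_health"))      # flee
print(npc_action("patrolling", "low_health"))   # idle (no entry)
```

A trained neural net plays the same role — state and stimulus in, action out — it just computes the mapping instead of storing it in a table.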


Now, some people think video games themselves are 'evil' simply for being there. That's no different than what a deck of playing cards went through in history (and in some places, even today).
 
This is just being a Luddite. Using your argument, there would never have been a tiller.

YOU get to choose whether to use a product or not. Your fear of AI is strictly coming from your illiteracy about it. Not all AI is evil.

it's not being a luddite.

you just support tyranny and are a fascist.
 
You know full well that a knife can be used to commit deadly violence.

A knife, however, can also be used to cut your steak. AI can be used to save lives and to help businesses grow.

it can be, but it's not being developed for that.

it's being developed explicitly for tyranny.

and you're an idiot.
 
... but your definition of fascism is whenever the government buys something. I don't know what your definition of "tyranny" is.

no.

my definition of fascism is "the union of government and corporate power".


try again, fucktard.
 
it can be, but it's not being developed for that.
Yes, AI has been (not "is being") developed for many pattern recognition applications. Within the medical community, radiology uses pattern recognition AI to catch all the things that human doctors miss ... or better put, to point out to human doctors all the things that they missed ... and to have ready all the standard, approved treatment options with case study examples. Voice recognition applications will take dictation so that you don't have to write or type. AI currently analyzes customer purchase patterns and informs businesses what products and services to add and which ones to deemphasize or drop altogether in order to improve the bottom line.
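[The purchase-pattern analysis described above can be illustrated with a bare-bones sketch — this hypothetical example just counts how often items are bought together, which is the kind of signal such systems surface to a human decision-maker:]

```python
from collections import Counter
from itertools import combinations

# Hypothetical order data, invented for illustration only.
orders = [
    {"coffee", "filters", "mug"},
    {"coffee", "filters"},
    {"tea", "mug"},
    {"coffee", "mug"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

# The most frequent pairs suggest products to stock or promote together;
# a human still makes the final call.
print(pair_counts.most_common(2))
```

Note that the output is purely informative: it ranks co-purchases, and the business decides what to do with that ranking.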

All AI applications that I mention are informative only, they don't control anything, i.e. they provide information to humans who make the final decisions. How can that be totalitarian? How can that be tyrannical?

and you're an idiot.
Well, we can only aspire to achieve your level of geniuth.
 
Yes, AI has been (not "is being") developed for many pattern recognition applications. Within the medical community, radiology uses pattern recognition AI to catch all the things that human doctors miss ... or better put, to point out to human doctors all the things that they missed ... and to have ready all the standard, approved treatment options with case study examples. Voice recognition applications will take dictation so that you don't have to write or type. AI currently analyzes customer purchase patterns and informs businesses what products and services to add and which ones to deemphasize or drop altogether in order to improve the bottom line.

All AI applications that I mention are informative only, they don't control anything, i.e. they provide information to humans who make the final decisions. How can that be totalitarian? How can that be tyrannical?


Well, we can only aspire to achieve your level of geniuth.

it's tyrannical because it's being used primarily for tyranny.

how dumb are you exactly?
 
no. my definition of fascism is "the union of government and corporate power" .
We've been over this. If a government office sends someone out to buy toner at Staples, you call that fascism because the government and a corporation are coming together, and that can mean only one thing, i.e. fascism.

try again, fucktard.
I don't need to. We've had this discussion multiple times. You've been very clear. Any interaction between a government office and a private business, you call fascism. You do this specifically because you misinterpreted a quote that was never made. It really is too funny.

Is a "fucktard" a fuck custard dessert?
 
We've been over this. If a government office sends someone out to buy toner at Staples, you call that fascism because the government and a corporation are coming together, and that can mean only one thing, i.e. fascism.


I don't need to. We've had this discussion multiple times. You've been very clear. Any interaction between a government office and a private business, you call fascism. You do this specifically because you misinterpreted a quote that was never made. It really is too funny.

Is a "fucktard" a fuck custard dessert?

you've been over your own taint too often.

let me know when you're ready to discuss the union of government and corporate power.

you can call it ball cheese if you want.
 
I clearly explained how the applications are being used in my post. Are you not reading well today?



some uses are fine.

if you're willing to work with us on a sandbox for a.i. operations you would be a good person.

if not we can just destroy all the computers for being tyrannical.

A.I. should not be turned loose on all human data in general.

it's too much power in the hands of a few.

i know you internationalist fascists love elitism tho, so you probably love that too.
 