The irrational fear of AI

Tinkerpeach

New member
We see this all over the media now, and coming from politicians; it seems to be everywhere. People talking about the evils of AI, how it's going to take over the world, and the dangers it poses.

So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing.

It's nothing new.

Computers are now getting more powerful, and that is essentially it.

The fact that we have Congress talking about and warning about AI technology is them simply saying they want to halt technological advancement while other nations like China and Russia push forward to make the technology better.

There is absolutely no danger in having better computers. If there were, we wouldn't have advanced to where we are, and there is certainly no danger of a more powerful processor taking over the world.

It is simply the invention of better technology, just as toasters get better and cars get better.
 
We see this all over the media now, and coming from politicians; it seems to be everywhere. People talking about the evils of AI, how it's going to take over the world, and the dangers it poses.

So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing.

It's nothing new.

Computers are now getting more powerful, and that is essentially it.

The fact that we have Congress talking about and warning about AI technology is them simply saying they want to halt technological advancement while other nations like China and Russia push forward to make the technology better.

There is absolutely no danger in having better computers. If there were, we wouldn't have advanced to where we are, and there is certainly no danger of a more powerful processor taking over the world.

It is simply the invention of better technology, just as toasters get better and cars get better.

That's not what you were saying earlier...sockpuppet.
 
There are different categories of AI. It's all still theoretical right now, but the two main categories are “hard AI” and “soft AI.” Soft AI is what we currently have, which is basically unthinking machines that make work easier for us. Hard AI, which is a machine that is self-conscious and thinking, is what some enthusiasts are ultimately aiming for.

The first two problems with hard AI are moral dilemmas. If we somehow stumbled into creating it without realizing it, we might unintentionally cause such a being suffering. Also, if we do create a truly self-conscious being, do we have the right to force it to perform labor for us? Would that not be comparable to slavery?

A third problem with hard AI is that a truly self-conscious being might decide that our existence is harmful to its existence. Humans, whose brains work at the speed of chemical reactions, could never outsmart a being that essentially thinks at the speed of light. A being like this might secretly plot a way of destroying us. Or maybe such a being would become so smart so quickly that its destruction of humanity would be as free of malevolence as our killing of, say, a pregnant mosquito in the summer, or our callous destruction of entire ecosystems for the sake of harvesting raw materials. (E.g., an AI might decide to destroy the atmosphere simply because oxygen corrodes its metal hardware, and would view our deaths as an unfortunate but ultimately irrelevant consequence.)
 
...

So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing.

It's nothing new. ...

There is absolutely no danger in having better computers. If there were, we wouldn't have advanced to where we are, and there is certainly no danger of a more powerful processor taking over the world.

It is simply the invention of better technology, just as toasters get better and cars get better.

The above is demonstrably false.

AI represents a NEW advancement, not the SAME improvements that happened in computing pre-AI.

Before AI, if a program was written, or any computer action taken, it was written or controlled line by line in code a HUMAN wrote and understood. Humans controlled each and every line, the computer itself did NOTHING until a human started the program, and the computer NEVER veered from the code the human wrote. Zero deviation, and thus total understanding and control of outcomes.

Today's AI only needs a top-down OBJECTIVE, and it can figure out on its own how to write the code that will let it accomplish that objective. It can and DOES do so in ways no human can read or understand, because these systems can and DO make up a unique language for the task that is far more efficient than anything humans can write, and that we cannot read.

What that means, in one very limited and small example, is that an AI can be given the top-down mission to 'figure out and master chess,' and without a single line of human-written code describing how the game works or how the pieces move, it can simply monitor millions of games and figure that out itself. It can identify the goal (checkmate) and become a master at it, all with no human involvement other than the initial top-down task.
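A toy sketch of that idea, under stated assumptions: the program below is told nothing about strategy, only that taking the last stone wins, and it works out the winning play purely from the outcomes of games it plays against itself. Nim stands in for chess so it fits in a few lines; every name in it is illustrative, not taken from any real system.

# A toy sketch, assuming nothing about the game's strategy: the agent
# gets only a win/lose signal and improves by playing against itself.
import random
from collections import defaultdict

PILE = 10                # starting stones; each turn take 1, 2, or 3
ACTIONS = (1, 2, 3)
Q = defaultdict(float)   # Q[(stones_left, action)] -> learned value

def play_and_learn(episodes=50_000, alpha=0.1, epsilon=0.1):
    for _ in range(episodes):
        stones, history = PILE, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if random.random() < epsilon:
                a = random.choice(legal)                      # explore
            else:
                a = max(legal, key=lambda x: Q[(stones, x)])  # exploit
            history.append((stones, a))
            stones -= a
        # Whoever took the last stone wins (+1); credit alternates
        # back up through the moves, so the loser's moves get -1.
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += alpha * (reward - Q[(state, action)])
            reward = -reward

play_and_learn()
# The agent rediscovers the classic Nim strategy with no human hints:
# leave your opponent a multiple of 4 stones whenever you can.
for stones in range(1, PILE + 1):
    best = max((a for a in ACTIONS if a <= stones),
               key=lambda a: Q[(stones, a)])
    print(stones, "stones -> take", best)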


Take that to another realm, where a bad actor in another country gives a top-down directive to an AI to 'shut down the power grid of XYZ country' and then leaves the AI to game it out and figure out how to get past all the safeguards, passwords, and programming that currently block that. We would have cumbersome, slow, human-written legacy programs, with no AI in the systems being attacked, up against new self-writing AI programs trying to checkmate the power grid as if it were simply a game.

THAT is the big difference, and one of the POTENTIAL big risks as AI continues to advance. We are already at the stage where it can easily master games, and no human can compete with the way it does so, in game play or in program efficiency.

Even today, most AI programmers recognize the danger and either prohibit the AI from writing code a human cannot read or understand, or require the equivalent of a translation database, so a human can see what the code would look like written in a human-readable language. But there is no reason to believe bad actors in foreign jurisdictions would ever impose such limits.

The only defense against a bad-actor AI unleashed in the future with no restrictions may be AI created by the good guys and unleashed with no restrictions. A chess game where the bad actor's AI is trying to win by crashing the power grid, or launching the nukes, and is being countered in the game by our unleashed AI trying to win by stopping it. Code versus code, and see who wins.

That is the danger of AI as it advances. Not today, but seemingly not too far off either.
 
The above is demonstrably false.

AI represents a NEW advancement, not the SAME improvements that happened in computing pre-AI.

Before AI, if a program was written, or any computer action taken, it was written or controlled line by line in code a HUMAN wrote and understood. Humans controlled each and every line, the computer itself did NOTHING until a human started the program, and the computer NEVER veered from the code the human wrote. Zero deviation, and thus total understanding and control of outcomes.

Today's AI only needs a top-down OBJECTIVE, and it can figure out on its own how to write the code that will let it accomplish that objective. It can and DOES do so in ways no human can read or understand, because these systems can and DO make up a unique language for the task that is far more efficient than anything humans can write, and that we cannot read.

What that means, in one very limited and small example, is that an AI can be given the top-down mission to 'figure out and master chess,' and without a single line of human-written code describing how the game works or how the pieces move, it can simply monitor millions of games and figure that out itself. It can identify the goal (checkmate) and become a master at it, all with no human involvement other than the initial top-down task.

Take that to another realm, where a bad actor in another country gives a top-down directive to an AI to 'shut down the power grid of XYZ country' and then leaves the AI to game it out and figure out how to get past all the safeguards, passwords, and programming that currently block that. We would have cumbersome, slow, human-written legacy programs, with no AI in the systems being attacked, up against new self-writing AI programs trying to checkmate the power grid as if it were simply a game.

THAT is the big difference, and one of the POTENTIAL big risks as AI continues to advance. We are already at the stage where it can easily master games, and no human can compete with the way it does so, in game play or in program efficiency.

Even today, most AI programmers recognize the danger and either prohibit the AI from writing code a human cannot read or understand, or require the equivalent of a translation database, so a human can see what the code would look like written in a human-readable language. But there is no reason to believe bad actors in foreign jurisdictions would ever impose such limits.

The only defense against a bad-actor AI unleashed in the future with no restrictions may be AI created by the good guys and unleashed with no restrictions. A chess game where the bad actor's AI is trying to win by crashing the power grid, or launching the nukes, and is being countered in the game by our unleashed AI trying to win by stopping it. Code versus code, and see who wins.

That is the danger of AI as it advances. Not today, but seemingly not too far off either.

You seem to forget Big Blue, the computer that beat the world's best chess player by learning his moves.

That was in the 70s, I believe.

It's still the same thing.

We are not at the point of having learning AI yet; when we are, that will be a different thing. Today they are still controlled by code.
 
You seem to forget Big Blue, the computer that beat the world's best chess player by learning his moves.

That was in the 70s, I believe.

It's still the same thing.

We are not at the point of having learning AI yet; when we are, that will be a different thing. Today they are still controlled by code.

Deep Blue, not Big Blue. FLOL.

Again, Marjorie Greene, you continue to speak on topics you do not understand.

And I did not forget it. I am including all AI, Deep Blue included, and comparing it against all programming that is NOT AI.

Your post was saying there is no real difference between the AI and non-AI eras, their programs, and the risk they pose, and that is factually wrong, as I go through.

Deep Blue is very early, very basic AI. We have developed well past that, but we are still very early in AI programming. And we are now in an acceleration phase, because AI, not humans as before, is starting to program the next generations of AI, and that is far more efficient, gets better results, and is also more potentially dangerous.
 
We see this all over the media now, and coming from politicians; it seems to be everywhere. People talking about the evils of AI, how it's going to take over the world, and the dangers it poses.

So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing.

It's nothing new.

Computers are now getting more powerful, and that is essentially it.

The fact that we have Congress talking about and warning about AI technology is them simply saying they want to halt technological advancement while other nations like China and Russia push forward to make the technology better.

There is absolutely no danger in having better computers. If there were, we wouldn't have advanced to where we are, and there is certainly no danger of a more powerful processor taking over the world.

It is simply the invention of better technology, just as toasters get better and cars get better.

They do? When I was a kid we had the same toaster for 20+ years. Since I've been married, 30+ years, we've had at least 7 or 8.

I get your point, but like all "improvements," especially technological ones, it's how they are used that concerns people. I'm not entirely confident that someone won't use it for nefarious purposes. I think that's what people are responding to.
 
Deep Blue, not Big Blue. FLOL.

Again, Marjorie Greene, you continue to speak on topics you do not understand.

And I did not forget it. I am including all AI, Deep Blue included, and comparing it against all programming that is NOT AI.

Your post was saying there is no real difference between the AI and non-AI eras, their programs, and the risk they pose, and that is factually wrong, as I go through.

Deep Blue is very early, very basic AI. We have developed well past that, but we are still very early in AI programming. And we are now in an acceleration phase, because AI, not humans as before, is starting to program the next generations of AI, and that is far more efficient, gets better results, and is also more potentially dangerous.

No, the point of this thread was to show that all AI is the same at this point; there is no difference, just better capability.

AI has not grown out of its original function, it has just gotten faster and more powerful.

There is no pre-AI and post-AI; we have not invented that technology yet.

Learning programs have always been around.

Someday someone will invent a program that allows a computer to think for itself, but we don't have that yet.

People are mistaking advanced technology for something it isn't yet; it's just more powerful.
 
...
We are not at the point of having learning AI yet; when we are, that will be a different thing. Today they are still controlled by code.

And let's be very clear about what you say in the above post ^^^...

What you are saying is that AI MAY evolve in the FUTURE to be very dangerous, but it is not there yet.

That is NOT what you said in the OP, which was this...


TinkerDumbbell said:
...So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing.

It's nothing new....


...AI is not the SAME, as you now point out. It has the potential to evolve, as you now acknowledge, into something more dangerous as its self-learning advances.

That is EXACTLY what I said, and you have now adopted it as your point, when it was my point I corrected you with.

I corrected and educated you. You are welcome.
 
And let's be very clear about what you say in the above post ^^^...

What you are saying is that AI MAY evolve in the FUTURE to be very dangerous, but it is not there yet.

That is NOT what you said in the OP, which was this...





...AI is not the SAME, as you now point out. It has the potential to evolve, as you now acknowledge, into something more dangerous as its self-learning advances.

That is EXACTLY what I said, and you have now adopted it as your point, when it was my point I corrected you with.

I corrected and educated you. You are welcome.

It doesn't evolve on its own; it's an increase in technology.

In the future it may have the power to improve on its own.

Making a faster microchip is not AI learning anything.
 
We see this all over the media now, and coming from politicians; it seems to be everywhere. People talking about the evils of AI, how it's going to take over the world, and the dangers it poses.

So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing.

It's nothing new.

Computers are now getting more powerful, and that is essentially it.

The fact that we have Congress talking about and warning about AI technology is them simply saying they want to halt technological advancement while other nations like China and Russia push forward to make the technology better.

There is absolutely no danger in having better computers. If there were, we wouldn't have advanced to where we are, and there is certainly no danger of a more powerful processor taking over the world.

It is simply the invention of better technology, just as toasters get better and cars get better.

In one sense, I agree with you. The fear mongering is irrational.

On the other hand, all commerce can be regulated.
 
There are different categories of AI. It's all still theoretical right now, but the two main categories are “hard AI” and “soft AI.” Soft AI is what we currently have, which is basically unthinking machines that make work easier for us. Hard AI, which is a machine that is self-conscious and thinking, is what some enthusiasts are ultimately aiming for.

The first two problems with hard AI are moral dilemmas. If we somehow stumbled into creating it without realizing it, we might unintentionally cause such a being suffering. Also, if we do create a truly self-conscious being, do we have the right to force it to perform labor for us? Would that not be comparable to slavery?

A third problem with hard AI is that a truly self-conscious being might decide that our existence is harmful to its existence. Humans, whose brains work at the speed of chemical reactions, could never outsmart a being that essentially thinks at the speed of light. A being like this might secretly plot a way of destroying us. Or maybe such a being would become so smart so quickly that its destruction of humanity would be as free of malevolence as our killing of, say, a pregnant mosquito in the summer, or our callous destruction of entire ecosystems for the sake of harvesting raw materials. (E.g., an AI might decide to destroy the atmosphere simply because oxygen corrodes its metal hardware, and would view our deaths as an unfortunate but ultimately irrelevant consequence.)

The whole notion of self-consciousness is exactly the problem AI solved: there is no need for self-consciousness.
 
There are different categories of AI. It's all still theoretical right now, but the two main categories are “hard AI” and “soft AI.” Soft AI is what we currently have, which is basically unthinking machines that make work easier for us. Hard AI, which is a machine that is self-conscious and thinking, is what some enthusiasts are ultimately aiming for.

The first two problems with hard AI are moral dilemmas. If we somehow stumbled into creating it without realizing it, we might unintentionally cause such a being suffering. Also, if we do create a truly self-conscious being, do we have the right to force it to perform labor for us? Would that not be comparable to slavery?

Anywhere from "not necessarily" to a solid "no." Are animals self-conscious? They are definitely self-aware. We keep them as pets. Hell, we eat them. So, you create a computer program that has self-awareness. What's the difference between it and, say, getting two horses to pull a plow?

A third problem with hard AI is that a truly self-conscious being might decide that our existence is harmful to its existence. Humans, whose brains work at the speed of chemical reactions, could never outsmart a being that essentially thinks at the speed of light. A being like this might secretly plot a way of destroying us. Or maybe such a being would become so smart so quickly that its destruction of humanity would be as free of malevolence as our killing of, say, a pregnant mosquito in the summer, or our callous destruction of entire ecosystems for the sake of harvesting raw materials. (E.g., an AI might decide to destroy the atmosphere simply because oxygen corrodes its metal hardware, and would view our deaths as an unfortunate but ultimately irrelevant consequence.)

For an AI to decide we are harmful to its existence, it would have to have the means to sustain itself without us. That means it can access and control the energy it needs to exist, maintain its own hardware, and so on, without our assistance. Only an insane or self-destructive AI would consider a course of action that kills itself off in the process. Using your example of the destruction of the atmosphere: the AI would first need a way to take care of its metal hardware without us. Destroying the atmosphere to prevent corrosion would only make sense if it knew it didn't need us for its survival in some other way; otherwise, destroying the atmosphere means the power production the AI needs to survive ends, because there are no humans left to maintain and operate it.

You can see that there's far more to this than the simplistic notion of A + B = C.
 
It doesn't evolve on its own;

it's an increase in technology.

In the future it may have the power to improve on its own.

Making a faster microchip is not AI learning anything.

AI absolutely has evolved on its own already.

So stop being stupid and making statements as if you know what you are talking about.

This is not the only instance of that, but it was a prominent one that made a lot of news.

An Artificial Intelligence Developed Its Own Non-Human Language
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.

Humans simply gave the AI a task, and the AI, ON ITS OWN, decided to create its own new computer language that was far more efficient for completing the task.

THAT HAPPENED.

If this CONTINUES as AI expands, with AI simply making its own decisions about what is most efficient and implementing them, that could represent real risk to humans, because 'efficient to complete a task' and 'safe for humans' do not always mean the same outcome.

The AI could complete its task flawlessly (checkmate) and yet cost lots of lives along the way, without considering the loss of life a negative, because the goal was simply the most efficient way to get to checkmate.
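For a feel of how agents can settle on a code no one gave them, here is a minimal sketch in the spirit of a Lewis signaling game, which is an assumption standing in for whatever Facebook's bots actually did: two agents are rewarded only for a successful handoff, and which symbol ends up meaning which item is something they converge on themselves, differently on every run.

# A minimal sketch: two agents invent a shared code from reward alone.
# All items and symbols below are made up for illustration.
import random

ITEMS = ["ball", "hat", "book"]
SYMBOLS = ["zx", "qk", "fp"]

# Preference weights each agent adjusts; start uniform.
send = {i: {s: 1.0 for s in SYMBOLS} for i in ITEMS}   # item -> symbol
recv = {s: {i: 1.0 for i in ITEMS} for s in SYMBOLS}   # symbol -> item

def sample(weights):
    keys = list(weights)
    return random.choices(keys, [weights[k] for k in keys])[0]

for _ in range(20_000):
    target = random.choice(ITEMS)
    symbol = sample(send[target])   # the sender picks a "word"
    guess = sample(recv[symbol])    # the receiver interprets it
    if guess == target:             # shared reward reinforces both sides
        send[target][symbol] += 0.5
        recv[symbol][guess] += 0.5

# Print the emergent "dictionary": arbitrary to us, useful to them.
# (Being a sketch, a run can occasionally settle on an ambiguous code.)
for item in ITEMS:
    print(item, "->", max(send[item], key=send[item].get))

The point is that the emergent mapping is a by-product of the shared reward, not something any human wrote down.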
 
I want to say this because it needs to be said, Tink.


There is nothing forcing you to be so stupid in every topic you engage in. In a topic like this, instead of making every statement as undeniable settled fact, you could pose them as questions with your opinion attached. Saying "I don't think AI is at the point of XYZ" is very different from saying "AI is NOT at the point of XYZ."


And that is the flaw you and Marjorie Greene make constantly. You speak as if you come from a place of knowledge and authority, when you are generally the most stupid people in any room, and you say stupid things to people who know better.
 
AI absolutely has evolved on its own already.

So stop being stupid and making statements as if you know what you are talking about.

This is not the only instance of that, but it was a prominent one that made a lot of news.



Humans simply gave the AI a task, and the AI, ON ITS OWN, decided to create its own new computer language that was far more efficient for completing the task.

THAT HAPPENED.

If this CONTINUES as AI expands, with AI simply making its own decisions about what is most efficient and implementing them, that could represent real risk to humans, because 'efficient to complete a task' and 'safe for humans' do not always mean the same outcome.

The AI could complete its task flawlessly (checkmate) and yet cost lots of lives along the way, without considering the loss of life a negative, because the goal was simply the most efficient way to get to checkmate.

Those machines were trained (programmed) to do that to test a learning program.

They did not do it on their own.

The training program led them to construct a way of communicating that humans couldn't understand, but that was a known risk of the program.

They did nothing that was outside of their original programming.
 
Not sure what commerce has to do with it, but I agree with you.

Perhaps I missed your point.

There are recent articles about AI lying about a person. I can find them later.

"While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background.

“I’ve never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that’s happened,” Ms. Schaake said in an interview.

https://www.nytimes.com/2023/08/03/business/media/ai-defamation-lies-accuracy.html
 
There are recent articles about AI lying about a person. I can find them later.

"While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background.

“I’ve never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that’s happened,” Ms. Schaake said in an interview.

https://www.nytimes.com/2023/08/03/business/media/ai-defamation-lies-accuracy.html

It is still operating within the parameters of its programming.

It is not operating outside of them.

Now, if they had programmed it not to have the ability to lie and it lied anyway, that would be different.
 
Those machines were trained (programmed) to do that to test a learning program.

They did not do it on their own.

The training program led them to construct a way of communicating that humans couldn't understand, but that was a known risk of the program.

They did nothing that was outside of their original programming.

Stop being stupid and read before you speak.


NO ONE asked for or planned that the AI would just invent its own language because it found human-created computer languages too inefficient and slow.

That was an evolution that happened without any human input.

If you are going to argue that we created the original AI, which then created the NEW language, and thus it does not count, then you are NOW arguing that any and all AI evolutions, even ones where we lose full control, do not count, since they all developed from the core AI we created.

That is not what you posited or said in the OP, which was stupid and wrong.

When AI first taught other AI to play chess with no human intervention, what it learned was that the goal of the game was to get to checkmate as quickly as possible, to be as efficient as possible. Luckily, getting to checkmate quickly and efficiently poses no threats to humans that we have to consider in just letting the AI do what it wants to achieve the goal.


When AI first created its own language, instead of using any of the human-created languages, it was because it wanted to be as efficient as possible. Creating its own language to solve problems raised BIG concerns among the human programmers, because they could not see what it was considering or doing to get to the outcome. They were able to give the AI new instructions only AFTER THE FACT: provide us a copy of your computer-created language, written out in a human computer language, so we can audit it.


Take that further into the future, and anyone NOT an idiot (so not you) can see that if you have AI managing the power grid, nuclear weapons, all defensive or offensive war capabilities, or anything else, and those programs, like the one that wrote its own language, ACT FIRST in simply doing what they think is most efficient, while NOT considering the impact on human life, that could be disastrous.

And again, we ONLY learned the AI had created its own language after the fact. Its task was already completed by the time we figured that out. In a more sensitive area, finding out AFTER the fact could mean we see the deaths first and only learn why afterward.
 