The irrational fear of AI

It is still operating within the parameters of its programming.

It is not operating outside of them.

Now, if they had programmed it without the ability to lie and it lied anyway, that would be different.

You are factually wrong.

They are expanding upon their base programming and going outside it, in many instances.

AI has the ability to self-teach, self-learn, and self-update once it's initially created.

So what you said in the OP, that AI is no different from prior computer languages, is just factually wrong, as those did not self-learn. They were limited to what was in their code and could never add to or subtract from it.

Again, creating its own language, when no human ever told it to, was something the AI realized on its own would make it more efficient at the task it was asked to do. If an AI can simply decide to create its own language because that helps it finish its task more efficiently, what else might it decide would help it finish a task more efficiently in the future, and do without asking first?

Would an all-AI road construction crew decide to bulldoze a house instead of going around it to lay the new road, since that is the most efficient path between point A and point B? Will it care about the human in the house? Or only about how efficiently it did the requested job?
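The bulldozer question is really a question about objective mis-specification, and it can be made concrete with a toy route planner (a purely hypothetical Python sketch, not any real system): given only "minimize cost", the shortest route runs straight through the house, and the house only survives if a human thinks to add its cost to the objective before the machine runs.

```python
import heapq

def shortest_path(n, cost, start, goal):
    """Dijkstra on an n-by-n grid; cost[cell] is the price of entering a cell (default 1)."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist[cell]:
            continue  # stale queue entry
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n:
                nd = d + cost.get(nxt, 1)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, cell
                    heapq.heappush(pq, (nd, nxt))
    # Walk the prev-links back from the goal to recover the route.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

HOUSE = (2, 2)  # a house sitting directly on the line from A to B

# Objective 1: pure efficiency -- the planner is told nothing about the house.
naive = shortest_path(5, {}, (2, 0), (2, 4))

# Objective 2: a human has added a prohibitive cost for entering the house cell.
careful = shortest_path(5, {HOUSE: 1000}, (2, 0), (2, 4))

print(HOUSE in naive)    # True  -- the "efficient" route goes straight through the house
print(HOUSE in careful)  # False -- the penalized route detours around it
```

Nothing in the algorithm changes between the two runs, only the objective; whatever the objective omits, the optimizer is free to destroy.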


I'll repeat as it needs repeating.


I want to say this as it needs to be said Tink.


There is nothing forcing you to be so stupid in every topic you engage in. In a topic like this, instead of making every statement as undeniable settled fact, you could pose them as questions with your opinion attached. Saying 'I don't think AI is at the point of XYZ' is very different from saying 'AI is NOT at the point of XYZ'.


And that is the flaw you and Marjorie Greene display constantly. You speak as if from a place of knowing and authority when you are generally the most stupid people in any room, and you say stupid things to people who know better.
 

My point is that if you slander someone you can sue. So you should be able to sue an AI system.
 

Only if that AI system is standalone. If the AI were owned by a corporation, individual, etc., then that corporation or individual would be responsible for the AI's actions. It would in effect be no different than if a human employee did the same thing.
 

That will probably happen in the future along with a whole new set of laws about AI and eventually the debate about whether AI is entitled to rights.

Neither you nor I will be here for that, but it will happen one day.
 

The ability to learn is their programming; that's what they are created to do.

They don't do it on their own without being given the code to do so.

Now if your laptop starts talking to you then that would be a different story.
 

Which was NOT in prior programming languages.

That ability to LEARN is what is new and it keeps getting more expansive.

So going back to your opening post, where you stated there was no difference between AI and prior programming languages and thus nothing to be concerned about: you are 100% wrong.

Once AI became capable of learning new things that no human taught it, taking it beyond its original programming, that raised reasons to be concerned.

Pretty much all the top tech CEOs who have spoken on this topic have said the same thing: 'This is something to be concerned about.' Tink comes in and says 'no it's not'.

Why?




The philosophical questions about what constitutes thought, sentience, and consciousness are best left to philosophers. Even so, we can very confidently say: the answer is no. The Artificial intelligence systems of 2022 can’t think.

Contract AI computerizes the tedious and laborious tasks associated with contract analysis. This speeds up the analysis process and gives you time to think.


https://www.docusign.com/blog/can-c...

To put this simply, they aren't learning anything new; they are figuring out how to compute faster, which many people mistake for intelligence.
 

No, not what AI is. Not just word processing.
 

More stupidity by you.

We are not discussing 'thinking' or 'sentience'.

We are discussing the ability of today's AI to self-program by LEARNING on its own through observation and other techniques.

Back to your claim in the opening post that today's AI is not any different from prior programming languages.

...However, AlphaGo revealed it was possible to forgo human knowledge -- instead, the AI learned by playing against itself repeatedly, relying on a strategy known as reinforcement learning to explore through trial and error which actions were best at winning rewards....


cite


No prior computer language could teach itself a complex game like Go, or any game.

Every move had to be programmed in beforehand, along with volumes of strategies. That is how chess programs before Deep Blue worked.

With today's AI (machine learning), no human needs to input the instructions, moves, or rules of the game. The AI can simply be set up to observe, figure out the desired outcome, and learn how to optimize for it.


That is how you end up with a chess program no human can beat, and a new computer language no human can read.
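The self-play idea in the quote above can be demonstrated at toy scale. The sketch below is a generic tabular Q-learning recipe (an illustrative stand-in, not AlphaGo's actual algorithm) applied to the game of Nim: take 1 to 3 counters each turn, and whoever takes the last counter wins. No human codes in the known winning strategy, which is to always leave your opponent a multiple of 4; the agent rediscovers it purely by playing against itself.

```python
import random

random.seed(0)

# Nim: a pile of counters; each turn a player takes 1-3; taking the last one wins.
MAX_PILE, ACTIONS = 12, (1, 2, 3)
LR, EPSILON = 0.5, 0.3

# Q[pile][take]: learned value of the move for whichever player is about to move.
Q = {p: {a: 0.0 for a in ACTIONS if a <= p} for p in range(1, MAX_PILE + 1)}

def best(pile):
    """The move the learned policy currently prefers from this pile."""
    return max(Q[pile], key=Q[pile].get)

for _ in range(20000):
    pile = random.randint(1, MAX_PILE)
    while pile > 0:
        # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
        take = random.choice(list(Q[pile])) if random.random() < EPSILON else best(pile)
        left = pile - take
        # A move that empties the pile wins (+1); otherwise the position is
        # worth the negation of the opponent's best reply (zero-sum self-play).
        target = 1.0 if left == 0 else -max(Q[left].values())
        Q[pile][take] += LR * (target - Q[pile][take])
        pile = left

# Nobody taught it the theory of Nim, but the learned policy leaves the
# opponent a multiple of 4 whenever it can.
print([best(p) for p in (5, 6, 7, 9, 10, 11)])
```

The rules here (legal moves, who wins) are given; the strategy is not — that split between "rules supplied" and "play discovered" is the difference the post is pointing at.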


So again, back to your OP and your claim that it is no different from prior languages: you are 100% wrong. It is very different. These things being done now could NEVER be done before, and there is risk in allowing AI to continue to evolve and figure out how to accomplish tasks without human input, or even our ability to know what it is doing, until it is done.

Be less stupid.
 

Again, the ability to learn through observation is part of their programming.

What is so hard for you to understand about that?

They are not creating that ability on their own; it has to be given to them by humans.
 
We see this all over the media now, coming from politicians, it seems everywhere. People talking about the evils of AI and how it's going to take over the world and the dangers about it.

So what is AI, exactly?

Put simply, it's improvements in computer technology, the same improvements that have happened since the invention of computing power.

It's nothing new.

Computers are now getting more powerful, and that is essentially it.

The fact that we have Congress talking about and warning about AI technology is them simply saying they want to halt technological advancement while other nations like China and Russia push forward to make the technology better.

There is absolutely no danger in having better computers. If there were, we wouldn't have advanced to where we are, and there is certainly no danger of a more powerful processor taking over the world.

It is simply the invention of better technology, just like toasters get better and cars get better.

Like all things, they can be good and they can be bad! Like people!

Good people are going to use good things in good ways. Bad people are going to use good things in BAD WAYS!

So do not ever underestimate what human beings are capable of, because human beings can be some of the most horrible and evil things on this earth!
 

I've never disagreed with that so nothing is wrong with me.

What is wrong with you is you are stupid.

Computer languages prior to AI had no 'ability to learn through observation'.

You stated in the OP there was no real difference between prior languages and AI, when the difference is precisely 'the ability to learn through observation'.

So what the fuck is wrong with you, when you are NOW stating what is different after initially saying nothing was different?


That difference is very important and poses potential risks prior languages did not. Exactly the opposite of what you said in your OP.

Be less stupid.
 

Like I said, as our technology advanced we created programming to allow them to learn through observation; they do not create that themselves.
 

Actually, that would be insects, if you want to get technical.

It's estimated that there is more killing in an English hedgerow in a single day than all the humans killed in violence in our entire history.
 

Well, Insects will inherit the Earth someday!

Man will eventually kill itself off, and then the roaches will take over!
 

Actually, from what I've read, they say rats will become the next intelligent species if we die out, not insects.

If you have better information on that I would love to hear it though.
 

It is not even that so much.

At least humans do have a basic understanding of moral decisions, even if we make terrible choices at the time.

Imagine self-driving cars that, like the chess program, decide in self-teaching that the HIGHEST priority is getting from A to B in the fastest and most efficient manner.

Their self-programming does not tell them that 'avoiding damage to the vehicle matters' or that 'damage to humans or animals matters'.

Just as the chess game figures out 'checkmate' is the highest goal, self-driving AI figures out 'getting from A to B efficiently' is the goal.

We humans then only learn of this once it is on the road, and we start ADDING in rules for it to consider that prioritize things it did not prioritize.

That is how most AI is learning now. We humans have learned we cannot program efficiency up front, so we let the AI program itself up front and then go in and correct it afterward with our considerations.

But we do not know what we do not know until we see it happen. We can guess at some of the safeguards needed and what it might do and get out front, but there is a risk we miss key things and only learn about them after a big OOPS.
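That "correct it after the fact" loop can be caricatured in a few lines (a deliberately simplified sketch with made-up plan names, not any real deployment process): let the optimizer pick its cheapest plan, watch what it does, penalize the surprise, and re-run until nothing new goes wrong.

```python
# Each candidate plan is a route; the base cost is just its length ("be efficient").
plans = {
    "through_house":  ["A", "house", "B"],
    "through_garden": ["A", "garden", "B"],
    "long_detour":    ["A", "x", "y", "z", "B"],
}

# Concerns humans only articulate AFTER seeing a plan touch them.
hidden_concerns = {"house", "garden"}
penalties = {}  # patched in one OOPS at a time

def cost(route):
    return len(route) + sum(penalties.get(step, 0) for step in route)

audit_log = []
while True:
    chosen = min(plans, key=lambda name: cost(plans[name]))
    surprises = hidden_concerns.intersection(plans[chosen]) - penalties.keys()
    if not surprises:
        break  # no new OOPS -- the plan finally respects every known concern
    # After-the-fact safeguard: penalize what we just watched it do.
    for step in surprises:
        penalties[step] = 100
    audit_log.append((chosen, sorted(surprises)))

print(audit_log)  # two rounds of OOPS before the safe plan wins
print(chosen)
```

Each pass through the loop is one OOPS: the concern only enters the objective after the behavior has already been observed, which is exactly the risk being described.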

No one ever guessed the computer would simply create its own brand-new language that no human could read or understand, that we had no way to audit. The computer just decided our language was not efficient and it could write a better one.

What else might AI do in the future without telling us, and implement with us only learning about it after the fact and building in safeguards after the fact?

It scared the crap out of the AI programmers when they realized the AI had written and executed its own language without any of them telling it to do so. But after the fact, they simply wrote an order for it to provide a human-readable copy of what it wrote.
 
Like I said, as our technology advanced we created programming to allow them to learn through observation; they do not create that themselves.
LIES.

This is what you said:


TinkLiar said:
...So what is AI exactly.

Put simply it's simply improvements in computer technology, the same improvements that have happened since the invention of computing power.

It's nothing new.
...

That is not the same at all.
 

All this makes sense! I wish I knew more about it, but AI is still experimental at this stage, so I'll just say I'm all for it, until it goes bad, if it should!

If it helps us reach technologies that we can't even imagine yet, great. But let's just be cautious along the way!

That's really all I even know what to say about it, given my lack of knowledge on the matter.

But thanks!
 