> Don’t keep putting your sack in the blender when you juice

Juicing shrinks your sack.
> Don’t keep putting your sack in the blender when you juice

Begrudging +1 for you, b word.
> AI is reliable now. You aren't seriously one of those boomers who's afraid of technology and insists on writing a paper check to pay your phone bill, are you? AI generally does a better, more thorough, and much quicker review of extensive sources to produce a reliable response. You're thumbing through card catalogs in the MAGA idiocy section.

No, AI is not reliable. You can peruse any number of articles by computer experts showing it isn't. For the most part, AI accuracy and reliability on a given topic varies wildly, from around 30% to 80%. It often uses unreliable sources, and can "hallucinate" answers out of thin air.
> Begrudging +1 for you, b word.

Come on, that was comedy gold.
> Begrudging +1 for you, b word.

New nickname for you: ball shredder.
> Don’t keep putting your sack in the blender when you juice

A thing of beauty.
> Come on, that was comedy gold.

I did.
You served it the fuck up for me
It almost makes me like you a little
> I heard that.

Yea, I'm talkin' about you and it ain't anything good!
> Smoke a bowl time

I'm jelly.
I deserve a reward
> Yea, I'm talkin' about you and it ain't anything good!

I'm plotting revenge.
> I don't have any "sock accounts." I see no reason to have one or more. I don't change usernames either, another meaningless stupidity many here carry out regularly.

I'll take your word for it, although I am entirely dubious that two people on this board share your extreme opinions and a similar writing style. In any case, the "defending the TACO administration at the expense of personal credibility" positions are pretty well isolated to a certain group of people.
> No, AI is not reliable. You can peruse any number of articles by computer experts showing it isn't. For the most part, AI accuracy and reliability varies wildly, from around 30% to 80%. It often uses unreliable sources, and can "hallucinate" answers out of thin air.
> There is demonstrated programming bias in most AI systems as well. Google AI shows a marked tendency toward the political Left in its answers, for example.
> Yes, it is quicker than doing it yourself, and for many people it has become a shortcut or crutch for their own inability to do research on something, particularly on the fly.
> Relying on it is the idiocy.

AI can make mistakes. Quality varies. It should not replace research or critical thinking.
> AI can make mistakes. Quality varies. It should not replace research or critical thinking.

My view is that AI, like any other op-ed "fact checker" out there, shouldn't be taken at its word. Its opinion, and it is opinion, is no better or worse than any other. If I toss some hyperbole and exaggeration into the opinions I foist, I expect the reader to recognize those for what they are rather than instantly take them as intended truths. Many times in human conversation, such use of language is a way to emphasize something or bring attention to a specific point. AI doesn't handle the nuances of human conversation well. It responds like a small child would, taking everything literally.
Accuracy depends on the model, the task, and the prompt quality. Your 30-80% range is suspiciously vague; there isn't a single global "AI accuracy percentage."
AI models don't browse the web by default and pick random sources; they learn patterns from large datasets. They don't always know when they don't know, and they may generate plausible-but-wrong answers, but that's a design limitation, not "using bad sources."
Bias is a real research topic, but the jump from "bias exists" to "therefore it's unreliable" is a leap.
"Relying on it is the idiocy" is a big exaggeration. People rely on tools all the time. Blind trust is bad, but total dismissal is equally bad.
Your strangely forceful/emotional position seems to be driven more by skepticism and frustration than balanced analysis. AI is not fully reliable, but it's extremely useful when used critically. It should complement human thinking, not replace it.
> My view is that AI, like any other op-ed "fact checker," shouldn't be taken at its word. [...] It responds like a small child would, taking everything literally.

You’re treating factual correction as opinion, but that’s not how evidence works.
At least you are now admitting AI has issues and shouldn't simply be taken as accurate and correct at face value.
> No, AI is not reliable. [...] Relying on it is the idiocy.

You’re mixing together several different issues (accuracy, hallucinations, and bias) and treating them as if they invalidate any factual correction.
> You’re treating factual correction as opinion, but that’s not how evidence works. If someone makes a verifiable claim about history, law, or classification, it can be checked against sources. That’s not opinion; that’s verification.

The difference is that Google AI takes anything less than 100% as "unverified." Except Google AI often uses no facts in its answers.

> Except Google AI often uses no facts in its answers.

Saying Google AI is imperfect doesn’t change whether the specific claims you made were accurate.
> He was doing what Fox "News" does -- pointing out only those parts that confirmed his bias and deliberately avoiding those parts that negated his narrative.

#1: pointing out only those parts that confirmed his bias and deliberately avoiding those parts that negated his narrative.
Doesn't it bother you that people listen to this rubbish and avoid immunizations because of it? Does it matter if a few kids die and others are permanently damaged because they weren't protected from preventable diseases?