The insanity of government-run grocery stores:

  • “DEI isn't a market, Kewpie.”
    • Fact check: True in a literal sense. DEI (Diversity, Equity, and Inclusion) is a framework or organizational strategy, not a marketplace or economic market.
  • “Communism doesn’t work, Kewpie.”
    • Fact check: This is an opinion presented as a general statement. Historically, communist states have faced significant economic and political challenges, but “doesn’t work” is too broad to be an objective fact—it depends on how “works” is defined.
  • “Racism. DEI hiring doesn’t work either.”
    • Fact check: Two claims here:
      • “Racism” — as a standalone word, true as a social phenomenon, but context is unclear.
      • “DEI hiring doesn’t work either” — This is opinion or a disputed claim. Evidence on DEI programs shows mixed outcomes: some improve representation and equity, others are criticized for inefficiency or unintended consequences. It is not universally true or false.
  • “Racism. DEI hiring doesn’t work, Kewpie.”
    • Same as above; repetition.
The RW hatred of DEI is simply code for them disliking women and brown people.
 
Where did you find this Leftist oriented AI you're using? Got the link for us?
 
I wonder... what would you get if you ran the results you got back through that AI, say, three times? That is: get a result, run that result through the AI, then repeat, three or four times in total. Might be hilarious.
Someone once took the dialogue from The Elder Scrolls: Skyrim, washed it through AI about 20 times, then re-inserted the changed text into the game. The results are hilarious!
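
For anyone who wants to try it, here's a minimal sketch of that feedback loop in Python, assuming the OpenAI Python SDK and an API key in the environment; the model name, the rewrite prompt, and the seed text are placeholders, not anything specific to whatever AI is being used in this thread.

```python
# Minimal sketch of the "run the result back through the AI" loop.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def rephrase(text: str) -> str:
    """Ask the model to restate the text in its own words."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{
            "role": "user",
            "content": f"Rewrite this passage in your own words:\n\n{text}",
        }],
    )
    return resp.choices[0].message.content

text = "The insanity of government-run grocery stores."  # seed text
for i in range(4):  # the three or four passes suggested above
    text = rephrase(text)
    print(f"Pass {i + 1}: {text}")
```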
 
I do not see Into the Night's stream of spam posts, so thanks for refuting him.

Fact is, DEI arose, like it or not, to combat and try to balance a far more insidious EXISTING dynamic: for MOST of the US's existence, neither women nor PoC could even have an application reviewed for 99% of managerial jobs or above.

What that resulted in is that the ENTIRE ranks of the almost exclusively white males who got those jobs were not qualified, if we use the SAME criteria that lets magats say anyone who gets a job through DEI is not qualified. Either BOTH groups are not qualified, or both CAN BE qualified.

That is not to say things are perfect in either case, but allowing the all-white Euro male to use what Terry rightly points out happens (nepotism, favoritism) to continue the abuse, without offering any programs to try to balance it, would be wrong.
 
Pretending you put forth coherent and logical arguments is the biggest joke when the AI, with citations, is putting forth better arguments than you ever do.
Except the AI barely uses citations at all. And in some cases it is blatantly wrong.

The AI clearly doesn't understand hyperbole. It takes all statements literally. It doesn't get information embedded in pictures or videos. In summary, it is often simply wrong on the level of a 6th grade education.
 
Except Grimmy doesn't, and neither does the Google AI. It foists its opinion without reference to anything and is more often wrong or misleading in its analysis.

1) “AI foists its opinion without reference to anything”


This is partly misleading.


  • Most modern AI systems (including Google’s AI summaries and chatbots like ChatGPT-style models) are not opinion engines.
  • They generate responses based on patterns in training data and sometimes retrieved sources (depending on the system).
  • Some systems do provide citations or links (especially search-integrated ones), but those citations can be incomplete, mismatched, or occasionally incorrect (a toy sketch of this retrieval-plus-citations pattern follows this list).
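
To make the "retrieved sources" point concrete, here's a toy sketch of the retrieval-plus-citations pattern. The corpus, the word-overlap scoring, and the prompt format are all invented for illustration; real search-integrated systems use proper indexes and rankers, not this.

```python
# Toy illustration of how a search-integrated assistant can attach
# citations: retrieve passages, number them, and build a prompt that
# asks the model to cite by number. Everything here is invented.

CORPUS = {
    "doc1": "DEI stands for Diversity, Equity, and Inclusion.",
    "doc2": "AI Overviews summarize web results at the top of a search page.",
    "doc3": "Large language models predict text from patterns in training data.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank docs by naive word overlap with the query (stand-in for a real index)."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend numbered sources so the model's answer can point back at them."""
    sources = retrieve(query)
    numbered = "\n".join(
        f"[{i + 1}] ({doc_id}) {text}" for i, (doc_id, text) in enumerate(sources)
    )
    return (
        f"Sources:\n{numbered}\n\n"
        f"Answer using only these sources and cite them by number.\nQ: {query}"
    )

print(build_prompt("What does DEI stand for?"))
# The assembled prompt would then go to the language model; whether the
# citations it emits actually match the sources is exactly where the
# incomplete/mismatched citations mentioned above creep in.
```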

2) “More often wrong or misleading in its analysis”


As a general claim, this is not supported by the evidence.


  • Evaluations of systems like Google’s AI Overviews and large language models show mixed accuracy, not dominant failure.
  • Error rates exist (often in the ~5–15% range depending on task and study), but that does not mean “more often wrong than right.”
  • Performance varies heavily by topic:
    • Better: summarizing general knowledge, explanations
    • Worse: real-time facts, niche claims, ambiguous sources

 

1. “AI clearly doesn't understand hyperbole. It takes all statements literally.”


❌ False


AI systems do not “take everything literally” in a simple rule-based way.


  • Large language models are trained on vast amounts of human language, including sarcasm, metaphor, exaggeration, and hyperbole.
  • They often can recognize hyperbole from context (e.g., “I’ve told you a million times” is usually interpreted as emphasis, not literal counting).
  • However, they are not perfectly reliable and can sometimes misread intent, especially when context is unclear.

So the accurate version is:


AI can often interpret hyperbole correctly, but it can also misinterpret it depending on context and phrasing.
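
Anyone who wants to check the hyperbole claim rather than argue it can run a one-prompt probe. This sketch again assumes the OpenAI Python SDK and an API key, it reuses the "million times" example from the list above, and a single probe of a single model obviously settles nothing on its own.

```python
# Quick probe of the "takes everything literally" claim. Assumes the
# OpenAI Python SDK and an OPENAI_API_KEY; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{
        "role": "user",
        "content": (
            'In the sentence "I\'ve told you a million times", is the speaker '
            "reporting a literal count? Answer yes or no, then explain briefly."
        ),
    }],
)
print(resp.choices[0].message.content)
```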



2. “It doesn't get information embedded in pictures or videos.”


❌ Outdated / misleading


This is only true for text-only models, not modern multimodal AI.


  • Many current systems (e.g., GPT-4o-class models, Gemini, Claude with vision) can:
    • interpret images
    • describe scenes
    • read text in images
    • reason about visual content

Research does show limitations in visual reasoning, especially with abstract images, fine detail, or complex reasoning chains.
But “doesn’t get information from images/videos” is simply not true anymore (see the sketch below).
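
As a concrete example of the image side, here's a sketch of sending an image to a vision-capable model through the OpenAI Python SDK; the model name and image URL are placeholders, and other multimodal systems (Gemini, Claude with vision) have their own equivalent APIs.

```python
# Sketch of asking a multimodal model about an image. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY; model and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this image and read any text that appears in it."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
        ],
    }],
)
print(resp.choices[0].message.content)
```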




3. “It is often simply wrong on the level of a 6th grade education”


❌ Not supported


This comparison sounds intuitive, but it doesn't match the evidence.


  • AI performance is highly variable by task
    • Often strong: summarization, explanation, general knowledge
    • Weaknesses: hallucinations, niche facts, ambiguous prompts, complex reasoning chains
  • Studies consistently show mixed accuracy, not systematic low-level failure.

Importantly:


  • On many standardized tasks, modern models perform at or above average human-level benchmarks
  • On others, they can fail in very basic ways

So it is not accurate to compare it broadly to a “6th grade level” as a general rule.




Bottom line


The claim is wrong as a general statement.
 