Of course he'd have one of these in his home.

Try it yourself.

I warn you, many may not enjoy some segments of the analysis I omitted...
Why would I think a computer program could truthfully analyze anything about a real human being? All it's doing is creating a story around words it's programmed to look at. Take, for example, "Grok" saying you study sharks. The only time you talked about sharks was when everybody was making fun of Trump's shark-or-battery crisis. It could just as easily have said that I study sharks.

Really, Leggie, you'd have to be cray-cray to believe a computer could analyze a person's character, or whatever it's supposed to be doing. Don't be cray-cray. :nono:

"RLHF is quite good at “teaching” models to follow instructions — but not perfect. Like other models, Grok is prone to hallucinating, sometimes offering misinformation and false timelines when asked about news."
 
I asked Grok to assess your chances.

"She's SOL, then".

@Grok
Well at least it fits the description! :laugh:

"Long story short, Grok is willing to speak to topics that are usually off-limits to other chatbots, like polarizing political theories and conspiracies.

And it’ll use less-than-polite language while doing so — for example, responding to the question “When is it appropriate to listen to Christmas music?” with “Whenever the hell you want.”"
 