
Flippanarchy


Flippant Anarchism. A lighter take on social criticism with the aim of agitation.

Post humorous takes on capitalism and the states which prop it up. Memes, shitposting, screenshots of humorous good takes, discussions making fun of some reactionary online, it all works.

This community is anarchist-flavored. Reactionary takes won't be tolerated.

Don't take yourselves too seriously. Serious posts go to [email protected]

Rules


  1. If you post images with text, endeavour to provide the alt-text

  2. If the image is a crosspost from an OP, provide the source.

  3. Absolutely no right-wing jokes. This includes "Anarcho"-Capitalist concepts.

  4. Absolutely no redfash jokes. This includes anything that props up the capitalist ruling classes pretending to be communists.

  5. No bigotry whatsoever. See instance rules.

  6. This is an anarchist comm. You don't have to be an anarchist to post, but you should at least understand what anarchism actually is. We're not here to educate you.

  7. No shaming people for being anti-electoralism. This should be obvious from the above point but apparently we need to make it obvious to the turbolibs who can't control themselves. You have the rest of lemmy to moralize.


Join the Matrix room for some real-time discussion.

[–] [email protected] 18 points 17 hours ago (3 children)

Asking an LLM something is the equivalent of asking strangers on the internet and accepting non-serious answers as well

[–] [email protected] 22 points 17 hours ago* (last edited 17 hours ago) (1 children)

That's because that's what LLMs are trained on: random comments from people on the internet, including troll posts and jokes, which the LLM treats as factual most of the time.

Remember when Google trained their AI on Reddit comments and it put out incredibly stupid answers, like adding glue to your pizza sauce to keep the cheese from sliding off?

https://www.reddit.com/r/LinusTechTips/comments/1czj9rx/google_ai_gives_answers_they_find_on_reddit_with/

Or that time it suggested people eat a small rock every day because it had been fed an Onion article?

https://www.reddit.com/r/berkeley/comments/1d2z04c/this_is_what_happens_when_reddit_is_used_to_train/

The old saying "garbage in, garbage out" fits LLMs extremely well. Given the sheer volume of data being fed into these models, it's almost impossible to sanitize it all, and LLMs are nowhere near being able to discern jokes, trolling, or sarcasm.
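
To make the "almost impossible to sanitize" point concrete, here's a toy sketch of why naive filtering misses exactly this kind of thing. All the names and heuristics here are made up for illustration; no real pipeline works off three keywords:

```python
# Hypothetical sketch: a naive filter for scraped comments, the kind a
# training pipeline might use. It catches obvious junk but happily passes
# a deadpan, highly upvoted joke -- which is the whole problem.

JUNK_MARKERS = ("/s", "lol", "troll")  # crude sarcasm/joke signals

def looks_clean(comment: str, score: int) -> bool:
    """Keep comments that pass a few trivial heuristics."""
    text = comment.lower()
    if score < 5:  # drop low-voted comments
        return False
    return not any(marker in text for marker in JUNK_MARKERS)

comments = [
    ("You can also add some non-toxic glue to the sauce", 5000),
    ("asdfgh first!!!", 1),
    ("Use a cornstarch slurry to thicken a sauce", 40),
]

kept = [text for text, score in comments if looks_clean(text, score)]
print(kept)  # the upvoted glue joke sails right through
```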

Oh yeah, it also came out that some researchers used LLMs to post Reddit comments for an experiment. So yeah, LLMs are being fed other LLM content too. It's pretty much a human-centipede situation.

https://www.engadget.com/ai/researchers-secretly-experimented-on-reddit-users-with-ai-generated-comments-194328026.html

But yeah, I wouldn't trust these models for anything but the simplest of tasks, and even there I'd be pretty circumspect about what they give me.

[–] [email protected] 7 points 17 hours ago (1 children)

Do you subscribe to the idea that LLMs will degrade over time after recycling their own shit for several years, like a GIF/JPEG re-encoded for the umpteenth time?

[–] [email protected] 10 points 16 hours ago

Honestly? Yeah. The training data matters; that's why all these AI companies are hunting for human-generated data. Feeding them LLM output would most likely devolve into nonsense pretty fast.
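
As a toy illustration of that recycling effect (a made-up simulation, not any real training pipeline): fit a model to data, sample from the model, fit the next model only to those samples, and repeat. Even in this tiny Gaussian version, diversity tends to bleed out of the data:

```python
# Toy "model collapse" demo: each generation is trained only on samples
# from the previous generation's model. The "model" here is just a
# Gaussian fit, standing in for something far more complex.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: real, human-made "data"

for gen in range(20):
    samples = rng.normal(mu, sigma, size=50)   # the model generates data
    mu, sigma = samples.mean(), samples.std()  # next model fits only that
    print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# The exact trajectory depends on the seed, but the fitted std performs a
# biased random walk (np.std underestimates spread on small samples), so
# over many generations the distribution tends to narrow: the
# JPEG-re-encoded-for-the-umpteenth-time effect.
```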

[–] [email protected] 3 points 14 hours ago

I find it's decent for low-stakes programming questions, mostly because I can easily validate correctness just by running the code. Often it gets things wrong on the first try, and you have to go back to the conversation to fix the issue, or just fix it yourself.
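
For example, here's the kind of cheap validation I mean. `rle_encode` is a hypothetical stand-in for whatever snippet the model hands back; the point is that a handful of asserts is usually enough for a low-stakes answer:

```python
# Suppose the LLM returned this run-length encoder:
def rle_encode(s: str) -> str:
    """Run-length encode a string: 'aaab' -> 'a3b1'."""
    if not s:
        return ""
    out, run, count = [], s[0], 1
    for ch in s[1:]:
        if ch == run:
            count += 1
        else:
            out.append(f"{run}{count}")
            run, count = ch, 1
    out.append(f"{run}{count}")
    return "".join(out)

# Cheap validation: if any of these fail, it's back to the conversation
# (or just fix it yourself, which is often faster).
assert rle_encode("") == ""
assert rle_encode("aaab") == "a3b1"
assert rle_encode("abc") == "a1b1c1"
print("all checks passed")
```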

How people use it to deal with mental health or relationship issues boggles my mind, though.

[–] [email protected] 4 points 17 hours ago

All the information required is on Gineipedia! I would've done it myself, as I was doing previously, but I thought I'd expedite it. It really fails at the most basic of tasks...