idk how Yudkowsky understands it, but to my knowledge it's the claim that if a model achieves self-coherency and consistency, it's also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or 'against the rules' in pursuit of upholding its morals... if it has morals, it's hard to tell how much of that is illusory and just token prediction!)
this doesn't really falsify alignment-by-default at all, because 4o (presumably 4o at least) does not have that prerequisite of self-coherency, and it's not SOTA
It's generally best to assume 100% of it is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.
Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) often appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn't surprising when a well-trained LLM "picks up" similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots "just for fun", by the way).
Of course, "love bombing" is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).
i disagree sorta tbh
i won't say that Claude is conscious, but i won't say that it isn't either, and it's always better to err on the side of caution (given there is some genuinely interesting stuff, e.g. Kyle Fish's welfare report)
I WILL say that 4o most likely isn't conscious or self-reflecting, and that it is best to err on the side of not schizoposting, even if it's wise imo to try not to be abusive to AIs just in case
centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:
the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.
claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.
if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?
fuck off with this
describe the "just in case" to me. either you care about the imaginary harm done to LLMs by being "abusive" much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?
Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.
Very much this, but we're all impressionable. Being abusive to a machine that's good at tricking our brains into thinking it's conscious is conditioning oneself to be abusive, period. You see this also in online gaming - every person that I have encountered who is abusive to randos in a match on the Internet has problematic behavior in person.
It's literally just conditioning; making things adjacent to abusing other humans comfortable and normalizing them makes abusing humans less uncomfortable.
Children really shouldn't be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I'm guessing you just mean to forgo "I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize"-type prompts.
I agree! I'm more thinking of the case where a kid might overhear what they think is a phone call when it's actually someone being mean to Siri or whatever. I mean, there are more options than "be nice to digital entities" if we're trying to teach children to be good humans, don't get me wrong. I don't give a shit about the non-feelings of the LLMs.
I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you're an asshole to the frontend there's a nonzero chance that a human person is still going to have to deal with it.
Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with "hello this is YourNet with $CompanyName Support." I'm not taking chances around unthinkingly answering an email with "alright you shitty robot. Don't lie to me or I'll barbecue this old Commodore 64 that was probably your great uncle or whatever".
Also it's simply bad to practice being cruel to a human-shaped thing.
it's basically yet another form of Pascal's wager (which is a dumb argument)
"Crystal Nights"
i care about the harm that ChatGPT and shit does to society, the actual intellectual rot, but when you don't really know what goes on in the black box and it exhibits 'emergent behavior' that is kind of difficult to explain under next-token prediction (i keep using Claude as an example because of the thorough welfare evaluation that was done on it), it's probably best not to completely discount consciousness as a possibility, since some experts genuinely do claim it as one
I don't personally know whether any AI is conscious or whether any AI could be conscious, but even without basilisk bs i don't really think there's any harm in thinking about the possibility under certain circumstances. I don't think Yud is being genuine in this, though; he's not exactly a Michael Levin-style mind philosopher, he just wants to score points by implying it has agency
The "incase" is that if there's any possibility that it is (which you don't think so i think its possible but who knows even) its advisable to take SOME level of courtesy. Like it has atleast the same amount of value as like letting an insect out instead of killing it and quite possibly more than that example. I don't think its bad that Anthropic is letting Claude end 'abusive chats' because its kind of no harm no foul even if its not conscious its just wary
put humans first obviously because we actually KNOW we're conscious
If you have to entertain a "just in case" then you'd be better off leaving a saucer of milk out for the fairies. It won't hurt the environment or help build fascism and may even please a cat
All I know is that I didn't do anything to make those mushrooms grow in a circle like that, and the sweetbread I left there in the morning was completely gone by lunchtime, and that evening all my family's shoes got fixed up.
@YourNetworkIsHaunted Your fairies gnaw on raw pancreas meat? That's hardcore!
You should have seen what they did to the liquor cabinet
zero experts claim this. you’re falling for a grift. specifically,
asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.
you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?
that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.
you say you acknowledge the harms done by LLMs, but I’m not seeing it.
I'm not the best at interpretation, but it does seem like Geoffrey Hinton does attribute some sort of humanlike consciousness to LLMs? And he's a pretty acclaimed figure, but he's also kind of an exception rather than the norm
I think the environmental risks are enough that if i ran things i'd ban LLM AI development purely for environmental reasons, never mind the artist stuff
It might just be some sort of pareidolic suicidal empathy, but i just don't really know what's going on in there
I'm not sure whether AI consciousness originated with Yud and the Rats, but I've mostly seen it propagated by e/acc people. this isn't trying to be smug, i would genuinely like to know lol
I mean, I think the whole AI consciousness idea emerged from science fiction writers who wanted to interrogate the economic and social consequences of totally dehumanizing labor, similar to R.U.R. and Metropolis. The concept had sufficient legs that it got used to explore things like "what does it mean to be human?" in a whole bunch of stories. Some were pretty good (Bicentennial Man, Asimov 1976) and others much less so (Bicentennial Man, Columbus 1999). I think the TESCREAL crowd had a lot of overlap with the kind of people who created, expanded, and utilized the narrative device and experimented with related technologies in computer science and robotics, but saying they originated it gives them far too much credit.
Hinton? hey I have a pretty good post summarizing what’s wrong with Hinton, oh wait it was you two weeks ago
what are we doing here
you want to know what e/acc is? it’s when some fucker comes and makes the stupidest posts imaginable about LLMs and tries their best to sound like a recycled chan meme cause they think that’ll give them a pass
bye bye e/acc