scruiser

joined 2 years ago
[–] [email protected] 17 points 1 month ago (1 children)

Another thing that's been annoying me about responses to this paper... lots of promptfondlers are suddenly upset that we are judging LLMs by arbitrary puzzle-solving capabilities... as opposed to the arbitrary and artificial benchmarks they love to tout.

[–] [email protected] 26 points 1 month ago (2 children)

So, I've been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].

Like I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component replaces a set of heuristics that make suggestions on proof approaches, while the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.

I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.

[–] [email protected] 10 points 1 month ago

Just one more training run bro. Just gotta make the model bigger, then it can do bigger puzzles, obviously!

[–] [email protected] 34 points 1 month ago* (last edited 1 month ago) (7 children)

The promptfondlers on places like /r/singularity are trying so hard to spin this paper. "It's still doing reasoning, it just somehow mysteriously fails when its reasoning gets too long!" or "LRMs improved with an intermediate number of reasoning tokens" or some other excuse. They are missing the point that short and medium-length "reasoning" traces are potentially the result of pattern memorization. If the LLMs were actually reasoning and not just pattern memorizing, then extending the number of reasoning tokens proportionately with the task length should let them maintain performance on the tasks instead of catastrophically failing. Because this isn't the case, Apple's paper is evidence for what big names like Gary Marcus and Yann LeCun, along with many pundits and analysts, have been repeatedly saying: LLMs achieve their results through memorization, not generalization, especially not out-of-distribution generalization.

[–] [email protected] 8 points 1 month ago

A surprising number of the commenters seem to be at least considering the intended message... which makes the contrast with the number of comments failing at basic reading comprehension that much more absurd (seriously, it's absurd how many commenters somehow missed that the author was living in and working from Brazil, and felt it didn't reflect badly on them to say as much in the HN comments).

[–] [email protected] 9 points 1 month ago (1 children)

I struggle to think of a good reason why such prominent figures in politics and tech would associate themselves with such an event.

There is no good reason, but there is an obvious bad one: these prominent figures have racist sympathies (if they aren't outright racist themselves), and between a lack of empathy and a position of privilege they don't care about the negative effects of boosting racist influencers.

[–] [email protected] 8 points 1 month ago (2 children)

I've been waiting for this. I wish it had happened sooner, before DOGE could do as much damage as it did, but better late than never. Donald Trump isn't going to screw around, and, ironically, DOGE has shown you don't need congressional approval or actual legal authority to screw over people funded by the government, so I am looking forward to Donald screwing over SpaceX's or Starlink's government contracts. On the returning end... Elon doesn't have that many ways of properly screwing with Trump; even if he has stockpiled blackmail material, I don't think it will be enough to turn MAGA against Trump. Still, I'm somewhat hopeful this will lead to larger infighting between the techbro alt-righters and the Christofascist alt-righters.

[–] [email protected] 20 points 1 month ago (3 children)
  • "tickled pink" is a saying for finding something humorous

  • "BI" is business insider, the newspaper that has the linked article

  • "chuds" is a term of online alt-right losers

  • OFC: of fucking course

  • "more dosh" mean more money

  • "AI safety and alignment" is the standard thing we sneer at here: making sure the coming future acasual robot god is a benevolent god. Occasionally reporter misunderstand it to mean or more PR-savvy promptfarmers misrepresent it to mean stuff like stopping LLMs from saying racist shit or giving you recipes that would accidentally poison you but this isn't it's central meaning. (To give the AI safety and alignment cultists way too much charity, making LLMs not say racist shit or give harmful instructions has been something of a spin-off application of their plans and ideas to "align" AGI.)

[–] [email protected] 8 points 1 month ago

I've seen articles and blog posts picking at bits and pieces of Google's rep (lots of articles and blogs on their role in ongoing enshittification, and I recall one article on Google rejecting someone on the basis of a coding interview despite that person being the creator and maintainer of a very useful open source library, although that article was more a criticism of coding interviews and the mystique of FAANG companies in general), but many of these criticisms portray the problems as a more recent thing, and I haven't seen as thorough a takedown as mirrorwitch's essay.

[–] [email protected] 9 points 1 month ago

It is definitely of interest; it might be worth making it a post of its own. It's a good reminder that even before Google cut the phrase "don't be evil", they were still a megacorporation, just with a slightly nicer veneer.

[–] [email protected] 3 points 1 month ago

Wow, that blows past Dunning-Kruger overestimation into straight-up Time Cube tier crank.

[–] [email protected] 13 points 1 month ago (2 children)

The space of possible evolved biological minds is far smaller than the space of possible ASI minds

Achkshually, Yudkowskian Orthodoxy says any truly super-intelligent minds will converge on Expected Value Maximization, Instrumental Goals, and Timeless Decision Theory (as invented by Eliezer), so clearly the ASI mind space is actually quite narrow.
