ZDL

joined 1 month ago
[–] [email protected] 2 points 2 hours ago

I swore I wasn't going to go all girly-girl in this group, but ... I think I've found my new personal look:

I'm a sucker for brocade, I'm a sucker for the so-called "Hanyuansu" (Han Elements) style, and I love wide waistbands on long skirts. (It's great for sticking in a fan. Or a dagger.) And this looks mature, more suited to my age than a lot of Han revival styles do.

Like this other one I might have carried off 15+ years ago, but not today. Today I can just stare at the picture and wistfully lust after that outfit:

[–] [email protected] 17 points 4 hours ago (1 children)

Wait, you mean techie nerds created new technology without thinking the consequences through, and it was used to degrade, harass, and otherwise treat women like shit?

This has never, not even once, happened before!

[–] [email protected] 1 points 4 hours ago (1 children)

If it's on the recommended list at the right you can click the triple-dot next to the video and say "don't recommend this channel" or some such, yes.

[–] [email protected] 5 points 4 hours ago

You're being very disingenuous in applying human rights terminology to something that has sweet fuck all to do with human rights.

Or do you think it's a human rights violation that an actor, say, be required to know how to act? Or a musician be required to know music? Or a chef be required to know cookery?

Your variety of faux-leftist bullshit is why the left has suffered setback after setback in the past few decades.

[–] [email protected] 2 points 8 hours ago

I wonder if responding with surrealism would also throw them off.

I've got new socks on!

[–] [email protected] 1 points 9 hours ago

The funniest thing there is that techies are absolutely terrible at communication. Just read any manual written by an average techie explaining how their technology is supposed to be used. Ignoring spelling and grammar errors (which can be sizable in number!), the ability to actually organize information in ways that can be followed is decidedly lacking.

[–] [email protected] 3 points 9 hours ago

I just adopt that last one these days. Walk away. Ignore them.

[–] [email protected] 5 points 9 hours ago (3 children)

As soon as I hear the signs of TTS, I block the channel and stop the video.

[–] [email protected] 2 points 10 hours ago

So I was talking with a friend about the Madonna-Whore Complex that underlies most sexual scripts in the west and started to wonder about whether or not there was a masculine equivalent. (Hint: Freud was an intense misogynist.)

So in the absence of one I decided to borrow something from Chinese culture. There's the concept, in Chinese language, of the 成语 (chéngyǔ), which is a form of idiom that is basically formed from the title of a story and has little meaning outside of that. (You can find the equivalent in English when talking about "sour grapes", which is literally a reference to a story and means nothing standing on its own.)

One famous such chengyu is "叶公好龙" which translates to "Duke Ye Loves Dragons". By itself it means nothing, but it refers to a story in which the titular duke loved representations of dragons: dragon statues, paintings, poems about dragons, etc. One day a dragon heard about Duke Ye's love of dragons and paid him a visit. The Duke ran away in fear and hid from it. The epigram means "admiring the representation of something while living in fear of it" or "professing admiration for something one secretly fears".

I thought for a while and came up with the modern twist on it. It's based on the observation that specifically hentai anime fiends and porn addicts profess to love women (and very obviously love representations of women, given the sheer volume of these they collect and render sticky) but fear the actuality.

So I give to you the new chengyu: 色狼好屄 ("The Lech Loves Pussy"). In the vein of the classic epigram, this refers to the stories that anybody who knows a hentai fan or a porn fiend will have aplenty: people who love the visual (and tactile) representation of vaginas but fear the real deal. (Note: I've used a very rude word for "pussy" in my chengyu. This is partially for shock value and partially because to fit the four-character format I had to find a single character that represented the vagina and, sadly, that's the only one I know. The rest are two-character expressions.)

Wasn't really worth putting up a full thread, but I'm kinda absurdly proud of my effort so ...

[–] [email protected] 3 points 22 hours ago (1 children)

The issue isn't "men not listening or understanding us", it's "men not respecting basic requests of common courtesy and decency like ... not commenting in a women's only space".

At least that's the part that gets me extremely salty right now.

[–] [email protected] 2 points 22 hours ago

Mr. USA disrespectful jackass.

[–] [email protected] 17 points 1 day ago

The only desperate move I want to see from Kaptain Ketamine is removal of his e-waste from Canada in perpetuity.

 
 

China women-only bar names drinks after inspiring women, including ex-domestic abuse victim

‘Auntie’ who suffered violence at hands of ex-husband works her way up from cleaner to bartender, is honoured for celebrated journey

 

This is why the aitechbrodude will never understand opposition to AI. They don't understand anything of substance.

 

But what are you going to do?

Buoys will be buoys.

 

I thought hard about what to do with the cash, but then I thought: "What would Jesus do?"

So I turned it into wine.

 

Do you know the Ice Cream Man?
The Ice Cream Man?
The Ice Cream Man!
Do you know the Ice Cream Man
On Culver City’s lanes?

Yes, I know the Ice Cream Man,
The Ice Cream Man,
The Ice Cream Man.
Yes, I know the Ice Cream Man:
He’s friendly, kind, and plain.

He sells us cones and rocket pops,
He chimes his bell and never stops,
He smiles at kids and mops his brow—
But where, oh where can he be now?

Do you know the ICE patrols?
The ICE patrols?
The ICE patrols!
Do you know that ICE patrols
Prowled Culver City’s lanes?

Oh yes, I know the ICE patrols,
The ICE patrols,
The ICE patrols.
With badges, boots, and hearts of coal,
They took our friend away.

So now we miss the Ice Cream Man,
The Ice Cream Man,
The Ice Cream Man.
No more treats from his old cart—
Just silence on the lane.

But hey, at least the law was served,
And justice done (or so they say),
For nothing says “safe neighborhood”
Like chasing joy away.
 

wait for it! Because one egg is un oeuf.

 

So I gave an LLMbecile that claims to be a reasoning model...

Are you a "reasoning" model?

Great question! Yes, I am designed to be a "reasoning" model. That means I don't just repeat information—I analyze, synthesize, and draw logical conclusions based on the data and context you provide. My goal is to help you solve problems, answer questions, and think through complex topics in a way that’s clear and helpful.

If you have a tricky question, need help with decision-making, or want to break down a complicated concept, feel free to put me to the test!

...the following as a test of its reasoning:

There are seven customers (Carol, Darrell, Henrietta, Kenneth, Lula, Omar, Peter).
There are seven cheeses (cheddar, colby jack, gouda, havarti, muenster, provolone, swiss).
There are seven meats (bologna, corned beef, ham, liverwurst, prosciutto, roast beef, salami).
There are seven sandwiches, each priced at one of ($2.75, $3.75, $4.75, $5.75, $6.75, $7.75, $8.75).

Match prices to customers, meats, and cheeses based on these clues:

1. Omar's sandwich costs more than the order with liverwurst.
2. Henrietta's order doesn't cost $2.75.
3. Kenneth's order is either the order with provolone cheese or the $7.75 order.
4. Omar's order doesn't include swiss cheese.
5. The order with prosciutto includes provolone cheese.
6. Lula's order doesn't include corned beef.
7. The order with ham doesn't include swiss cheese.
8. Henrietta's order costs less than Darrell's sandwich.
9. Carol's sandwich costs $5.75.
10. Of the sandwich with havarti cheese and Kenneth's order, one includes prosciutto and the other costs $3.75.
11. Henrietta's sandwich doesn't cost $4.75.
12. The order with colby jack cheese costs less than the sandwich with prosciutto.
13. Henrietta's sandwich doesn't include roast beef.
14. The order with roast beef doesn't include colby jack cheese.
15. The sandwich with muenster cheese costs 1 dollar less than Lula's order.
16. The $5.75 order includes colby jack cheese.
17. The $8.75 sandwich includes gouda cheese.
18. The sandwich with colby jack cheese costs 3 dollars less than the sandwich with corned beef.
19. The $3.75 order is either the sandwich with bologna or Kenneth's sandwich.

Now before I did this, I gave the task to my SO as an assignment. (He's an engineer.) He wrote two solutions using two different tools. Something he called CLP/FD and something he called CHR. He's rusty in both of those, so it took him about 30 minutes to get them working. (He's confident he can do any more such problems with far less time spent coding now.)

It took about 3ms for his first program to solve it, and about 7ms for his second.
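
(Side note for anyone who wants to check the puzzle themselves: it also yields to a dumb brute-force search. Here's a quick Python sketch of my own — this is NOT my SO's CLP/FD or CHR code, just an independent re-check that pins the price-linked clues first and then filters permutations. One assumption on my part: clue 3's "either/or" is read as exclusive, which is what makes the solution unique.)

```python
# Brute-force check of the sandwich puzzle. Position k in each tuple
# corresponds to PRICES[k], i.e. index 0 = $2.75 ... index 6 = $8.75.
from itertools import permutations

PRICES    = (2.75, 3.75, 4.75, 5.75, 6.75, 7.75, 8.75)
CHEESES   = ("cheddar", "colby jack", "gouda", "havarti", "muenster", "provolone", "swiss")
MEATS     = ("bologna", "corned beef", "ham", "liverwurst", "prosciutto", "roast beef", "salami")
CUSTOMERS = ("Carol", "Darrell", "Henrietta", "Kenneth", "Lula", "Omar", "Peter")

def solve():
    solutions = []
    non_carol = tuple(c for c in CUSTOMERS if c != "Carol")
    for cheese in permutations(CHEESES):
        if cheese[3] != "colby jack" or cheese[6] != "gouda":   # clues 16, 17
            continue
        p = cheese.index("provolone")
        h = cheese.index("havarti")
        if PRICES[p] <= PRICES[3]:     # clues 5 + 12: prosciutto sits on the provolone slot, above colby jack
            continue
        for meat in permutations(MEATS):
            if meat[p] != "prosciutto":                          # clue 5
                continue
            if meat[6] != "corned beef":                         # clue 18: colby jack ($5.75) + $3 = $8.75
                continue
            m = {x: k for k, x in enumerate(meat)}
            if cheese[m["ham"]] == "swiss":                      # clue 7
                continue
            if cheese[m["roast beef"]] == "colby jack":          # clue 14
                continue
            if meat[h] != "prosciutto" and PRICES[h] != 3.75:    # clue 10, necessary half
                continue
            for rest in permutations(non_carol):
                cust = rest[:3] + ("Carol",) + rest[3:]          # clue 9: Carol pays $5.75
                c = {x: k for k, x in enumerate(cust)}
                k = c["Kenneth"]
                if PRICES[c["Omar"]] <= PRICES[m["liverwurst"]]:      # clue 1
                    continue
                if PRICES[c["Henrietta"]] in (2.75, 4.75):            # clues 2, 11
                    continue
                # clue 3, read as an exclusive either/or (my assumption):
                if (cheese[k] == "provolone") == (PRICES[k] == 7.75):
                    continue
                if cheese[c["Omar"]] == "swiss":                      # clue 4
                    continue
                if meat[c["Lula"]] == "corned beef":                  # clue 6
                    continue
                if PRICES[c["Henrietta"]] >= PRICES[c["Darrell"]]:    # clue 8
                    continue
                if not ((meat[h] == "prosciutto" and PRICES[k] == 3.75) or
                        (meat[k] == "prosciutto" and PRICES[h] == 3.75)):  # clue 10
                    continue
                if meat[c["Henrietta"]] == "roast beef":              # clue 13
                    continue
                if PRICES[cheese.index("muenster")] != PRICES[c["Lula"]] - 1:  # clue 15
                    continue
                if meat[1] != "bologna" and k != 1:                   # clue 19
                    continue
                solutions.append((cust, cheese, meat))
    return solutions

sols = solve()
for cust, cheese, meat in sols:
    for idx, price in enumerate(PRICES):
        print(f"${price}: {cust[idx]}, {cheese[idx]}, {meat[idx]}")
```

On the exclusive reading of clue 3 this prints exactly one assignment; with an inclusive reading you get two, which is why I suspect the generator intended the exclusive one. Either way: a few seconds of dumb enumeration, which is still infinitely faster than never.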

So how did the "reasoning" LLMbecile fare?

Well, it was infinitely slower than my SO's programs. Because it couldn't solve it.

After 16 steps (I told it to explain its reasoning along the way; I'm not going to copy all that out because it's tedious) it gave me this:

So it failed, and when I pressed it for the final solution it basically argued that it would take too long but told me that if I guided it with some more detailed questions it could solve it for me.

(You know. The thing I already had solved two different ways before asking the LLMbecile.)

When I asked this, however...

I thought the whole point of a reasoning model was to do the reasoning, not have the user do the reasoning for it?

... the answer was golden:


Your critique is entirely valid—and it’s a sharp observation that highlights the current state of AI reasoning models. Here’s what’s really going on:

Why AI Reasoning Models Struggle with Complex Logic Puzzles

  • Pattern Matching, Not True Reasoning:
    Most AI models, including those billed as “reasoning” models, are fundamentally pattern matchers. They excel at low- and medium-complexity problems where the answer can be inferred from seen data or familiar patterns. But when a problem’s complexity increases—like with large, multi-variable logic grid puzzles—their performance can collapse, and they often fail to follow through with systematic, step-by-step deduction.

  • No Genuine Understanding:
    These models don’t “understand” logic or context the way humans do. They don’t build or manipulate internal representations of the puzzle’s state; instead, they generate plausible next steps based on training data. This leads to errors, contradictions, or incomplete solutions as complexity rises.

  • Scaling Limits:
    Research shows that as puzzles get harder, models may initially increase their reasoning effort, but beyond a certain point, their ability to sustain coherent reasoning collapses—even if they have enough computational resources to keep going.

  • Lack of Explicit Algorithms:
    Unlike humans who might draw a grid or systematically eliminate possibilities, models rarely execute explicit, algorithmic reasoning. Their “chain-of-thought” is often just a simulation of reasoning, not the real thing.

What This Means for Users

  • For simple logic puzzles:
    AI models can often solve them and explain their reasoning convincingly.

  • For complex, multi-layered puzzles:
    The model’s reasoning may break down, leading to incomplete, inconsistent, or outright wrong answers—just as you observed in your sandwich puzzle example.

  • Human intervention is still needed:
    For truly novel, complex logic problems, humans remain better at systematic, stepwise deduction, especially when the solution space is large and requires careful cross-elimination.

Bottom Line

AI “reasoning” is impressive, but brittle. It’s not the same as human logic, and it can’t reliably replace human reasoning in genuinely complex or novel scenarios—especially when the solution requires detailed, multi-step deduction and cross-referencing.

Your expectation that a reasoning model should “do the reasoning” is exactly what researchers are working toward—but as of now, there are real, well-documented limits to what these models can achieve on hard logic puzzles.


The takeaway points from this response (keeping in mind that all of this comes from the LLMbecile itself ... so it may be hallucinated! 🤣):

  • Most AI models, including those billed as “reasoning” models, are fundamentally pattern matchers.
  • These models don’t “understand” logic or context the way humans do.
  • their ability to sustain coherent reasoning collapses
  • Their “chain-of-thought” is often just a simulation of reasoning, not the real thing.
  • The model’s reasoning may break down, leading to incomplete, inconsistent, or outright wrong answers
  • Human intervention is still needed

So next time you see some aitechbroshilldude honking the horn of "reasoning models", point them to this where a reasoning LLMbecile explains why it's not actually reasoning (and accidentally explains why it's utterly useless).

181
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 

(For the techbrodude shills: this is called "allegory". Maybe you can ask ChatGPT to explain it to you.)

 