From p. 137:
The most consistent and significant behavioral divergence between the groups was observed in the ability to quote one's own essay. LLM users significantly underperformed in this domain, with 83% of participants (15/18) reporting difficulty quoting in Session 1, and none providing correct quotes. This impairment persisted, albeit attenuated, in subsequent sessions, with 6 out of 18 participants still failing to quote correctly by Session 3. [...] Search Engine and Brain-only participants did not display such impairments. By Session 2, both groups achieved near-perfect quoting ability, and by Session 3, 100% of both groups' participants reported the ability to quote their essays, with only minor deviations in quoting accuracy.
Or you could read the entirety of the first comment in this thread and see how it was not saying that. Notice the part that begins, "However, I believe there is an important difference to chatbots..."
No Nut Neuravember
Does master own a Sawzall?
Fuck it, repeating my joke from the earlier thread: Inviting the most pedantic nerds on Earth to critique your chatbot slop is a level of begging to be pwned that’s on par with claiming the female orgasm is a myth.
yes you need to read things to understand them
OK, here's your free opportunity to spend more time doing that. Bye now.
The stubsack is the weekly thread of miscellaneous, low-to-mid-effort posts on awful.systems.
She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”
My Grand Unified Theory of Scott Aaronson is that he doesn't have a theory of mind. On subjects far less incendiary than Zionism, he simply fails to recognize that people who share his background or interests can think differently than he does.