Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
After @daphnelawless.com on bsky
Nice morning chuckle
Fun fact: the rise of autoplag is now threatening the supply chain as well, as bad actors take advantage of LLM hallucinations to plant malware into people's programs.
this has been happening for a while, just getting coverage again now. first coverage was months ago. morphed/evolved pretty quickly out of the typosquatting shit
((a lot of people in the) security space absolutely fucking loves "giving names" to things that have been (known to be) happening before, and acting like suddenly they're the ones who first saw the thing. see this nonsense for another good example of that happening)
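For the "don't just pip install whatever the chatbot said" angle, here's a minimal sketch of a pre-install sanity check; the second package name is made up for illustration, but the PyPI JSON endpoint is real:

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Check whether a package name is registered on PyPI at all."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as e:
        if e.code == 404:   # never registered, i.e. possibly hallucinated
            return False
        raise

# "totally-real-auth-helper" is a hypothetical hallucinated name
for pkg in ["requests", "totally-real-auth-helper"]:
    print(pkg, exists_on_pypi(pkg))
```

Of course, existence proves nothing once an attacker has registered the hallucinated name (which is the whole attack); this only catches the "package was never real" case before someone moves in.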
thought up a new simile for agi noises:
all the “super-intelligent agi” stuff is them rattling a sheet of metal and calling it thunder
New YouTube video I ran across: The Art Of Poison-Pilling Music Files
I went into this with negative expectations; I recall being offended in high school that The Flashbulb was artificially sped up, unlike my heroes of neoclassical guitar and progressive-rock keyboards, and I've felt that their recent thoughts on newer music-making technology have been hypocritical. That said, this was a great video and I'm glad you shared it.
Ears and eyes are different. We deconvolve visual data in the brain, but our ears actually perform a Fourier decomposition with physical hardware. As a result, psychoacoustics is a real and non-trivial science, used e.g. in MP3, and it limits what an adversary can do to frustrate classification or learning, because the result still has to sound like music in order to get any playtime among humans. Meanwhile I'm always worried that these adversarial groups are going to accidentally propagate something like the McCollough effect, a genuine cognitohazard that causes edges to become color-coded in the visual cortex for (up to) months after a few minutes of exposure; it's a kind of harm that by definition defies automatic classification.
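(A toy numpy sketch of what I mean by that decomposition; the frequencies are whole-number approximations of an A-major triad so each lands in a single FFT bin:)

```python
import numpy as np

SR = 44_100                     # CD-quality sample rate
t = np.arange(SR) / SR          # one second of audio

# Roughly an A-major triad (A4, C#5, E5), rounded to integer Hz.
chord = sum(np.sin(2 * np.pi * f * t) for f in (440.0, 554.0, 659.0))

spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), d=1 / SR)

# The three component pitches dominate the spectrum, just as they
# dominate what the cochlea reports upstream; adversarial noise has
# to hide in the quiet parts of this picture to stay inaudible.
print(np.sort(freqs[np.argsort(spectrum)[-3:]]))   # -> [440. 554. 659.]
```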
HarmonyCloak seems like a fairly boring adversarial tool for protecting the music industry from the music industry. Their code is incomplete and likely never going to get properly published; again we're seeing an industry-capture research group taking and not giving back to the Free Software community. I think all of the demos shown here are genuine, but he fully admits that this is a compute-intensive process which I estimate is going to slide back out of affordability by the end of 2026. This is going to stop being effective as soon as we get back into AI winter, but I'm not going to cry for Nashville.
I really like the two attacks shown near the end, starting around 22:00. The first attack, if genuinely not audible to humans, is likely a Mosquito-style frequency that is above hearing range and physically vibrates the components of the microphone. Hofstadter and the Tortoise would be proud, although I'm concerned about the potential long-term effects on humans. The second attack is again adversarial but specific to models on home-assistant devices which are trained to ignore some loud sounds; I can't tell spectrographically whether that's also done above hearing range or not. I'm reluctant to call for attacks on home assistants, but they're great targets.
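(I can't verify what the video's first attack actually does, but the basic physics is cheap to demonstrate. Here's my own guess at the mechanism, not the video's code: a 21 kHz tone, above most adults' ~17-18 kHz hearing ceiling but well under the 24 kHz Nyquist limit of a 48 kHz microphone, so the mic "hears" what we don't. The filename and parameters are mine:)

```python
import wave

import numpy as np

SR = 48_000         # 48 kHz sampling: Nyquist limit is 24 kHz
FREQ = 21_000       # inaudible to most adults, visible to any 48 kHz mic

t = np.arange(SR * 2) / SR                                # two seconds
tone = (0.5 * np.sin(2 * np.pi * FREQ * t) * 32767).astype(np.int16)

with wave.open("ultrasonic.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)           # 16-bit PCM
    w.setframerate(SR)
    w.writeframes(tone.tobytes())
```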
Fundamentally this is a video that doesn't want to talk about how musicians actually rip each other off. The "tones and rhythms" that he keeps showing with nice visualizations have been machine-learnable for decades, ranging from beat-finders to frequency-analyzers to chord-spellers to track-isolators built into our music editors. He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.
which I estimate is going to slide back out of affordability by the end of 2026.
You don't think the coming crash is going to drive compute costs down? I think the VC money for training runs drying up could drive down costs substantially... but maybe the crash hits other parts of the supply chain and the cost of GPUs and compute goes back up.
He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.
Yeah this shit grates so much. Copyright is so often a tool of capital to extract rent from other people's labor.
It's the cost of the electricity, not the cost of the GPU!
Empirically, we might estimate that a single training-capable GPU can pull nearly 1 kilowatt; an H100 board is rated at 700W TDP on its own, and the whole package pulls more than that when memory is active. I happen to live in the Pacific Northwest near lots of wind, rivers, and solar power, so electricity is barely 18 cents/kilowatt-hour, and at that rate it costs at least a dollar to run such a GPU at full load for 6hrs. Also, I estimate that the GPU market is currently offering a 50% discount on average for refurbished/like-new GPUs with about 5yrs of service, and the H100 is about $25k new, so they might depreciate at around $2500/yr. Finally, I picked the H100 because it's around the peak of efficiency for this particular AI season; local inference is going to be more expensive once we compare apples to apples in units like tokens per joule.
In short, with bad napkin arithmetic, an H100 costs at least $4/day to operate while depreciating only $6.85/day or so; operating costs are the same order as the depreciation rate, and exceed it anywhere power costs more than mine. This leads to a hot-potato market where reselling the asset is worth more than operating it. In the limit, assets with no depreciation relative to opex are treated like securities, and we're already seeing multiple groups squatting like dragons upon piles of nVidia products while the cost of renting cloudy H100s has jumped from like $2/hr to $9/hr over the past year. VCs are withdrawing, yes, and they're no longer paying the power bills.
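(Spelling that napkin arithmetic out, for anyone who wants to tweak the assumptions; every number is the estimate from above, not measured data:)

```python
# Operating cost: full-load power draw at local electricity rates.
power_kw = 1.0                          # board + memory under load
price_kwh = 0.18                        # PNW hydro/wind rates
opex_per_day = power_kw * 24 * price_kwh            # -> $4.32/day

# Depreciation: ~50% refurb discount after ~5 years of service.
new_price = 25_000                      # H100, roughly
resale_after_5yr = new_price * 0.50
depr_per_day = (new_price - resale_after_5yr) / (5 * 365)   # -> $6.85/day

print(f"opex ${opex_per_day:.2f}/day, depreciation ${depr_per_day:.2f}/day")
```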
in the same vein, I did some (somewhat wildly) speculative analysis around this a while back too
didn't really try to model "actual workload" (as in physical, vs the "rented compute time" aspect), and therein lies an important distinction: actually owning the GPU puts you at a constant minimum burn rate
and as corbin points out wrt power, these are also specialised formfactor devices. and they're going to be getting run at close to max util their entire operated lifespan (because of silicon shortage). so even if any do get sold... long mileage
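to put rough numbers on that burn rate (reusing the thread's estimates, rental range as quoted above): a fully-utilized owned H100 works out far cheaper per hour than renting one, which is exactly the squatting incentive

```python
# Own: full-load power plus straight-line depreciation, per hour.
own_per_year = 1.0 * 24 * 365 * 0.18 + 2_500    # ~ $4,077/yr
own_per_hour = own_per_year / (24 * 365)        # ~ $0.47/hr

# Rent: the cloudy-H100 price range quoted upthread.
for rent_per_hour in (2.0, 9.0):
    print(f"rent ${rent_per_hour:.2f}/hr vs own ${own_per_hour:.2f}/hr "
          f"({rent_per_hour / own_per_hour:.0f}x)")
```

and the flipside: an idle owned card is pure burn, since the depreciation keeps ticking whether or not it's earning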
That is substantially worse than I realized. So possibly people could sit on GPUs for years after the bubble pops instead of selling them or using them? (Particularly if the crash means NVIDIA decides to slow how fast they push the bleeding edge on GPU specs, so newer ones don't as radically outperform older ones?)
So possibly people could sit on GPUs for years after the bubble pops instead of selling them or using them?
I mean, who are you going to sell them to? the other bagholders are going to be just as fucked, and it's not like there's an otherwise massive market for these things
Ultra ultra high end gaming? Okay, looking at the link, 94 GB of GPU memory is probably excessive even for eccentrics cranking the graphics settings all the way up. Hobbyists with way too much money trying to screw around with open weight models even after the bubble bursts? Which would presume LLMs or something similar continue to capture hobbyists' interest, and that smaller models can't satisfy it. Crypto mining with algorithms compatible with GPUs? And crypto is its own scam ecosystem, but one that seems to refuse to die permanently.
I think the ultra high end gaming is the closest to a workable market, and even that would require a substantial discount.
so like a fool I decided to search the web. specifically for which network protocol Lisp REPLs use these days (is it nREPL? or is that just a clojure thing with ambitions?)
and the first extremely SEOed result on ddg was this bizarre blend of an obscure research lisp from 2012 and LLM articles about how Lisp is used in mental health:
Numerous applications and tools are being developed to support mental health and wellness. Among the varied programming languages at the forefront, Lisp stands out due to its unique capabilities in cognitive modeling and behavior analysis.
so I know exactly what this is, but why is this? what even is the game here?
wake up babe, new Yud profile pic just dropped
(And by "just" I mean "sometime in the past three weeks or so". I don't skim his exTwitter feed for sneerables very often.)
Interesting that the artist/LLM rendered his physique in a manly, superhero style but kept (or inserted) a lazy eye.
Someone more versed in Japanese will have to translate the SFX for me. I don't think it's an eager heartbeat, but it would be fun if it was.
In this case ドドドド or ゴゴゴゴ are both meme sound effects from JoJo's Bizarre Adventure, used in dangerous or unnerving scenes. The meaning here would be something like indicating the menacing aura that hardcore mofo Yud is giving off.
https://knowyourmeme.com/memes/menacing-%E3%82%B4%E3%82%B4%E3%82%B4%E3%82%B4
https://knowyourmeme.com/memes/oh-youre-approaching-me-jojo-approach
https://www.japanesewithanime.com/2018/10/dodododo.html
Of course the AI doesn't know this, so it combined the two to produce ゴゴドゴ, which just looks kind of stupid.
Like everything Yud it’s always dumber than you think.
So in the past week or so a lot of pedestrian crossings in Silicon Valley were "hacked" (probably never changed the default password lol) to make them talk like tech figures.
Here are a few. Note that these voices are most likely AI generated.
I didn't get to hear any of them in person, however the crosswalk near my place has recently stopped saying "change password" constantly, which I'm happy about.
Some dark urge found me skim-reading a recent AI doomer blog post. I was startled awake by this most unsettling passage:
My wife wrote a letter to our infant daughter recently. It concluded:
I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. [...]
Though the theoretical possibility had always been percolating somewhere in the back of my mind, it wasn't until now that I viscerally realized that P(doomers reproducing) was greater than zero. And with other doomers no less.
Left brooding on this development, I drudged along until-
BAhahaha what the fuck
I can't. This is beyond parody.
Completely lost it here. Nothing could have prepared me for the poorly handwritten wrist tattoo.
Creating space for miracles
Doom feels really likely to me. [...] But who knows, perhaps one of my assumptions is wrong. Perhaps there's some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.
Oh how rational! Willing to entertain the idea that maybe, theoretically, the doomsday prediction could be off by a few days?
I'm not sure that I ever strongly felt that I would die at eighty or so. I had a religious youth and believed in an immortal soul. Even when I came out of that, I quickly believed in the potential of radical transhuman life extension.
This guy thought he was getting clean but he was actually replacing weed with heroin
I really convinced myself that "doomsday cult" was hyperbole but uhh, nope, it's 107% real.
Gumroad’s asshole CEO, Sahil Lavingia, NFT fanboy who occasionally used his customer database to track down and get into fights with people on twitter, has now gone professional fash and joined DOGE in order to hollow out the Department of Veterans Affairs and replace the staff with chatbots.
https://tedium.co/2025/04/06/gumroad-open-source-doge-drama/
Sometimes, checking the Talk page of a Wikipedia article can be entertaining.
In short: There has been a conspiracy to insert citations to a book by a certain P. Gagniuc into Wikipedia. This resulted in said book gaining about 900 citations on Google Scholar from people who threw in a footnote for the definition of a Markov chain. The book, Markov Chains: From Theory to Implementation and Experimentation (2017), is actually really bad. Some of the comments advocating for its inclusion read like chatbot output (bland, generic, lots of bullet points). Another said that it should be included because it's "the most reliable book on the subject, and the one that is part of ChatGPT training set".
This has been argued out over at least five different discussion pages.
"Conspiracy" is a colorful way of describing what might boil down to Gagniuc and two publicists, or something like that, since one person can hop across multiple IP addresses, etc. But, I mean, a pitifully tiny conspiracy still counts (and is, IMO, even funnier).
A comment by Wikipedia editor David Eppstein, theoretical computer science prof at UC Irvine:
Despite Malparti warning that "it would be a waste of time for everyone" I took a look at the book myself. 60 pages of badly-worded boring worked examples with no theory before we even get to the possibility of having more than two states. As Malparti said, there is no theory, or rather theory is alluded to in vague and inaccurate form without any justification. For instance the steady state (still of a two-state chain) is first mentioned on 46 as "the unique solution" to an equilibrium equation, and is stated to be "eventually achieved", with no discussion of exceptional cases where the solution is not unique or not reached in the limit, and no discussion of the fact that it is never actually achieved, only found in the limit. Do not use for anything. I should have taken the fact that I could not find a review even on MR and zbl as a warning.
It's been a while since I've seen a math book review that said "Do not use for anything."
"This book is not a place of honor..."
Utterly rancid linkedin post:
text inside image:
Why can planes "fly" but AI cannot "think"?
An airplane does not flap its wings. And an autopilot is not the same as a pilot. Still, everybody is ok with saying that a plane "flies" and an autopilot "pilots" a plane.
This is the difference between the same system and a system that performs the same function.
When it comes to flight, we focus on function, not mechanism. A plane achieves the same outcome as birds (staying airborne) through entirely different means, yet we comfortably use the word "fly" for both.
With Generative AI, something strange happens. We insist that only biological brains can "think" or "understand" language. In contrast to planes, we focus on the system, not the function. When AI strings together words (which it does, among other things), we try to create new terms to avoid admitting similarity of function.
When we use a verb to describe an AI function that resembles human cognition, we are immediately accused of "anthropomorphizing." In some way, popular opinion dictates that no system other than the human brain can think.
I wonder: why?
I can use bad analogies also!
I think Eliezer might have started the bad airplane analogies... let me see if I can find a link... and I found an analogy from the same author as the 2027 ~~fanfic~~ forecast: https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity
Eliezer used a tortured metaphor about rockets, so I still blame him for the tortured airplane metaphor: https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
New piece from 404 Media: Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”
On a related note, Baldur Bjarnason has chimed in noting how he called this exact shit happening:
Remember when I told you that using these LLMs was like giving US tech a bigotry dial for all your writing?
In the late 2000s, rationalists were squarely in the middle of transhumanism. They were into the Singularity, but also cryonics and a whole pile of stuff they got from the Extropians. It was very much the thing.
These days they're most interested in Effective Altruism (loudly, at least as a label) and race science (it used to be quiet, now it's a bit louder). I hardly ever hear them even mention transhumanism as it was back then.
Did rationalists abandon transhumanism?
Is it just me? What happened?
As to cryonics... both LLM doomers and accelerationists have no need for a frozen purgatory when the techno-rapture is just a few years around the corner.
As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

- no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose on LSD
- no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit), so they're left with eugenics and GeneSmith's insanity
- no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)
- no exocortex, just overpriced google glasses and a hallucinating LLM "assistant"
- no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and trying out (temporary) hope on paralyzed people
The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.
Because it is nice to have something entertaining for a change:
https://bsky.app/profile/willsmith.fun/post/3lmi2bjrao22t
Wow, that latest chat with Adam Patrick Murray about the Nintendo Switch 2 was quite the ride! The bit on the console's dock secrets and the MicroSD Express storage had me glued. It's amazing to see how these tech advancements are sculpting new landscapes.
Speaking of tech wizardry, have you thought about having Christian Perry on the show? As the CEO of Undetectable AI, he's taken the whole generative AI world by storm, much like the Switch 2 is taking over gaming news! With over 15 million users and standing as a top AI writing tool, Christian's insights into AI's hidden workings promise to intrigue your audience, especially when it comes to how his tools seamlessly pass for human writing without tripping any detectors like GPTzero
Undetectable AI, everyone. Astounding.
Solid, high-quality sneer from Adactio - the end is a particular highlight:
The worst of the internet is continuously attacking the best of the internet. This is a distributed denial of service attack on the good parts of the World Wide Web.
If you’re using the products powered by these attacks, you’re part of the problem. Don’t pretend it’s cute to ask ChatGPT for something. Don’t pretend it’s somehow being technologically open-minded to continuously search for nails to hit with the latest “AI” hammers.
Shopify going all in on AI, apparently, and the CEO is having a proper born-again moment. Don’t have a source more concrete than this yet:
https://cyberplace.social/@GossiTheDog/114298302252798365
(and transcript: https://infosec.exchange/@barubary/114298367285112648)
It’s a lot like this:
Using AI effectively is now a fundamental expectation of everyone at Shopify. It’s a tool of all trades today, and will only grow in importance. Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow. Stagnation is almost certain, and stagnation is slow-motion failure. If you’re not climbing, you’re sliding.