"Yes," chatGPT whispered gently ASMR style, "you should but that cryptocoin it is a good investment". And thus the aliens sectioned off the Sol solar system forever.
Soyweiser
Using a death for critihype jesus fuck
It is a bit like Alien vs. Predator: whoever wins, we lose. (And even that is owned by Disney.)
Uni is also a good place to learn to fail. A uni-run startup-imitation programme can provide both realistic problems (guided by profs if needed) and teach people how to do better, without being in the pockets of VCs. Also better hours, and parties.
Nostalgia has a low-key reactionary component (see also why those right-wing reactionary gamer streamers who do ten-hour reaction streams criticizing a movie have their backgrounds filled with consumer nerd-media toys, and almost never books), and fear of change is also a part of conservatism. 'Engineering minds' who think they can solve things, and who have somewhat more rigid thinking, also tend to be attracted to more extremist ideologies (which usually seem to have more rigid rules and fewer exceptions), which also leads back to the problem where people like this are bad at realizing their minds are not typical ("I can easily use a console, so everyone else can and should"). So it makes sense to me. Not sure if the UI thing is elitism or just a strong desire to create and patrol the borders of an ingroup. (But isn't that just what elitism is?)
Right, yeah, I just recall that for a high enough number of discs the number of steps needed to solve it rises quickly. The story "Now Inhale", by Eric Frank Russell, uses 64 discs. Fun story.
Min steps is 2 to the power of the number of discs, minus 1 (so 2^n − 1 for n discs).
Programming a system that solves it was a programming exercise for me a long time ago. Those are my stronger memories of it.
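Something like the classic recursive solution, if memory serves (a minimal sketch in Python, not the original exercise code; peg names and the move list are just for illustration):

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves needed to transfer n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller discs on the spare peg
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller discs back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7, i.e. 2**3 - 1
```

Each disc count n produces exactly 2^n − 1 moves, which is why the 64 discs in the story work out to 2^64 − 1 steps, a very effective execution-postponer.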
The latter test fails if they write a specific bit of code just to put out the 'LLMs fail the river crossing' fire, btw. Still a good test.
Sorry, what is the link between BioWare and the Towers of Hanoi? (I do know about the old "one final game before your execution" science fiction story.)
I have not looked at a video, just images, but this looks like it is unreadable outside. Which brings up an interesting testing failure: they apparently never left the building while the sun was out.
But the Ratspace doesn't just expect them to actually do things, but also to self-improve. Which is another step above just human-level intelligence; it also assumes that self-improvement is possible (and, on the highest level of nuttiness, unbounded), a thing we have not even seen demonstrated. And it certainly doesn't seem to be, as the gaps between newer, better versions of chatGPT seem to be increasing (an interface around it doesn't count). So imho, given chatGPT/LLMs and the lack of fast improvements we have seen recently (some even say performance has decreased, so we are not even getting incremental innovations), the 'could lead to AGI-foom' possibility space has actually shrunk, as LLMs will not take us there. And everything including the kitchen sink has been thrown at the idea. To use some AI-weirdo lingo: with the decels not in play (*), why are the accels not delivering?
*: And let's face it, on the fronts that matter, we have lost the battle so far.
E: full disclosure, I have not read Zitron's article, they are a bit long at times. Look at it this way: you could read 1/4th of an SSC article in the same time.
Emil Kirkegaard
Drink!
I want my kids to be science experiments, as there is no other way an ethics board would approve this kind of thing.