Soyweiser

joined 2 years ago
[–] [email protected] 9 points 21 hours ago

I want my kids to be science experiments, as there is no other way an ethics board would approve this kind of thing.

[–] [email protected] 7 points 1 day ago

"Yes," chatGPT whispered gently ASMR style, "you should but that cryptocoin it is a good investment". And thus the aliens sectioned off the Sol solar system forever.

[–] [email protected] 20 points 2 days ago

Using a death for critihype, jesus fuck.

[–] [email protected] 14 points 3 days ago

It is a bit like Alien vs. Predator. Whoever wins, we lose. (And even that is owned by Disney.)

[–] [email protected] 15 points 3 days ago

Uni is also a good place to learn to fail. A uni-run startup-imitation place can provide both the problems (guided by profs if needed) and teach people how to do better, without being in the pockets of VCs. Also better hours, and parties.

[–] [email protected] 14 points 3 days ago* (last edited 3 days ago)

Nostalgia has a lowkey reactionary impulse to it (see also why those right-wing reactionary gamer streamers who do ten-hour "criticize a movie" reaction streams have their backgrounds filled with consumer nerd media toys, and almost never books), and fear of change is also a part of conservatism. 'Engineering minds' who think they can solve things and have a bit more rigid thinking also tend to be attracted to more extremist ideologies (which usually seem to have more rigid rules and fewer exceptions), which also leads back to the problem where people like this are bad at realizing their minds are not typical (I can easily use a console, so everyone else can and should). So it makes sense to me. Not sure if the UI thing is elitism or just a strong desire to create and patrol the borders of an ingroup. (But isn't that just what elitism is?)

[–] [email protected] 3 points 5 days ago* (last edited 5 days ago) (1 children)

Right, yeah, I just recall that for a high enough number of discs the number of steps needed to solve it rises quickly. The story "Now Inhale", by Eric Frank Russell, uses 64 discs. Fun story.

Min steps is 2 to the power of the number of discs, minus 1 (2^n − 1).

Programming a system that solves it was a programming exercise for me a long time ago; those are my stronger memories of it.

[–] [email protected] 2 points 5 days ago (1 children)

The latter test fails if they write a specific bit of code to put out the 'LLMs fail the river crossing' fire, btw. Still a good test.

[–] [email protected] 4 points 5 days ago (3 children)

Sorry, what is the link between BioWare and Towers of Hanoi? (I do know about the old "one final game before your execution" science fiction story.)

[–] [email protected] 7 points 5 days ago

I have not looked at a video, just images, but this looks like it is unreadable outside. Which brings up an interesting failure of testing: apparently they never left the building while the sun was out.

[–] [email protected] 11 points 5 days ago* (last edited 5 days ago) (1 children)

But the Ratspace doesn't just expect them to actually do things, it also expects them to self-improve. Which is another step above just human-level intelligence; it also requires that self-improvement is possible at all (and, at the highest level of nuttiness, unbounded), a thing we have not even seen demonstrated. And it certainly doesn't seem to be, as the gaps between each newer, better version of chatGPT seem to be getting longer (an interface around it doesn't count). So imho, given chatGPT/LLMs and the lack of fast improvements we have seen recently (some even say performance has decreased, so we are not even getting incremental innovations), the 'could lead to AGI-foom' possibility space has actually shrunk, as LLMs will not take us there. And everything including the kitchen sink has been thrown at the idea. To use some AI-weirdo lingo: with the decels not in play(*), why are the accels not delivering?

*: And let's face it, on the fronts that matter, we have lost the battle so far.

E: full disclosure, I have not read Zitron's article, they are a bit long at times; looking at it, you could read 1/4th of an SSC article in the same time.

[–] [email protected] 15 points 1 week ago

Emil Kirkegaard

Drink!

11
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read the short story 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about here. We have talked about Greg before on Reddit.

I was glad I did, so I'm going to suggest that more people do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to; he isn't the Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this PDF was intended to be public. I did find it on Google, but it might not be meant to be accessible this way.)

 

The interview itself

Got the interview via Dr. Émile P. Torres on twitter

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high EV worrying done about the Basilisk. Can't you wash them?"

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes. (A project which predates the takeover of twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes.)

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
