blakestacey

joined 2 years ago
[–] blakestacey 5 points 1 week ago

There might be enough point-and-laugh material to merit a post (also this came in at the tail end of the week's Stubsack).

[–] blakestacey 7 points 1 week ago

The opening line of the "Beliefs" section of the Wikipedia article:

Rationalists are concerned with improving human reasoning, rationality, and decision-making.

No, they aren't.

Anyone who still believes this in the year Two Thousand Twenty Five is a cultist.

I am too tired to invent a snappier and funnier way of saying this.

[–] blakestacey 9 points 2 weeks ago

I'm the torture copy and so is my wife

[–] blakestacey 16 points 2 weeks ago

In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.

[–] blakestacey 12 points 2 weeks ago (1 children)

That Carl Shulman post from 2007 is hilarious.

After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

The "two articles below" are by Yudkowsky.

User "gaverick" replies,

Carl, I'm inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky's chapter on AI risks for Bostrom's bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

Shulman's response begins,

Have you read through Bostrom's work on the subject? Kurzweil has relevant info for computing power and brain imaging.

Ray mothersodding Kurzweil!

[–] blakestacey 17 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

jhbadger:

As Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".

I haven't read Becker's book and probably won't spend the time to do so. But if this is an accurate summary, it's a bad sign for that book, because plenty of EAs were bonkers all along.

As journalists and scholars scramble to account for this ‘new’ version of EA—what happened to the bednets, and why are Effective Altruists (EAs) so obsessed with AI?—they inadvertently repeat an oversimplified and revisionist history of the EA movement. It goes something like this: EA was once lauded as a movement of frugal do-gooders donating all their extra money to buy anti-malarial bednets for the poor in sub-Saharan Africa; but now, a few EAs have taken their utilitarian logic to an extreme level, and focus on ‘longtermism’, the idea that if we wish to do the most good, our efforts ought to focus on making sure the long-term future goes well; this occurred in tandem with a dramatic influx of funding from tech scions of Silicon Valley, redirecting EA into new cause areas like the development of safe artificial intelligence (‘AI-safety’ and ‘AI-alignment’) and biosecurity/pandemic preparedness, couched as part of a broader mission to reduce existential risks (‘x-risks’) and ‘global catastrophic risks’ that threaten humanity’s future. This view characterizes ‘longtermism’ as a ‘recent outgrowth’ (Ongweso Jr., 2022) or even breakaway ‘sect’ (Aleem, 2022) that does not represent authentic EA (see, e.g., Hossenfelder, 2022; Lenman, 2022; Pinker, 2022; Singer & Wong, 2019). EA’s shift from anti-malarial bednets and deworming pills to AI-safety/x-risk is portrayed as mission-drift, given wings by funding and endorsements from Silicon Valley billionaires like Elon Musk and Sam Bankman-Fried (see, e.g., Bajekal, 2022; Fisher, 2022; Lewis-Kraus, 2022; Matthews, 2022; Visram, 2022). A crucial turning point in this evolution, the story goes, includes EAs encountering the ideas of transhumanist philosopher Nick Bostrom of Oxford University’s Future of Humanity Institute (FHI), whose arguments for reducing x-risks from AI and biotechnology (Bostrom, 2002, 2003, 2013) have come to dominate EA thinking (see, e.g., Naughton, 2022; Ziatchik, 2022).

This version of events gives the impression that EA’s concerns about x-risk, AI, and ‘longtermism’ emerged out of EA’s rigorous approach to evaluating how to do good, and has only recently been embraced by the movement’s leaders. MacAskill’s publicity campaign for WWOTF certainly reinforces this perception. Yet, from the formal inception of EA in 2012 (and earlier) the key figures and intellectual architects of the EA movement were intensely focused on promoting the suite of causes that now fly under the banner of ‘longtermism’, particularly AI-safety, x-risk/global catastrophic risk reduction, and other components of the transhumanist agenda such as human enhancement, mind uploading, space colonization, prediction and forecasting markets, and life extension biotechnologies.

To give just a few examples: Toby Ord, the co-founder of GWWC and CEA, was actively collaborating with Bostrom by 2004 (Bostrom & Ord, 2004),18 and was a researcher at Bostrom’s Future of Humanity Institute (FHI) in 2007 (Future of Humanity Institute, 2007) when he came up with the idea for GWWC; in fact, Bostrom helped create GWWC’s first logo (EffectiveAltruism.org, 2016). Jason Matheny, whom Ord credits with introducing him to global public health metrics as a means for comparing charity effectiveness (Matthews, 2022), was also working to promote Bostrom’s x-risk agenda (Matheny, 2006, 2009), already framing it as the most cost-effective way to save lives through donations in 2006 (User: Gaverick [Jason Gaverick Matheny], 2006). MacAskill approvingly included x-risk as a cause area when discussing his organizations on Felificia and LessWrong (Crouch [MacAskill], 2010, 2012a, 2012b, 2012c, 2012e), and x-risk and transhumanism were part of 80K’s mission from the start (User: LadyMorgana, 2011). Pablo Stafforini, one of the key intellectual architects of EA ‘behind-the-scenes’, initially on Felificia (Stafforini, 2012a, 2012b, 2012c) and later as MacAskill’s research assistant at CEA for Doing Good Better and other projects (see organizational chart in Centre for Effective Altruism, 2017a; see the section entitled “ghostwriting” in Knutsson, 2019), was deeply involved in Bostrom’s transhumanist project in the early 2000s, and founded the Argentine chapter of Bostrom’s World Transhumanist Association in 2003 (Transhumanismo. org, 2003, 2004). Rob Wiblin, who was CEA’s executive director from 2013-2015 prior to moving to his current role at 80K, blogged about Bostrom and Yudkowksy’s x-risk/AI-safety project and other transhumanist themes starting in 2009 (Wiblin, 2009a, 2009b, 2010a, 2010b, 2010c, 2010d, 2012). In 2007, Carl Shulman (one of the most influential thought-leaders of EA, who oversees a $5,000,000 discretionary fund at CEA) articulated an agenda that is virtually identical to EA’s ‘longtermist’ agenda today in a Felificia post (Shulman, 2007). Nick Beckstead, who co-founded and led the first US chapter of GWWC in 2010, was also simultaneously engaging with Bostrom’s x-risk concept (Beckstead, 2010). By 2011, Beckstead’s PhD work was centered on Bostrom’s x-risk project: he entered an extract from the work-in-progress, entitled “Global Priority Setting and Existential Risk: Crucial Ethical Considerations” (Beckstead, 2011b) to FHI’s “Crucial Considerations” writing contest (Future of Humanity Institute, 2011), where it was the winning submission (Future of Humanity institute, 2012). His final dissertation, entitled On the Overwhelming Importance of Shaping the Far Future (Beckstead, 2013) is now treated as a foundational ‘longtermist’ text by EAs.

Throughout this period, however, EA was presented to the general public as an effort to end global poverty through effective giving, inspired by Peter Singer. Even as Beckstead was busy writing about x-risk and the long-term future in his own work, in the media he presented himself as focused on ending global poverty by donating to charities serving the distant poor (Beckstead & Lee, 2011; Chapman, 2011; MSNBC, 2010). MacAskill, too, presented himself as doggedly committed to ending global poverty....

(Becker's previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)

[–] blakestacey 25 points 2 weeks ago (1 children)

astrange:

They're members of a religion which says that if you do math in your head the right way you'll be correct about everything, and so they think they're correct about everything.

They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.

Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.

[–] blakestacey 18 points 2 weeks ago

My Grand Unified Theory of Scott Aaronson is that he doesn't have a theory of mind. On subjects far less incendiary than Zionism, he simply fails to recognize that people who share his background or interests can think differently than he does.

[–] blakestacey 18 points 2 weeks ago

From p. 137:

The most consistent and significant behavioral divergence between the groups was observed in the ability to quote one's own essay. LLM users significantly underperformed in this domain, with 83% of participants (15/18) reporting difficulty quoting in Session 1, and none providing correct quotes. This impairment persisted albeit attenuated in subsequent sessions, with 6 out of 18 participants still failing to quote correctly by Session 3. [...] Search Engine and Brain-only participants did not display such impairments. By Session 2, both groups achieved near-perfect quoting ability, and by Session 3, 100% of both groups' participants reported the ability to quote their essays, with only minor deviations in quoting accuracy.

[–] blakestacey 19 points 2 weeks ago

Or you could read the entirety of the first comment in this thread and see how it was not saying that. Notice the part that begins, "However, I believe there is an important difference to chatbots..."

[–] blakestacey 13 points 3 weeks ago

No Nut Neuravember

 

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

 

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

 

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

 

Emily M. Bender on the difference between academic research and bad fanfiction

 

Steven Pinker tweets thusly:

My friend & Harvard colleague Howard Gardner, offers a thoughtful critique of my book Rationality -- but undermines his cause, as all skeptics of rationality must do, by using rationality to make it.

"My colleague and fellow esteemed gentleman of Harvard neglects to consider the premise that I am rubber and he is glue."

 

In the far-off days of August 2022, Yudkowsky said of his brainchild,

If you think you can point to an unnecessary sentence within it, go ahead and try. Having a long story isn't the same fundamental kind of issue as having an extra sentence.

To which MarxBroshevik replied,

The first two sentences have a weird contradiction:

Every inch of wall space is covered by a bookcase. Each bookcase has six shelves, going almost to the ceiling.

So is it "every inch", or are the bookshelves going "almost" to the ceiling? Can't be both.

I've not read further than the first paragraph so there's probably other mistakes in the book too. There's kind of other 'mistakes' even in the first paragraph, not logical mistakes as such, just as an editor I would have... questions.

And I elaborated:

I'm not one to complain about the passive voice every time I see it. Like all matters of style, it's a choice that depends upon the tone the author desires, the point the author wishes to emphasize, even the way a character would speak. ("Oh, his throat was cut," Holmes concurred, "but not by his own hand.") Here, it contributes to a staid feeling. It emphasizes the walls and the shelves, not the books. This is all wrong for a story that is supposed to be about the pleasures of learning, a story whose main character can't walk past a bookstore without going in. Moreover, the instigating conceit of the fanfic is that their love of learning was nurtured, rather than neglected. Imagine that character, their family, their family home, and step into their library. What do you see?

Books — every wall, books to the ceiling.

Bam, done.

This is the living-room of the house occupied by the eminent Professor Michael Verres-Evans,

Calling a character "the eminent Professor" feels uncomfortably Dan Brown.

and his wife, Mrs. Petunia Evans-Verres, and their adopted son, Harry James Potter-Evans-Verres.

I hate the kid already.

And he said he wanted children, and that his first son would be named Dudley. And I thought to myself, what kind of parent names their child Dudley Dursley?

Congratulations, you've noticed the name in a children's book that was invented to sound stodgy and unpleasant. (In The Chocolate Factory of Rationality, a character asks "What kind of a name is 'Wonka' anyway?") And somehow you're trying to prove your cleverness and superiority over canon by mocking the name that was invented for children to mock. Of course, the Dursleys were also the start of Rowling using "physically unsightly by her standards" to indicate "morally evil", so joining in with that mockery feels ... It's aged badly, to be generous.

Also, is it just the people I know, or does having a name picked out for a child that far in advance seem a bit unusual? Is "Dudley" a name with history in his family — the father he honored but never really knew? His grandfather who died in the War? If you want to tell a grown-up story, where people aren't just named the way they are because those are names for children to laugh at, then you have to play by grown-up rules of characterization.

The whole stretch with Harry pointing out they can ask for a demonstration of magic is too long. Asking for proof is the obvious move, but it's presented as something only Harry is clever enough to think of, and as the end of a logic chain.

"Mum, your parents didn't have magic, did they?" [...] "Then no one in your family knew about magic when Lily got her letter. [...] If it's true, we can just get a Hogwarts professor here and see the magic for ourselves, and Dad will admit that it's true. And if not, then Mum will admit that it's false. That's what the experimental method is for, so that we don't have to resolve things just by arguing."

Jesus, this kid goes around with L's theme from Death Note playing in his head whenever he pours a bowl of breakfast crunchies.

Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy, sponsored in whatever maths or science competitions he entered. He was given anything reasonable that he wanted, except, maybe, the slightest shred of respect.

Oh, sod off, you entitled little twit; the chip on your shoulder is bigger than you are. Your parents buy you college textbooks on physics instead of coloring books about rocketships, and you think you don't get respect? Because your adoptive father is incredulous about the existence of, let me check my notes here, literal magic? You know, the thing which would upend the body of known science, as you will yourself expound at great length.

"Mum," Harry said. "If you want to win this argument with Dad, look in chapter two of the first book of the Feynman Lectures on Physics.

Wesley Crusher would shove this kid into a locker.
