this post was submitted on 14 Jul 2023
69 points (96.0% liked)

Showerthoughts

I'm sure there are some AI peeps here. Neural networks scale with size because the number of combinations of parameter values that work for a given task grows exponentially (or, even better, factorially, if that's a word???) with the network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
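Just to put rough numbers on the intuition (this is my own back-of-the-envelope toy, not a result from any paper): even a crude stand-in like "exponential vs. factorial in network size n" blows up fast, and factorial overtakes exponential quickly.

```python
import math

# Toy comparison: exponential (2**n) vs factorial (n!) growth, as a
# hypothetical stand-in for how the count of workable parameter
# configurations might scale with network size n.
for n in [5, 10, 20]:
    print(n, 2**n, math.factorial(n))
```

By n = 20, n! is already about fourteen orders of magnitude larger than 2^n, which is the sense in which "factorially" would be the stronger claim.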

Here's what I mean by alignment:

  • Ability to specify a loss function that humanity wants
  • Some strict or statistical guarantees on the deviation from that loss function, as well as on potentially unaccounted-for side effects
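To make the second bullet concrete, here's a minimal sketch of what a *statistical* (not strict) guarantee could look like: sample situations, and empirically bound the gap between the loss humanity actually wants and the proxy loss we managed to write down. All names and both loss functions here are made up for illustration.

```python
import random

def true_loss(x):
    # Hypothetical "loss humanity wants" (assumed knowable for the toy)
    return abs(x - 1.0)

def proxy_loss(x):
    # Hypothetical proxy we actually optimize
    return (x - 1.0) ** 2

def max_deviation(samples):
    # Empirical bound on how far the proxy drifts from the true objective
    return max(abs(true_loss(x) - proxy_loss(x)) for x in samples)

random.seed(0)
xs = [random.uniform(0.0, 2.0) for _ in range(1000)]
print(max_deviation(xs))  # an empirical, not strict, deviation bound
```

The hard part, of course, is that in the real alignment problem `true_loss` is exactly the thing nobody can write down, which is the point of the post.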
[–] [email protected] 12 points 2 years ago* (last edited 2 years ago) (4 children)

To continue the thought: even if the alignment problem within AI could be solved (I don't think it can be, fully), who is developing this AI and determining whether it matches up with human needs? Listening to the experts acknowledge the issues and dangers, and then in the next sentence speculate about "but if we can do it" fantasies, is always concerning. It's yet another example of a few people determining the rest of humanity's future at very high risk. Our best luck would be if AGI and beyond simply aren't possible, and even then "dumb" AI still has similar misalignment issues - we see them in current language models, and yet we ignore the red flags and make things more powerful.

I forgot to add - I'm totally on the side of our AI overlords and Roko's Basilisk.

[–] [email protected] 5 points 2 years ago* (last edited 2 years ago)

Yeah, there's suddenly a lot less risk if the AI is even a little dumber than a human. Language models and Midjourney and stuff like that don't cause catastrophes even when they produce bad results.
