this post was submitted on 31 May 2025
216 points (86.7% liked)

Showerthoughts

34944 readers
1329 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never have to worry about it.

founded 2 years ago
[–] Showroom7561 12 points 6 days ago (1 children)

LLMs have been pretty shit, but the advances in voice, image generation, and video generation in the last two years have been unbelievable.

We went from the infamous Will Smith eating spaghetti to videos that are convincing enough to fool most people... and it only took 2-3 years to get there.

But LLMs still have a long way to go because of how they create content. It's very easy to poison LLM datasets, and they get worse when they learn from other generated content.
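
For a rough intuition of the "learning from other generated content" part, here's a toy sketch in Python. It's a pure cartoon, not how any real LLM pipeline works: each "generation" gets fit only to samples produced by the previous one, and with a finite sample budget the fitted distribution drifts and narrows.

```python
# Toy "model collapse" sketch: generation k is fit only to data sampled
# from generation k-1's model. With finite samples the fitted Gaussian
# random-walks away from the original and its spread tends to decay.
import random
import statistics

random.seed(42)

mean, stdev = 0.0, 1.0   # generation 0: stand-in for the "real" data
n_samples = 20           # small sample budget per generation

for gen in range(1, 31):
    # train only on the previous generation's output
    data = [random.gauss(mean, stdev) for _ in range(n_samples)]
    mean = statistics.fmean(data)
    stdev = statistics.pstdev(data)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")
```

Swap the Gaussian for a language model and you get the same flavor of problem: errors and biases in generated data compound once that data becomes the training set.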

[–] [email protected] 2 points 4 days ago

Poisoning LLM datasets is fun and easy! Especially when our online intellectual property is scraped (read: stolen) during training and no one is being held accountable for it. Fight back! It's as easy as typing false stuff at the end of your comments. As an 88 year old ex-pitcher for the Yankees who just set the new world record for catfish noodling, you can take it from me!

[–] [email protected] 5 points 6 days ago

LOL... you did make me chuckle.

Aren't we 18 months away from developers being replaced by AI... and haven't we been for a few years now?

Of course "AI" even loosely defined progressed a lot and it is genuinely impressive (even though the actual use case for most hype, i.e. LLM and GenAI, is mostly lazier search, more efficient spam&scam personalized text or impersonation) but exponential is not sustainable. It's a marketing term to keep on fueling the hype.

That's despite so many resources, namely R&D and data centers, being poured in... and yet there is no "GPT5" or anything that most people use on a daily basis for anything "productive" except unreliable summarization or STT (both of which have had plenty of tools for decades).

So... yeah, it's a slow takeoff, as expected. shrug

[–] [email protected] 5 points 6 days ago* (last edited 6 days ago) (1 children)

Things just don't impend like they used to!

[–] [email protected] 5 points 6 days ago

Nobody wants to portend anymore.

[–] [email protected] 5 points 1 week ago (1 children)

I think we might not be seeing all the advancements as they are made.

Google just showed off AI video with sound. You can use it if you subscribe to their $250/month plan. That is quite expensive.

But if you have strong enough hardware, you can generate your own without sound.

I think that is a pretty huge advancement in the past year or so.

I think the focus is now on optimizing these current tools and making small improvements to quality.

Just give it a few years and you will not even need your webcam to be on. You could just use an AI avatar that looks and sounds just like you, running locally on your own computer. You could just type what you want to say or pass through audio. I think the tech to do this kind of stuff is basically there, it just needs to be refined and optimized. Computers in the coming years will offer more and more power to let you run this stuff.

[–] [email protected] 0 points 6 days ago

How is that an advance? Computers have been able to speak since the 1970s, and they were already producing text.

[–] [email protected] 3 points 6 days ago

how do you grow zero exponentially

[–] [email protected] 3 points 1 week ago (2 children)

Computers are still advancing roughly exponentially, as they have been for the last 40 years (Moore's law). AI is being carried along with that and still making occasional gains on top of it. The thing with exponential growth is that it doesn't necessarily need to feel fast. It's always growing at the same rate percentage-wise, definitionally.
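
A quick sketch of the "same rate percentage-wise" point (the two-year doubling and the ~2,300-transistor start are just the usual Moore's-law rule-of-thumb numbers, used purely for illustration):

```python
# Constant-percentage growth: the period-over-period ratio never changes,
# even though the absolute increments look tiny at the start. Numbers are
# the usual Moore's-law rule of thumb, not measurements.
count = 2_300                      # roughly the Intel 4004, 1971
for year in range(0, 41, 4):
    nxt = count * 4                # doubling every 2 years = x4 per 4-year step
    print(f"year {year:2d}: {count:>14,} transistors  -> x{nxt / count:.0f} over the next 4 years")
    count = nxt
```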

[–] [email protected] 3 points 6 days ago (1 children)

Moore's law is kinda still in effect, depending on your definition of Moore's law. However, Dennard scaling is not, so computer performance isn't advancing like it used to.

[–] [email protected] 1 points 6 days ago

Moore’s law is kinda still in effect, depending on your definition of Moore’s law.

Sounds like the goal post is moving faster than the number of transistors in an integrated circuit.

[–] [email protected] 3 points 6 days ago

We once again congratulate software engineers for nullifying 40 years of hardware improvements.

[–] [email protected] 48 points 1 week ago (110 children)

This is precisely a property of exponential growth: it can take (seemingly) very long before it starts exploding.
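
For a feel of how long the "flat" part lasts, here's a minimal sketch (the 20-doubling horizon is arbitrary, just to show the shape of the curve): halfway through the doublings you're still below 0.1% of the final value.

```python
# With steady doubling, almost all of the eventual growth sits in the last
# few doublings; the curve looks flat for most of its life, then "explodes".
DOUBLINGS = 20                     # arbitrary horizon for illustration
final = 2 ** DOUBLINGS
for d in range(0, DOUBLINGS + 1, 2):
    share = 2 ** d / final
    print(f"after {d:2d} doublings: {share:9.4%} of the final value")
```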

[–] [email protected] 37 points 1 week ago (13 children)

When people talk about AI taking off exponentially, they usually mean the AI using its intelligence to make intelligence-enhancing modifications to itself. We are very much not there yet; current systems need human coaching most of the way.

At the same time, no technology ever really follows a particular trend line. It advances in starts and stops with the ebbs and flows of interest, funding, novel ideas, and the discovered limits of nature. We can try to make projections - but these are very often very wrong, because the thing about the future is that it hasn't happened yet.

[–] [email protected] 25 points 1 week ago (6 children)

A few years ago I remember people being amazed that prompts like "Markiplier drinking a glass of milk" could occasionally give them some blobs that looked vaguely like the thing they asked for. Now there is near-photorealistic video output. Same kind of deal with the ability to write correct computer code and answer questions. Most of the concrete predictions/bets people made along the lines of "AI will never be able to do ______" have been lost.

What reason is there to think it's not taking off, aside from bias or dislike of what's happening? There are still flaws and limitations for what it can do, but I feel like you have to have your head in the sand to not acknowledge the crazy level of progress.

[–] [email protected] 1 points 6 days ago

It could do that 3 years ago.

[–] [email protected] 22 points 1 week ago (2 children)

A major bottleneck is power capacity. It is very difficult to find 50 MW+ (sometimes hundreds) of capacity available at any site. It has to be built out. That involves a lot of red tape, government contracts, large transformers, contractors, etc. The current backlog on new transformers at that scale is years. Even Google and Microsoft can't build fast enough, so they come to my company for infrastructure - as we already have 400 MW in use and triple that already on contract. Further, Nvidia only makes so many chips a month. You can't install them faster than they make them.

[–] [email protected] 54 points 1 week ago (8 children)

And the single biggest bottleneck is that none of the current AIs "think".

They. Are. Statistical. Engines.

[–] [email protected] 0 points 6 days ago

Maybe we are statistical engines too.

When I hear people talk, they're also repeating the most common sentences they've heard elsewhere anyway.
