this post was submitted on 17 Feb 2025
42 points (72.3% liked)

Technology


I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic are not innovators, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed as founders, cynical con artists raising money for products that will never exist while peddling software that destroys our planet and diverts attention and capital away from things that might solve real problems.

I'm tired of the delusion. I'm tired of being forced to take these men seriously. I'm tired of being told by the media and investors that these men are building the future when the only things they build are mediocre and expensive. There is no joy here, no mystery, no magic, no problems solved, no lives saved, and very few lives changed other than new people added to Forbes' Midas list.

None of this is powerful, or impressive, other than in how big a con it’s become. Look at the products and the actual outputs and tell me — does any of this actually feel like the future? Isn’t it kind of weird that the big, scary threats they’ve made about how AI will take our jobs never seem to translate to an actual product? Isn’t it strange that despite all of their money and power they’re yet to make anything truly useful?

My heart darkens, albeit briefly, when I think of how cynical all of this is. Corporations building products that don't really do much that are being sold on the idea that one day they might, peddled by reporters that want to believe their narratives — and in some cases actively champion them. The damage will be tens of thousands of people fired, long-term environmental and infrastructural chaos, and a profound depression in Silicon Valley that I believe will dwarf the dot-com bust.

And when this all falls apart — and I believe it will — there will be a very public reckoning for the tech industry.

top 25 comments
[–] [email protected] 18 points 1 week ago

antithetical to the spirit of silicon valley

Tell me you haven't been paying attention without telling me you haven't been paying attention.

[–] Greg 7 points 1 week ago (2 children)

The author seems to think that OpenAI having an unsustainable business model means generative AI is a con. Generative AI doesn’t mean OpenAI 🤦‍♂️ There is a good chance that the VC funds invested in OpenAI will have evaporated in 5 years. But generative AI will exist in 5 years, it will be orders of magnitude more useful, and it will help solve many problems.

[–] [email protected] 5 points 1 week ago (1 children)

The level of user sophistication required to use modern "AI" in a productive, useful way puts it squarely beyond the reach of the masses. AlphaFold is fucking awesome; ChatGPT o3 is nothing but a gimmick.

[–] Greg 0 points 1 week ago

100%, and like any tool it can be used poorly, resulting in AI bit rot, bugs, unmaintainable code, etc. But when used well, given appropriate context, by users who know what good solutions look like, it can increase developer efficiency.

[–] [email protected] 1 points 1 week ago (1 children)

it will help solve many problems

And will make many jobs redundant.

[–] [email protected] 0 points 1 week ago (1 children)

It will. So has just about every other major technical development ever. Eventually those lost jobs should be replaced by even more jobs made possible by the new technology, but in the meantime it will suck.

That's how you know it's not just a gimmick. How many jobs did blockchain replace? Just about zero. How many jobs did computers, or the Internet, or the mechanical loom, or the freaking steam engine replace? Tons.

[–] [email protected] 4 points 1 week ago (2 children)

except genAI has no proven purpose. this is like saying "look at how many jobs bankers replaced! we just used to eat for free, now we have to work our entire lives for it or starve!"

[–] [email protected] 3 points 1 week ago (1 children)

I'm pretty sure most of us already have to work our whole lives or starve.

[–] [email protected] 2 points 1 week ago

great, now enjoy that 10 times worse when genAI is used to make skeleton crews an even bigger issue and increase worker exploitation

[–] Greg 0 points 1 week ago (1 children)

except genAI has proven no purpose

Generative AI has spawned an awful lot of AI slop, and companies are forcing incomplete products on users. But don't judge the technology by shitty implementations. There are loads of use cases where, used correctly, generative AI brings value. For example, in document discovery in legal proceedings.

[–] [email protected] 1 points 4 days ago (1 children)

But is it worth the cost, and is it the best option? Everyone knows that the generative models are heavily subsidized by VC.

You could have other kinds of language processing and machine learning do document discovery better.
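To give a sense of what "other kinds of language processing" can mean here, a classic non-LLM approach to document discovery is TF-IDF ranking. This is a minimal illustrative sketch, not a real e-discovery pipeline; the document names and corpus are made up:

```python
import math
from collections import Counter

# Toy "discovery" corpus; a real system would index millions of documents.
docs = {
    "memo1": "quarterly revenue forecast and budget review",
    "memo2": "budget meeting notes and revenue discussion",
    "memo3": "office party planning and catering options",
}

def tfidf_scores(query):
    """Rank document names by summed TF-IDF of the query terms."""
    n = len(docs)
    tokenized = {name: text.split() for name, text in docs.items()}
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for words in tokenized.values():
        for term in set(words):
            df[term] += 1
    scores = {}
    for name, words in tokenized.items():
        tf = Counter(words)
        scores[name] = sum(
            (tf[t] / len(words)) * math.log(n / df[t])
            for t in query.split()
            if df[t]  # skip terms absent from the corpus
        )
    return sorted(scores, key=scores.get, reverse=True)

print(tfidf_scores("revenue budget"))
```

The point of the contrast: this runs in microseconds on a CPU, but it only matches literal terms, which is exactly the gap LLM-based retrieval is sold as closing.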

[–] Greg 1 points 4 days ago (1 children)

It is the best option for certain use cases. OpenAI, Anthropic, etc. sell tokens, so they have a clear incentive to promote LLM reasoning as an everything solution. LLM reasoning is normally an inefficient use of processor cycles. However, because it's so flexible, it is still the best option in many cases, even though it's inefficient from a cycle perspective, because the current alternatives are even more inefficient (in cycles or in human time).

Identifying typos in a project update is a task that LLMs can efficiently solve.
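The tradeoff described above can be made concrete with typo detection: the classical alternative costs almost no compute but needs a hand-maintained word list, which is where the human-time cost hides. A minimal sketch (the dictionary and function name are illustrative):

```python
import difflib

# Tiny illustrative word list; a real checker would load a full dictionary,
# and maintaining domain vocabulary is the hidden human-time cost.
DICTIONARY = {"project", "update", "review", "scheduled",
              "before", "the", "deadline"}

def flag_typos(text):
    """Return (word, suggestion) pairs for words not in the dictionary."""
    flagged = []
    for word in text.lower().split():
        stripped = word.strip(".,:;!?")
        if stripped and stripped not in DICTIONARY:
            # Suggest the closest dictionary word, if any is similar enough.
            close = difflib.get_close_matches(stripped, sorted(DICTIONARY), n=1)
            flagged.append((stripped, close[0] if close else None))
    return flagged

print(flag_typos("Projct update: reviw scheduled before the deadline."))
# flags "projct" -> "project" and "reviw" -> "review"
```

An LLM skips the dictionary-curation step entirely, which is the flexibility being paid for in cycles.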

[–] [email protected] 1 points 4 days ago (1 children)

Yes, I think it's a good option for spell check, or for detecting when a word seems unlikely given the context.

For things where it's generating text or categorizing things, it might be the easiest option, or currently the cheapest. But I don't think it's the best option if you consider everyone involved.

[–] Greg 1 points 4 days ago (1 children)

But I don’t think it’s the best option if you consider everyone involved.

Can you expand on this? Do you mean from an environmental perspective because of the resource usage, social perspective because of jobs losses, and / or other groups being disadvantaged because of limited access to these tools?

[–] [email protected] 1 points 3 days ago (1 children)

Basically the LLM may make people's jobs easier, for instance someone can get a meeting summary with less effort, but they produce worse results if you consider everyone affected by the work product, like considering whose views are underrepresented in the summary. Or, if you're using it to categorize text, you can't find out why it is producing incorrect results and improve it the way you could with other machine learning techniques. I think Emily Bender can do a better job explaining it than I can:

https://m.youtube.com/watch?v=3Ul_bGiUH4M&t=36m35s

check out the part where she talks about the problems with relying on LLMs to generate meeting summaries and with using them to classify customer support calls as "resolved" or "not resolved". I tried to get close to that second part, since the video is long.

[–] Greg 2 points 3 days ago

I agree, and I think this comes back to the execution of the technology as opposed to the technology itself. For context, I work as an ML engineer and I've been concerned with bias in AI since long before ChatGPT. I'm interested in other folks' perspectives on this technology. The hype and spin from tech companies is a frustrating distraction from the real benefits and risks of AI.

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago)

I feel the frustration. I swear to God, the market valuation of this technology, compared to its downright environmental and financial costs, is absolutely insane. There is nothing they can solve that would excuse this kind of investment and damage, nothing. I love machine learning and data science, but my urge to violently bludgeon every stupid MBA biz-dev head to death for constantly over-promising and straight up lying to customers is rising every month. Fuckers will cause another AI winter, while burning the Earth so we can get more shitty slop.

[–] [email protected] 3 points 1 week ago

“If we run terabytes of text through a statistical model, then spend millions of man-hours labeling outputs, we can approximate the way humans respond to a prompt.” –OpenAI, more or less
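The "statistical model" being parodied is, at its core, next-token prediction. A toy bigram version (the corpus is a made-up stand-in for "terabytes of text") shows the basic idea:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "terabytes of text".
corpus = ("the next word follows the next word "
          "and the model predicts the next word").split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "next" — seen three times after "the"
```

Real LLMs condition on far more than one previous word and are then tuned with those "millions of man-hours labeling outputs", but the water-flows-downhill objection in the comment is aimed at exactly this mechanism.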

Wow, what a surprise. I'll do you one better: if you take me to a river, I can tell you where the water is going to go next! Maybe we can get some VC money by promising to deliver clean water to every business in the world without all the expense of pipelines and plumbers? I mean, just look at all this water. It may not go where you want right now, but let us dump sewage in it for a couple years and who knows what it’ll do.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Not the most important thing but god I hate this website’s image. “WYEA?” [rock man smiling smugly]

[–] [email protected] 1 points 1 week ago (2 children)

What is the E in WYEA? I’ve only seen this used in ways that had no context clues.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Ed, as in Edward Zitron, the author of the site. The title is a play on a Basement Jaxx song.

[–] [email protected] 2 points 1 week ago

WYEA is the name of the website "Where's Your Ed At?" Ed is the nickname of the writer, Edward Zitron.

[–] [email protected] 1 points 1 week ago

Think of the Metaverse. Or Web3.

They didn't really fall apart; they faded out as the VC money moved from them to AI. There is too much money sloshing around, courtesy of money printing and inherited wealth from the largest and richest generation the world will ever see.

Neural networks work. We know they do, but they have some way to go before they learn to do the work for us. The current crop of companies will become obsolete as soon as the RSUs of most of their workers vest.

Patience

[–] [email protected] -1 points 1 week ago (1 children)

author complains about having to repeat himself in his ranting over and over again, 2/10

read this if you like angry people yelling at you.... tons of bold, italics, underlines, and even the occasional ALL CAPS

my favorite part: 1. Shut the fuck up!

[–] [email protected] 1 points 1 week ago

it's called having a conscience, you might wanna try it