this post was submitted on 30 Jun 2025
22 points (95.8% liked)

[–] [email protected] 2 points 3 hours ago

AI: fucking whatever

Corporations using AI to excuse stealing more information OOB, governments abusing ML to justify lies, programmers getting laid off with "leveraging AI to full potential" and in general, the powerful using it to exploit the weak?

I fucking hate it

[–] Sunshine 1 points 4 hours ago

A lot, I don’t like seeing it seep into so many things.

[–] [email protected] 2 points 7 hours ago

I'm more concerned about people trying to use AI for things it can't do well.

[–] [email protected] 14 points 12 hours ago (1 children)

I'm concerned about corporate enshittification of AI.

An analogue: Lemmy and Mastodon don't really concern me. Reddit and Facebook do.

There needs to be more literacy, a shift in attitude towards viewing it as a tool, and more local use. But the power usage fear is overblown, and there is no Terminator scenario to worry about with the architectures we have atm.

[–] [email protected] 3 points 12 hours ago* (last edited 12 hours ago) (1 children)

Right. Plus big things tend to end up differently from what we anticipated. Even if we arrive at Terminator-level AI doom some day far in the future... it'll be the one thing we didn't anticipate. It's been that way with most big disruptive changes in history. Or it's not doom, but something like the transition from horses to cars. People back then couldn't predict how that was going to turn out either. Main point: we don't know, we mainly speculate.

[–] [email protected] 2 points 12 hours ago* (last edited 12 hours ago)

To me, our "AI Doom" scenario is already obvious: worsening the attention optimization epidemic, misinformation, consolidating wealth, and so on. It's no mystery. It's going to suck unless open source takes off.

In other words, it's destabilizing society because of corporate enshittification and a lack of any literacy/regulation, not because of AI itself.

[–] [email protected] 11 points 13 hours ago

People that rely on AI are rapidly losing critical thinking skills, and even those of us that don't rely on it are still seeing AI generated content damn near everywhere now.

I think it can be funny at times, when deliberately and obviously used to make funny memes and stuff, but it really shouldn't be used for anything serious, and definitely shouldn't be used to make deepfakes.

Even when used with good intentions, AI can be erroneous or just outright wrong. All the while, as the machines 'learn', more and more people are blindly trusting them and forgetting how to think for themselves.

TL;DR - I don't like AI

[–] [email protected] 9 points 13 hours ago* (last edited 13 hours ago) (1 children)

As much as climate change, given the catastrophic energy requirements of AI to accomplish things we already have tools for.

What scares me is I know that is the point.

[–] [email protected] 2 points 13 hours ago (1 children)

Can you explain the first sentence like I'm 5?

[–] [email protected] 4 points 13 hours ago* (last edited 12 hours ago) (2 children)

In analyzing both public and proprietary data about data centers as a whole, as well as the specific needs of AI, the researchers came to a clear conclusion. Data centers in the US used somewhere around 200 terawatt-hours of electricity in 2024, roughly what it takes to power Thailand for a year. AI-specific servers in these data centers are estimated to have used between 53 and 76 terawatt-hours of electricity. On the high end, this is enough to power more than 7.2 million US homes for a year.

If we imagine the bulk of that was used for inference, it means enough electricity was used on AI in the US last year for every person on Earth to have exchanged more than 4,000 messages with chatbots. In reality, of course, average individual users aren’t responsible for all this power demand. Much of it is likely going toward startups and tech giants testing their models, power users exploring every new feature, and energy-heavy tasks like generating videos or avatars.

By 2028, the researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year. That could generate the same emissions as driving over 300 billion miles—over 1,600 round trips to the sun from Earth.

The researchers were clear that adoption of AI and the accelerated server technologies that power it has been the primary force causing electricity demand from data centers to skyrocket after remaining stagnant for over a decade. Between 2024 and 2028, the share of US electricity going to data centers may triple, from its current 4.4% to 12%.

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
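
For a sense of scale, here's a quick sanity check of the article's figures (a rough sketch; the per-home consumption and world population constants are my assumptions, not from the article):

```python
# Back-of-envelope check of the quoted numbers.
TWH_TO_WH = 1e12

high_end_twh = 76          # article: high-end 2024 US AI-server usage
us_home_kwh_yr = 10_500    # assumption: average US home uses ~10,500 kWh/year
world_pop = 8.1e9          # assumption: ~8.1 billion people
msgs_per_person = 4_000    # article: ">4,000 messages" per person on Earth

wh = high_end_twh * TWH_TO_WH
homes = wh / (us_home_kwh_yr * 1_000)
wh_per_msg = wh / (world_pop * msgs_per_person)

print(f"~{homes / 1e6:.1f} million homes powered for a year")  # ~7.2 million
print(f"implied ~{wh_per_msg:.1f} Wh per chatbot message")     # ~2.3 Wh
```

Both line up: the high end does cover roughly 7.2 million homes, and the implied ~2.3 Wh per message is in the ballpark of commonly cited per-query energy estimates.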

The environmental impact of AI extends beyond high electricity usage. AI models consume enormous amounts of fossil-fuel-based electricity, significantly contributing to greenhouse gas emissions. The need for advanced cooling systems in AI data centers also leads to excessive water consumption, which can have serious environmental consequences in regions experiencing water scarcity.

The short lifespan of GPUs and other HPC components results in a growing problem of electronic waste, as obsolete or damaged hardware is frequently discarded. Manufacturing these components requires the extraction of rare earth minerals, a process that depletes natural resources and contributes to environmental degradation.

Additionally, the storage and transfer of massive datasets used in AI training require substantial energy, further increasing AI’s environmental burden. Without proper sustainability measures, the expansion of AI could accelerate ecological harm and worsen climate change.

https://iee.psu.edu/news/blog/why-ai-uses-so-much-energy-and-what-we-can-do-about-it

[–] [email protected] 4 points 12 hours ago (1 children)

OK so tldr is "really bad". God I didn't know all of that

[–] [email protected] 1 points 12 hours ago

All these estimates are hopefully nonsense. DeepSeek cost basically nothing to train, and AI uses basically no power when run locally, especially on NPUs.

It's only the Altman AI camp pushing "let's take what we have, skip making it more efficient, and scale it infinitely! Now shut down the dangerous competition and give me cash."
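
To put a rough number on "basically no power", here's a back-of-envelope sketch where every constant is an assumption for illustration (the cloud figure reuses the ~2.3 Wh per message implied by the MIT Technology Review numbers quoted above):

```python
# Very rough sketch comparing local vs. cloud inference energy per reply.
# All local-inference numbers are assumptions, not measurements.
npu_watts = 15          # assumption: laptop NPU/SoC draw during inference
tokens_per_sec = 20     # assumption: small local model's generation speed
reply_tokens = 500      # assumption: a longish chatbot reply

seconds = reply_tokens / tokens_per_sec
local_wh = npu_watts * seconds / 3600
cloud_wh = 2.3          # implied per-message figure from the thread above

print(f"local reply: ~{local_wh:.2f} Wh")                  # ~0.10 Wh
print(f"cloud estimate: ~{cloud_wh / local_wh:.0f}x more") # ~22x
```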

[–] [email protected] 8 points 13 hours ago

Moderately. I don't understand it, and it's been used for things I find indefensible, like disinformation. It feels like it's downhill from here.

[–] [email protected] 4 points 11 hours ago* (last edited 11 hours ago) (1 children)

I'm worried a lot about the future if we don't change our attitude about "ownership."

As it stands now we have AI companies rushing to replace workers in non-physical fields, but when BMW and Mercedes (among others) improve their "factory floor" bipedal manufacturing workers, and AI companies integrate their processing capabilities into those robots, you now have physical labor being replaced as well. Any company that can justify the expense of the robots will implement them to save on labor costs.

Our attitude about ownership is: you own the means to produce, you own the profit. The only reason we get paid as workers is because slavery is illegal and they need to attract workers. If you don't play a part in the business anymore, you don't "deserve" any money. The ownership class has already been fighting tooth and nail to cut every tax imaginable and has media outlets calling disabled people/poor people "parasites." Just imagine when we have a massive population that can't find work.

I strongly believe we're rushing towards a future where "we just don't have enough work for them" becomes a common phrase explaining why it's unavoidable that we have so many homeless/starving people.

[–] [email protected] 2 points 11 hours ago (1 children)

I strongly believe we're rushing towards a future where "we just don't have enough work for them" becomes a common phrase explaining why it's unavoidable that we have so many homeless/starving people.

Having machines which are able to take care of the bulk of running civilization "should" be a boon for society, not a blow.

[email protected]

[email protected]

Ideally, these sorts of programs "should" be implemented before widespread unemployment becomes an issue, to aid in a smooth transition.

[–] [email protected] 2 points 11 hours ago

We don't do "ideally" unless it has a direct benefit to the ownership class unfortunately. At least here in the US. I could see Europeans doing it right, or at least closer to right, but I have absolutely 0 faith in the US doing anything other than "shovel money at the rich."

[–] Godort 5 points 12 hours ago

The technology doesn't concern me that much. It definitely has uses. What concerns me is how much money has been sunk into it; that investment is basically the root cause of everything wrong with it.

So much venture capital has been pumped into the technology, and on the use cases where it shines alone, there is no way they'll recoup their investment, let alone make money. So they're going all-in on the gamble, creating a ton of hype and putting it in everything to hopefully earn that money back.

They're spinning up huge datacenters that suck up unprecedented amounts of water and power to run these things, and the end result is a tool that isn't reliable enough for technical and scientific uses unless the training data is very specific to the topic in question. The generalized models are really only good for taking notes and composing emails.

The other big concern is how they're trained. These tools suck up truly staggering amounts of data online with no regard for anything. Privacy, copyright, and server bandwidth are simply not factored in, because respecting them would make the tool even less usable than it is now.

This is going to be the next dotcom bubble and things will get worse before they get better, because some rich assholes will need to lose some money. But after that happens, the industry will collapse to a point where things are reasonable again.

[–] [email protected] 4 points 12 hours ago

About as much as tobacco in 1995.

It is going to kill a bunch of people.

Everyone paying attention knows not to use it.

The regulators are off actively using it and don’t want to limit its use.

[–] [email protected] 2 points 11 hours ago

The algorithms are beautiful. The courage of neural networks researchers working in relative obscurity from the 70s to the 90s is inspiring. What the corporations have done with the technology is truly disgusting. Those are the "battle lines"; I'm on the side of the algorithms and researchers, but I think the corporations will do a lot of damage.

[–] [email protected] 3 points 12 hours ago

AI is fine, though a bit damaging purely from how people are treating it. We don't have the systems to support the effect it's having on society, though we can't blame the entire downward economic trend on AI. It seems likely to crash; we know it's eating up a shitzillion dollars and earning next to nothing in return.

I think people being essentially tricked into thinking LLMs are magic reasoning devices is going to be an issue for a while. LLMs might be a good start to a behaviour interface between humans and a real logic system, but as far as I'm aware we don't have anything like that, and it's a long way off. Ethically, I'm not a fan of companies trawling the internet for data they can use to build hallucination machines. I'm also not a huge fan of people's creative, language, and logic skills being influenced by something like ChatGPT.

Where I hope it'll go is a big collapse. Then it might fade into the background for a while and return as something actually useful. The tech won't ever go away, but neither has blockchain.

[–] [email protected] 2 points 12 hours ago* (last edited 12 hours ago)

I'm scared that big AI companies and copyright companies will successfully run a media campaign to turn public sentiment against AI, so they can easily pass laws meant to strengthen copyright, wall in all the data and create a monopoly around AI.

The job loss will happen regardless, but if three or so companies successfully position themselves as the only ones legally allowed to serve AI solutions, all the economic benefits will go to them. We won't be able to tax companies to pay for UBI if they are already being taxed by Google.

[–] [email protected] 1 points 11 hours ago
  • We are already seeing its impact in my industry. Much like the proliferation of the Internet in the late 90s, this is going to be a paradigm shift. There will be winners who keep up and losers who lag behind.
  • AI helps me get to the things I don't ever have time for.
  • I'm a huge advocate of sustainability and the environment. I believe it is a core foundational tenet of mine. I do not believe the naysayers who say AI is going to be a net destructive force for the environment. Yes, in the short term. However, it has great potential to improve efficiency and reduce waste:
      ◦ AI-augmented cars will reduce stop-and-go traffic and accidents.
      ◦ AI-augmented industry (AI + CNC) will maximize the use of manufactured/cut materials.
      ◦ AI-augmented design will produce structures that meet tolerances with both material constraints and safety in mind.
      ◦ AI-augmented maintenance means that "things" will tell us before they break down, reducing failures and replacements.
      ◦ AI-augmented reporting/spreadsheets will reduce double entry and the need for manual reconciliation.
      ◦ AI-augmented chatbots will mean everyone has access to a personal assistant.

That said, I do have concerns about what I'm hearing about brain activity and use of AI. I can somewhat relate it to the idea that, 30 years ago, everyone knew something like ten of their closest friends' and relatives' phone numbers. Now I'd be lucky to remember two. Cell phones reduced the need to keep those in my head. Likewise, leaning too heavily on AI is already starting to produce some homogeneity of thought across business, at the expense of creativity. I think there needs to be careful consideration put into what we should all be able to learn and recall without AI, before we turn AI into a crutch.

[–] [email protected] 1 points 13 hours ago* (last edited 12 hours ago) (1 children)

I like to tinker with it and occasionally use it. But I still don't rely on it, and it's not really part of my everyday life. I use Google (and skip the AI summary) instead of asking some Alexa or assistant. I can operate my toaster myself, read news articles, and find facts myself.

I think it's super useful for translation, automating some stuff, computer vision, robotics, text to speech, and so on... Chatbots are a bit meh and have limited application. I use them to spark my creativity but not to execute tasks.

AI is predicted to be quite disruptive and to have a big impact on the world and society. I'm pretty sure that's true, and soon the whole internet will be flooded with AI slop. We won't be able to tell if something is real or fabricated... And art and things with a human touch will have to figure out how to survive.

I don't panic. Humanity has always found ways to deal with stuff. And we have like a bazillion other serious issues: social media, climate change, fascists on the rise... I wouldn't even know what to panic about first.

Edit: And I'm really impressed by the scientific progress and the pace at which we went from very stupid AI to human-like voice, pictures, even video, where all you need to do is say what you want in plain language. That's some crazy fast advancement.

But what I definitely worry about is this tech being mainly in the hands of big megacorporations. That's not good at all. And it's going to benefit them and harm the people, unless we regulate it.

[–] [email protected] 3 points 11 hours ago (1 children)

translation

Learning languages is a different story now that you can basically chat to a bot all day in your target language.

[–] Godort 3 points 11 hours ago* (last edited 11 hours ago)

How have I never thought of this?

I want to get some real practice in, but social anxiety is a huge roadblock.