this post was submitted on 18 May 2025
248 points (94.0% liked)

Ask Lemmy

Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

top 50 comments
[–] [email protected] 169 points 1 week ago (15 children)

If we're going pie in the sky I would want to see any models built on work they didn't obtain permission for to be shut down.

Failing that, any models built on stolen work should be released to the public for free.

[–] [email protected] 68 points 1 week ago

This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone's art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don't care if they learned it from 10,000 people, list them.

[–] [email protected] 90 points 1 week ago (37 children)

I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That's what I want.
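The "glorified Markov chain" jab refers to a real, very simple construction: a table of which words follow which, sampled repeatedly. A minimal word-level sketch (the toy corpus and function names are my own, purely illustrative of the comparison):

```python
import random
from collections import defaultdict

def train_markov(text, order=1):
    """Build a table mapping each word (or word tuple) to the words that follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def generate(table, start, length=10, seed=0):
    """Walk the chain: repeatedly sample a successor of the current word(s)."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(length):
        successors = table.get(tuple(out[-len(start):]))
        if not successors:
            break  # dead end: no word ever followed this one in the corpus
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
table = train_markov(corpus)
print(generate(table, ("the",)))
```

The comparison is rhetorical, of course: transformer LLMs condition on far longer contexts through learned representations rather than a literal lookup table, but both ultimately sample the next token from a distribution over what tends to follow.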

[–] [email protected] 27 points 1 week ago (11 children)

AI people always want to ignore the environmental damage as well...

Like all that electricity and water are just super abundant things humans have plenty of.

Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.

[–] [email protected] 71 points 1 week ago (14 children)

I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Every step of any deductive process needs to be citable and traceable.

[–] [email protected] 21 points 1 week ago (1 children)

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Their creators can't even keep them from deliberately lying.

[–] [email protected] 19 points 1 week ago

Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.

[–] [email protected] 46 points 1 week ago (16 children)

They should have to pay for every piece of copyrighted material used in the entire model whenever the AI is queried.

They are only allowed to use data that people opt into providing.

[–] [email protected] 41 points 1 week ago

Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.

I don't love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I'm not sure it's enough to really matter. I think we need bigger "what if" scenarios to handle that.

[–] [email protected] 33 points 1 week ago

TBH, it's mostly the corporate control and misinformation/hype that's the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people's capacity for critical thinking.

ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights.

So yeah, uh... Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where it only remains viable for non-trivial use cases.

[–] [email protected] 33 points 1 week ago (1 children)

The technology side of generative AI is fine. It's interesting and promising technology.

The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible from the latest hyped tech, laws or social or environmental impact be damned.

We need legislation to catch up. We also need society to be able to catch up. We can't let the AI bros continue to foist more "helpful tools" on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

[–] [email protected] 31 points 1 week ago (3 children)

Long, long before this AI craze began, I was warning people as a young 20-something political activist that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time, and that we needed to hash out a system to maintain the dignity of every person now rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their ability to work becomes fundamentally compromised, capitalism will collapse in on itself. I'm neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of western society is based on capitalism, and if capitalism collapses then society itself is in jeopardy.

I was called alarmist, that such a thing was a long way away and we didn't need "socialism" in this country, that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work but not necessarily fulfilled because the alternative would not make the line go up.

Now, over a decade later, and generative AI has completely infiltrated almost all creative spaces and nobody except tech bros and C-suite executives are excited about that, and we still don't have a safety net in place.

Understand this: I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the concept of a 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding@Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were with the number of folds we could complete in a given time frame, and we also got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that's fucking awesome. That's what I hope to see AI do more of: take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find all the most important aspects. I want to see more of that.

I think there's a meme floating around that really sums it up for me. Paraphrasing, but it goes "I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes.".

I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.

Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for that. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two out of those three people probably don't have a job anymore if demand doesn't increase proportionally.

I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we've already seen the chaos that unfettered AI can cause to entire industries. It's a small price to pay to ask that AI companies prove that they are being ethical and that their work will not damage the livelihood of other people, or that their success will not be born off the backs of other creative endeavors.

[–] [email protected] 29 points 1 week ago (10 children)

Other people have some really good responses in here.

I'm going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.

[–] [email protected] 27 points 1 week ago* (last edited 1 week ago) (1 children)

What do I really want?

Stop fucking jamming it up the arse of everything imaginable. If you asked for a genie wish: make it illegal to be anything but opt-in.

[–] [email protected] 23 points 1 week ago* (last edited 1 week ago)

For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt in instead of maybe if you're lucky, opt out. And also, stop burning down the planet with data center power and water usage. That's all.

Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.

Edit 2: And the tech press should be ashamed of how much they've been fawning over these slop generators. They gladly parrot press releases, claim it's the next big thing, and generally just suckle at the teat of AI companies.

[–] [email protected] 23 points 1 week ago (5 children)

Idrc about ai or whatever you want to call it. Make it all open source. Make everything an ai produces public domain. Instantly kill every billionaire who's said the phrase "ai" and redistribute their wealth.

[–] [email protected] 23 points 1 week ago (4 children)

Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

[–] [email protected] 23 points 1 week ago

I'm not against it as a technology. I use it for my personal use, as a toy, to have some fun or to whatever.

But what I despise is the forced introduction of it into everything: AI-written articles and forced AI assistants in many unrelated apps. That's what I want to disappear, the way they force it into lots of places.

[–] [email protected] 23 points 1 week ago

That stealing copyrighted works would be as illegal for these companies as it is for normal people. Sick and tired of seeing them get away with it.

[–] [email protected] 22 points 1 week ago

Rename it to LLMs, because that's what it is. When the hype label is gone, it won't get shoved in everywhere for shits and giggles, and it will be used for stuff it's actually useful for.

[–] [email protected] 22 points 1 week ago* (last edited 1 week ago) (5 children)

First of all, stop calling it AI. It is just large language models for the most part.

Second: an immediate carbon tax on the energy consumption of datacenters, in line with current damage estimates for emissions. That would be around $400/tCO2 iirc.

Third: Make it obligatory by law to provide disclaimers about what the tool is actually doing. So if someone asks "is my partner cheating on me?", the first message should be: "This tool does not understand what is real and what is false. It has no actual knowledge of anything, in particular not of your personal situation. This tool just puts words together that seem more likely to belong together. It cannot give any personal advice and cannot be used for any knowledge gain. This tool is solely to be used for entertainment purposes. If you use the answers of this tool in any dangerous way, such as designing machinery, operating machinery, or making financial decisions, you are liable for it yourself."
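As a back-of-the-envelope check on the proposed tax rate, here is what $400/tCO2 works out to per query. The grid emissions factor and per-query energy figures below are illustrative assumptions, not measurements:

```python
# Hypothetical numbers: only the $400/tCO2 rate comes from the comment above;
# the emissions factor and per-query energy are illustrative assumptions.
CARBON_TAX_USD_PER_TONNE = 400.0   # proposed rate
GRID_KG_CO2_PER_KWH = 0.4          # assumed grid emissions factor
ENERGY_PER_QUERY_KWH = 0.003       # assumed energy per LLM query

def carbon_tax_per_query(energy_kwh=ENERGY_PER_QUERY_KWH,
                         kg_per_kwh=GRID_KG_CO2_PER_KWH,
                         usd_per_tonne=CARBON_TAX_USD_PER_TONNE):
    tonnes = energy_kwh * kg_per_kwh / 1000.0  # kg -> tonnes
    return tonnes * usd_per_tonne

print(f"${carbon_tax_per_query():.6f} per query")
print(f"${carbon_tax_per_query() * 1e9:,.0f} per billion queries")
```

Under these assumed figures the tax is a fraction of a cent per query, which only becomes a meaningful lever at the scale of billions of queries; the real pressure would come from taxing training runs and datacenter operation as a whole.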

[–] [email protected] 21 points 1 week ago (1 children)

There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation, to go into in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.

We have no regulation on AI, we have no respect for artists, writers, musicians, actors, and workers in general coming from these AI peddling companies, we only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies and even worse, we get nothing more in exchange except for the promise of increased productivity and quality, and that increase in productivity and quality is a lie. AI currently gives you the wrong answer or some half truth or some abomination of someone else's artwork really really fast...that is all it does, at least for the public sector currently.

For the private sector at best it alienates people as chatbots, and at worst is being utilized to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever uses it, and AI is one tool amongst many at the disposal of these tech giants.

AI is exacerbating a knowledge crisis that was already in full swing as both educators and students become less curious about subjects that don't inherently relate to making profits or consolidating power. And because knowledge is seen as solely a way to gather more resources/power and survive in an ever increasingly hostile socioeconomic climate, people will always reach for the lowest hanging fruit to get to that goal, rather than actually knowing how to solve a problem that hasn't been solved before or inherently understand a problem that has been solved before or just know something relatively useless because it's interesting to them.

There are too many good reasons AI is fucking shit up, and in all honesty, what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us, and at the very least, we should be upset and angry about it.

Here are further resources if you didn't get enough ranting.

lemmy.world's fuck_ai community

System Crash Podcast

Tech Won't Save Us Podcast

Better Offline Podcast

[–] [email protected] 21 points 1 week ago (1 children)

Admittedly very tough question. Here are some of the ideas I just came up with:

Make it easier to hold people or organizations liable for mistakes made because of haphazard reliance on LLMs.

Reparations for everyone ever sued for piracy, and completely do away with intellectual property protections for corporations, but independent artists get to keep them.

Public service announcements campaign aimed at making the general public less trustful of LLMs.

Strengthen consumer protection such that baseless claims of AI capabilities in advertising or product labeling are legally dangerous to make.

Fine companies for every verifiably inaccurate result given to a customer or end user by an LLM.

[–] [email protected] 21 points 1 week ago

I do not need AI and I do not want AI. I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face first towards a climate catastrophe (if we're not already there); we DON'T need machines to mass-produce slop.

[–] [email protected] 21 points 1 week ago (2 children)

I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

[–] [email protected] 20 points 1 week ago (1 children)

Serious investigation into copyright breaches committed by AI creators. They ripped off images and texts, even whole books, without the copyright owners' permission.

If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.

I just ask for the law to be applied to all equally. What a surprising concept...

[–] [email protected] 20 points 1 week ago (1 children)

Regulate its energy consumption and emissions. As a whole, the entire AI industry. Any energy or emissions in effort to develop, train, or operate AI should be limited.

If AI is here to stay, we must regulate what slice of the planet we're willing to give it. I mean, AI is cool and all, and it's been really fascinating watching how quickly these algorithms have progressed. Not to oversimplify it, but a complex Markov chain isn't really worth the energy consumption that it currently requires.

A strict regulation now, would be a leg up in preventing any rogue AI, or runaway algorithms that would just consume energy to the detriment of life. We need a hand on the plug. Capitalism can't be trusted to self regulate. Just look at the energy grabs all the big AI companies have been doing already (xAI's datacenter, Amazon and Google's investments into nuclear). It's going to get worse. They'll just keep feeding it more and more energy. Gutting the planet to feed the machine, so people can generate sexy cat girlfriends and cheat in their essays.

We should be funding efforts to utilize AI more for medical research: protein folding, developing new medicines, predicting weather, communicating with nature, exploring space. We're thinking too small. AI needs to make us better. With how much energy we throw at it, we should be seeing something positive out of that investment.

[–] [email protected] 20 points 1 week ago (5 children)

Part of what makes me so annoyed is that there's no realistic scenario I can think of that would feel like a good outcome.

Emphasis on realistic, before anyone describes some insane turn of events.

[–] [email protected] 19 points 1 week ago (1 children)

Lots of copyright comments.

I want those building it at scale to stop killing my planet.

[–] [email protected] 17 points 1 week ago

Training data needs to be 100% traceable and licensed appropriately.

Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

Any model whose training includes data in the public domain should itself become public domain.

And while we're at it we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water usage gone to cooling these facilities.
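The traceability demand above amounts to a per-record provenance audit on the training set. A hypothetical sketch of what such a check could look like (the field names and the licence allow-list are invented for illustration, not drawn from any real standard):

```python
# Hypothetical provenance audit: every training record must name a source
# and carry a licence from an approved list before entering the training set.
ALLOWED_LICENCES = {"CC0", "CC-BY-4.0", "public-domain", "licensed-contract"}

def audit_manifest(records):
    """Return the records that would fail a traceability audit."""
    failures = []
    for rec in records:
        if not rec.get("source_url") or rec.get("licence") not in ALLOWED_LICENCES:
            failures.append(rec)
    return failures

manifest = [
    {"id": 1, "source_url": "https://example.org/a", "licence": "CC0"},
    {"id": 2, "source_url": "", "licence": "CC-BY-4.0"},                 # no traceable source
    {"id": 3, "source_url": "https://example.org/c", "licence": "scraped"},  # unlicensed
]
print([r["id"] for r in audit_manifest(manifest)])
```

The hard part is not the check itself but forcing companies to produce honest manifests in the first place, which is why the comment frames it as a legal requirement rather than a technical one.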

[–] [email protected] 17 points 1 week ago

I'd like there to be a web-wide expectation by everyone that any AI generated text, comment, story or image be clearly marked as being AI. That people would feel incensed and angry when it wasn't labeled so. Rather than wondering whether there were a person with a soul producing the content, or losing faith that real info could be found online.

[–] [email protected] 16 points 1 week ago (4 children)

Destroy capitalism. That's the issue here. All AI fears stem from that.

[–] [email protected] 15 points 1 week ago

Make AIs OpenSource by law.

[–] [email protected] 14 points 1 week ago

My issue is that the C-levels and executives see it as a way of eliminating one of their biggest costs: labour.

They want their educated labour reduced by three quarters. They want me doing the jobs of 4 people with the help of AI, and they want to pay me less than they already are.

What I would like is a universal basic income paid for by taxing the shit out of the rich.

[–] [email protected] 14 points 1 week ago* (last edited 1 week ago) (1 children)

I'm perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

It needs extensive regulation, but doing so requires tech literate politicians who actually care about their constituents. I'd say that'll happen when pigs fly, but police choppers exist so idk

[–] [email protected] 14 points 1 week ago

I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.

I would much rather see the field advanced by scientific and academic interests. Let's focus on solving problems that help everyone instead of temporarily boosting profit margins.

I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.

[–] [email protected] 14 points 1 week ago

I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content used, in a similar vein to how we have to pay for higher learning. And we need to stop calling it AI as well.

[–] [email protected] 13 points 1 week ago

I'd like to have laws that require AI companies to publicly list their sources/training materials.

I'd like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as "AI".

I'd like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.

My big problems with AI right now, are that we don't know what info has been scooped up by them. Companies are pushing misleading products as AI, while constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I'd also want to see protections to keep stupid and vulnerable people from believing AI generated content is real. Remember, a few years ago, we had to convince people not to eat tidepods. AI can be a very powerful tool for manipulating the ranks of stupid people.

[–] [email protected] 13 points 1 week ago

I want the LLMs to be able to determine their source works during the query process to be able to pay the source copyright owners some amount. That way if you generate a Ms Piggy image, it pays the Henson Workshop some fraction of a penny. Eventually it would add up.
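The payout scheme described here could be sketched as a proportional split of a fixed royalty pool per query, assuming attribution weights for the generated output were somehow available (the pool size, weights, and rights-holder names below are invented for illustration; reliably attributing a generation to its sources is itself an unsolved problem):

```python
# Hypothetical per-query royalty split; all numbers are illustrative.
def split_royalties(pool_cents, attributions):
    """attributions: {rights_holder: weight}. Returns cents owed to each holder."""
    total = sum(attributions.values())
    return {holder: pool_cents * w / total for holder, w in attributions.items()}

# e.g. a Ms Piggy-style image traced 80% to one catalogue, 20% to another,
# with half a cent set aside per query
payout = split_royalties(0.5, {"Henson Workshop": 0.8, "Stock archive": 0.2})
print(payout)
```

As the comment says, fractions of a cent per query would only add up at the enormous query volumes these services actually handle.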

[–] [email protected] 12 points 1 week ago

I don't dislike AI, I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate that.
