I'm of the mind that it's spam if it's not providing a service. I don't want comment sections covered in vapid muck.
Asklemmy
A loosely moderated place to ask open-ended questions
I don't have any problem with AI myself. It's a tool like any other. What we should be focusing on is promoting positive uses of this tech instead.
For starters, do you have reason to believe a large number of Lemmy users are legitimately bots, or is this just a thing where you saw someone with a different opinion? Lemmy overall is aligned in being generally anti-AI.
Ew.
If I wanted to interact with AI content I would be on Reddit.
We have some cracking communities for AI images
The moral panic where the hivemind loves-to-hate AI won't last, I just tune it out.
In general, if it isn't open source in every sense of the term, GPL license, all weights and parts of the model, and all the training data and training methods, it's a non-starter for me.
I'm not even interested in talking about AI integration unless it passes those initial requirements.
Scraping millions of people's data and content without their knowledge or consent is morally dubious already.
Taking that data and using it to train proprietary models with secret methodologies, locking it behind a pay wall, then forcing it back onto consumers regardless of what they want in order to artificially boost their stock price and make a handful of people disgustingly wealthy is downright demonic.
Especially because it does almost nothing to enrich our lives. In its current form, it is an anti-human technology.
Now all that being said, if you want to run it totally on your own hardware, to play with and help you with your own tasks, that's your choice. Using it in a way that you have total sovereignty over is good.
I wondered about the comments you post: according to AI, they're actually copyright protected. But it's funny that nobody reads the TOS, which basically grants copyright over your comments to Meta and Reddit (maybe), so legally the comments can be scraped without the author's consent. So there are plenty of legal and more or less (technically) ethical sources of content for LLMs, if you're okay with capitalism and corporations.
I look at AI as a tool, and the rich definitely look at it as a tool too, so I'm not going to shy away from it. I found a way to use AI to tell whether a post is about a live stream or not, and use that to boost the post on Mastodon. And I built half a dozen scripts with Perplexity and ChatGPT, one of which is a government watchdog that checks for ethical or legal violations: https://github.com/solidheron/AI-Watchdog-city-council
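The classify-then-boost idea above can be sketched roughly like this. This is a hypothetical, simplified stand-in: a keyword heuristic plays the role of the LLM classification step the commenter describes, and the actual Mastodon boost call is left as a callback so the sketch stays self-contained.

```python
import re

# Simplified stand-in for the "is this post about a live stream?" check.
# A real version would send the post text to an LLM API instead of using
# a keyword pattern like this one.
LIVESTREAM_HINTS = re.compile(
    r"\b(live ?stream(ing)?|going live|twitch\.tv|youtube\.com/live)\b",
    re.IGNORECASE,
)

def looks_like_livestream(post_text: str) -> bool:
    """Return True if the post appears to announce a live stream."""
    return bool(LIVESTREAM_HINTS.search(post_text))

def maybe_boost(post_text: str, boost) -> bool:
    """Boost the post (via the supplied callback) only if it's stream-related.

    `boost` is a placeholder for whatever posts the boost to Mastodon,
    e.g. a call into a Mastodon API client.
    """
    if looks_like_livestream(post_text):
        boost(post_text)
        return True
    return False
```

The point of splitting the classifier from the boost action is that the AI part stays swappable: you can start with a cheap heuristic and later replace `looks_like_livestream` with an LLM call without touching the posting logic.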
I'm not advocating that you be pro- or anti-AI, but if you're anti-AI then you should be taking anti-AI measures.
we need to be able to verify humans on all instances
everyone else could be a bot
The million drachma question, though, is how.
The entire Internet will need some way to validate that a given user is a human and not a bot, but in practice it's becoming increasingly impossible.
I thought about using legalese or obscure phrases from 100 years ago (maybe even Old English) in a reply to a bot and seeing how it responds. Granted, no one person knows all of that language, but an AI wouldn't be stumped (maybe). And if we did catch AIs that way, they'd just get better to the point where they're human-like, and after that it's like, "oh well, we've got AI citizens of the Internet."
there are several government gateways that provide that service, using an up-to-date passport for example
Personally, if I see AI content I block the user that posted it. If a community is all about AI, I block the community. I want to see content from people that have actual talent or something intelligent to contribute.
In the fediverse? Same as outside: it's a solution looking for a problem. We generate our own content here; everyone is here to get away from the automated bots everywhere else. Look at Lemmit.online, an instance dedicated to mirroring Reddit subs for us here, and it's a ghost town, because we all pretty quickly realized it was boring interacting with bots.
A bot has to have a good purpose here. Like an auto-archive bot so people can click a better link, or bots like wikibot. I'm not saying AI is useless on Lemmy, but I haven't seen a good actual use case for it here yet.
LLMs, image generators like Stable Diffusion, and the rest of what's lately come to be called "generative AI" should have no place on the Fediverse or anywhere else.
I love genAI and I play with it all the time. I also use it to generate inspiration for my art. I'd never suggest releasing a model to the Fediverse.
algorithms are going to come regardless of what anyone wants.
Seems like Lemmy already has some basic sorting algorithms, and I know at least one instance plans to implement its own.