Truer words have never been said.
Like Sam Altman who invests in Prospera, a private "Start-up City" in Honduras where the board of directors pick and choose which laws apply to them!
The switch to Techno-Feudalism is progressing far too much for my liking.
The problem with AI is that it pirates everyone’s work and then repackages it as its own, enriching people who did not create the copyrighted work.
AI has a vibrant open source scene and is definitely not owned by a few people.
A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It's a shame to see so many actually cheering them on.
So long as there are big players releasing open weights models, which is true for the foreseeable future, I don't think this is a big problem. Once those weights are released, they're free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.
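Distillation, as mentioned above, means training a small student model to match a larger teacher's output distribution instead of hard labels. A toy numpy sketch of the core loss (temperature-softened KL divergence); the logits and vocabulary size here are made up purely for illustration:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Pretend logits over a 5-token vocabulary for a single example.
teacher_logits = np.array([2.0, 1.0, 0.2, -1.0, -3.0])
student_logits = np.array([1.5, 1.2, 0.0, -0.5, -2.0])

T = 2.0  # temperature > 1 softens the teacher's distribution
p = softmax(teacher_logits, T)  # teacher soft targets
q = softmax(student_logits, T)  # student predictions

# KL(p || q): the quantity the student is trained to minimize.
kl = float(np.sum(p * (np.log(p) - np.log(q))))
print(f"distillation KL loss: {kl:.4f}")
```

The point is that the soft targets carry much more signal per example than one-hot labels, which is why a student can bootstrap from released open weights.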
Two intrinsic problems with the current implementations of AI are that they are insanely resource-intensive and require huge training sets. Neither is directly a problem of ownership or control, though both favor larger players with more money.
And a third intrinsic problem is that current models have been shown never to approach human language capability even with unlimited training data, per papers from OpenAI in 2020 and DeepMind in 2022, plus a Stanford paper proposing that AIs have no emergent behavior, only convergent behavior.
So yeah. Lots of problems.
I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.
Welcome to every technological advancement ever applied to the workforce
Same as always. There is no technology capitalism can't corrupt
Either the article editing was horrible, or Eno is wildly uninformed about the world. Creating AIs is NOT the same as social media. You can't blame a hammer when some evil person uses it to hit someone in the head, and there is more to 'hammers' than assaulting people.
Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined lo-fi study channels) - the stuff at the end about distortion is what he’s interested in experimenting with.
There is a possibility for something interesting and cool there (I think about how Chuck Person’s Eccojams is just short loops of random songs repeated in different ways, but it’s an absolutely revolutionary album), even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and Bandcamp with shit.
The biggest problem with AI is the damage it’s doing to human culture.
The government likes concentrated ownership because then it has only a few phone calls to make if it wants its bidding done (be it censorship, manipulation, partisan political chicanery, etc.)
No?
Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.
Training an AI requires very strong hardware; however, this is not an impossible hurdle, as the models on Hugging Face show.
Yah, I'm an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. Running the full model natively does require $100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.
It's really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity perspective that is peanuts.
Also, adding a vision encoder for images to DeepSeek would not theoretically be that difficult, for the same reason. In fact, I'm working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it's the same first-layer vision encoder, with textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)
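The LoRA idea mentioned above is simple enough to sketch in numpy: instead of updating a full pretrained weight matrix W, you freeze it and train only a low-rank pair B and A, adding their scaled product to W at inference. This is an illustrative sketch of the technique, not any particular library's implementation, and the dimensions and scaling factor are made-up examples:

```python
import numpy as np

d_out, d_in, rank = 4096, 4096, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weights

# Low-rank adapter: only A and B would be trained.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero-init, so W_eff == W before any training

alpha = 16.0
W_eff = W + (alpha / rank) * (B @ A)  # effective weights at inference

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full: {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

With rank 8 the adapter is a fraction of a percent of the full matrix, which is why fine-tuning a large open-weights model is so much cheaper than training one.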
But the people with the money for the hardware are the ones training it to put more money in their pockets. That's mostly what it's being trained to do: make rich people richer.
This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.
I've downloaded several academic models and all commercial models and AI tools are based on all that public research.
I run AI models locally on my PC and you can too.
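What makes local inference feasible is largely quantization: storing weights in 8 or fewer bits instead of 16- or 32-bit floats so the model fits in consumer GPU or CPU memory. A minimal numpy sketch of symmetric int8 quantization, for illustration only (real runtimes use more elaborate per-block schemes):

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(10_000).astype(np.float32)  # stand-in for a weight tensor

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = float(np.abs(weights).max()) / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to approximate the originals at 1/4 the memory of float32.
dequant = q.astype(np.float32) * scale

err = float(np.abs(weights - dequant).max())
print(f"memory: {q.nbytes} bytes vs {weights.nbytes} bytes; max abs error {err:.4f}")
```

The reconstruction error is bounded by half the scale step, which in practice costs little accuracy relative to the 4x memory saving.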
But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used everyday to make rich people richer.
Why attack the technology if it's the rich people you are against, and not the technology itself?
The biggest problem with AI is that it’s the brute-force solution to complex problems.
Instead of trying to figure out the most power-efficient algorithm for artificial analysis, they just threw more data and power at it.
Beyond how often it’s wrong, it won’t ever, by definition, be as accurate or efficient as actual thinking.
It’s the solution you come up with the last day before the project is due, because you know it will technically pass and you’ll get a C.
And those people want to use AI to extract money and to lay off people in order to make more money.
That’s “guns don’t kill people” logic.
Yeah, the AI absolutely is a problem, for those reasons, along with it being wrong a lot of the time and its ridiculous energy consumption.
The real issues are capitalism and the lack of green energy.
If the arts were well funded, if people were given healthcare and UBI, if we had, at the very least, switched to nuclear like we should've decades ago, we wouldn't be here.
The issue isn't a piece of software.
like most of money
Technological development and the future of our civilization is in control of a handful of idiots.
I don't really agree that this is the biggest issue; for me, the biggest issue is power consumption.
That is a big issue, but excessive power consumption isn't intrinsic to AI. You can run a reasonably good AI on your home computer.
The AI companies don't seem concerned about the diminishing returns, though, and will happily spend 1000% more power to gain that last 10% of intelligence. In a competitive market, why wouldn't they, when power is so cheap?