This kind of stuff has always been an endless war of escalation, the same as any kind of security. There was a period of time where all it took to mess with Gen AI was artists uploading images of large circles or something with random tags to their social media accounts. People ended up with random bits of stop signs and stuff in their generated images for like a week. Now, artists are moving to sites that treat AI scrapers like malware attacks and degrading the quality of the images that they upload.
I agree that's a BIG if. In an ideal world, people would cite their sources and bring more attention to the creator. I also didn't mean that artists should create work for the opportunity to have it turned into a meme and maybe go viral and get exposure that way, but that at least there's a chance of people getting more clients through word of mouth that way for work that they've already done, however small, compared to having their art thrown into a training algorithm, which has absolutely zero chance of the artist seeing any benefit.
Last I heard, current AI will devour itself if trained on content from other AI. It simply isn't good enough to use, and the noise-to-value ratio is too high to make it worth filtering through. Which means that there is still a massive demand for human-made content, and possibly will be even more demand for some time yet. Pay artists to create that content, and I see no real problem with the model. Some companies have started doing just that. Procreate has partnered with a website-building company that is hiring artists to create training data for its UI-generating LLM and paying those artists commission fees. Nobody has to spend their day making hundreds of buttons for stupid websites, and the artists get paid. A win-win for everybody.
My stance on AI always comes down to the ethics behind the creation of the tool, not the tool itself. My pie in the sky scenario would be that artists could spend their time making what they want to make without having to worry about whether or not they can afford rent. There's a reason we see most artists posting only commission work online, and it's because they can't afford to work on their own stuff. My more realistic view is that there's a demand for content to train these things, so pay the people making that content an appropriate wage for their work and experience. There could be an entire industry around creating stuff specifically for different content tags for training data.
And as for AI being similar to humans, I think you're largely right. It's a really simplified reproduction of how human creativity and inspiration work, but with some major caveats. I see AI as basically a magic box containing an approximation of skill but lacking understanding and intent. When you give it a prompt, you provide the intent, and if you're knowledgeable, you have the understanding to apply as well. But many people don't care about the understanding or value the skill, they just want the end result. Which is where we stand today with AI not being used for the betterment of our daily lives, but just as a cost-cutting tool to avoid having to pay workers what they're worth.
Hence, we live in a world where they told us when we were growing up that AI would be used to do the things we hate doing so that we had more time to write poetry and create art, while today AI is used to write poetry and create art so that we have more time to work our menial jobs and create value for shareholders.
As the saying goes, "'I don't like to get political' is a weird way of saying 'my rights aren't up for debate every 4 years.'"
Although in the past decade, it's been closer to being up for debate every couple of days.
And when it comes to authors and artists, it amounts to wage theft. When a company hires an artist to make an ad, the artist gets paid to make it. If you then take that ad, you're not taking money from the worker - they already got paid for the work that they did. Even if you take a piece from the social media of an independent artist and make a meme out of it or something, so long as people can find that artist, it can lead to people hiring them. But if you chop it up and mash it into a data set, you're taking their work for profit, or to avoid paying them for their skills and expertise to create something new. AI cannot exist without a constant stream of human art to devour, yet nobody thinks the work to produce that art is worth paying for. It's corporations avoiding paying the working class what their skills are worth.
Well, your example isn't quite right because these companies didn't buy the data originally.
I'd say it's more like when somebody samples a song without permission and uses it in their own music. If we wanna go even further, I'd say the AI companies we have today are basically making and selling synthesizers created off of samples used without permission. AI doesn't learn the way we do; it simply regurgitates what it thinks is correct based on probabilities derived from its training set.
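That "regurgitating probabilities" point can be shown with a toy sketch: at its core, a language model samples the next token from a distribution learned from its training data. This is a deliberately tiny bigram illustration (the corpus and all names are made up for the example, and real models are vastly more sophisticated), but the principle - output is drawn from training-set statistics - is the same:

```python
import random
from collections import Counter, defaultdict

# Toy "training set": the model only ever sees these words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: for each word, how often each next word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to its training-set frequency."""
    counts = bigrams[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generation" is just repeated sampling from those learned probabilities.
random.seed(0)
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

Every word it can ever emit comes straight out of the corpus; it can remix, but it can't step outside what it was fed.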
Yes and no. American companies have been following OpenAI's strategy, which is simply scaling up as quickly as possible. From massive data centers swallowing our drinking water for coolant to coal and natural gas power plants to keep it running, it's been all about pouring as much money and resources as possible in order to scale up to improve their models.
What DeepSeek has done is prove that that's the wrong way to go about it, and now, suddenly, all these companies that have been massive money sinks without any clear path to profitability have to completely pivot their strategy. Most will probably die before they can. Investors are already selling off their stock.
So AI will become closer to actually being practical/profitable, but I imagine most of the companies who reach that goal won't be the companies that exist today, and the AI bubble itself will probably collapse from this pivot, if we're lucky.
Since DeepSeek is also open source, we might even see free competitors that can be run locally pop up that can go toe to toe with the likes of ChatGPT, which would be a real stake through the heart for these massive companies.
My favorite was how German spies in WW2 were easily caught because they'd just show up in beachside towns out of nowhere (right off the u-boat) in suits and carrying briefcases stuffed with cash.
Our glorious truth vs. their vile propaganda, etc.
The gay theatre kid has been a stereotype forever, but they literally had to invent a word ("metrosexual") to describe guys who showered and wore something that wasn't a T-shirt, because that was enough for even women to think you were gay. The homophobia was so bad back then that you could possibly lose your job if people thought you were gay because you used hair gel and dressed well.
The 90s and 2000s were something else.
I mean, isn't that part of the whole thing? They want to get railed without it being gay?
And very expensive doctors.
You're largely arguing what I'm saying back at me. I didn't mean that the AI is bad, but that the AI content that's out there has filled the internet with tons of low-quality stuff over the past few years, and enough of this garbage going in degrades the quality coming out, in a repeating cycle of degradation. You create biases in your model, and feeding those back in makes it worse. So the most cost-effective way to filter it out is to avoid training on potentially AI-generated content altogether. I think OpenAI was limiting the training data for ChatGPT to stuff from before 2020 up until this past year or so.
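That repeating cycle of degradation (often called "model collapse") can be sketched numerically. This is a toy simulation, not a claim about any real model's training pipeline - the sample sizes and the 20% tail-clipping are made up for illustration. The idea: a model over-produces its most likely outputs, so when the next generation trains on that output, the rare stuff gets clipped off and the diversity of the data shrinks every round:

```python
import random
import statistics

random.seed(0)

# Generation 0 is "human data": spread out, roughly Normal(0, 1).
mu, sigma = 0.0, 1.0
variances = [sigma ** 2]

for generation in range(5):
    # The model over-produces its most likely outputs, so the next
    # training set is effectively the previous output with the tails cut.
    samples = sorted(random.gauss(mu, sigma) for _ in range(500))
    kept = samples[50:450]              # drop the extreme 20% of outputs
    mu = statistics.fmean(kept)         # refit the "model" on its own output
    sigma = statistics.pstdev(kept)
    variances.append(sigma ** 2)

# Diversity (variance) collapses a little more every round.
print([round(v, 3) for v in variances])
```

After just five generations of training on its own clipped output, the variance has collapsed to a small fraction of the original - which is the numerical version of "AI devours itself if trained on other AI."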
It's a similar issue to what facial recognition software had. Early on, facial recognition couldn't tell the difference between two women, two black people (men or women), or two white men under the age of 25 or so. Because it was trained on the employees working on it, who were mostly middle-aged white men.
This means that there's a high demand for content to train on, which would be a perfect job to hire artists for. Pay them to create work for whatever labels you need in your data sets. But companies don't want to do that. They'd rather steal content from the public at large, because AI is about cutting costs for these companies.
To put it simply: AI can generate an image, but it isn't capable of understanding 2-point perspective or proper lighting occlusion, etc. It's just a tool. A very powerful tool, especially in the right hands, but a tool nonetheless. If you look at AI images, especially ones generated by the same model, you'll begin to notice certain specific mistakes - especially in lighting. AI doesn't understand the concept of lighting, and so has a very hard time creating realistic lighting. Most characters end up with competing light sources and shadows from all over the place that make no sense. And that's just a consequence of how specific you'd need your prompt to be in order to get it right.
Another flaw with AI is that it can't iterate. Production companies that were hiring AI prompters to their movie crews have started putting blanket bans on hiring prompters because they simply can't do the work. You ask them to give you 10 images of a forest, and they'll come back the next day with 20. But you say, "Great, I like this one, but take the people out of it," and they'll come back the next day with 15 more pictures of forests, but not the original without people in it. It's a great tool for what it does, but you can't tell it, "Can you make the chili dog 10 times larger" and get the same piece, just with a giant chili dog.
And don't get me started on Hollywood or any of those other corporate leeches. I think Adam Savage said it best when he said last year that someday, a film student is going to do something really amazing with AI - and Hollywood is going to copy it to death. Corporations are the death of art, because they only care about making a product to be consumed. For some perfect examples of what I mean, you should check out these two videos: Why do "Corporate Art Styles" Feel Fake? by Solar Sands, and Corporate Music - How to Compose with no Soul by Tantacrul. Corporations also have no courage when money is on the line, so that's why we see so many sequels and remakes out of Hollywood. People aren't clamoring for a live action remake of (insert childhood Disney movie here), but they will go and watch it, and that's a safe bet for Hollywood. That's why we don't see many new properties. Artists want to make them, but Hollywood doesn't.
As I said, in my ideal world, AI would be making that corporate garbage and artists would be able to create what they actually want. But in the real world, there's very little chance that you can keep a roof over your head making what you want. Making corporate garbage is where the jobs are, and most artists have very little time left over for working on personal stuff. People always ask questions like, "Why aren't people making statues like the Romans did?" or "Why don't we get paintings like Rembrandt used to do?" And the answer is, because nobody is paying artists to make them. They're paying them to make soup commercials, and they don't even want to pay them for that.