this post was submitted on 13 Jul 2023
@IncognitoErgoSum I don't think you can. Because THIS? Is not a model of how humans learn language. It's a model of how a computer learns to write sentences.
If what you're going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don't bother because I will dismiss it as a fairy tale.
But, if you have an answer that actually, genuinely proves that this "neural" network is operating similarly to how the human brain does... then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.
In either case, it's probably not worth your time.
I think this is a neat point.
The human brain is very complex. The neural networks trained on computers right now are more like collections of neurons grown together in a petri dish than a full human brain. Each one serves a single function: recognizing or generating an image, calculating some probability, or deciding what the next word in a sequence should be. The brain, by contrast, is a huge internetwork of these smaller, more specialized neural networks.
No, neural networks don't have a database and they don't do stats. They're trained through trial and error, not aggregation. The way they work is explicitly based on a mathematical model of a biological neuron.
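To make that concrete, here's a minimal sketch of the kind of neuron model being described: a weighted sum squashed by a sigmoid, with weights adjusted by trial and error until the errors shrink. All names and numbers here are illustrative, not from any real framework, and real networks use many layers of these units with fancier update rules.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs squashed by a sigmoid -- the classic
    mathematical abstraction of a biological neuron's firing rate."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

def train_step(inputs, weights, bias, target, lr=0.5):
    """One trial-and-error update: nudge each weight toward less error.
    No database lookup, no statistics table -- just error correction."""
    output = neuron(inputs, weights, bias)
    error = target - output
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias

# Teach a single neuron the logical AND function by repeated trial and error.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = [0.0, 0.0], 0.0
for _ in range(5000):
    for inputs, target in data:
        weights, bias = train_step(inputs, weights, bias, target)

print(neuron([1, 1], weights, bias))  # close to 1: fires for (1, 1)
print(neuron([0, 1], weights, bias))  # close to 0: stays quiet otherwise
```

Note that the finished "knowledge" is nothing but a handful of weight values: there is no stored copy of the training examples anywhere, which is the point being made above.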
And when an AI is developed that's advanced enough to rival the actual human brain, then yeah, the AI rights question becomes a real thing. We're not there yet, though. Still just matter in petri dishes. That's a whole other controversial argument.
I don't believe that current AIs should have rights. They aren't conscious.
My point was purely that AIs learn concepts and that concepts aren't copyrightable. Encoding concepts into neurons (that is, learning) doesn't require consciousness.
Oh, 100%. They're way too rudimentary. NNs alone don't go through the sense-think-act loops that a conscious autonomous agent requires. One day, maybe, but again, we're at the brain-matter-in-a-petri-dish stage.
I agree on the concepts thing too. People learn to paint by imitating what they see around them, their favourite artists, their favourite comics and cartoons. Then, over time with practice and experimentation, these things get encoded, but there's always that influence there somewhere.
Midjourney just has the benefit of being able to learn from way more imagery, in a way shorter amount of time, and to practice way faster than any living human. So like, I get why artists are scared of it, but there's definitely a fundamental misunderstanding floating around about how these things work.