this post was submitted on 20 Apr 2024
698 points (94.6% liked)
Showerthoughts
There were several parts of your comment I wanted to quote; let's see how much I can answer without copy-pasting too much.
First off, my apologies: I misunderstood your machine-learning analogy as a comparison with how we learn with our developed brains, not with evolution. I agree that the process of evolution is similar, just less targeted (and hence much slower) than deep learning. The result, however, is "cogito ergo sum": a creature that started self-reflecting and wondering about its own consciousness.

And this brings me to humans thinking logically: as such creatures, we are able to form logical thoughts, which allow us to understand causality. To give an example of what I mean: humans (and some animals) did not need the invention of logic or statistics to observe moving objects and realize that where something moves, something has moved it. So when they see an inanimate object move, they will suspect the most likely cause somewhere in the direction the object came from. When we do not find the cause there (someone throwing something), we will, if curious enough, investigate further and look for one. That's how curiosity turns into science. But it is very much targeted, nothing a deep learning system can do.

And that is what I would expect from something that calls itself "AI": a systematic analysis and categorization of the input data for the purpose the system was built for, and, for a general AI, also the ability to analyze phenomena to understand their root cause.
Of course, logic often differs from our intuitive thoughts, but we are able to correct our intuitive assumptions based on outcomes and then understand the actual causal relation (unlike a deep learning system) through our corrected "model" of whatever we observed. In the end, that is also how science works: we describe reality with a model, and when we discover a discrepancy, we aim to update the model. But we always have a model.
With regard to some animals understanding objects and causal relations: beyond requiring a concept of an object, I believe defining what I mean by "understanding" is not really helpful, considering that the spectrum of intelligence among animals overlaps with that of humans. Some of the more clever animals clearly have more complex thoughts, and you can interact with them in a more meaningful way than with some humans whose brains are less developed, be it due to infancy, disability, or a psychological condition.
Also, I meant the LLM comment seriously: I am already considering giving up on internet debates, because LLMs have become so sophisticated that I can no longer know whether I am arguing with a human or whether some LLM is wasting my precious lifetime.
As for how to describe consciousness, that is a largely philosophical topic and (IMO) strongly linked to whether or not free will exists, although theoretically it would be possible to be conscious yet have no actual free will. I cannot define the "sense of self" better than philosophers do, because our language lacks the words to even properly structure our thoughts on that. I can, however, tell you how I define free will:
And this lowest-level trigger event, attributed by some researchers to quantum decay, could be influenced by our free will, even if, because of this "brain lag", the actual decision happened quite some time earlier, and even if some decisions are hardwired (like reflexes, which can also be trained).
My personal model of how I would like consciousness to work: an as-of-yet undiscovered property of matter that every atom has, but which only exhibits consciousness when combined in an organic computer complex enough to process and store information.
In other words: if you could find all (or most of) the subatomic particles that made up a person at a given point in history and reassemble them in the exact same pattern, you would, in effect, re-create that person, including their consciousness at that point in time.
If instead you duplicated them from other subatomic particles with the exact same properties (as far as we can measure), who knows? Since we can neither measure nor observe the "consciousness property", how would we know whether it is equal among all particles that are equal in the properties we can measure? That would be like assuming all atoms of a given element were identical just because we see no chemical differences between their isotopes.