this post was submitted on 26 Oct 2022
5 points (85.7% liked)

Science

13874 readers
54 users here now



founded 5 years ago
top 5 comments
[–] [email protected] 3 points 2 years ago* (last edited 2 years ago) (1 children)

Very interesting work! I think it is very well laid out, and I like that they define everything so carefully. Some comments:

We argue that consciousness originally developed as part of the episodic memory system—quite likely the part needed to accomplish that flexible recombining of information.

I think that to demonstrate this, one would need to show that having a conscious experience is required to achieve such flexibility. Showing that would require solving the 'hard problem'.

In the limitations section, they state:

We wish to clearly state that we are well aware that our so-called explanations of the subjective experience of consciousness do not even begin to get at the hard problem of consciousness—how a collection of neurons and supporting brain tissue produces subjective experience. We are, however, hopeful that our slightly increased understanding of the phenomenology of subjective experience of consciousness can help other researchers look in the right locations and do the right experiments to tackle the hard problem. Our suggestion to them is to focus on the conscious memory system.

But I think they are understating the severity of this problem. Solving the hard problem is essential to backing up their claim, because its solution would tell us the conditions under which conscious experience arises. If we don't know those conditions, then we can't know whether our memory system HAD to give rise to consciousness in order to achieve what it does. What they propose gives us more tools to work on the 'easy problems' of consciousness, but they do sell their hypothesis in the context of trying to solve the hard problem, for example when stating:

Again, we hope that this discussion of what we consider to be key brain regions and structures will help others to dive deeper and achieve a more complete understanding of the hard problem.

And I also hope that it does, because solving the hard problem during my lifetime would be super cool. I just don't see how any of the experiments proposed will move us closer.

[–] [email protected] 0 points 2 years ago (1 children)

In my opinion one of the key prerequisites for solving the hard problem is coming up with a theory that explains the necessity of consciousness.

Personally, I've always been partial to the idea that what we experience is effectively a simulation synthesized by the brain. This makes a lot of sense from a thermodynamic perspective: working on an internal model is much cheaper than parsing sensory input directly. Having an internal model that gets synced up with sensory input at intervals is efficient, and it explains a lot of things like sensory illusions, dreams, the effects of psychedelic drugs, and so on.
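As a toy illustration of that efficiency argument (my own sketch, not anything from the paper; the threshold and drift values are made up): an agent that extrapolates a cheap internal estimate, and only pays for a full sensory re-sync when its prediction error grows large, can track a drifting signal with very few expensive updates.

```python
import random

random.seed(0)

THRESHOLD = 2.0    # tolerated prediction error before paying for a re-sync
model_state = 0.0  # cheap internal estimate of the world
full_syncs = 0     # count of expensive "parse raw sensory input" events

for step in range(100):
    # the real signal drifts slowly, with sensory noise on top
    signal = 0.1 * step + random.gauss(0, 0.5)
    # the model extrapolates its own estimate instead of looking outside
    prediction = model_state + 0.1
    if abs(signal - prediction) > THRESHOLD:
        model_state = signal  # expensive: re-sync to raw sensory input
        full_syncs += 1
    else:
        model_state = prediction  # cheap: keep running the simulation

print(f"full sensory re-syncs: {full_syncs} out of 100 steps")
```

As long as the model's extrapolation roughly matches how the world drifts, nearly every step is the cheap branch, which is the thermodynamic point: continuous full parsing of the senses is replaced by occasional corrections.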

The theory that consciousness is needed as a mechanism for synthesizing memories in order to project into the future offers a potential insight into why developing such a simulation would increase an organism's chances of survival.

I imagine that the work on AIs is the most likely path towards figuring out the hard problem. We can test such theories by constructing neural networks that produce the desired behaviors and then seeing whether the observed behaviors match the predictions.

It's of course impossible to say definitively whether a system has qualia of experience or not, but I think that if we have a theory for the role that qualia serves, and we build a system that appears to exhibit qualia, then we could reasonably assume that it really does have an internal experience of the world akin to ours.

[–] [email protected] 2 points 2 years ago (1 children)

I see two likely possibilities:

  1. Qualia is a property that emerges spontaneously in systems that process information. There may be a threshold of complexity at which it emerges, or it may always be present and scale in intensity with complexity. On this view, living things did not evolve qualia because it is useful; our qualia arise spontaneously from the complexity of our brains.
  2. Qualia is a property that requires a very specific set of circumstances to emerge, and living things actively evolved to have qualia because it serves a useful purpose.

I agree with what you say. I would call the process of working on the internal model "thinking". In principle, a system could think just like we do and appear to possess qualia while not actually having them. Unless scenario (1) is correct and qualia spontaneously emerges in any sufficiently complex thinking system, which I think is the case. So qualia is a necessary byproduct of our useful internal-model processing.

As you said, if we make the reasonable assumption that systems which behave as if they have qualia do have qualia, then we can proceed to do useful things in this field: design new kinds of useful conscious systems, have discussions about AI and animal welfare ethics, and so on.

As to:

It’s of course impossible to say definitively whether a system has qualia of experience or not

I interpret this as saying that the answer to the hard problem is "we can't solve it, because it is fundamentally impossible to measure qualia". Measuring qualia is essential to addressing the hard problem, because if we can't show that a system has qualia, then we can't establish the conditions under which it arises.

I imagine that the work on AIs is the most likely path towards figuring out the hard problem. We can test such theories by constructing neural networks that produce the desired behaviors and then seeing whether the observed behaviors match the predictions.

I also used to think that AI was the best way. Recently, though, the paper on training neurons to play ping-pong blew my mind, because it showed in practice that you can apply negative and positive stimulation to a simple network of neurons by feeding it either predictable or random information. I am not especially hopeful that the hard problem is solvable, but seeing work like this makes me more optimistic.

I think that qualia is one of the most interesting pieces of nature, if not the most interesting. To me it is practically supernatural: the laws of physics as we know them can describe our world without a hint of qualia, yet the universe seems to have sprinkled the gift of conscious awareness on us for no apparent reason. It is so ubiquitous (I think), and yet we can't answer the simplest questions about it with certainty.

[–] [email protected] 1 points 2 years ago

I'm definitely partial to the idea that qualia is an emergent property, but I also think that it's a continuous gradient as opposed to a binary property. I suspect that some level of self awareness arises in any organism that interacts with its environment in a volitional way. At a fundamental level it's the organism modelling itself in its environment. Being able to move towards food and away from danger are two basic requirements, and creating an internal model that integrates sensory inputs and reacts to them is a good solution to this problem. And we observe this in pretty much all living organisms. Then it's just a matter of how complex this model becomes.

Simple organisms have a very rudimentary model, while complex ones exhibit increasing levels of fidelity. There is also a possibility that qualia is just an exhaust fume from this process. It's entirely possible that the functional survival of the organism does not necessitate consciousness, but that consciousness is an emergent property that necessarily falls out of the organism's need to model itself within its environment, a sort of echo resulting from that process.

I definitely agree that this is one of the most interesting problems in nature. It's fascinating that it's something so core to our experience, yet we're unable to define what it is exactly.

[–] [email protected] 1 points 2 years ago

That's very intriguing, thanks for posting!