Paper review of: “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0”, Giulio Tononi’s theory of how phenomenal consciousness arises from the cause-effect structure of physical systems.

We only made it through the first 10 pages of the paper, so this discussion will be continued at a later point.

Axioms

The approach of the theory is to first formalise first-person descriptions of the nature of conscious experiences as axioms. These are:

  • consciousness exists
  • conscious experiences are composed of multiple smaller details
  • conscious experiences contain information - they differ from each other and constrain the range of possible past or future experiences
  • conscious experiences are integrated - they can’t be divided into independent components without losing information
  • consciousness is exclusive - we have only one conscious experience at a time

We were mostly happy with these axioms, and with the approach of starting from axioms at all. A theory of consciousness must at least be consistent with our own observations, since those are the only data about which we can make testable predictions. The exclusion axiom was somewhat contentious: we felt it is possible, to some extent, to be conscious of multiple things at once. But a single conscious experience could be defined broadly enough to encompass this, and typical conscious experiences don’t consist of simultaneous streams from different perspectives.

Postulates

Taking the above axioms as given, IIT then postulates that a physical system must have corresponding properties (existence, composition, information, integration, exclusion) in order to give rise to consciousness, and it is the mathematical formalisation of these postulates that forms the core of IIT. While this structure makes logical sense, the move from axioms to postulates is a significant leap: we found no obvious problems with the postulates themselves, but it is not clear that they are the only possible set of postulates consistent with the axioms.

But this logical leap would likely be required of any theory aiming to connect high-level phenomenological descriptions to a low-level mathematical description. It is unclear whether it would even be possible to build a mathematical formalism top-down from the axioms without at some point jumping from the high level to the low level and building back up.
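To get a concrete feel for what the integration postulate buys a system, here is a deliberately simplified toy sketch (our own, in Python; all names are ours). It is not IIT 3.0’s actual Φ, which is computed over cause-effect repertoires and an earth mover’s distance, but it captures the underlying idea: a system is integrated to the extent that cutting it into independent parts destroys information about its own past.

```python
import itertools
import math

# Toy illustration of the "integration" postulate (our own sketch, NOT
# IIT 3.0's actual phi): compare how much the current state tells us
# about the past state for the whole system versus the cut system.

STATES = list(itertools.product([0, 1], repeat=2))  # states (a, b) of two binary nodes

def whole(past):
    """Intact dynamics: each node copies the other node's previous state."""
    a, b = past
    return {(b, a): 1.0}  # p(current | past), deterministic

def cut(past):
    """The A<->B connections severed: each node's input is replaced by
    uniform noise, so the next joint state is uniformly random."""
    return {s: 0.25 for s in STATES}

def mutual_information(dynamics):
    """I(past; current) in bits, assuming a uniform prior over past states."""
    p_past = 1.0 / len(STATES)
    # marginal distribution over current states
    p_current = {s: 0.0 for s in STATES}
    for past in STATES:
        for current, p in dynamics(past).items():
            p_current[current] += p_past * p
    mi = 0.0
    for past in STATES:
        for current, p in dynamics(past).items():
            if p > 0.0:
                mi += p_past * p * math.log2(p / p_current[current])
    return mi

print(f"whole system: {mutual_information(whole):.1f} bits about the past")
print(f"cut system:   {mutual_information(cut):.1f} bits about the past")
# whole system: 2.0 bits; cut system: 0.0 bits.
```

In this toy, the intact two-node “swap” system pins down its past state completely (2 bits), while the cut system knows nothing about it (0 bits). That 2-bit gap is information that exists only in the system taken as a whole, which is roughly the property IIT’s far more elaborate Φ measure is built to quantify.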

Conclusions

Overall, IIT’s most interesting aspect is that it makes firm predictions about which systems are and are not conscious. The second half of the paper contains a number of example systems, which we will run through next time to get a feel for the theory. The mathematical framework, while somewhat involved, is easy enough to follow.
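For orientation, our rough paraphrase of the headline quantity (not the paper’s exact notation): the integrated conceptual information Φ of a system S in state s is the distance between the conceptual structure of the whole system and that of the system under its minimum information partition, the cut that makes the least difference:

$$\Phi(S, s) = D\big(C(S, s),\, C(S^{\text{MIP}}, s)\big), \qquad \text{MIP} = \operatorname*{arg\,min}_{P} D\big(C(S, s),\, C(S^{P}, s)\big)$$

where C denotes a conceptual structure and D is the paper’s (extended earth mover’s) distance between conceptual structures.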

Random Musings

According to IIT, can different states have the same (or very similar) conceptual structures? Intuitively, the sheer number of possible brain states means we could never be in exactly the same state twice, yet experiences can still feel very similar to previous ones. Does IIT predict this?

How do we distinguish the continuous “narrative” of first-person experience from simple qualia? Does the narrative come from integrating qualia over longer time scales, or does it arise from an entirely different process than “raw feels”?

The “is perceiving a particular colour a fundamental experience, or does it just rely on associations” debate rages on… Where do the associations come from? Can “red” only exist as a concept once we have seen a sufficient number of red objects and clustered those experiences together? Is this clustering process biologically hardwired? How long would it take a baby to form a clustered concept of a colour that would support associations? A baby viewing a colour for the first time would have different photoreceptor responses to red versus blue light, and hence different neural activity. But would these experiences “feel” different?