algekalipso

786 karma

Bio

Consciousness researcher and co-founder of the Qualia Research Institute. I blog at qualiacomputing.com

Core interests span measuring emotional valence objectively, formal models of phenomenal space and time, the importance of phenomenal binding, models of intelligence based on qualia, and neurotechnology.

Comments
42

Thank you for this fascinating post. I'll share here what I posted on Twitter too:

I have many reasons why I don't think we should care about non-conscious agency, and here are some of them:

1) That which lacks frame invariance cannot be truly real. Algorithms are not real. They look real from the point of view of (frame invariant) experiences that *interpret* them. Thus, there is no real sense in which an algorithm can have goals - they only look like it from our (integrated) point of view. It's useful for us, pragmatically, to model them that way. But that's different from them actually existing in any intrinsic substantial way.

2) The phenomenal texture of valence is deeply intertwined with conscious agency when such agency matters. The very sense of urgency that drives our efforts to reduce our suffering has a *shape* with intrinsic causal effects. This shape and its causal effects only ever cash out as such in other bound experiences. So the very _meaning_ of agency, at least in so far as moral intuitions are concerned, is inherently tied to its sentient implementation.

3) Values are not actually about states of the world, and that is because states of the world aside from moments of experience don't really exist. Or at least we have no reason to believe they exist. As you increase the internal coherence of your understanding of conscious agency, it becomes clear, little by little, that the underlying *referents* of our desires were phenomenal states all along, albeit with levels of indirection and shortcuts.

4) Even if we were to believe that non-sentient agency (imo an oxymoron) is valuable, we would also have good reasons to believe it is in fact disvaluable. Intense wanting is unpleasant, and thus sufficiently self-reflective organisms try to figure out how to realize their values with as little desire as possible.

5) Open Individualism, Valence Realism, and Math can provide a far more coherent system of ethics than any other combo I'm aware of, and they certainly rule out non-conscious agency as part of what matters.

6) Blindsight is poorly understood. There's an interesting model of how it works where our body creates a kind of archipelago of moments of experience, in which there is a central hub and then many peripheral bound experiences competing to enter that hub. When we think that a non-conscious system in us "wants something", it might very well be because it indeed has valence that motivates it in a certain way. Some exotic states of consciousness hint at this architecture - desires that seem to "come from nowhere" are in fact already the result of complex networks of conscious subagents merging and blending and ultimately binding to the central hub.

------- And then we have pragmatic and political reasons, where the moment we open the floodgates of insentient agency mattering intrinsically, we risk truly becoming powerless very fast. Even if we cared about insentient agency, why should we care about insentient agency in potential? Their scaling capabilities, cunning, and capacity for deception might quickly flip the power balance in completely irreversible ways, not unlike creating sentient monsters with radically different values than humans.

Ultimately I think value is an empirical question, and we already know enough to be able to locate it in conscious valence. Team Consciousness must wise up to avoid threats from insentient agents and coordinate around these risks catalyzed by profound conceptual confusion.

Thank you Gavin (algekalipso here).

I think that the most important EA-relevant link for #1 would be this: Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering 

For a summary, see: Review of Log Scales.

In particular, I do think aspiring EAs should take this much more seriously:

An important pragmatic takeaway from this article is that if one is trying to select an effective career path, as a heuristic it would be good to take into account how one’s efforts would cash out in the prevention of extreme suffering (see: Hell-Index), rather than just QALYs and wellness indices that ignore the long-tail. Of particular note as promising Effective Altruist careers, we would highlight working directly to develop remedies for specific, extremely painful experiences. Finding scalable treatments for migraines, kidney stones, childbirth, cluster headaches, CRPS, and fibromyalgia may be extremely high-impact (cf. Treating Cluster Headaches and Migraines Using N,N-DMT and Other Tryptamines, Using Ibogaine to Create Friendlier Opioids, and Frequency Specific Microcurrent for Kidney-Stone Pain). More research efforts into identifying and quantifying intense suffering currently unaddressed would also be extremely helpful. Finally, if the positive valence scale also has a long-tail, focusing one’s career in developing bliss technologies may pay-off in surprisingly good ways (whereby you may stumble on methods to generate high-valence healing experiences which are orders of magnitude better than you thought were possible).

Best,

Andrés :)

This post significantly adds to the conversation in Effective Altruism about how pain is distributed. As explained in the review of Log Scales, understanding that intense pain follows a long-tail distribution significantly changes the effectiveness landscape for possible altruistic interventions. In particular, this analysis shows that finding the top 5% of people who suffer the most from a given medical condition and treating them as the priority will allow us to target a very large fraction of the total pain such a condition generates. In the case of cluster headaches, the distribution is extremely skewed: 5% of sufferers experience over 50% of all cluster headaches.
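To make the concentration claim concrete, here is a minimal Python sketch. The lognormal parameters are illustrative assumptions on my part (not fits to the survey data), but they show how, under a heavy-tailed distribution, the top 5% of a population can account for the majority of the total burden, whereas under a normal distribution they cannot:

```python
import random

def top_share(samples, top_frac=0.05):
    """Fraction of the total accounted for by the top `top_frac` of samples."""
    xs = sorted(samples, reverse=True)
    k = max(1, int(len(xs) * top_frac))
    return sum(xs[:k]) / sum(xs)

random.seed(0)
# Hypothetical stand-in for per-patient yearly episode counts:
# a lognormal with sigma = 2 (illustrative, not fitted to any dataset).
heavy_tailed = [random.lognormvariate(0, 2.0) for _ in range(100_000)]
# A normal-distribution counterpart, truncated at zero for comparison.
bell_shaped = [max(0.0, random.gauss(50, 15)) for _ in range(100_000)]

print(f"lognormal: top 5% hold {top_share(heavy_tailed):.0%} of the total")
print(f"normal:    top 5% hold {top_share(bell_shaped):.0%} of the total")
```

Under these assumed parameters the top 5% of the lognormal population carries well over half of the total, while under the bell curve it carries under a tenth; which is why triaging the most extreme sufferers first captures so much of the burden.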

Moreover, the survey also showed that the leading reason sufferers don't use tryptamines to treat their condition is the difficulty of acquiring them. Thus, changing the legal landscape, e.g. via programs providing easy access to tryptamines for sufferers of migraines and cluster headaches, might be a very cost-effective way of massively reducing suffering throughout the world.

Zooming out, perhaps the significance of this goes beyond cluster headaches in particular: it perhaps hints at a more significant paradigmatic change for analyzing the cost-effectiveness of interventions.

As explained in the review of Log Scales, cluster headaches are some of the most painful experiences people can have in life. If a $5 DMT Vape Pen produced at scale is all it takes to fully take care of the problem for sufferers, this stands to be an Effective Altruist bargain.

In the future, I would love to see more analyses of this sort. Namely, analyses that look at particular highly painful conditions (the "pain points of humanity", as it were) and identify tractable, cost-effective solutions to them. Given the work in this area so far, I expect this to generate dozens of interventions that, in aggregate, might take care of perhaps even the majority of dolors experienced by people.

Most people who know about drugs tend to have an intuitive model of drug tolerance where "what goes up must come down". In this piece, the author shows that this intuitive model is wrong, for drug tolerance can be reversed pharmacologically. This seems extremely important in the context of pain relief: for people who simply have no option but to take opioids to treat their chronic pain, anti-tolerance would be a game-changer. I sincerely believe this will be a paradigm shift in the world of pain management, with a clear before-and-after cultural shift around it. But before that, a lot of foundational research needs to take place. That's the stage we are at.

We anticipate and hope that the field of anti-tolerance drugs soon materializes in an academically credible way. Given how common chronic pain is, we would all benefit from its fruits in the future.

I would like to suggest that Logarithmic Scales of Pleasure and Pain (“Log Scales” from here on out) presents a novel, meaningful, and non-trivial contribution to the field of Effective Altruism. It is novel because even though the terribleness of extreme suffering has been discussed multiple times before, such discussions have not presented a method or conceptual scheme with which to compare extreme suffering relative to less extreme varieties. It is meaningful because it articulates the essence of an intuition of an aspect of life that deeply matters to most people, even if they cannot easily put it into words. And it is non-trivial because the inference that pain (and pleasure) scales are better understood as logarithmic in nature does require one to consider the problem from multiple points of view at once that are rarely, if ever, brought together (e.g. combining deference analysis, descriptions of pain scales by their creators, latent-trait analysis, psychophysics, and so on). 

Fundamentally, we could characterize this article as a conceptual reframe that changes how one assesses magnitudes of suffering in the world. To really grasp the significance of this reframe, let's look back at how Effective Altruism itself was an incredibly powerful conceptual reframe that did something similar. In particular, a core insight that establishes the raison d'être of Effective Altruism is that the good that you can do in the world with a given set of resources varies enormously depending on how you choose to allocate them: by most criteria you may choose (whether it's QALYs or people saved from homelessness), the cost-effectiveness of causes seems to follow much more closely (at least qualitatively) a long-tail rather than a normal distribution (see: Which world problems are the most pressing to solve? by Benjamin Todd). In turn, this strongly suggests that investigating carefully how to invest one's altruistic efforts is likely to pay off in very large ways: choosing a random charity versus a top 1% charity will lead to benefits whose scale differs by orders of magnitude.

Log Scales suggests that pain and pleasure themselves follow a long-tail distribution. In what way, exactly? Well, to a first approximation, across the entire board! The article (and perhaps more eloquently the subsequent video presentation at the NYC EA Meetup on the same topic) argues that when it comes to the distribution of the intensity of hedonic states, we are likely to find long-tails almost any way we choose to slice or dice the data. This is analogous to, for example, how all of the following quantities follow long-tail distributions: avalanches per country, avalanches per mountain, amount of snow in mountains, number of avalanche-producing mountains per country, size of avalanches, number of avalanches per day, etc. Likewise, in the case of the distribution of pain, the arguments presented suggest we will find that all of the following distributions are long-tails: average pain level per medical condition, number of intensely painful episodes per person per year, intensity of pain per painful episode, total pain per person during one’s life, etc. Thus, that such a small percentage of cluster headache patients accounts for the majority of episodes per year would be expected (see: Cluster Headache Frequency Follows a Long-Tail Distribution), and along with it, the intensity of such episodes themselves would likely follow a long-tail distribution.

This would all be natural, indeed, if we consider neurological phenomena such as pain to be akin to weather phenomena. Log Scales allows us to conceptualize the state of a nervous system and what it gives rise to as akin to how various weather conditions give rise to natural disasters: a number of factors multiply each other resulting in relatively rare, but surprisingly powerful, black swan events. Nervous systems such as those of people suffering from CRPS, fibromyalgia, and cluster headaches are like the Swiss Alps of neurological weather conditions… uniquely suited for ridiculously large avalanches of suffering.

Log Scales are not just of academic interest. In the context of Effective Altruism, they are a powerful generator for identifying new important, neglected, and tractable cause areas to focus on. For instance, DMT for cluster headaches, microdose ibogaine for augmentation of painkillers in sufferers of chronic pain, and chanca piedra for kidney stones (write-up in progress) are all what we believe to be highly promising interventions (of the significant, neglected, and tractable variety) that might arguably reduce suffering in enormous ways and that would not have been highlighted as EA-worthy were it not for Log Scales. (See also: Get-Out-Of-Hell-Free Necklace). On a personal note, I've received numerous thank-you notes from sufferers of extreme pain for this research. But the work has barely begun: with Log Scales as a lens, we are poised to tackle the world's reserves of suffering with laser-focus, assured in the knowledge that preventing a small fraction of all painful conditions is all that we need to abolish the bulk of experiential suffering.

But does Log Scales make accurate claims? Does it carve reality at the joints? How do we know?

The core arguments presented were based on (a) the characteristic distribution of neural activity, (b) phenomenological accounts of extreme pleasure and pain, (c) the way in which the creators of pain scales have explicitly described their meaning, and (d) the results of a statistical analysis of a pilot study we conducted where people ranked, rated, and assigned relative proportions to their most extreme experiences. We further framed this in terms of comparing qualitative predictions from what we called the ​​Normal World vs. Lognormal World. In particular, we stated that: “If we lived in the ‘Lognormal World’, we would expect: (1) That people will typically say that their top #1 best/worst experience is not only a bit better/worse than their #2 experience, but a lot better/worse. Like, perhaps, even multiple times better/worse. (2) That there will be a long-tail in the number of appearances of different categories (i.e. that a large amount, such as 80%, of top experiences will belong to the same narrow set of categories, and that there will be many different kinds of experiences capturing the remaining 20%). And (3) that for most pairs of experiences x and y, people who have had both instances of x and y, will usually agree about which one is better/worse. We call such a relationship a ‘deference’. More so, we would expect to see that deference, in general, will be transitive (a > b and b > c implying that a > c).” And then we went ahead and showed that the data was vastly more consistent with Lognormal World than Normal World. I think it holds up.
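Prediction (1) above can be illustrated with a small simulation. The two distributions below are stand-ins chosen purely for illustration (a truncated normal versus a lognormal; none of the parameters come from the pilot study): under Normal World, a person's #1 most extreme experience is only marginally more extreme than their #2, while under Lognormal World the gap is substantial.

```python
import random
import statistics

def top_two_ratio(draw, n_experiences=100, n_people=2000):
    """Median ratio of a person's most extreme experience to their second-most,
    where each person has n_experiences drawn from the given distribution."""
    ratios = []
    for _ in range(n_people):
        xs = sorted(draw() for _ in range(n_experiences))
        ratios.append(xs[-1] / xs[-2])
    return statistics.median(ratios)

random.seed(1)
# Illustrative intensity distributions (parameters are assumptions, not fits):
normal_world = lambda: max(0.01, random.gauss(5, 1))   # bell-shaped, truncated at ~0
lognormal_world = lambda: random.lognormvariate(0, 1.5)  # heavy-tailed

print(top_two_ratio(normal_world))     # close to 1: #1 barely exceeds #2
print(top_two_ratio(lognormal_world))  # clearly above 1: #1 substantially exceeds #2
```

This is the qualitative signature the survey looked for: if people routinely report that their worst experience was not slightly but dramatically worse than their second-worst, that pattern is hard to square with Normal World.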

An additional argument that since has been effective at explaining the paradigm to newcomers has been in terms of exploring the very meaning of Just-Noticeable Differences (JNDs) in the context of the intensity of aspects of one’s experience. Indeed, for (b), the depths of intensity of experience simply make no sense if we were to take a “Just-Noticeable Pinprick” as the unit of measurement and expect a multiple of it to work as the measuring rod between pain levels in the 1-10 pain scale. The upper ends of pain are just so bright, so immensely violent, so as to leave lesser pains as mere rounding errors. But if on each step of a JND of pain intensity we multiply the feeling by a constant, sooner or later (as Zvi might put it) “the rice grains on the chessboard suddenly get fully out of hand” and we enter hellish territory (for a helpful visual aid of this concept: start at 6:06 of our talk at the 2020 EAGxVirtual Unconference on this topic).
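The multiplicative-JND picture can be sketched in a few lines. The per-step multiplier below is an assumed illustrative constant, not an empirically measured Weber fraction; the point is only how quickly a constant multiplier compounds across the 0-10 scale:

```python
# Each step up the 0-10 scale multiplies felt intensity by a constant factor
# (the Weber-Fechner picture). The factor here is purely illustrative.
FACTOR = 3.0  # assumed multiplier per point on the scale

def intensity(rating, factor=FACTOR):
    """Map a 0-10 pain rating to a linear intensity, taking the scale as logarithmic."""
    return factor ** rating

mild, severe = intensity(4), intensity(10)
print(f"A 10/10 is {severe / mild:.0f}x a 4/10")  # 3**6 = 729x
print(f"A 4/10 is {100 * mild / severe:.2f}% of a 10/10")
```

Under this (assumed) multiplier, a 4/10 pain is a rounding error next to a 10/10: this is the "rice grains on the chessboard" dynamic, where a few more JND steps carry you into hellish territory.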

From my point of view, we can now justifiably work under the assumption that the qualitative picture painted by Log Scales is roughly correct. It is the more precise quantitative analysis which is a work in progress that ought to be iterated over in the coming years. This will entail broadening the range of people interviewed, developing better techniques to precisely capture and parametrize phenomenology (e.g. see our tool to measure visual tracers), using more appropriate and principled statistical methods (e.g. see the comment about the Bradley-Terry model and extreme value theory), experimental work in psychophysics labs, neuroimaging research of peak experiences, and the search for cost-effective pragmatic solutions to deal with the worst suffering. I believe that future research in this area will conclusively establish the qualitative claims, and perhaps there will be strong consilience on the more precise quantitative claims (but in the absence of a true Qualiascope, the quantitative claims will continue to have a non-negligible margin of error).

Ok, you may say, but if I disagree about the importance of preventing pain, and I care more about e.g. human flourishing, why should I care about this? Here I would like to briefly address a key point that people in the EA sphere have raised in light of our work. The core complaint, if we choose to see it that way, is that one must be a valence utilitarian in order to care about this analysis. That only if you think of ethics in terms of classical Benthamite pain-minimization and pleasure-maximization should we be so keen on mapping the true distribution of valence across the globe. 

But is that really so?

Three key points stand out: First, that imperfect metrics that are proxies for aspects of what you care about (even when not all that you care about) can nonetheless be important. Second, that if you cared a little about suffering already, then the post-hoc discovery that suffering is actually that freaking skewed really ought to be a major update. And third, there really are reasons other than valence maximization as a terminal goal to care about extreme suffering: suffering is antithetical to flourishing, since it has long-term sequelae. Moreover, even if confined to non-utilitarian ethical theories, one can make the case that there is something especially terrible about letting one's fellow humans (and non-humans) suffer so intensely without doing anything about it. And perhaps especially so if stopping such horrors turns out to be rather easy.

Let’s tackle each in turn.

(1) Perhaps here we should bring in a simple analogy: GDP. Admittedly, there are very few conceptions of the good in which it makes sense for GDP to be the metric to maximize. But there are also few conceptions of the good where you should disregard it altogether. You can certainly be skeptical of the degree to which GDP captures all that is meaningful, but in nearly all views of economic flourishing, GDP will likely have a non-zero weight. Especially if we find that, e.g., some intervention would cause a 99.9% reduction in a country's GDP, one should probably not ignore that information (even if the value one assigns to GDP is relatively small compared to what other economists and social scientists assign it). Likewise for extreme suffering. There might be only a few conceptions of the good where that is the only thing we ought to work on. But avoiding hellish states for oneself is a rather universal desire. Why not take it at least somewhat into account?

In truth, this is not a problem that classical metrics in Effective Altruism pre-Log Scales could overcome either. For instance, as far as I am aware, in practice QALYs are used more as a guide than as an absolute; their value within EA comes from the fact that interventions differ by orders of magnitude in cost-effectiveness when assessed with QALYs. So even though the vast majority of EAs are not QALY absolutists, the differences in QALYs saved between interventions are large enough that, as an approximate guide, the metric still generates huge amounts of consilience.

(2) In turn, the post-hoc finding that hellish states are much, much worse than one would intuitively believe really should at least rebalance one's priorities somewhat. Is there really no amount of suffering that would do so? Unless one has a utility function akin to a ReLU activation function, going far enough down into the depths of hell ought to count for something. And…

(3) Speaking candidly, fully articulating the true significance of this finding will take us to philosophically polemical territory: philosophy of personal identity, where person-affecting views will see the situation quite differently than person-moment-affecting views; philosophy of mind, where the ontological status of pleasure and pain is questioned; and intricate questions that arise at the interface between the views of virtue ethicists, deontologists, and negative and classical utilitarians. Of course, a negative utilitarian who believes in Empty Individualism and Eternalism at the same time will likely be especially horrified by this information. But I would suggest that there are good reasons to care no matter how antithetical one's view is to philosophical hedonism.

In particular, I would argue that deontologists and virtue ethicists should still take note. The cultivation of virtue requires a minimum of wellbeing in order to maintain the motivation to live. And perhaps deontologists might find extreme suffering particularly egregious from the point of view of "things so horrible that they ought not to be". Indeed, the people we interviewed for the cluster headache write-up told us that experiencing such levels of hellish suffering causes profound psychological transformations (e.g. one interviewee told us that experiencing the higher end of pain in a cluster headache feels like a profound "spiritual violation" from which you may never recover - a feeling most certainly aggravated by the seeming indifference of people at large about their plight). Virtue ethicists and deontologists might as well recognize this cause area as work that it is unconscionable not to perform, regardless of arguments based on precise mathematical optimization for the prevention of negative valence.

And finally, in all seriousness, as the cognitive intelligentsia of the world begins to see clearly the nature of the distribution of pleasure and pain, we can expect there to be a big social benefit to being the one who destroys hell. Right now there isn’t a huge social reward to be obtained by working on this cause, but I predict this will change. And, pragmatically, it is sensible to present this cause in a motivating rather than depressing light: indeed, let’s give honor, glory, and endless admiration to whoever makes tangible progress in tearing hell down. And to all of the millionaires and billionaires reading this: this could be you! You could be the one who took on the mantle of preventing all future cluster headaches, established the field of anti-tolerance drugs for severe chronic pain, or got rid of kidney stones (and you did it before it was cool!). Let’s get to work!

Hi Holden!

I am happy to see you think deeply about questions of personal identity. I've been thinking about the same for many years (e.g. see "Ontological Qualia: The Future of Personal Identity"), and I think that addressing such questions is critical for any consistent theory of consciousness and ethics.

I broadly agree with your view, but here are some things that stand out as worth pointing out:

First, I prefer Daniel Kolak's factorization of "views of personal identity". Namely, Closed Individualism (common sense - we are each a "timeline of experience"), Empty Individualism (we are all only individual moments of experience, perhaps most similar to Parfit's reductionist view as well as yours), and Open Individualism (we are all the same subject of experience). 

I think that if Open Individualism is true a lot of ethics could be drastically simplified: caring about all sentient beings is not only kind, but in fact rational. While I think that Empty Individualism is a really strong candidate, I don't discard Open Individualism. If you do assume that you are the same subject of experience over time (which I know you discard, but many don't), I think it follows that Open Individualism is the only way to reconcile that with the fact that each moment of experience generated by your brain is different. In other words, if there is no identity carrier we can point to that connects every moment of experience generated by e.g. my brain, then we might infer that the very source of identity is the fact of consciousness per se. Just something to think about.

The other key thing I'd highlight is that you don't seem to pay much attention to the mystery of why each snapshot of your brain is unified. Parfit also seems to have some sort of neglect around this puzzle, as I don't see it addressed anywhere in his writings despite its central importance to the problem of personal identity.

Synchrony is not a good criterion: there is no universal frame of reference. Plus, even if we could use synchrony as an approximate "unifier" of physical states, we would still face the problem that a natural ground-truth boundary would need to arise to make your brain generate a moment of experience that is numerically distinct from those generated by other brains at the same time.

I do think that there is in fact a way to solve this. To do so, rather than thinking in terms of "binding" (i.e. why do these two atoms contribute to the same experience but not these two atoms?), we should think in terms of "boundaries" (i.e. what makes this region of reality have a natural boundary that separates it from the rest?). In particular, my solution uses topological segmentation, and IMO solves all of the classic problems. It results in a strong case for Empty Individualism, since topological boundaries in the fields of physics would be objective, causally significant, and frame-invariant (all highly desirable properties for the mechanism of individuation so that e.g. natural selection would have a way of recruiting moments of experience for computational purposes). Additionally, the topological pockets that define individual moments of experience would be spatiotemporal in nature. We don't need to worry about infinitesimal partitions and a lack of objective frames of reference for simultaneity because the topological pockets have definite spatial and temporal depth. There would, in fact, be a definite and objective answer to "how many experiences are there in this volume of spacetime?" and similar questions.

If interested, I recommend watching my video about my solution to the binding problem here: Solving the Phenomenal Binding Problem: Topological Segmentation as the Correct Explanation Space. Even just reading the video description goes a long way :-) Let me know your thoughts if you get to it.

All the best! 

People are asking for object-level justifications for the Symmetry Theory of Valence:

The first thing to mention is that the Symmetry Theory of Valence (STV) is *really easy to strawman*. It really is the case that there are many near enemies of STV that sound exactly like what a naïve researcher who is missing developmental stages (e.g. is a naïve realist about perception) would say. That we like pretty symmetrical shapes of course does not mean that symmetry is at the root of valence; that we enjoy symphonic music does not mean harmony is "inherently pleasant"; that we enjoy nice repeating patterns of tactile stimulation does not mean, well, you get the idea...

The truth of course is that at QRI we really are meta-contrarian intellectual hipsters. So the weird and often dumb-sounding things we say are already taking into account the criticisms people in our people-cluster would make and are taking the conversation one step further. For instance, we think digital computers cannot be conscious, but this belief comes from entirely different arguments than those that justify such beliefs out there. We think that the "energy body" is real and important, except that we interpret it within a physicalist paradigm of dynamic systems. We take seriously the possible positive sum game-theoretical implications of MDMA, but not out of a naïve "why can't we all love each other?" impression, but rather, based on deep evolutionary arguments. And we take seriously non-standard views of identity, not because "we are all Krishna", but because the common-sense view of identity turns out to, in retrospect, be based on illusion (cf. Parfit, Kolak, "The Future of Personal Identity") and a true physicalist theory of consciousness (e.g. Pearce's theory) has no room for enduring metaphysical egos. This is all to say that straw-manning the paradigms explored at QRI is easy; steelmanning them is what's hard. Can anyone here make a Titanium Man out of them instead? :-)

Now, I am indeed happy to address any mischaracterization of STV. Sadly, to my knowledge nobody outside of QRI really "gets it", so I don't think there is anyone other than us (and possibly Scott Alexander!) who can make a steelman of STV. My promise is that "there is something here" and that to "get it" is not merely to buy into the theory blindly, but rather, it is what happens when you give it enough benefit of the doubt, share a sufficient number of background assumptions, and have a wide enough experience base that it actually becomes a rather obvious "good fit" for all of the data available.

For a bit of history (and properly giving due credit), I should clarify that Michael Johnson is the one who came up with the hypothesis in Principia Qualia (for a brief history see: STV Primer). I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it's pointing in the right direction. I'm talking about a process of elimination where, for instance, I checked if what feels good is at the computational level of abstraction (such as prediction error minimization) or if it's at the implementation level (i.e. dissonance). I then developed a number of technical paradigms for how to translate STV into something we could actually study in neuroscience and ultimately try out empirically with non-invasive neurotech (in our case, light-sound-vibration systems that produce multi-modally coherent high-valence states of consciousness). Quintin Frerichs (who gave a presentation about Neural Annealing to Friston) has since been working hard on the actual neuroscience of it in collaboration with Johns Hopkins University, Daniel Ingram, Imperial College and others. We are currently testing the theory in a number of ways and will publish a large paper based on all this work.

For clarification, I should point out that what is brilliant (IMO) about Mike's Principia Qualia is that he breaks down the problem of consciousness in such a way that it allows us to divide and conquer the hard problem of consciousness. Indeed, once broken down into his 8 subproblems, calling it the "hard problem of consciousness" sounds as bizarre as it would sound to us to hear about "the hard problem of matter". We do claim that if we are able to solve each of these subproblems, then indeed the hard problem will dissolve. Not the way illusionists would have it (where the very concept of consciousness is problematic), but rather, in the way that electricity and lightning and magnets all turned out to be explained by just 4 simple equations of electromagnetism. Of course the further question of why those equations exist and why consciousness follows such laws remains, but even that could IMO be fully explained with the appropriate paradigm (cf. Zero Ontology).


The main point to consider here w.r.t. STV is that symmetry is posited to be connected with valence at the implementation level of analysis. This squarely and clearly distinguishes STV from behaviorist accounts of valence (e.g. "behavioral reinforcement") and also from algorithmic accounts (e.g. compression drive or prediction error minimization). Indeed, with STV you can have a brain (perhaps a damaged brain, or one in an exotic state of consciousness) where prediction errors are not in fact connected to valence. Rather, the brain evolved to recruit valence gradients in order to make better predictions. Similarly, STV predicts that what makes activation of the pleasure centers feel good is precisely that doing so gives rise to large-scale harmony in brain activity. This is exciting because it means the theory predicts we can actually observe a double dissociation: if we inhibit the pleasure centers while exogenously stimulating large-scale harmonic patterns we expect that to feel good, and we likewise expect that even if you activate the pleasure centers you will not feel good if something inhibits the large-scale harmony that would typically result. Same with prediction errors, behavior, etc.: we predict we can doubly-dissociate valence from those features if we conduct the right experiment. But we won't be able to dissociate valence from symmetry in the formalism of consciousness.
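To make the predicted double dissociation concrete, here is a toy encoding of the prediction (a sketch of the claim only, not an implementation of STV; the function and labels are mine):

```python
# Toy encoding of STV's predicted double dissociation: valence is claimed
# to track large-scale harmony (the implementation level), not
# pleasure-center activation per se.
def stv_predicted_valence(pleasure_centers_active: bool,
                          large_scale_harmony: bool) -> str:
    # Under STV, only the harmony variable matters for felt valence.
    return "positive" if large_scale_harmony else "negative"

for centers in (True, False):
    for harmony in (True, False):
        print(f"centers={centers}, harmony={harmony} -> "
              f"{stv_predicted_valence(centers, harmony)}")
```

The two off-diagonal cells (pleasure centers on with harmony suppressed, and harmony induced with pleasure centers inhibited) are exactly the experiments described above.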

Now, of course we currently can't see consciousness directly, but we can infer a lot of invariants about it with different "projections", and so far all of them are consistent with STV.

Of special note, I'd point you to one of the studies discussed in the 2020 STV talk: The Human Default Consciousness and Its Disruption: Insights From an EEG Study of Buddhist Jhāna Meditation. It shows a very tight correspondence between jhanas and various smoothly-repeating EEG patterns, including seizure-like activity that, unlike normal seizures (typically of bad valence), shows up as having a *harmonic structure*. Here we find a beautiful correspondence between (a) the sense of peace/jhanic bliss, (b) phenomenological descriptions of simplicity and smoothness, (c) valence, and (d) actual neurophysiological data mirroring these phenomenological accounts. At QRI we have observed something quite similar when studying the EEG patterns of other ultra-high-valence meditation states (which we will hopefully publish in 2022). I expect this pattern to hold for other exotic high-valence states in one way or another, ranging from quality of orgasm to exogenous opioids.
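For readers who want a concrete handle on what "harmonic structure" in a signal could mean, here is a minimal sketch on synthetic data (the function, frequencies, and threshold are illustrative assumptions of mine, not the analysis pipeline used in the study):

```python
import numpy as np

def harmonicity(signal: np.ndarray, fs: float, f0: float,
                n_harmonics: int = 5, tol: float = 0.5) -> float:
    """Fraction of spectral power lying within `tol` Hz of integer
    multiples of the fundamental f0 (a crude 'harmonic structure' score)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f0) < tol
    return power[mask].sum() / power.sum()

fs = 256.0
t = np.arange(0, 10, 1 / fs)
# A 5 Hz fundamental plus integer harmonics (harmonic structure)...
harmonic = sum(np.sin(2 * np.pi * k * 5.0 * t) / k for k in (1, 2, 3))
# ...versus unrelated partials (no harmonic structure).
inharmonic = sum(np.sin(2 * np.pi * f * t) for f in (5.0, 11.3, 17.9))

print(harmonicity(harmonic, fs, 5.0))    # near 1
print(harmonicity(inharmonic, fs, 5.0))  # much lower
```

A score like this would distinguish the smoothly-repeating, harmonically structured activity described in the jhāna study from ordinary seizure-like activity with spectrally scattered power.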

Phenomenologically speaking, STV is not only capable of describing and explaining why certain meditation or psychedelic states of consciousness feel good or bad; it can in fact be used as a navigation aid! You can introspect on the ways energy does not flow smoothly, notice how blockages and pinch points make it reflect in discordant ways, or home in on areas of the "energy body" that are out of sync with one another and then specifically use attention to "comb the field of experience". This approach (the purely secular climbing of the harmony gradient) leads all on its own to amazing high-valence states of consciousness (cf. Buddhist Annealing). I'll probably make a video series with meditation instructions for people to actually experience this first-hand. It doesn't take very long, actually. Also, STV as a paradigm can be used to experience more pleasant trajectories along the "Energy X Complexity landscape" of a DMT trip (something I even talked about at the SSC meetup online!). In a simple quip: there are good and bad ways of vibing on DMT, and STV gives you the key to the realms of good vibes :-)

Another angle: we can find subtle ways of dissociating valence from e.g. chemicals. If you take stimulants but don't feel the nice buzz that provides a "working frame" for your mental activity, they will not feel good. At the same time, without stimulants you can get that pleasant productivity-enhancing buzz with the right tactile patterns of stimulation. Indeed, this "buzz" that characterizes the effects of many euphoric drugs (and the quality of e.g. metta meditation) is precisely a valence effect, one that provides a metronome to self-organize around and which can feel bad when you don't follow where it takes you. Literally, one of the core reasons why MDMA feels better than LSD, which in turn feels better than DOB, is precisely that the "quality of the buzz" of each of these highs is different: MDMA's buzz is beautiful and harmonious; DOB's buzz is harsh and dissonant. What's more, such a buzz can work as task-specific dissonance guide-rails, if you will, meaning that when you do buzz-congruent behaviors you feel a sense of inner harmony, whereas when you do buzz-incongruent behaviors you feel a sense of inner turmoil. Hence what kind of buzz one experiences is deeply consequential! All of this falls rather nicely within STV; IMO other theories need to keep adding epicycles to keep up.

Hopefully all of this serves as useful clarification.

Thank you for this very insightful and information-dense article!

My sense is that critical flicker fusion (CFF) is more about sampling rate than about phenomenal time per se. Also, just because time feels slow doesn't mean you are actually getting more experience overall. The critical issue here is the difference between phenomenal time and physical time (as covered in the Pseudo-Time Arrow).

In particular, one could e.g. have 1000 experiences per second and think that you are only having one experience per second (e.g. lots of very short pseudo-time arrows!), or you could have 1 experience per second but feel like you are having 1000s of them (e.g. when the single experience per second happens to have a huge pseudo-time arrow that integrates a lot of temporally-rich information). So I think CFF will be correlated with amount of qualia and subjective sense of time, but only mildly. And that to get the ground truth of "amount of qualia" we will need to see through phenomenal time as a construct.
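The dissociation above can be made concrete with a toy model (my own illustrative arithmetic, not a QRI model): treat "felt duration" as the product of how many discrete experiences occur per physical second and how much pseudo-time each experience integrates.

```python
# Toy model: felt duration = (moments per physical second)
#                          x (pseudo-time integrated per moment)
#                          x (physical seconds elapsed)
def felt_seconds(moments_per_second: float,
                 pseudo_time_per_moment: float,
                 physical_seconds: float) -> float:
    return moments_per_second * pseudo_time_per_moment * physical_seconds

# 1000 short moments/s, each integrating 0.001 s of pseudo-time...
a = felt_seconds(1000, 0.001, 60)
# ...feels exactly like 1 long moment/s integrating 1 s of pseudo-time.
b = felt_seconds(1, 1.0, 60)
print(a, b)  # same felt duration, wildly different moment counts
```

This is why a measure like CFF, which tracks something closer to the moment count, can only be mildly correlated with the subjective sense of time.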

I mean, for example, I don't think you get different CFFs on DMT, even though your pseudo-time arrow is extremely distorted and at times "seconds can feel like eternities".

I don't see anything like that from QRI either, although someone can correct me if I missed it.

 

In Principia Qualia (p. 65-66), Mike Johnson posits:

What is happening when we talk about our qualia? 

If ‘downward causation’ isn’t real, then how are our qualia causing us to act? I suggest that we should look for solutions which describe why we have the sensory illusion of qualia having causal power, without actually adding another causal entity to the universe.

I believe this is much more feasible than it seems if we carefully examine the exact sense in which language is ‘about’ qualia. Instead of a direct representational interpretation, I offer we should instead think of language’s ‘aboutness’ as a function of systematic correlations between two things related to qualia: the brain’s logical state (i.e., connectome-level neural activity), particularly those logical states relevant to its self-model, and the brain’s microphysical state (i.e., what the quarks which constitute the brain are doing). 

In short, our brain has evolved to be able to fairly accurately report its internal computational states (since it was adaptive to be able to coordinate such states with others), and these computational states are highly correlated with the microphysical states of the substrate the brain’s computations run on (the actual source of qualia). However, these computational states and microphysical states are not identical. Thus, we would need to be open to the possibility that certain interventions could cause a change in a system’s physical substrate (which generates its qualia) without causing a change in its computational level (which generates its qualia reports). We’ve evolved toward having our qualia, and our reports about our qualia, being synchronized – but in contexts where there hasn’t been an adaptive pressure to accurately report our qualia, we shouldn’t expect these to be synchronized ‘for free’. 

The details of precisely how our reports of qualia, and our ground-truth qualia, might diverge will greatly depend on what the actual physical substrate of consciousness is. What is clear from this, however, is that transplanting the brain to a new substrate – e.g., emulating a human brain as software, on a traditional Von Neumann architecture computer – would likely produce qualia very different from the original, even if the high-level behavioral dynamics which generate its qualia reports were faithfully replicated. Copying qualia reports will likely not copy qualia. 

I realize this notion that we could (at least in theory) be mistaken about what qualia we report & remember having is difficult to swallow. I would just say that although it may seem far-fetched, I think it’s a necessary implication of all theories of qualia that don’t resort to anti-scientific mysticism or significantly contradict what we know of physical laws. 

Back to the question: why do we have the illusion that qualia have causal power? 

In short, I’d argue that the brain is a complex, chaotic, coalition-based dynamic system with well defined attractors and a high level of criticality (low activation energy needed to switch between attractors) that has an internal model of self-as-agent, yet can’t predict itself. And I think any conscious system with these dynamics will have the quale of free will, and have the phenomenological illusion that its qualia have causal power. 

And although it would be perfectly feasible for there to exist conscious systems which don’t have the quale of free will, it’s plausible that this quale will be relatively common across most evolved organisms. (Brembs 2011) argues that the sort of dynamical unpredictability which leads to the illusion of free will tends to be adaptive, both as a search strategy for hidden resources and as a game-theoretic advantage against predators, prey, and conspecifics: “[p]redictability can never be an evolutionarily stable strategy.”
