
I think all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics.

If you do a lot of research on epistemic facts related to your political beliefs, the first-order consequence is often that you spend hours doing mildly unpleasant reading, and then your friends yell at you and call you a Nazi.

In the case of doing your taxes or the lion, that unpleasantness is outweighed by the much larger unpleasantness of being sued by the IRS and/or eaten alive by a lion. So there's a normal tradeoff between costs (filing taxes is boring, seeing lions is scary) and benefits (not being sued or devoured).

But in the case of political beliefs, the costs are internalized (your friends hate you) and the benefits are diffuse (1 vote out of 160 million toward a different policy outcome). So it's no wonder that people aren't motivated to have a scout mindset.
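To put rough numbers on that tradeoff, here's a minimal expected-value sketch in Python. The electorate size is the 160 million from the sentence above; the policy value and social cost are invented purely for illustration:

```python
# Toy expected-value sketch of internalized costs vs. diffuse benefits.
# Every number except the electorate size is made up for illustration.

VOTERS = 160_000_000           # size of the electorate, as above
POLICY_VALUE = 1_000_000_000   # hypothetical total value of a better policy outcome
SOCIAL_COST = 500              # hypothetical personal cost of your friends yelling at you

# The benefit of a more accurate political belief is diffuse: one vote
# changes the outcome with probability roughly 1 / VOTERS.
expected_benefit = POLICY_VALUE / VOTERS   # ~6.25 units, spread across everyone

# The cost of disagreeing with your friends lands entirely on you.
expected_cost = SOCIAL_COST                # 500 units, concentrated on you

print(expected_benefit < expected_cost)    # True: scout mindset doesn't pay here
```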

[Comment deleted]

People are pretty good at avoiding sources of stress, especially ones that offer no obvious benefit. I can't really imagine why anyone should go become a scout given the tradeoffs. Maybe it's a noble sacrifice you make for the greater good?

[Comment deleted]

I don't think that's obvious. The scout mindset is almost definitionally being less confident about your knowledge. Soldiers seem confident too; that's kind of the point.


> Confidence comes from competence

I think that's only one route to confidence. Plenty of people are very confident without being competent, specifically because they have the soldier mindset.


If the scout mindset were that effective, I think evolution would have made it more common.


I don't think even Julia Galef would claim that a scout mindset makes you more likely to have babies (which is all 'effectiveness' means from an evolutionary perspective).


Yes, and I'll emphasize that this was the case prior to the invention of birth control as well.


Because if you become good at predicting social and economic trends that everyone else thinks are wild pie-in-the-sky nonsense but that actually happen, you get words like "disrupter" and "avant-garde" attached to you, attain social status and financial success, and ultimately get to have a kid with Grimes before she divorces you and decides she's a Marxist now.


Now this I could believe, with the caveat that you have to already be a billionaire to get "disrupter" or "avant-garde". The rest of us just get "crazy" or "weird".

And then of course there are views that aren't pie in the sky, just ordinary run-of-the-mill views of the opposite tribe. Those get the label "Nazi" or "Marxist" depending on which tribe you defect to.


> The rest of us just get "crazy" or "weird".

If you're a Bitcoin millionaire, people aren't going to call you "crazy" (unless you start trying to argue for the AOC to be abolished, cannibalism to be legalized, or something else so far outside the Overton Window that no amount of money can shield you from social censure). "Crazy" requires you to flap your arms, jump off the barn, and break your legs. If you actually FLY, you're an eccentric visionary. I realize it's really, really easy to imagine that social classes are an iron-clad law of the universe and that people will still treat you as Joe Schmoe from Buffalo even if your bank account hits 7+ figures, but (in America, at least) something like 70% of social class is attached to your net worth.

> Politics

I hope you aren't looking for someone in a random blog's comment section to give a complete solution to political disagreement. I was just making the point that there are, in fact, reasons to break from the herd besides pure altruism.


There are some, very limited, reasons to break from the herd in certain narrow circumstances.


Yes, breaking from consensus is high-risk high-reward, as opposed to being a follower, which is low-risk low-reward. I think you're trying to argue against some point I'm not making here.


I'm not trying to argue for or against any point you're making. At least not intentionally.

I only want to argue that all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics. Nothing more, nothing less.


The title of this post is "Motivated Reasoning As Mis-Applied Reinforcement Learning". I'm not missing it; I'm trying to explain what it is.


I'm not trying to focus on the title, just the argument being made. I believe that all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics.

I have a prior: being eaten alive by a lion would be extremely painful and end my life before I can procreate. I've assigned 100% probability to that prior. The brain is able to backpropagate to my visual cortex that a False Positive on a lion is scary but low cost (let's say a loss value of 1.0), while a False Negative on a lion is life-ending (let's say a loss value of 10000.0). Given those incentives, your visual cortex optimizes lion recognition for high recall and reasonable precision.
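Here's a minimal sketch of that asymmetry in Python. The two loss values are the toy numbers from the paragraph above, and the decision rule is just a standard expected-loss comparison, so treat it as an illustration rather than anything claimed in the post:

```python
# Toy asymmetric-loss lion detector. The costs are the made-up
# numbers from the comment above, not anything empirical.
FP_LOSS = 1.0       # false alarm: a brief scare, then relief
FN_LOSS = 10000.0   # miss: you get eaten

def should_flag_lion(p_lion: float) -> bool:
    """Declare 'lion' whenever the expected cost of staying quiet
    exceeds the expected cost of a false alarm."""
    expected_miss_cost = p_lion * FN_LOSS          # risk of saying "no lion"
    expected_alarm_cost = (1 - p_lion) * FP_LOSS   # risk of crying wolf
    return expected_miss_cost > expected_alarm_cost

# The break-even probability is FP_LOSS / (FP_LOSS + FN_LOSS) ~= 0.0001,
# so the detector fires on even a 0.01% suspicion: high recall,
# modest precision, exactly the tradeoff described above.
print(should_flag_lion(0.001))    # True  -- act on a 0.1% suspicion
print(should_flag_lion(0.00005))  # False -- below the break-even point
```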

I don't see why there's any contradiction between hedonic reinforcement learning (learn not to get eaten alive, extremely low hedonic state) and what the visual cortex does.


His point is that in other parts of life we do the thing our eyes never do: our eyes always see the lion, even if we really do not want the lion to be there. Your explanation of why that is makes sense: if we don't see the lion, we die. Yet in other areas of our lives there is a problem that really will negatively impact us if we don't do something about it, and instead of seeing the problem our minds work hard to find a way to convince ourselves the problem doesn't exist. So why does the mind do that? That's the question, not why our eyes always see the lion. It's why our minds sometimes try to tell us the lion we see isn't really there because we don't want it to be.


Is that the point? The post seems to presuppose that there are "reinforceable" and "non-reinforceable" parts of the brain. I don't see any need for that. You can perfectly well explain the function of the neocortex and every other part of the brain through hedonic reinforcement learning.

I originally supposed that he was asking "why does our brain work for the lion, but not for reasoning about politics?", so I answered that question with what I think is a logical, mathematical view based on internalized costs and externalized (diffuse) benefits. I think that's the answer to your question "Why does our brain not see the problem when it will negatively impact us?" My answer is that being wrong about politics *doesn't* negatively impact you because the costs/benefits of being right/wrong are shared across hundreds of millions of people. But the costs/benefits of agreeing/disagreeing with your friends accrue only to you.


Because the future is uncertain, and the further away it is, the more uncertain it gets. While a lion in the bush will eat you *right now*, almost all the things we procrastinate about are lions that may or may not eat us some time from now, assuming our fearful reasoning about the future is correct, which it often isn't. The fact that our emotional centers routinely dial down threats which are more hypothetical and uncertain is hardly surprising, and clearly adaptive. The fact that we have warring agents in our awareness that struggle to reach consensus on threat-level estimates that do not agree with each other is also unsurprising, given normal conscious experience.
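To illustrate the dialing-down, here's a rough sketch in Python; the severities, probabilities, and discount factor are all invented, just to show how a distant, uncertain threat registers far more weakly than an immediate, near-certain one:

```python
# Toy model of threat discounting: perceived urgency falls with both
# uncertainty and distance in time. All numbers are made up.

def perceived_threat(severity: float, probability: float,
                     years_away: float, discount: float = 0.7) -> float:
    """Severity scaled down by how unlikely and how far off the threat is."""
    return severity * probability * (discount ** years_away)

lion_in_the_bush = perceived_threat(severity=10000, probability=0.9, years_away=0)
audit_someday    = perceived_threat(severity=10000, probability=0.05, years_away=5)

print(lion_in_the_bush)  # 9000.0 -- impossible to ignore
print(audit_someday)     # ~84    -- easy to rationalize away
```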


Love it. Great thinking. Really well put.


I rephrased the first sentence of my original post to not focus on the title and instead summarize my argument. My argument is that all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics.
