Metacognition in the rat (n.)

  1. A brown rat’s (Rattus norvegicus) awareness of its own mental contents and ability to act on that awareness.
  2. A rationalist’s (Homo actually sapiens) awareness of the meta level and unquenchable desire to think about it.

A key skill that rationalists cultivate is making decisions under uncertainty. To do that, one must be able to gauge their own uncertainty. And according to Stanislas Dehaene’s Consciousness and the Brain, gauging your own uncertainty is one of the specific mental operations that requires conscious access.

We can know things that we aren’t conscious of, like guessing better than chance whether a digit flashed on the screen too briefly to enter awareness is above 5 or below. But we don’t know what we know and how certain we are. In the flashing digit example, we can’t estimate on which trials we are likelier to have guessed correctly if the stimulus didn’t reach consciousness.

This is from a guest review of Consciousness and the Brain on ACX. According to that review, the ability to gauge one’s confidence is not unique to humans but is apparently also present in rats! Does this mean that rodents are conscious and aware of their own thinking? Or will it turn out that the research paper the review links to is an entertaining mess from which it’s impossible to draw any conclusions?

Either way, I was intrigued enough to read through Metacognition in the Rat (2007) by Allison Foote and Jonathon Crystal. The charts and quotes below are from that paper unless mentioned otherwise.

(The rat images are DALL·E 2 generations, courtesy of @thinkwert. If anyone at OpenAI is reading this can I please pretty please with a bow on top please have DALL·E 2 access so I don’t have to beg my Twitter mutuals? Just think how many beautifully-illustrated posts I could produce before we all go extinct!)

Foote and Crystal (henceforth F&C) trained rats to classify bursts of noise as short (below 4 seconds) or long. The noises ranged from 2 to 8 seconds, with durations closer to the cutoff (3.6 and 4.4 seconds) harder to classify than those near 2 or 8. Rats were rewarded for a correct guess, and sometimes also had the option to decline the test for a smaller but guaranteed reward.

The hypothesis: if rats decline the more difficult trials (when they are more uncertain about the sound’s classification) more than they do the easier ones, they must be aware of their own uncertainty.

F&C drew up a neat diagram for the experiment, acquired eight cuddly rats, and… that’s where the good news ends.

For some inexplicable reason, F&C gave the rats 6 pellets for a correct guess and only 3 for refusing to guess. If the ratio were 5:3, the rats would need to be at least 60% sure they classified the sound correctly for guessing to be worthwhile. But at 6:3 there is no reason for a rat ever to decline, even if it is guessing almost at random, since even 50.1% of 6 is more than 3. The trial was repeated many times in succession, smoothing out any “risk”, and in any case

there is evidence that rats are risk prone in a situation similar to our own
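The pellet arithmetic is worth spelling out: with reward R for a correct guess and a guaranteed G for declining, guessing has higher expected value whenever p·R > G, i.e. whenever confidence p exceeds G/R. A minimal sketch:

```python
def breakeven_confidence(reward_correct: float, reward_decline: float) -> float:
    """Minimum probability of a correct guess at which guessing
    beats the guaranteed decline payoff (p * reward > guaranteed)."""
    return reward_decline / reward_correct

# At a 5:3 ratio, a rat should guess only when at least 60% sure...
assert breakeven_confidence(5, 3) == 0.6
# ...but at F&C's 6:3 ratio, any above-chance guess beats declining.
assert breakeven_confidence(6, 3) == 0.5
```

At 6:3 the decline option carries no information about the rat’s confidence at all, which is the whole point of the experiment.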

How did the rats perform on this simple test of rationality in betting?

Five rats rarely declined to take the duration test (M = 97.8%, SEM = ± .01%). The performance for these five rats was likely due to response bias as it appears that these rats failed to learn the experimental contingency of the nose-poke apertures. As a result, these five rats did not provide evidence for or against metacognition.

I don’t know if these rats “failed to learn” anything. They appear to have learned that never declining the test maximizes the number of pellets they receive and that’s that.

And what of the other three?

The x-axis represents the difficulty of the tests, with the easiest ones (2s, 8s) at 2.00 and the hardest (3.6s, 4.4s) at 1.00. The right column shows the proportion of correct tests. Here we see a baffling result: even rats who rarely declined the difficult test got it right 75% of the time when they had the option to decline, but just below 50% of the time on the “forced” trials where declining wasn’t an option!

Were the rats so thrown off by the lack of “decline” option (even though this was the case for a third of the trials) that they entirely forgot which lever does what? Did the experimenters mess something up mechanically? Were these three rats just drunk, as evidenced by their inability to figure out that they should never decline? F&C make no mention of this result at all, hoping you won’t notice it either.

The left column shows the “decline” rate for the three dumb rats who sometimes declined the test. Combining their results with the other five, here are the full results for rate of declining the easy and hard tests:

  • Easy test: 0%, 0%, 0%, 0%, 0%, 0%, 40%, 20%
  • Hard test: 0%, 0%, 0%, 0%, 0%, 15%, 70%, 45%

This is… something? A result? An invitation to repeat the experiment with more rats and a better setup? For their part, F&C declare victory for metacognition since the three rats did decline the difficult tests more often than the easy ones. They even slapped a p<.05 on it, although based on the listed results I’m curious what distribution they assumed for the p-value calculation. Whatever is going on with these rats, it surely ain’t normal.
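For what it’s worth, we can reconstruct what a normality-assuming test says about the eight decline rates listed above. The paper doesn’t spell out its exact procedure, so the paired t-test below on the three rats that ever declined is my guess at the analysis, not a quote of theirs; note that with n = 3 and this much spread, the normality assumption is doing all of the work:

```python
import math

# Decline rates (%) for the three rats that ever declined,
# taken from the easy/hard lists above
easy = [0, 40, 20]
hard = [15, 70, 45]

diffs = [h - e for h, e in zip(hard, easy)]          # [15, 30, 25]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
t = mean / math.sqrt(var / n)                         # paired t statistic

print(round(t, 2))  # ≈ 5.29, above the 4.30 critical value for df=2 at p=.05
```

So a paired t-test does squeak under p < .05 here, but only by assuming that decline rates ranging from 0% to 40% on the same condition are draws from a well-behaved normal distribution.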

But the worst news for this paper isn’t even the bad setup, the fact that 62.5% of the experimental subjects were dismissed for being too smart, the huge variance among the other three rats, the ~50% guess rate on forced trials, or the p-value invoked more as an incantation than as analysis. The worst part is that none of this proves anything about metacognition.

The researchers classified the sounds as long or short, but that doesn’t mean that the rats had to. I can propose an alternative model in which the rats classified the sounds into three categories, each with an associated behavior that doesn’t rely on any metacognition:

  • Short sound (2-3.5 seconds) -> press left lever
  • Medium sound (3.5-4.5 seconds) -> decline test
  • Long sound (4.5-8 seconds) -> press right lever

(That’s for the three rats that occasionally declined, the other five only had “short” and “long” with no metacognition either.)

My alternative model explains the results much better than the researchers’ own proposed model.

First, it explains why the three rats occasionally declined the easy and pretty-easy (1.75 difficulty) tests even though they practically never got them wrong on forced tests. The rats would never confuse a short sound (2s or 2.44s) with a long one (4.4s and up), but they may occasionally mistake it for a medium one (3.6s) and decline the test instead of pressing the left lever. The researchers’ model can’t explain why Rat 2, who got 95%+ of easy tests correct when forced to choose, would decline the choice (settling for 3 pellets instead of a near-certain 6) at least half the time.

Second, my model can explain why the rats only got ~50% of the difficult forced tests correct. After hearing the (medium) sound they determined that it was medium, with no metacognitive information attached, e.g. that it was possibly short rather than medium with some probability. Upon discovering that it doesn’t have the option to decline, a rat that only knows it heard a medium sound can do nothing but pick between the long and short sound levers at random.
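The three-category story is easy to simulate. The sketch below is my own toy model, not F&C’s: the category boundaries (3.5s and 4.5s, taken from the bullet list above) and the log-normal perceptual noise level are assumptions. A simulated rat buckets each heard duration as short/medium/long, declines on “medium” when it can, and guesses at random on “medium” when forced:

```python
import random

random.seed(0)

SHORT_MAX, LONG_MIN = 3.5, 4.5  # assumed category boundaries (seconds)
NOISE = 0.15                    # assumed log-normal perceptual noise

def rat_response(duration: float, forced: bool) -> str:
    heard = duration * random.lognormvariate(0, NOISE)
    if heard < SHORT_MAX:
        return "short"
    if heard < LONG_MIN:
        # "medium" is just a third category: no metacognition,
        # so a forced rat can only flip a coin
        return random.choice(["short", "long"]) if forced else "decline"
    return "long"

def simulate(duration: float, forced: bool, trials: int = 20000):
    correct = "short" if duration < 4.0 else "long"
    responses = [rat_response(duration, forced) for _ in range(trials)]
    accuracy = sum(r == correct for r in responses) / trials
    decline_rate = responses.count("decline") / trials
    return accuracy, decline_rate

acc_easy, _ = simulate(2.0, forced=True)    # easy forced: near-perfect
acc_hard, _ = simulate(3.6, forced=True)    # hard forced: collapses toward chance
_, dec_easy = simulate(2.44, forced=False)  # easy choice: rare declines
_, dec_hard = simulate(3.6, forced=False)   # hard choice: frequent declines
```

With no metacognition anywhere in the model, it still reproduces the qualitative pattern: near-perfect easy forced trials, degraded hard forced trials, declines concentrated on hard durations, and the occasional decline on an easy sound.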

My explanation accounts for the entirety of the experimental results so much better that its omission makes one wonder about the metacognitive abilities of rat researchers…

To be fair, other researchers in the field eventually picked up on the behaviorist alternative explanation. Foote and Crystal responded by throwing eight more rats at an even more convoluted setup involving a lever the rats could press to repeat the sound.

The outcome in their own words:

Metacognition, but not an alternative non-metacognition model, predicts that accuracy on difficult durations is higher when subjects are forced to repeat the stimulus compared to trials in which the subject chose to repeat the stimulus, a pattern observed in our data. Simulation of a non-metacognition model suggests that this part of the data from rats is consistent with metacognition, but other aspects of the data are not consistent with metacognition. The current results call into question previous findings suggesting that rats have metacognitive abilities. Although a mixed pattern of data does not support metacognition in rats, we believe the introduction of the method may be valuable for testing with other species to help evaluate the comparative case for metacognition.

In other words: “Some of our results showed metacognition in the rat, some didn’t, and some contradicted previous pro-metacognition results. In summary, this entire setup is completely useless for studying rat metacognition but if you give us more grant money we could try it with pigeons or something.”

This is 100% the correct conclusion and I applaud the researchers for their honesty.

There’s one final question I’m curious about: does metacognition exist in the non-rat human? I would posit that if we measured metacognition solely as the awareness of your own uncertainty on specific propositions, the answer may not be as straightforward as you think.

Here are some things rationalists do with the awareness of their own uncertainty:

  • Do Bayesian math, relying on the idea of subjective probability.
  • State their epistemic status before making declarative statements.
  • Love betting on anything and everything and playing with prediction markets.
  • Be ahead of the curve on Bitcoin, COVID, and many other situations where an event with likelihood lower than 50% was still worth preparing for ahead of time based on expected value.

Here are some things non-rat humans tend to do:

  • Be a frequentist, insisting that probability is a property of a large (how large?) collection of similar (how similar?) trials and not of a mind possessing incomplete information.
  • Be like, what the fuck is an epistemic status?
  • Refuse to bet on things and be suspicious of prediction markets.
  • Ignore everything with a probability between 0% and 49% (add Trump and the Russian invasion of Ukraine to the above list as examples) until it becomes a reality, and at that point insist that it couldn’t have turned out any other way.

In short, while rationalists tend to express carefully quantified uncertainty on many propositions and behave according to it, non-rats seem to inhabit one of only three epistemic states with respect to a proposition P:

  1. P is impossible, literally 0%
  2. P is utterly unknowable, don’t ask me to bet or make any decision until the uncertainty resolves
  3. P is 100% certain

You may think that non-rat humans obviously possess metacognition. But when was the last time you actually saw one outperform a rat?
