• The amount of information gained when an event that occurs with probability p is observed is -log(p)
  • If it costs money to check whether that event is happening, which checks do you cut?
  • You stop checking for events that have a very low probability of happening.
  • For such events the amount of information -log(p) is large, but the probability p of the event happening is small.
  • Since the expected value is the product -p log(p), events with low probability also have small expected values.
  • The expected value is maximized when p is 1/e (see the numeric check after this list).
  • 1/2.718... isn’t this the same root phenomenon that flow theory showed experimentally as “flow is strongest when there is a 1/3 chance of success”?
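A minimal numeric check of the claim in the list above, assuming the one-sided objective f(p) = -p log(p); the grid search and names are my own illustration, not from the original post:

```python
import numpy as np

# Expected information from checking an event of probability p,
# counting only the "it happened" outcome: f(p) = -p * log(p).
p = np.linspace(1e-9, 1.0, 1_000_000)
f = -p * np.log(p)

print(p[f.argmax()])   # ~0.3679
print(1 / np.e)        # 0.36787944...  -> maximum at p = 1/e
# Analytically: f'(p) = -(log(p) + 1) = 0  =>  p = 1/e
```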


2014-12-17 Facebook: The amount of information when an event with probability p is observed is -log(p), but if it costs a certain amount of money to check whether the event is happening, I was thinking about where to cut off. The “amount of information” -log(p) is large, but since the probability p of it happening is small, the product -p log(p) is small. I checked and found the maximum is at p = 1/e. Wait, 1/2.718? Isn’t this the same as the experimental finding in flow theory that “a 1/3 chance of success produces the most flow”? I’m so excited, I want to tell someone NOW!

Under the standard information-theoretic definition of entropy, -(p log(p) + (1 - p) log(1 - p)), the maximum is at p = 0.5. But human hardware may not give much weight to “the information from what did not happen”, and as a result the maximum may shift to the 1/e position.
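A small sketch contrasting the two objectives; the names H (symmetric entropy) and G (one-sided variant) are mine, purely for illustration:

```python
import numpy as np

p = np.linspace(1e-9, 1 - 1e-9, 1_000_000)

# Symmetric entropy: both outcomes carry value -> maximum at p = 0.5.
H = -(p * np.log(p) + (1 - p) * np.log(1 - p))
# One-sided variant: only "it happened" carries value -> maximum at p = 1/e.
G = -p * np.log(p)

print(p[H.argmax()])   # ~0.5
print(p[G.argmax()])   # ~0.3679 = 1/e
```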

Or perhaps the values are not symmetrical and more value is placed on “what happened.” From an information-theoretic point of view, “something with a 1/3 chance of happening happened, hooray!” and “something with a 2/3 chance of happening didn’t happen!” are equivalent, but when you pull a gacha in a game, you pull it with the expectation that a rare will come out.

Ah, so if the goal is “I want information,” then a bet of 1/2 is optimal, and if it is “I want a return of -log(p) if I succeed and 0 if I fail,” then a bet of 1/e is optimal. The experiment in flow theory is “when choosing a chess opponent, people tend to choose an opponent against whom they have a 1/3 chance of winning,” so in this case the model is “beating an opponent I can beat with probability p is worth -log(p), and losing is worth 0,” and maximizing that expectation gives 1/e. Is that right?
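A hedged Monte Carlo sketch of that opponent-choice model; the -log(p)-on-a-win payoff is the model from the paragraph above, while the candidate probabilities, sample size, and seed are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Payoff model: beating an opponent you defeat with probability p is
# worth -log(p) ("surprise"); losing is worth 0.
for p in (0.2, 1 / np.e, 0.5, 0.8):
    wins = rng.random(1_000_000) < p
    mean_payoff = np.where(wins, -np.log(p), 0.0).mean()
    print(f"p={p:.3f}  simulated={mean_payoff:.4f}  theory={-p * np.log(p):.4f}")
# p = 1/e (roughly a one-in-three chance of winning) gives the highest mean payoff.
```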

Nakayama, Watariten: I wonder where it comes from. I think it comes from the hunting success rate of warm-blooded animals, but I’m not sure. Does it come from the amount of information, or from the organism’s drive for survival? For a life form whose cost of confirmation is sufficiently low (such as a cold-blooded animal or silicon life), I wonder if it could afford to wait on bets with a somewhat lower probability of success.

I can’t rule out the hypothesis that it comes from hunts succeeding one time in three. But the size of the confirmation cost does not appear in the equation whose maximum is at p = 1/e. If there were a life form whose pleasure at winning any bet is 1, maximizing expected pleasure would make it choose the bet with the highest probability of winning. If pleasure were determined by the information-theoretic amount of information, maximization would make it choose bets it wins with probability 1/2. For some reason humans evolved to choose 1/3, and the question is why.
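The three candidate pleasure functions mentioned here lead to different optimal win probabilities; a quick sketch (the model names are mine):

```python
import numpy as np

p = np.linspace(1e-9, 1 - 1e-9, 1_000_000)

models = {
    "pleasure = 1 on any win":            p,  # expected pleasure = p * 1
    "pleasure = entropy (both outcomes)": -(p * np.log(p) + (1 - p) * np.log(1 - p)),
    "pleasure = -log(p) on a win only":   -p * np.log(p),
}
for name, utility in models.items():
    print(f"{name:38s} optimum at p = {p[utility.argmax()]:.4f}")
# -> 1.0, 0.5, and 0.3679 (= 1/e) respectively; only the last matches ~1/3.
```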

Nakayama, Tokiten: Learning revolves around the amount of information acquired. If learning = survival, it seems this could have been acquired through selection pressure.

I’m not sure whether the selection pressure acts at the genetic layer or the meme layer. It would be worth testing whether the 1/3 preference occurs in non-human creatures.

Nakayama, Izakaten: Wasn’t there something like that in the monkey experiment where pushing a button yields a reward? I heard that if you manipulate the probability of getting the reward, you can break their brains and make them push it indefinitely.

This page is auto-translated from /nishio/フロヌず埗られる情報た最性化 using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I’m very happy to spread my thought to non-Japanese readers.