- The amount of information gained when an event that occurs with probability p is observed is -log(p).
- If it costs a fixed amount of money to check whether that event is happening, which checks should you cut?
- You stop checking for events that have a very low probability of happening.
- The "amount of information" -log(p) is large, but the probability p of the event happening is small.
- Since the expected value is the product -p log(p), events with low probability also have small expected values.
- The expected value is maximized when p is 1/e (see the derivation below).
- 1/2.718... isn't this the same root phenomenon as flow theory's experimental finding that "flow is strongest when there is a 1/3 chance of success"?
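A quick calculus check of the 1/e claim (my addition, not in the original post), using the natural log; changing the base only rescales the objective and does not move the maximum:

```latex
f(p) = -p \ln p, \qquad
f'(p) = -\ln p - 1 = 0
\;\Longrightarrow\; \ln p = -1
\;\Longrightarrow\; p = e^{-1} \approx 0.368
```

This is indeed a maximum, since f''(p) = -1/p < 0 for p > 0, and the value there is f(1/e) = 1/e.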
2014-12-17 Facebook: The amount of information gained when an event with probability p is observed is -log(p), but if it costs a fixed amount of money to check whether the event is happening, I was thinking about where to cut off. The "amount of information" -log(p) is large, but because the probability p of the event happening is small, the product -p log(p) is small. I checked where the maximum is and found it at p = 1/e. Wait, 1/2.718? Isn't this the same as the experimental finding in flow theory that "a 1/3 chance of success produces the most flow"? I'm so excited, I want to tell someone NOW!
In the standard information-theoretic definition, the entropy -(p log(p) + (1 - p) log(1 - p)) is maximized when p is 0.5. But human hardware may not give much weight to "information about what didn't happen," and as a result the maximum may shift to the 1/e position.
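A minimal numeric sketch of this contrast (my addition, not from the original thread): grid-searching both objectives shows the full binary entropy peaks at p = 0.5, while the "only what happened counts" term -p log p peaks near 1/e.

```python
import math

def binary_entropy(p):
    """Full entropy: counts both the event and its absence."""
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def happened_only(p):
    """Only the 'it happened' term: -p log p."""
    return -p * math.log(p)

# Grid search over p in (0, 1), avoiding log(0) at the endpoints.
ps = [i / 10000 for i in range(1, 10000)]
print("binary entropy argmax:", max(ps, key=binary_entropy))  # ~0.5
print("-p log p argmax:      ", max(ps, key=happened_only))   # ~0.3679
print("1/e =", 1 / math.e)                                    # 0.36787...
```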
Or perhaps it happens when values are not symmetrical and more value is placed on "what happened." From an information-theoretic point of view, "the thing with a 1/3 chance of happening happened, hooray!" and "the thing with a 2/3 chance of happening didn't happen!" are equivalent, but when you pull a gacha in a game, you pull it with the expectation that a rare will come out.
Ah, so if the goal is "I want information," then a bet of 1/2 is optimal, and if it is "I want a return of -log(p) if I succeed and 0 if I fail," then a bet of 1/e is optimal. The experiment in flow theory is "when choosing a chess opponent, people tend to choose an opponent against whom they have about a 1/3 chance of winning," so in this case the payoff is -log(p) if I beat an opponent I can beat with probability p, and 0 if I lose; maximizing the expected payoff p * (-log(p)) gives 1/e. Is that right?
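One more small check (my addition): the expected payoff p * (-log p) is quite flat near its maximum, so a 1/3 preference and the theoretical 1/e optimum are nearly indistinguishable in value, which is consistent with reading them as the same phenomenon.

```python
import math

def expected_payoff(p):
    """Expected return of a bet that pays -log(p) on success, 0 on failure."""
    return -p * math.log(p)

for p in (1 / 3, 1 / math.e):
    print(f"p = {p:.4f}  expected payoff = {expected_payoff(p):.4f}")
# p = 0.3333  expected payoff = 0.3662
# p = 0.3679  expected payoff = 0.3679
# The two differ by less than 0.5%.
```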
Nakayama, Watariten: I wonder what is going on here. I think it comes from the success rate of hunting in warm-blooded animals, but I'm not sure. I wonder whether it comes from the amount of information, or from the organism's drive to survive. If the cost of confirmation is sufficiently low for a life form (such as a cold-blooded animal or silicon life), I wonder if it could afford to wait on a somewhat lower probability of success.
I can't rule out the possibility that the hypothesis "it comes from hunts succeeding about one time in three" is right. But the cost of confirmation doesn't appear anywhere in the equation whose maximum is at p = 1/e, so making that cost higher or lower shouldn't move the optimum. If there were a life form whose pleasure at winning any bet is 1, it would maximize pleasure by choosing the bet with the highest probability of winning. If pleasure were determined by the information-theoretic amount of information (the symmetric entropy), maximization would lead it to choose bets it wins with probability 1/2. For some reason humans evolved to choose 1/3, and the question is why.
Nakayama, Tokiten: Learning revolves around the amount of information acquired. If learning = survival, this preference could have been acquired through selection pressure.
I'm not sure whether the selection pressure acts at the genetic layer or the meme layer. One test would be to see whether the 1/3 preference occurs in non-human creatures.
Nakayama, Izakaten: Wasn't there something like that in the monkey experiment where a monkey gets a reward for pushing a button? I heard that if you mess with the probability of getting the reward, you can break their brains and make them push it indefinitely.
This page is auto-translated from /nishio/ăăăŒăšćŸăăăæ ć ±ăźæ性ć using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.