Suppose one day you walk out of your house, suddenly step on something, and fall. Turning back to see the "culprit", you find a banknote that someone has dropped.
You could certainly say that you were "lucky". Put more scientifically, you have just experienced a series of random events whose outcomes were positive for you. In fact, for each of us, encountering such sequences has become normal, and some people have experienced them more than once.
So a question arises: are these sequences of events truly "accidental"? Is there any way to explain them? And, beyond that, how can we optimize our luck? Let's try to find the answers today!
Before we get to the analysis, the first two questions can easily be answered. These sequences are random; however, they do not fall from heaven. In fact, each event carries a definite, if sometimes very small, probability. So when we talk about luck, we are really talking about this probability, and "optimizing luck" can be understood in one of two ways: either improving that probability as much as possible, or estimating it as accurately as possible so we can make the best decision.
Consider a random event with several possible outcomes. Each outcome has a certain likelihood of occurring, and this likelihood (what we call its "probability") is defined as the number of times we obtain the desired result divided by the total number of trials, provided that the number of trials is very large. Of course, the probabilities of all the different outcomes must sum to 1, or 100%.
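This frequency definition can be checked directly by simulation. The sketch below (the die example and function name are illustrative choices, not from the text) estimates a probability by repeating a trial many times and counting how often the desired result appears:

```python
import random

def estimate_probability(trial, desired, n_trials=100_000):
    """Estimate the probability of a desired outcome by repeating `trial`
    many times and dividing the hit count by the total number of trials."""
    hits = sum(1 for _ in range(n_trials) if trial() in desired)
    return hits / n_trials

random.seed(42)
# A fair six-sided die: the probability of rolling a 6 should approach 1/6.
p_six = estimate_probability(lambda: random.randint(1, 6), {6})
print(round(p_six, 2))
```

With 100,000 trials the estimate lands close to 1/6 ≈ 0.167; with only a handful of trials it would swing wildly, which is exactly why the definition requires a very large number of attempts.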
So do we just need to look at previous trials to estimate the probability of an event, then decide whether or not to act? Theoretically, yes, but to apply this approach we must first ensure that all the conditions match. Suppose that on a multiple-choice test, if you circle answers at random, you are roughly equally likely to receive any mark from 0 to 10; if you study hard, however, the probability of receiving a bad mark drops significantly. Clearly, a difference in conditions, namely the amount of knowledge, has changed the probability distribution of the outcomes.
In fact, it is extremely difficult to find a sufficiently large sample of events whose conditions match exactly, so that a truly fair comparison can be made. Therefore, statisticians use the standard deviation, which can be roughly interpreted as the amplitude of your "luck". For a normally distributed quantity, there is about a 68% chance that an outcome lies within one standard deviation of the mean, and about a 95% chance that it lies within two standard deviations. To learn more, please refer to the page on the normal distribution in the references below.
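The 68%/95% figures can be verified empirically. This sketch draws many samples from a normal distribution (the mean of 5 and standard deviation of 2 are arbitrary example values) and counts how many fall within one and two standard deviations of the mean:

```python
import random
import statistics

random.seed(0)
# Draw many samples from a normal distribution with mean 5 and sd 2,
# then check what fraction lies within 1 and 2 standard deviations.
samples = [random.gauss(5, 2) for _ in range(100_000)]
mean = statistics.fmean(samples)
sd = statistics.stdev(samples)

within_1sd = sum(abs(x - mean) <= sd for x in samples) / len(samples)
within_2sd = sum(abs(x - mean) <= 2 * sd for x in samples) / len(samples)
print(round(within_1sd, 2), round(within_2sd, 2))
```

The two fractions come out near 0.68 and 0.95, matching the rule of thumb above.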
So if you play a dart-throwing game at a fair, with 20 throws per game, and statistics show that previous players hit the target an average of 5 times with a standard deviation of 2, then you can rest assured: the chance of missing every single throw is very low, and about 95% of the time you will hit between 1 and 9 throws.
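The dart game can be simulated to see this range appear. The per-throw hit probability of 0.25 below is an assumption, chosen only so that the mean works out to 20 × 0.25 = 5 hits per game, matching the fair's statistics (the standard deviation is then √(20 · 0.25 · 0.75) ≈ 1.94, close to 2):

```python
import random

random.seed(1)

def play_game(n_throws=20, p_hit=0.25):
    """One game: 20 independent throws, each hitting with probability 0.25
    (a hypothetical rate chosen so the average is 5 hits per game)."""
    return sum(random.random() < p_hit for _ in range(n_throws))

games = [play_game() for _ in range(100_000)]
in_range = sum(1 <= g <= 9 for g in games) / len(games)
zero_hits = sum(g == 0 for g in games) / len(games)
print(round(in_range, 2), round(zero_hits, 4))
```

Under this model, well over 95% of games land in the 1-to-9 range, and missing all 20 throws (probability 0.75²⁰ ≈ 0.3%) is indeed very rare.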
As you can see, even luck has its limits.
So in the end, how can we "optimize" our luck? Under the first interpretation, we must improve the relevant factors, thereby improving the overall probability. In most cases this means reducing harmful factors. For example, in a coin-tossing game, you would want the coin to be as thin as possible, reducing the chance of it landing on its edge and producing neither heads nor tails, thereby improving your own chances of winning. It is worth noting that cheating also belongs to this category of luck improvement, since it raises your chances of winning by introducing favorable factors.
From the second perspective, we should increase the amount of data we have, thereby reducing our uncertainty and obtaining a more accurate estimate of our "range of luck". A key concept here is the confidence interval, which is essentially a calculated interval with a high chance of containing the true probability. Suppose, for instance, there is a game machine whose winning probability we don't know, but whose 95% confidence interval runs from 0.2 to 0.4; then we can be 95% confident that the true probability falls between 0.2 and 0.4. In other words, don't play, because you are likely to lose more than you win. An important note is that the width of the confidence interval is inversely proportional to the square root of the sample size, so the more data we gather, the more accurate our estimate becomes.
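A simple way to compute such an interval is the normal approximation for a proportion. The sketch below (the observed counts are made-up example data) builds a 95% confidence interval and shows that quadrupling the sample size halves its width:

```python
import math

def normal_ci_95(successes, n):
    """95% confidence interval for a proportion, using the normal
    approximation: p +/- 1.96 * sqrt(p * (1 - p) / n)."""
    p = successes / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical machine: observed to pay out 30 times in 100 plays.
low, high = normal_ci_95(30, 100)
print(round(low, 3), round(high, 3))

# With 4x the data at the same win rate, the interval is half as wide.
low4, high4 = normal_ci_95(120, 400)
print(round(high - low, 3), round(high4 - low4, 3))
```

This is exactly the inverse-square-root relationship mentioned above: the margin shrinks like 1/√n, so four times the data buys only twice the precision.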
With these two approaches, each of us can manage life's risks better — or, in other words, become "luckier"!