Feeling a bit tired of the "Dolce Far Niente" during my vacation, I started flipping through a little book on stochastic calculus. While looking at an exercise in the book, I had the following reflection:
Indeed, the work of a quant demands a lot of rigor in manipulating mathematical objects, which generally means verifying several hypotheses. However, when working as a quant under pressure from traders to move things forward quickly, one might sometimes skip these verifications, which can lead to completely off-track results and constitutes an operational risk.
For example, the goal of this exercise was to compute lim_{n→∞} E[|Xn − K|], where K > 0 is a constant and (Xn)_{n≥1} is a sequence of positive random variables with constant expectation equal to a and satisfying lim_{n→∞} Xn = 0 a.s. This type of exercise is generally solved by interchanging the limit and the expectation, but one still has to check that this is allowed, either by the Dominated Convergence Theorem or by the Monotone Convergence Theorem. The somewhat hasty quant who says, "Anyway, dominated convergence is always satisfied, and we're no longer in grad school to waste time verifying it," would be severely mistaken in concluding that lim_{n→∞} E[|Xn − K|] = K, which, in my opinion, could cost them a lot of money!
In fact, in this case, the hypotheses of the Dominated Convergence Theorem are not satisfied. Rewriting |Xn − K| = Xn + K − 2 min(Xn, K) gives E[|Xn − K|] = a + K − 2 E[min(Xn, K)]; since 0 ≤ min(Xn, K) ≤ K and min(Xn, K) → 0 a.s., dominated convergence does apply to this last term, so E[min(Xn, K)] → 0 and lim_{n→∞} E[|Xn − K|] = a + K!
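A quick way to convince oneself is a Monte Carlo check on one concrete sequence satisfying the hypotheses. The sequence below (Xn = a·n with probability 1/n, else 0, driven by a single uniform draw) is my own illustrative choice, not the one from the book; any sequence with E[Xn] = a and Xn → 0 a.s. would do.

```python
import random

def sample_xn(n, a, u):
    """Xn = a*n on {u <= 1/n}, else 0: E[Xn] = a for all n, and Xn -> 0 a.s."""
    return a * n if u <= 1.0 / n else 0.0

def mc_estimate(n, a, K, n_paths=1_000_000, seed=0):
    """Monte Carlo estimate of E[|Xn - K|]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        u = rng.random()
        total += abs(sample_xn(n, a, u) - K)
    return total / n_paths

if __name__ == "__main__":
    a, K = 1.0, 2.0
    for n in (10, 100, 10_000):
        print(n, mc_estimate(n, a, K))
    # Estimates approach a + K = 3.0, not the hasty answer K = 2.0.
```

For this sequence the exact value is E[|Xn − K|] = (1 − 1/n)K + (1/n)(a·n − K) = a + K − 2K/n once n > K/a, which makes the limit a + K visible by hand as well.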
Bonus: the financial consequence of this result is that, in a Black-Scholes model, the value of the straddle struck at the money forward with maturity T tends to 2S0 as T tends to infinity. I'll let you tell me why ;).
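Without spoiling the "why", the limit itself can be checked numerically with the standard Black-Scholes formulas. The sketch below prices the straddle struck at K = S0·e^{rT}; the parameter values (S0 = 100, r = 2%, σ = 20%) are arbitrary illustrative choices.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Plain Black-Scholes call price."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_put(S0, K, r, sigma, T):
    """Put price via put-call parity."""
    return bs_call(S0, K, r, sigma, T) - S0 + K * exp(-r * T)

def atm_forward_straddle(S0, r, sigma, T):
    """Straddle struck at the forward, K = S0 * exp(r*T)."""
    K = S0 * exp(r * T)
    return bs_call(S0, K, r, sigma, T) + bs_put(S0, K, r, sigma, T)

if __name__ == "__main__":
    S0, r, sigma = 100.0, 0.02, 0.2
    for T in (1.0, 25.0, 100.0, 400.0):
        print(T, atm_forward_straddle(S0, r, sigma, T))
    # Values increase toward 2 * S0 = 200 as T grows.
```

Note that with K = S0·e^{rT}, put-call parity makes the call and put worth the same, so the straddle reduces to 2·S0·(N(σ√T/2) − N(−σ√T/2)), which indeed tends to 2S0.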
Researcher (Quantum Information, Condensed Matter, Data Science)
Cool article! I believe this is yet another example of the generically counterintuitive nature of conditional probabilities. But it becomes simple if we think it through with standard terminology and math. The standard way of thinking about the problem would be to consider the conditional probability of finding the second ball red, given that the first one was red, without any knowledge of the number of red or green balls. Therefore, the standard brute-force solution involves finding the joint probability of both balls being red and dividing it by the probability of the first being red. Given N balls, the cases with m red balls, for m = 0, 1, …, N, are equally likely. For any m, the probability that the first ball is red is m/N, and the probability that both balls in a row are red is m(m−1)/[N(N−1)]. To obtain the overall probabilities, we average over m. Using Σ m = N(N+1)/2 and Σ m(m−1) = (N+1)N(N−1)/3, this gives P(first ball is red) = 1/2 and P(both balls are red) = 1/3, which finally results in the conditional probability P(second ball is red | first ball is red) = (1/3)/(1/2) = 2/3.
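The 2/3 answer in this comment is easy to sanity-check by simulation. The sketch below draws the red-ball count m uniformly on {0, …, N} and then two balls without replacement; N = 10 and the trial count are arbitrary choices, and by the argument above the result should not depend on N.

```python
import random

def simulate(N=10, trials=500_000, seed=1):
    """Estimate P(second ball red | first ball red) when the number m of
    red balls is uniform on {0, ..., N} and two balls are drawn
    without replacement."""
    rng = random.Random(seed)
    first_red = 0
    both_red = 0
    for _ in range(trials):
        m = rng.randint(0, N)           # red-ball count, uniform on 0..N
        balls = ["R"] * m + ["G"] * (N - m)
        draw = rng.sample(balls, 2)     # two draws without replacement
        if draw[0] == "R":
            first_red += 1
            if draw[1] == "R":
                both_red += 1
    return both_red / first_red

if __name__ == "__main__":
    print(simulate())  # close to 2/3
```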