Recently on Twitter I saw the claim that if you pick numbers from a uniform distribution on [0, 1] and keep drawing until the sum of the numbers is larger than one, you’ll on average pick e numbers. I didn’t doubt it at all, I just thought it would be fun to check this theory (theorem?) in R:
Let’s build a function to test this. We’ll run the experiment one million times and then calculate the mean number of draws (which should be close to e).
test_theory <- function(no_tests = 10) {
  results <- vector(mode = "integer", length = no_tests)
  for (i in 1:no_tests) {
    total <- 0
    cnt <- 0
    # keep drawing from Unif(0, 1) until the running sum exceeds 1
    while (total <= 1) {
      total <- total + runif(1, min = 0, max = 1)
      cnt <- cnt + 1
    }
    results[i] <- cnt
  }
  return(mean(results))
}
test_theory(1000000)
## [1] 2.716295
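Close enough. As an aside, the per-draw while loop can be avoided by vectorising the experiment. The sketch below is my own variation (the name test_theory_vec and the max_draws cap are assumptions, not part of the function above): draw a fixed block of uniforms per trial and find where the running sum first exceeds 1.

test_theory_vec <- function(no_tests = 10, max_draws = 15) {
  # P(needing more than 15 draws) = 1/15!, about 8e-13, so a cap of 15
  # is effectively always enough for this experiment
  draws <- matrix(runif(no_tests * max_draws), nrow = no_tests)
  # for each trial (row), the first index where the cumulative sum exceeds 1
  counts <- apply(draws, 1, function(x) which.max(cumsum(x) > 1))
  mean(counts)
}
test_theory_vec(1000000)

This trades memory for speed (for a million trials the matrix holds 15 million doubles, roughly 120 MB), and apply() still loops over rows internally, so treat it as an illustration of the idea rather than a tuned implementation.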
For comparison, e in R (exp(1)) is 2.7182818.
For a mathematical breakdown [that is beyond my comprehension], see this website.
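For what it’s worth, the core of the argument is short (my own sketch of the usual proof, writing N for the number of draws and U_i for the uniforms): the event N > n means the first n draws still sum to at most 1, which happens with probability 1/n! (the volume of the standard simplex in n dimensions). Summing the tail probabilities then gives the expectation:

$$
E[N] = \sum_{n=0}^{\infty} P(N > n) = \sum_{n=0}^{\infty} P(U_1 + \cdots + U_n \le 1) = \sum_{n=0}^{\infty} \frac{1}{n!} = e.
$$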