Concept
Statistical independence is a fundamental concept in probability theory. Two events are independent if the occurrence of one does not affect the probability of the other occurring.
Equivalently, two events A and B are independent if the probability that both occur simultaneously equals the product of their individual probabilities.
Formal Definition
Two events A and B are considered independent if and only if:
P(A ∩ B) = P(A) ⋅ P(B)
Here:
- P(A ∩ B): Probability that both events A and B occur simultaneously.
- P(A): Probability that event A occurs.
- P(B): Probability that event B occurs.
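As a minimal sketch, the definition reduces to a single numeric comparison. The function name `is_independent` and its arguments are illustrative choices, not a standard API:

```python
from math import isclose

# Illustrative check of the definition: A and B are independent
# exactly when P(A ∩ B) = P(A) · P(B).
def is_independent(p_a: float, p_b: float, p_ab: float) -> bool:
    # isclose guards against floating-point rounding error
    return isclose(p_ab, p_a * p_b)

print(is_independent(1/6, 1/6, 1/36))  # True
```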
Example: Rolling Two Dice
Consider rolling two fair dice. Let:
- A: The event that the first die shows a 4.
- B: The event that the second die shows a 6.
Intuitively, the outcome of one die does not affect the outcome of the other, so A and B should be independent. We can verify this against the definition:
- P(A) = 1/6 (probability that the first die shows 4).
- P(B) = 1/6 (probability that the second die shows 6).
- P(A ∩ B) = 1/36 (exactly one of the 36 equally likely outcome pairs has the first die showing 4 and the second showing 6).

Since 1/36 = 1/6 × 1/6, we have P(A ∩ B) = P(A) ⋅ P(B), so events A and B are indeed independent.
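This example can be checked exhaustively in a few lines of Python. The sketch below enumerates the 36 equally likely outcome pairs; the names `outcomes`, `A`, and `B` are our own:

```python
from itertools import product
from math import isclose

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), range(1, 7)))

A = {(d1, d2) for (d1, d2) in outcomes if d1 == 4}  # first die shows 4
B = {(d1, d2) for (d1, d2) in outcomes if d2 == 6}  # second die shows 6

p_a = len(A) / len(outcomes)        # 6/36 = 1/6
p_b = len(B) / len(outcomes)        # 6/36 = 1/6
p_ab = len(A & B) / len(outcomes)   # 1/36

assert isclose(p_ab, p_a * p_b)     # P(A ∩ B) = P(A) · P(B) holds
```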
Connection to Conditional Probability
If two events A and B are independent, then knowing that B has occurred does not change the probability of A occurring:
P(A | B) = P(A) and, similarly, P(B | A) = P(B) (assuming P(B) > 0 and P(A) > 0, respectively, so the conditional probabilities are defined).
This means that for independent events, the occurrence of one event provides no additional information about the likelihood of the other event occurring.
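The same exhaustive enumeration confirms this characterization for the dice example; again a minimal sketch with illustrative variable names:

```python
from itertools import product
from math import isclose

outcomes = list(product(range(1, 7), range(1, 7)))
A = {o for o in outcomes if o[0] == 4}  # first die shows 4
B = {o for o in outcomes if o[1] == 6}  # second die shows 6

p_a = len(A) / len(outcomes)
p_b = len(B) / len(outcomes)
p_a_given_b = len(A & B) / len(B)   # P(A | B) = P(A ∩ B) / P(B)
p_b_given_a = len(A & B) / len(A)   # P(B | A) = P(A ∩ B) / P(A)

assert isclose(p_a_given_b, p_a)    # knowing B occurred leaves P(A) unchanged
assert isclose(p_b_given_a, p_b)    # and symmetrically for B given A
```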