18. Distributions and Probabilities

18.1 Outline

In this lecture we give a quick introduction to data and probability distributions using Python.

!pip install --upgrade yfinance
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import yfinance as yf
import scipy.stats
import seaborn as sns

18.2 Common distributions

In this section we recall the definitions of some well-known distributions and explore how to manipulate them with SciPy.

18.2.1 Discrete distributions

Let’s start with discrete distributions.

A discrete distribution is defined by a set of numbers $S = \{x_1, \ldots, x_n\}$ and a probability mass function (PMF) on $S$, which is a function $p$ from $S$ to $[0,1]$ with the property

$$\sum_{i=1}^n p(x_i) = 1$$

We say that a random variable $X$ has distribution $p$ if $X$ takes value $x_i$ with probability $p(x_i)$.

That is,

$$\mathbb{P}\{X = x_i\} = p(x_i) \quad \text{for } i = 1, \ldots, n$$

The mean or expected value of a random variable $X$ with distribution $p$ is

$$\mathbb{E}[X] = \sum_{i=1}^n x_i p(x_i)$$

Expectation is also called the first moment of the distribution.

We also refer to this number as the mean of the distribution (represented by) $p$.

The variance of $X$ is defined as

$$\mathbb{V}[X] = \sum_{i=1}^n (x_i - \mathbb{E}[X])^2 p(x_i)$$

Variance is also called the second central moment of the distribution.

The cumulative distribution function (CDF) of $X$ is defined by

$$F(x) = \mathbb{P}\{X \leq x\} = \sum_{i=1}^n \mathbb{1}\{x_i \leq x\} p(x_i)$$

Here $\mathbb{1}\{\textrm{statement}\} = 1$ if “statement” is true and zero otherwise.

Hence the sum runs over all $x_i \leq x$ and adds up their probabilities.
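
To make these definitions concrete, here is a minimal sketch that evaluates the mean, variance, and CDF directly from the definitions, using a small hypothetical PMF (the support and probabilities are made up for illustration):

S = np.array([1.0, 2.0, 3.0])          # hypothetical support
p = np.array([0.2, 0.5, 0.3])          # hypothetical PMF; sums to one
mean = np.sum(S * p)                   # E[X]
var = np.sum((S - mean)**2 * p)        # V[X]
F = lambda x: np.sum((S <= x) * p)     # CDF via the indicator function
mean, var, F(2.0)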

18.2.1.1 Uniform distribution

One simple example is the uniform distribution, where $p(x_i) = 1/n$ for all $i$.

We can import the uniform distribution on $S = \{1, \ldots, n\}$ from SciPy like so:

n = 10
u = scipy.stats.randint(1, n+1)

Here’s the mean and variance:

u.mean(), u.var()
(np.float64(5.5), np.float64(8.25))

The formula for the mean is $(n+1)/2$, and the formula for the variance is $(n^2 - 1)/12$.
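
We can check these formulas against SciPy's output above:

(n + 1) / 2, (n**2 - 1) / 12    # matches u.mean(), u.var()
(5.5, 8.25)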

Now let’s evaluate the PMF:

u.pmf(1)
np.float64(0.1)
u.pmf(2)
np.float64(0.1)

Here’s a plot of the probability mass function:

fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()

Here’s a plot of the CDF:

fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.step(S, u.cdf(S))
ax.vlines(S, 0, u.cdf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('CDF')
plt.show()

The CDF jumps up by $p(x_i)$ at $x_i$.

18.2.1.2 Bernoulli distribution

Another useful distribution is the Bernoulli distribution on $S = \{0,1\}$, which has PMF:

$$p(i) = \theta^i (1 - \theta)^{1-i} \qquad (i = 0, 1)$$

Here $\theta \in [0,1]$ is a parameter.

We can think of this distribution as modeling probabilities for a random trial with success probability $\theta$.

  • $p(1) = \theta$ means that the trial succeeds (takes value 1) with probability $\theta$
  • $p(0) = 1 - \theta$ means that the trial fails (takes value 0) with probability $1-\theta$

The formula for the mean is $\theta$, and the formula for the variance is $\theta(1-\theta)$.

We can import the Bernoulli distribution on $S = \{0,1\}$ from SciPy like so:

θ = 0.4
u = scipy.stats.bernoulli(θ)

Here's the mean and variance at $\theta = 0.4$:

u.mean(), u.var()
(np.float64(0.4), np.float64(0.24))

We can evaluate the PMF as follows

u.pmf(0), u.pmf(1)
(np.float64(0.6), np.float64(0.4))
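
As in the binomial example below, we can check the closed-form expressions for the mean and variance directly:

θ, θ * (1 - θ)    # ≈ (0.4, 0.24), matching u.mean(), u.var()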

18.2.1.3 Binomial distribution

Another useful (and more interesting) distribution is the binomial distribution on $S = \{0, \ldots, n\}$, which has PMF:

$$p(i) = \binom{n}{i} \theta^i (1-\theta)^{n-i}$$

Again, $\theta \in [0,1]$ is a parameter.

The interpretation of $p(i)$ is: the probability of $i$ successes in $n$ independent trials with success probability $\theta$.

For example, if $\theta = 0.5$, then $p(i)$ is the probability of $i$ heads in $n$ flips of a fair coin.

The formula for the mean is $n \theta$ and the formula for the variance is $n \theta (1-\theta)$.

Let's investigate an example.

n = 10
θ = 0.5
u = scipy.stats.binom(n, θ)

According to our formulas, the mean and variance are

n * θ, n * θ * (1 - θ)
(5.0, 2.5)

Let’s see if SciPy gives us the same results:

u.mean(), u.var()
(np.float64(5.0), np.float64(2.5))

Here’s the PMF:

u.pmf(1)
np.float64(0.009765625000000002)
fig, ax = plt.subplots()
S = np.arange(0, n+1)  # include i = 0, since the support is {0, ..., n}
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()

Here’s the CDF:

fig, ax = plt.subplots()
S = np.arange(0, n+1)
ax.step(S, u.cdf(S))
ax.vlines(S, 0, u.cdf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('CDF')
plt.show()
Exercise 2

Using u.pmf, compute the CDF of the binomial distribution via cumulative sums of the PMF, plot it, and check that it matches the u.cdf plot above.

Solution to Exercise 2

Here is one solution:

fig, ax = plt.subplots()
S = np.arange(0, n+1)
u_sum = np.cumsum(u.pmf(S))
ax.step(S, u_sum)
ax.vlines(S, 0, u_sum, lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('CDF')
plt.show()

We can see that the output graph is the same as the one above.

18.2.1.4 Geometric distribution

The geometric distribution has infinite support $S = \{0, 1, 2, \ldots\}$ and its PMF is given by

$$p(i) = (1 - \theta)^i \theta$$

where $\theta \in [0,1]$ is a parameter.

(A discrete distribution has infinite support if the set of points to which it assigns positive probability is infinite.)

To understand the distribution, think of repeated independent random trials, each with success probability $\theta$.

The interpretation of $p(i)$ is: the probability there are $i$ failures before the first success occurs.

It can be shown that the mean of the distribution is $(1-\theta)/\theta$ and the variance is $(1-\theta)/\theta^2$.

One caveat when computing: scipy.stats.geom implements the shifted version of this distribution, supported on $\{1, 2, \ldots\}$, which counts the number of trials up to and including the first success; its mean is $1/\theta$, while its variance is again $(1-\theta)/\theta^2$.

Here’s an example.

θ = 0.1
u = scipy.stats.geom(θ)
u.mean(), u.var()
(np.float64(10.0), np.float64(90.0))

Here’s part of the PMF:

fig, ax = plt.subplots()
n = 20
S = np.arange(1, n+1)  # SciPy's version is supported on {1, 2, ...}
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()
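
As a sanity check on the failures-before-first-success interpretation, here is a small simulation sketch (the seed and sample size are arbitrary); NumPy's geometric sampler returns the number of trials up to and including the first success, so we subtract one to count failures:

rng = np.random.default_rng(seed=1234)
failures = rng.geometric(θ, size=1_000_000) - 1   # failures before the first success
failures.mean(), (1 - θ) / θ                      # both ≈ 9 when θ = 0.1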

18.2.1.5 Poisson distribution

The Poisson distribution on $S = \{0, 1, \ldots\}$ with parameter $\lambda > 0$ has PMF

$$p(i) = \frac{\lambda^i}{i!} e^{-\lambda}$$

The interpretation of $p(i)$ is: the probability of $i$ events in a fixed time interval, where the events occur independently at a constant rate $\lambda$.

It can be shown that the mean is $\lambda$ and the variance is also $\lambda$.

Here’s an example.

λ = 2
u = scipy.stats.poisson(λ)
u.mean(), u.var()
(np.float64(2.0), np.float64(2.0))

Here’s the PMF:

u.pmf(1)
np.float64(0.2706705664732254)
fig, ax = plt.subplots()
S = np.arange(0, n+1)  # the support starts at 0
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()
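
As a quick sanity check, the sample mean and sample variance of a large number of draws should both be close to $\lambda$ (the seed and sample size are arbitrary):

draws = u.rvs(size=1_000_000, random_state=1234)
draws.mean(), draws.var()    # both ≈ λ = 2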

18.2.2 Continuous distributions

A continuous distribution is represented by a probability density function, which is a function $p$ over $\mathbb{R}$ (the set of all real numbers) such that $p(x) \geq 0$ for all $x$ and

$$\int_{-\infty}^\infty p(x) \, dx = 1$$

We say that random variable $X$ has distribution $p$ if

$$\mathbb{P}\{a < X < b\} = \int_a^b p(x) \, dx$$

for all $a \leq b$.

The definitions of the mean and variance of a random variable $X$ with distribution $p$ are the same as in the discrete case, after replacing the sum with an integral.

For example, the mean of $X$ is

$$\mathbb{E}[X] = \int_{-\infty}^\infty x p(x) \, dx$$

The cumulative distribution function (CDF) of $X$ is defined by

$$F(x) = \mathbb{P}\{X \leq x\} = \int_{-\infty}^x p(t) \, dt$$

18.2.2.1 Normal distribution

Perhaps the most famous distribution is the normal distribution, which has density

$$p(x) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

This distribution has two parameters, $\mu \in \mathbb{R}$ and $\sigma \in (0, \infty)$.

Using calculus, it can be shown that, for this distribution, the mean is $\mu$ and the variance is $\sigma^2$.

We can obtain the moments, PDF, and CDF of the normal density via SciPy as follows:

μ, σ = 0.0, 1.0
u = scipy.stats.norm(μ, σ)
u.mean(), u.var()
(np.float64(0.0), np.float64(1.0))
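
As a numerical check of the definitions in the previous subsection, we can integrate the density with scipy.integrate.quad: the total mass should be one, and integrating up to a point should recover the CDF (the evaluation point 2.0 is arbitrary).

from scipy.integrate import quad
total, _ = quad(u.pdf, -np.inf, np.inf)   # ≈ 1
F2, _ = quad(u.pdf, -np.inf, 2.0)         # ≈ u.cdf(2.0)
total, F2, u.cdf(2.0)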

Here’s a plot of the density --- the famous “bell-shaped curve”:

μ_vals = [-1, 0, 1]
σ_vals = [0.4, 1, 1.6]
fig, ax = plt.subplots()
x_grid = np.linspace(-4, 4, 200)

for μ, σ in zip(μ_vals, σ_vals):
    u = scipy.stats.norm(μ, σ)
    ax.plot(x_grid, u.pdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\mu={μ}, \sigma={σ}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()

Here’s a plot of the CDF:

fig, ax = plt.subplots()
for μ, σ in zip(μ_vals, σ_vals):
    u = scipy.stats.norm(μ, σ)
    ax.plot(x_grid, u.cdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\mu={μ}, \sigma={σ}$')
    ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()

18.2.2.2 Lognormal distribution

The lognormal distribution is a distribution on $(0, \infty)$ with density

$$p(x) = \frac{1}{\sigma x \sqrt{2\pi}} \exp\left(-\frac{(\log x - \mu)^2}{2 \sigma^2}\right)$$

This distribution has two parameters, $\mu$ and $\sigma$.

It can be shown that, for this distribution, the mean is $\exp(\mu + \sigma^2/2)$ and the variance is $[\exp(\sigma^2) - 1] \exp(2\mu + \sigma^2)$.

It can be proved that

  • if $X$ is lognormally distributed, then $\log X$ is normally distributed, and
  • if $X$ is normally distributed, then $\exp X$ is lognormally distributed (we check the second claim numerically below).
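
Here is a small simulation sketch of the second claim (the seed and sample size are arbitrary): exponentiating normal draws produces a sample whose mean is close to the lognormal mean $\exp(\mu + \sigma^2/2)$.

rng = np.random.default_rng(seed=1234)
μ, σ = 0.0, 1.0
draws = np.exp(rng.normal(μ, σ, size=1_000_000))
draws.mean(), np.exp(μ + σ**2 / 2)    # both ≈ 1.6487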

We can obtain the moments, PDF, and CDF of the lognormal density as follows:

μ, σ = 0.0, 1.0
u = scipy.stats.lognorm(s=σ, scale=np.exp(μ))
u.mean(), u.var()
(np.float64(1.6487212707001282), np.float64(4.670774270471604))
μ_vals = [-1, 0, 1]
σ_vals = [0.25, 0.5, 1]
x_grid = np.linspace(0, 3, 200)

fig, ax = plt.subplots()
for μ, σ in zip(μ_vals, σ_vals):
    u = scipy.stats.lognorm(σ, scale=np.exp(μ))
    ax.plot(x_grid, u.pdf(x_grid),
    alpha=0.5, lw=2,
    label=fr'$\mu={μ}, \sigma={σ}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
μ = 1
for σ in σ_vals:
    u = scipy.stats.lognorm(σ, scale=np.exp(μ))
    ax.plot(x_grid, u.cdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\mu={μ}, \sigma={σ}$')
    ax.set_ylim(0, 1)
    ax.set_xlim(0, 3)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()

18.2.2.3 Exponential distribution

The exponential distribution is a distribution supported on $(0, \infty)$ with density

$$p(x) = \lambda \exp(-\lambda x) \qquad (x > 0)$$

This distribution has one parameter, $\lambda$.

The exponential distribution can be thought of as the continuous analog of the geometric distribution.

It can be shown that, for this distribution, the mean is $1/\lambda$ and the variance is $1/\lambda^2$.
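
These formulas can be confirmed by numerical integration against the density; here is a sketch with the hypothetical value $\lambda = 2$:

from scipy.integrate import quad
λ = 2.0
m1, _ = quad(lambda x: x * λ * np.exp(-λ * x), 0, np.inf)       # first moment
m2, _ = quad(lambda x: x**2 * λ * np.exp(-λ * x), 0, np.inf)    # second moment
m1, m2 - m1**2    # ≈ 1/λ and 1/λ², respectively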

We can obtain the moments, PDF, and CDF of the exponential density as follows:

λ = 1.0
u = scipy.stats.expon(scale=1/λ)
u.mean(), u.var()
(np.float64(1.0), np.float64(1.0))
fig, ax = plt.subplots()
λ_vals = [0.5, 1, 2]
x_grid = np.linspace(0, 6, 200)

for λ in λ_vals:
    u = scipy.stats.expon(scale=1/λ)
    ax.plot(x_grid, u.pdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\lambda={λ}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for λ in λ_vals:
    u = scipy.stats.expon(scale=1/λ)
    ax.plot(x_grid, u.cdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\lambda={λ}$')
    ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()

18.2.2.4 Beta distribution

The beta distribution is a distribution on $(0, 1)$ with density

$$p(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} x^{\alpha - 1} (1 - x)^{\beta - 1}$$

where $\Gamma$ is the gamma function.

(The role of the gamma function is just to normalize the density, so that it integrates to one.)

This distribution has two parameters, $\alpha > 0$ and $\beta > 0$.

It can be shown that, for this distribution, the mean is $\alpha / (\alpha + \beta)$ and the variance is $\alpha \beta / [(\alpha + \beta)^2 (\alpha + \beta + 1)]$.

We can obtain the moments, PDF, and CDF of the Beta density as follows:

α, β = 3.0, 1.0
u = scipy.stats.beta(α, β)
u.mean(), u.var()
(np.float64(0.75), np.float64(0.0375))
α_vals = [0.5, 1, 5, 25, 3]
β_vals = [3, 1, 10, 20, 0.5]
x_grid = np.linspace(0, 1, 200)

fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.beta(α, β)
    ax.plot(x_grid, u.pdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\alpha={α}, \beta={β}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.beta(α, β)
    ax.plot(x_grid, u.cdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\alpha={α}, \beta={β}$')
    ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()

18.2.2.5 Gamma distribution

The gamma distribution is a distribution on $(0, \infty)$ with density

$$p(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \exp(-\beta x)$$

This distribution has two parameters, $\alpha > 0$ and $\beta > 0$.

It can be shown that, for this distribution, the mean is $\alpha / \beta$ and the variance is $\alpha / \beta^2$.

One interpretation is that if $X$ is gamma distributed and $\alpha$ is an integer, then $X$ is the sum of $\alpha$ independent exponentially distributed random variables with mean $1/\beta$.
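
Here is a Monte Carlo sketch of this interpretation, with hypothetical values $\alpha = 3$ and $\beta = 2$ (the seed and sample size are arbitrary): we sum $\alpha$ independent exponential draws with mean $1/\beta$ and compare the sample moments with $\alpha/\beta$ and $\alpha/\beta^2$.

rng = np.random.default_rng(seed=1234)
α, β = 3, 2.0
sums = rng.exponential(scale=1/β, size=(1_000_000, α)).sum(axis=1)
sums.mean(), sums.var()    # compare with α/β = 1.5 and α/β² = 0.75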

We can obtain the moments, PDF, and CDF of the Gamma density as follows:

α, β = 3.0, 2.0
u = scipy.stats.gamma(α, scale=1/β)
u.mean(), u.var()
(np.float64(1.5), np.float64(0.75))
α_vals = [1, 3, 5, 10]
β_vals = [3, 5, 3, 3]
x_grid = np.linspace(0, 7, 200)

fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.gamma(α, scale=1/β)
    ax.plot(x_grid, u.pdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\alpha={α}, \beta={β}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
    u = scipy.stats.gamma(α, scale=1/β)
    ax.plot(x_grid, u.cdf(x_grid),
    alpha=0.5, lw=2,
    label=rf'$\alpha={α}, \beta={β}$')
    ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()

18.3 Observed distributions

Sometimes we refer to observed data or measurements as “distributions”.

For example, let’s say we observe the income of 10 people over a year:

data = [['Hiroshi', 1200], 
        ['Ako', 1210], 
        ['Emi', 1400],
        ['Daiki', 990],
        ['Chiyo', 1530],
        ['Taka', 1210],
        ['Katsuhiko', 1240],
        ['Daisuke', 1124],
        ['Yoshi', 1330],
        ['Rie', 1340]]

df = pd.DataFrame(data, columns=['name', 'income'])
df

In this situation, we might refer to the set of their incomes as the “income distribution.”

The terminology is confusing because this set is not a probability distribution --- it’s just a collection of numbers.

However, as we will see, there are connections between observed distributions (i.e., sets of numbers like the income distribution above) and probability distributions.

Below we explore some observed distributions.

18.3.1 Summary statistics

Suppose we have an observed distribution with values $\{x_1, \ldots, x_n\}$.

The sample mean of this distribution is defined as

$$\bar x = \frac{1}{n} \sum_{i=1}^n x_i$$

The sample variance is defined as

$$\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar x)^2$$

For the income distribution given above, we can calculate these numbers via

x = df['income']
x.mean(), x.var()
(np.float64(1257.4), np.float64(22680.93333333333))
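
We can reproduce these numbers directly from the definitions; note that pandas' .var() uses the $n-1$ divisor (ddof=1) by default, matching the sample variance formula above.

x_arr = np.asarray(x)
x_bar = x_arr.mean()                                    # sample mean
x_bar, np.sum((x_arr - x_bar)**2) / (len(x_arr) - 1)    # sample variance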

18.3.2 Visualization

Let’s look at different ways that we can visualize one or more observed distributions.

We will cover

  • histograms
  • kernel density estimates and
  • violin plots

18.3.2.1 Histograms

We can histogram the income distribution we just constructed as follows

fig, ax = plt.subplots()
ax.hist(x, bins=5, density=True, histtype='bar')
ax.set_xlabel('income')
ax.set_ylabel('density')
plt.show()

Let’s look at a distribution from real data.

In particular, we will look at the monthly return on Amazon shares between 2000/1/1 and 2024/1/1.

The monthly return is calculated as the percent change in the share price over each month.

So we will have one observation for each month.

df = yf.download('AMZN', '2000-1-1', '2024-1-1', interval='1mo')
prices = df['Close']
x_amazon = prices.pct_change()[1:] * 100
x_amazon.head()

The first observation is the monthly return (percent change) over January 2000, which was

x_amazon.iloc[0]
Ticker
AMZN    6.679568
Name: 2000-02-01 00:00:00, dtype: float64

Let's histogram the return observations.

fig, ax = plt.subplots()
ax.hist(x_amazon, bins=20, density=True)
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('density')
plt.show()

18.3.2.2 Kernel density estimates

Kernel density estimates (KDEs) provide a simple way to estimate and visualize the density of a distribution.

If you are not familiar with KDEs, you can think of them as a smoothed histogram.

Let’s have a look at a KDE formed from the Amazon return data.

fig, ax = plt.subplots()
sns.kdeplot(x_amazon, ax=ax)
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('KDE')
plt.show()

The smoothness of the KDE depends on how we choose the bandwidth.

fig, ax = plt.subplots()
sns.kdeplot(x_amazon, ax=ax, bw_adjust=0.1, alpha=0.5, label="bw=0.1")
sns.kdeplot(x_amazon, ax=ax, bw_adjust=0.5, alpha=0.5, label="bw=0.5")
sns.kdeplot(x_amazon, ax=ax, bw_adjust=1, alpha=0.5, label="bw=1")
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('KDE')
plt.legend()
plt.show()

When we use a larger bandwidth, the KDE is smoother.

A suitable bandwidth is neither too smooth (underfitting) nor too wiggly (overfitting).

18.3.2.3 Violin plots

Another way to display an observed distribution is via a violin plot.

fig, ax = plt.subplots()
ax.violinplot(x_amazon)
ax.set_ylabel('monthly return (percent change)')
ax.set_xlabel('KDE')
plt.show()

Violin plots are particularly useful when we want to compare different distributions.

For example, let’s compare the monthly returns on Amazon shares with the monthly return on Costco shares.

df = yf.download('COST', '2000-1-1', '2024-1-1', interval='1mo')
prices = df['Close']
x_costco = prices.pct_change()[1:] * 100
fig, ax = plt.subplots()
ax.violinplot([x_amazon['AMZN'], x_costco['COST']])
ax.set_ylabel('monthly return (percent change)')
ax.set_xlabel('retailers')

ax.set_xticks([1, 2])
ax.set_xticklabels(['Amazon', 'Costco'])
plt.show()

18.3.3 Connection to probability distributions

Let’s discuss the connection between observed distributions and probability distributions.

Sometimes it’s helpful to imagine that an observed distribution is generated by a particular probability distribution.

For example, we might look at the returns from Amazon above and imagine that they were generated by a normal distribution.

(Even though this is not true, it might be a helpful way to think about the data.)

Here we match a normal distribution to the Amazon monthly returns by setting the mean of the normal distribution equal to the sample mean and its variance equal to the sample variance.

Then we plot the density and the histogram.

μ = x_amazon.mean()
σ_squared = x_amazon.var()
σ = np.sqrt(σ_squared)
u = scipy.stats.norm(μ, σ)
x_grid = np.linspace(-50, 65, 200)
fig, ax = plt.subplots()
ax.plot(x_grid, u.pdf(x_grid))
ax.hist(x_amazon, density=True, bins=40)
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('density')
plt.show()

The match between the histogram and the density is not bad but also not very good.

One reason is that the normal distribution is not really a good fit for this observed data --- we will discuss this point again when we talk about heavy-tailed distributions.

Of course, if the data really is generated by the normal distribution, then the fit will be better.

Let’s see this in action

  • first we generate random draws from the normal distribution
  • then we histogram them and compare with the density.
μ, σ = 0, 1
u = scipy.stats.norm(μ, σ)
N = 2000  # Number of observations
x_draws = u.rvs(N)
x_grid = np.linspace(-4, 4, 200)
fig, ax = plt.subplots()
ax.plot(x_grid, u.pdf(x_grid))
ax.hist(x_draws, density=True, bins=40)
ax.set_xlabel('x')
ax.set_ylabel('density')
plt.show()

Note that if you keep increasing $N$, which is the number of observations, the fit will get better and better.

This convergence is a version of the “law of large numbers”, which we will discuss later.
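
For example, here is a sketch that repeats the comparison for several (arbitrarily chosen) values of $N$; the histograms hug the density more closely as $N$ grows.

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, N in zip(axes, (100, 10_000, 1_000_000)):
    ax.hist(u.rvs(N), density=True, bins=40, alpha=0.5)
    ax.plot(x_grid, u.pdf(x_grid))
    ax.set_title(f'$N = {N}$')
plt.show()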

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.