lec19

1.

a. \(4 \cdot 8 \cdot 3 = 96\) b. It takes exponential time: the number of possible assignments grows exponentially with the number of data points.
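To make part b concrete, here is a minimal sketch of the growth rate (`num_assignments` is a hypothetical helper name, not from the lecture): each of the n observations can go to any of the k clusters, giving k**n possible assignments for a brute-force search.

```python
# Each of n observations can land in any of k clusters, so a brute-force
# search over assignments must consider k**n possibilities.
def num_assignments(n, k):
    return k ** n

for n in (4, 8, 16):
    print(n, num_assignments(n, k=3))
```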

2.

a.

D = [2, 3, 4, 8, 9, 10, 15, 16, 17]
M = [5, 14, 21]
A = []
# Assignment step: assign each point to the centroid with the smallest
# squared distance.
for i in D:
    L = [(i-m)**2 for m in M]
    A.append(M[L.index(min(L))])
A
[5, 5, 5, 5, 5, 14, 14, 14, 14]

b.

import random
def update_centroids(M, A):
    # If a centroid has no assignments, hand it a random observation;
    # recurse in case the reassignment empties another centroid.
    def reassign_empty(M, A):
        for i in M:
            if i not in A:
                new = random.choice(D)
                A[D.index(new)] = i
                return reassign_empty(M, A)
        return A
    A = reassign_empty(M, A)
    # Update step: each centroid becomes the mean of its assigned points.
    M_new = [0 for m in M]
    S_len = [0 for m in M]
    for d, a in zip(D, A):
        for m in range(len(M)):
            if a == M[m]:
                S_len[m] += 1
                M_new[m] += d
    M = [x/s for (x, s) in zip(M_new, S_len)]
    return M
M = update_centroids(M, A)
M
[5.2, 16.0, 10.0]

c.

D = [2, 3, 4, 8, 9, 10, 15, 16, 17]
A = []
for i in D:
    L = [(i-m)**2 for m in M]
    A.append(M[L.index(min(L))])
A
[5.2, 5.2, 5.2, 10.0, 10.0, 10.0, 16.0, 16.0, 16.0]

d.

In my code above, I select a random member of D and reassign it to the empty centroid. I made the function recursive in case the reassignment itself creates a new empty centroid; it runs until each centroid has at least one assignment.

e.

while assignments change:
    assignment: assign each observation to the closest centroid
    while some centroid has no assignments:
        assign a random observation to the empty centroid
    recalculate centroids:
        m_i = (1/length(assignments to centroid i)) * sum(each assignment to centroid i)

Time complexity: \(O(n^{dk+1}\log n)\)
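The pseudocode above can be sketched as a single runnable function. This is one possible implementation under my own naming (`kmeans`, `max_iter`), not the lecture's reference code; it repairs empty clusters with a random observation, as in part b, and caps the number of iterations as a safety bound.

```python
import random

def kmeans(D, M, max_iter=100):
    # A sketch of Lloyd's algorithm following the pseudocode above.
    A = None
    for _ in range(max_iter):
        # Assignment step: each observation goes to the closest centroid.
        A_new = [min(M, key=lambda m: (d - m) ** 2) for d in D]
        # Empty-cluster repair: hand each empty centroid a random observation.
        while any(m not in A_new for m in M):
            for m in M:
                if m not in A_new:
                    A_new[random.randrange(len(D))] = m
        if A_new == A:  # assignments stopped changing: converged
            break
        A = A_new
        # Update step: each centroid becomes the mean of its assignments.
        M = [sum(d for d, a in zip(D, A) if a == m) / A.count(m) for m in M]
    return M, A

random.seed(0)
centroids, assignments = kmeans([2, 3, 4, 8, 9, 10, 15, 16, 17], [5, 14, 21])
```

Because labels are stored as centroid values, convergence is detected when the assignment list is identical between two consecutive passes.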

3.

a.

from scipy.stats import norm
D = [1, 4, 6, 9]
Mu = [1, 9]
Sigma = [1, 1]
results = []
for mu, sigma in zip(Mu, Sigma):
    results.append([f"mu={mu}, sigma={sigma}"] + list(norm.pdf(D, mu, sigma)))
results
[['mu=1, sigma=1',
  0.3989422804014327,
  0.0044318484119380075,
  1.4867195147342979e-06,
  5.052271083536893e-15],
 ['mu=9, sigma=1',
  5.052271083536893e-15,
  1.4867195147342979e-06,
  0.0044318484119380075,
  0.3989422804014327]]

b.

# Pair up the two likelihood columns (note: this rebinds D).
D = list(zip(results[0][1:], results[1][1:]))
posterior = []
for i1, i2 in D:
    # Normalize so each posterior row sums to 1.
    normalization = i1 + i2
    posterior.append([i1 / normalization, i2 / normalization])

posterior

c.

# Updated priors: the average responsibility of each component.
prior = [0, 0]
for i in posterior:
    prior[0] += i[0]
    prior[1] += i[1]
prior = [i/len(posterior) for i in prior]
prior
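As a sanity check on parts a–c, the posterior rows and the updated priors should each sum to 1. This is a self-contained sketch that recomputes them with `math.exp` instead of `scipy.stats.norm` (the helper name `normal_pdf` is my own):

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

data = [1, 4, 6, 9]
params = [(1, 1), (9, 1)]  # (mu, sigma) for each component

# E step: posterior responsibility of each component for each point
# (uniform priors, as in the computation above).
posterior = []
for d in data:
    likes = [normal_pdf(d, mu, sigma) for mu, sigma in params]
    z = sum(likes)
    posterior.append([l / z for l in likes])

prior = [sum(row[k] for row in posterior) / len(posterior) for k in range(2)]

# Each posterior row is a distribution over components, and the updated
# priors average those distributions, so both should sum to 1.
assert all(abs(sum(row) - 1) < 1e-12 for row in posterior)
assert abs(sum(prior) - 1) < 1e-12
```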

d.

# M step: responsibility-weighted means and variances of the data points.
# Note that D was rebound to likelihood pairs above, so use the raw data here.
data = [1, 4, 6, 9]
mu = [0, 0]
mu_d = [0, 0]
for p, d in zip(posterior, data):
    mu[0] += p[0] * d
    mu[1] += p[1] * d
    mu_d[0] += p[0]
    mu_d[1] += p[1]
mu[0] = mu[0] / mu_d[0]
mu[1] = mu[1] / mu_d[1]
mu
sigma = [0, 0]
sigma_d = [0, 0]
for p, d in zip(posterior, data):
    sigma[0] += p[0] * (d - mu[0])**2
    sigma[1] += p[1] * (d - mu[1])**2
    sigma_d[0] += p[0]
    sigma_d[1] += p[1]
sigma[0] = sigma[0] / sigma_d[0]
sigma[1] = sigma[1] / sigma_d[1]
sigma
[["mu_1", "mu_2"], mu, ["sigma2_1", "sigma2_2"], sigma]

e.

Parts a and b are the E step; parts c and d are the M step.
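Putting parts a–d together, here is one hedged sketch of the full EM loop for this two-Gaussian mixture (`normal_pdf` and `em_step` are my own names; `math.exp` stands in for `scipy.stats.norm`, and the mixing priors are folded into the E step):

```python
import math

def normal_pdf(x, mu, var):
    # Density of N(mu, var) at x.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(D, pi, mu, var):
    # E step: posterior responsibilities (parts a and b).
    post = []
    for d in D:
        w = [pi[k] * normal_pdf(d, mu[k], var[k]) for k in range(2)]
        z = sum(w)
        post.append([wk / z for wk in w])
    # M step: update priors, means, and variances (parts c and d).
    n_k = [sum(row[k] for row in post) for k in range(2)]
    pi = [n / len(D) for n in n_k]
    mu = [sum(row[k] * d for row, d in zip(post, D)) / n_k[k] for k in range(2)]
    var = [sum(row[k] * (d - mu[k]) ** 2 for row, d in zip(post, D)) / n_k[k]
           for k in range(2)]
    return pi, mu, var

D = [1, 4, 6, 9]
pi, mu, var = [0.5, 0.5], [1.0, 9.0], [1.0, 1.0]
for _ in range(20):
    pi, mu, var = em_step(D, pi, mu, var)
```

With this symmetric data the components settle on either side of the midpoint 5, with equal mixing weights.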