CS540 Machine learning, Lecture 14: Mixtures, EM, Non-parametric models
Source: murphyk/Teaching/CS540-Fall08/L14.pdf


Page 1:

CS540 Machine learning
Lecture 14
Mixtures, EM, Non-parametric models

Page 2:

Outline

• Mixture models
• EM for mixture models
• K-means clustering
• Conditional mixtures
• Kernel density estimation
• Kernel regression

Page 3:

Gaussian mixture models

• GMM

p(x|θ) = ∑_{k=1}^K p(z = k|π) p(x|z = k, φ_k)

p(x|z = k, φ_k) = N(x|µ_k, Σ_k)
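As a concrete 1-D instance of this mixture density (an illustrative sketch, not from the slides):

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate Gaussian density N(x | mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_pdf(x, pis, mus, sigmas):
    """Mixture density p(x | theta) = sum_k pi_k N(x | mu_k, sigma_k^2)."""
    return sum(pi * normal_pdf(x, mu, s)
               for pi, mu, s in zip(pis, mus, sigmas))
```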

Page 4:

Bernoulli mixture models

• BMM

p(x|θ) = ∑_{k=1}^K p(z = k|π) p(x|z = k, φ_k)

p(x|z = k, µ_k) = ∏_{j=1}^d Ber(x_j|µ_{j,k})
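The same density for binary vectors can be sketched as (illustrative code, not from the slides; binary vectors are plain Python lists):

```python
def bernoulli_component(x, mu):
    """prod_j Ber(x_j | mu_jk) for one binary vector x."""
    p = 1.0
    for xj, mj in zip(x, mu):
        p *= mj if xj == 1 else (1.0 - mj)
    return p

def bmm_pdf(x, pis, mus):
    """p(x | theta) = sum_k pi_k prod_j Ber(x_j | mu_jk)."""
    return sum(pi * bernoulli_component(x, mu) for pi, mu in zip(pis, mus))
```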

Page 5:

MLE for mixture models

• Hard to maximize in closed form, because a sum over z_i sits inside the log; gradient methods can find a local maximum.

ℓ(θ) = log p(x_{1:n}|θ) = ∑_i log p(x_i|θ) = ∑_i log ∑_{z_i} p(x_i, z_i|θ)
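To make the log-sum structure concrete, here is a minimal sketch of the observed-data log-likelihood for a 1-D GMM (illustrative, not from the slides):

```python
import math

def gmm_loglik(xs, pis, mus, sigmas):
    """ell(theta) = sum_i log sum_k pi_k N(x_i | mu_k, sigma_k^2)."""
    def npdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(math.log(sum(pi * npdf(x, mu, s)
                            for pi, mu, s in zip(pis, mus, sigmas)))
               for x in xs)
```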

Page 6:

Outline

• Mixture models
• EM for mixture models
• K-means clustering
• Conditional mixtures
• Kernel density estimation
• Kernel regression

Page 7:

Expectation Maximization

• EM is an algorithm for finding MLE or MAP for problems with hidden variables

• Key intuition: if we knew what cluster each point belonged to (i.e., the z_i variables), we could partition the data and find the MLE for each cluster separately

• E step: infer responsibility of each cluster for each data point

• M step: optimize parameters using “filled in” data z

• Repeat until convergence

rik = p(zi = k|θ,D)
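The responsibility r_ik can be computed for a 1-D GMM as follows (an illustrative helper, not from the slides):

```python
import math

def responsibilities(x, pis, mus, sigmas):
    """E step for one point: r_k = pi_k N(x|mu_k) / sum_k' pi_k' N(x|mu_k')."""
    def npdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    w = [pi * npdf(x, mu, s) for pi, mu, s in zip(pis, mus, sigmas)]
    z = sum(w)
    return [wk / z for wk in w]
```

A point near a component's mean gets almost all of that component's responsibility, and the responsibilities always sum to one.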

Page 8:

Expected complete data loglik

• Complete data loglik

• Expected complete data loglik

ℓ_c(θ) = log p(x_{1:n}, z_{1:n}|θ)
       = log ∏_i p(z_i|π) p(x_i|z_i, φ)
       = log ∏_i ∏_k [π_k p(x_i|φ_k)]^{I(z_i = k)}
       = ∑_i ∑_k I(z_i = k) [log π_k + log p(x_i|φ_k)]

Q(θ, θ^old) := E ∑_i log p(x_i, z_i|θ)
            = E ∑_i ∑_k I(z_i = k) log[π_k p(x_i|φ_k)]
            = ∑_i ∑_k p(z_i = k|x_i, θ^old) log[π_k p(x_i|φ_k)]

Page 9:

EM for mixture models

• E step: compute responsibilities

r_ik := p(z_i = k|x_i, θ^old)

• M step: maximize Q wrt θ; the objective decouples across π and the φ_k

Q(θ, θ^old) = ∑_i ∑_k r_ik log π_k + ∑_k ∑_i r_ik log p(x_i|φ_k)
            = J(π) + ∑_k J(φ_k)


Page 10:

EM for GMM

• E step

r_ik = p(z_i = k|x_i, θ^old) = π_k N(x_i|µ_k, Σ_k) / ∑_{k′} π_{k′} N(x_i|µ_{k′}, Σ_{k′})

• M step for π (with a Lagrange multiplier enforcing ∑_k π_k = 1)

0 = ∂/∂π_k [ ∑_i ∑_k r_ik log π_k + λ(1 − ∑_k π_k) ]

π_k = (1/n) ∑_i r_ik = r_k/n

Page 11:

M step for mu, Sigma

J(µ_k, Σ_k) = −(1/2) ∑_i r_ik [ log |Σ_k| + (x_i − µ_k)^T Σ_k^{−1} (x_i − µ_k) ]

0 = ∂J(µ_k, Σ_k)/∂µ_k

µ_k = ∑_i r_ik x_i / ∑_i r_ik

Σ_k = ∑_i r_ik (x_i − µ_k)(x_i − µ_k)^T / ∑_i r_ik
    = ( ∑_i r_ik x_i x_i^T − r_k µ_k µ_k^T ) / r_k
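Putting the E and M steps together, a minimal 1-D EM loop might look like this (a sketch; the quantile-based initialization and the variance floor are my assumptions, not from the slides):

```python
import math

def em_gmm_1d(xs, K=2, iters=50):
    """EM for a 1-D Gaussian mixture, following the updates above."""
    s = sorted(xs)
    n = len(xs)
    mus = [s[i * (n - 1) // (K - 1)] for i in range(K)]  # init means at spread-out data points
    sigmas = [1.0] * K
    pis = [1.0 / K] * K

    def npdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E step: responsibilities r[i][k]
        r = []
        for x in xs:
            w = [pis[k] * npdf(x, mus[k], sigmas[k]) for k in range(K)]
            z = sum(w)
            r.append([wk / z for wk in w])
        # M step: closed-form updates for pi_k, mu_k, sigma_k
        for k in range(K):
            rk = sum(r[i][k] for i in range(n))
            pis[k] = rk / n
            mus[k] = sum(r[i][k] * xs[i] for i in range(n)) / rk
            var = sum(r[i][k] * (xs[i] - mus[k]) ** 2 for i in range(n)) / rk
            sigmas[k] = max(math.sqrt(var), 1e-6)  # floor guards against sigma -> 0 collapse
    return pis, mus, sigmas
```

On two well-separated clusters, this recovers the cluster means and roughly equal mixing weights.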

Page 12:

EM for GMM

Page 13:

EM for mixtures of Bernoullis

• E step

r_ik = π_k p(x_i|µ_k) / ∑_{k′} π_{k′} p(x_i|µ_{k′})

• M step

µ_k^new = ∑_{i=1}^n r_ik x_i / ∑_{i=1}^n r_ik
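The M-step update is just a responsibility-weighted average of the binary vectors; a sketch (illustrative, with binary vectors stored as lists):

```python
def bmm_m_step(X, R):
    """M step for a mixture of Bernoullis: mu_k = sum_i r_ik x_i / sum_i r_ik.
    X is a list of binary vectors, R[i][k] the responsibilities."""
    n, K, d = len(X), len(R[0]), len(X[0])
    mus = []
    for k in range(K):
        rk = sum(R[i][k] for i in range(n))
        mus.append([sum(R[i][k] * X[i][j] for i in range(n)) / rk
                    for j in range(d)])
    return mus
```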

Page 14:

Singularities in GMM

• The likelihood is unbounded: it can be driven to infinity by centering a component on a single data point and letting σ_k → 0 (a form of overfitting)

Page 15:

EM for MAP for GMMs

• Maximize expected complete data log likelihood plus log prior

J(θ) = [ ∑_i ∑_k r_ik log p(x_i|z_i = k, θ) ] + log p(π) + ∑_k log p(φ_k)

π ∼ Dir(α1)

π_k = ( ∑_i r_ik + α − 1 ) / ( n + Kα − K )

p(µ_k, Λ_k) = NW(µ_k, Λ_k|m_k, η_k, S_k, ν_k) = N(µ_k|m_k, η_k Λ_k) Wi(Λ_k|ν_k, S_k)

µ_k = ( η_k m_k + ∑_i r_ik x_i ) / ( η_k + ∑_i r_ik )

Λ_k^{−1} = [ ∑_i r_ik (x_i − µ_k)(x_i − µ_k)^T + η_k (µ_k − m_k)(µ_k − m_k)^T + S_k ] / ( ∑_i r_ik + ν_k − d )

Page 16:

Setting hyper-parameters

• Can set α_k = 1 (see later)
• m_0 = 0, η_k = 0 (improper); ν_k = d + 2; S_k depends on the data scale

E[Λ_k^{−1}] = S_k / (ν_k − d − 1)

S_k = (ν_k − d − 1) (σ̂²/K) I_d

S_k = (ν_k − d − 1) (1/K) diag(σ̂_1², …, σ̂_d²)

Page 17:

Choosing K

• Search over K, scoring each candidate with CV or BIC
• Or set α_k ≈ 0 and run EM once; unneeded components get responsibility 0
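BIC-based selection can be sketched as follows (hypothetical helper names; `logliks[j]` is assumed to be the maximized log-likelihood of the j-th candidate model):

```python
import math

def bic(loglik, n_params, n):
    """BIC = -2 * loglik + n_params * log(n); lower is better."""
    return -2.0 * loglik + n_params * math.log(n)

def choose_k(logliks, param_counts, n):
    """Return the index of the candidate model minimizing BIC."""
    scores = [bic(ll, p, n) for ll, p in zip(logliks, param_counts)]
    return min(range(len(scores)), key=scores.__getitem__)
```

The log(n) penalty lets a moderate K win even though larger K always raises the raw log-likelihood.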

Page 18:

Other issues

• Standard to do multiple random restarts, and return best of T tries to avoid local optima

• For GMMs, common to initialize using K-means or set each µk to one of the data points; otherwise, random initial params

• Convergence declared if params stop changing, or if the observed data loglik stops increasing

Page 19:

EM for other models

• Latent variable models, e.g. PPCA, HMMs
• MLE of a single MVN with missing values
• MLE of scale mixture models (e.g. Student t, Lasso)
• Empirical Bayes
• Etc.

Page 20:

Outline

• Mixture models
• EM for mixture models
• K-means clustering
• Conditional mixtures
• Kernel density estimation
• Kernel regression

Page 21:

K-means clustering

• GMM with Σ_k = σ² I_d, π_k = 1/K; only µ_k is learned
• In the E step, use hard assignment (can use kd-trees to speed this up)

Algorithm 1: K-means
1. Initialize m_k, k = 1 to K
2. repeat
3.   Assign each datapoint to its closest cluster center: z_i = argmin_k ||x_i − m_k||²
4.   Update each cluster center by computing the mean of all points assigned to it: m_k = (1/n_k) ∑_{i: z_i = k} x_i
5. until converged
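The algorithm above can be sketched in code; the 1-D restriction and the spread-out data-point initialization are assumptions for brevity:

```python
def kmeans_1d(xs, K, iters=100):
    """Lloyd's algorithm on 1-D data."""
    s = sorted(xs)
    centers = [s[i * (len(s) - 1) // (K - 1)] for i in range(K)]  # init at spread-out points
    for _ in range(iters):
        # Assignment step: each point goes to its closest center (hard E step)
        clusters = [[] for _ in range(K)]
        for x in xs:
            j = min(range(K), key=lambda k: (x - centers[k]) ** 2)
            clusters[j].append(x)
        # Update step: each center becomes the mean of its assigned points (M step)
        new = [sum(c) / len(c) if c else centers[j]
               for j, c in enumerate(clusters)]
        if new == centers:  # converged: assignments no longer change the centers
            break
        centers = new
    return centers
```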

Page 22:

Vector quantization

Replace each xi ∈ R2 with a codeword zi in {1,..,K}

This is an index into the codebook m1, m2, …, mK in R2

Page 23:

K-means minimizes the distortion

[Figure panels: Original, K = 2, K = 4]


Page 26:

K-medoids

• Each cluster is represented by a single data point (prototype), rather than an average of many data points

Page 27:

EM vs K-means

Cannot use CV to select K for K-means: the distortion on held-out data keeps decreasing as K grows, so there is no minimum to pick out (unlike the cross-validated likelihood of a GMM)

[Figure: test error vs K]

Page 28:

Outline

• Mixture models
• EM for mixture models
• K-means clustering
• Conditional mixtures
• Kernel density estimation
• Kernel regression

Page 29:

Conditional mixtures

Page 30:

Mixtures of linear regression

[Figures (Bishop): two fits, with LL = −3 and LL = −27.6]

Page 31:

Mixtures of logistic regression

Page 32:

Hierarchical mixtures of experts

Brutti

Page 33:

One to many functions

Sminchisescu

Page 34:

Ambiguity in inferring 3d from 2d

Sminchisescu

Page 35:

Outline

• Mixture models
• EM for mixture models
• K-means clustering
• Conditional mixtures
• Kernel density estimation
• Kernel regression

Page 36:

Kernel density estimation

• Parzen window density estimator

• Put one centroid on each data point, µ_i = x_i, and set π_i = 1/n, Σ_i = h² I_d
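For 1-D data with a Gaussian kernel, the Parzen estimator can be sketched as (illustrative, not from the slides):

```python
import math

def parzen_kde(x, data, h):
    """Parzen window estimate p(x) = (1/n) sum_i N(x | x_i, h^2)."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) / (h * math.sqrt(2 * math.pi))
               for xi in data) / n
```

With a single data point this reduces to one Gaussian bump; with many points the bumps average into a smooth density whose wiggliness is controlled by h.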

Page 37:

Bandwidth h

Page 38:

Outline

• Mixture models
• EM for mixture models
• K-means clustering
• Conditional mixtures
• Kernel density estimation
• Kernel regression

Page 39:

Kernel regression

• Nadaraya-Watson model

• Fit a KDE to the joint pairs (x_i, y_i), then predict with f(x) = E[y|x]
• E.g. the kernel is Gaussian
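Assuming a Gaussian kernel, the resulting Nadaraya-Watson predictor can be sketched as (illustrative code, not from the slides):

```python
import math

def nw_regress(x, xs, ys, h):
    """Nadaraya-Watson: f(x) = sum_i k_i y_i / sum_i k_i,
    with Gaussian kernel k_i = exp(-(x - x_i)^2 / (2 h^2))."""
    ks = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(k * y for k, y in zip(ks, ys)) / sum(ks)
```

The prediction is a locally weighted average of the training targets: points whose x_i lie within a bandwidth or so of the query dominate.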

Page 40:

Gaussian kernel regression

Numerator: ∫ y p(x, y) dy ∝ ∑_i y_i N(x|x_i, h²)

Denominator: p(x) ∝ ∑_i N(x|x_i, h²)

Hence

f(x) = E[y|x] = ∑_i y_i N(x|x_i, h²) / ∑_{i′} N(x|x_{i′}, h²)

Page 41:

Gaussian kernel regression

Page 42:

Generative models for regression

• We can use other models for p(x, y), e.g. a finite GMM
• Harder to fit, but faster at test time
• Generative models can handle missing data