A marginalized variational Bayesian approach to the analysis of array data

Abstract

Background

Bayesian unsupervised learning methods have many applications in the analysis of biological data. For example, for the cancer expression array datasets presented in this study, they can be used to resolve possible disease subtypes and to indicate statistically significant dysregulated genes within these subtypes.

Results

In this paper we outline a marginalized variational Bayesian inference method for unsupervised clustering. In this approach latent process variables and model parameters are allowed to be dependent. This is achieved by marginalizing the mixing Dirichlet variables and then performing inference in the reduced variable space. An iterative update procedure is proposed.

Conclusion

Theoretically and experimentally we show that the proposed algorithm gives a much better free-energy lower bound than a standard variational Bayesian approach. The algorithm is computationally efficient and its performance is demonstrated on two expression array data sets.

Background

Unsupervised clustering methods from machine learning are well suited to extracting structure from biological data sets. There has been extensive work in this direction using hierarchical clustering analysis [1], K-Means clustering [2] and self-organizing maps [3]. Bayesian methods are an effective alternative since they provide a mechanism for inferring the number of clusters. They can easily incorporate priors which penalise over-complex models that would otherwise fit to noise, and they allow probabilistic confidence measures for cluster membership. In this paper, we focus on Bayesian models which use Dirichlet priors. Examples of these models include Latent Dirichlet Allocation [4] (LDA) for use in text modeling and Latent Process Decomposition (LPD) [5] for the analysis of microarray gene expression datasets. One appealing feature of these Dirichlet-based models is that each data point can be stochastically associated with multiple clusters. One approach to model inference is to use methods such as Markov Chain Monte Carlo and Gibbs sampling. However, for the large datasets which occur in many biomedical applications these methods can be too slow for certain tasks such as model selection. This motivates our interest in computationally efficient variational inference methods [4, 6].

Typically, these inference methods posit that all the latent variables and model parameters are independent of each other (i.e. a fully factorized family), which is a strong assumption. In this paper we propose and study an alternative inference method for LPD, which we call marginalized variational Bayes (MVB). In this approach the latent process (cluster) variables and model parameters are allowed to be dependent on each other. As we will show in the next section, this assumption is made feasible by marginalizing the mixing Dirichlet variables and then performing inference in the reduced variable space. This new approach to constructing an LPD model provides, both theoretically and experimentally, much better free-energy lower bounds than a standard variational Bayes (VB) approach [6, 7]. Moreover, the algorithm is computationally efficient and converges faster, as we demonstrate with experiments using expression array datasets.

Methods

The LPD probabilistic model

We start by recalling LPD [5]. Let d index samples, g the genes (attributes) and k the soft clusters (samples are represented as combinatorial mixtures over clusters). The numbers of clusters, genes and samples are denoted $K$, $G$ and $D$ respectively. For each data point $E_d$, we have a multiple process (cluster) latent variable $Z_d = \{Z_{dg} : g = 1, \ldots, G\}$, where each $Z_{dg}$ is a $K$-dimensional unit-basis vector, i.e., choosing cluster $k$ is represented by $Z_{dg,k} = 1$ and $Z_{dg,j} = 0$ for $j \neq k$. Given the mixing coefficient $\theta_d$, the conditional distribution of $Z_d$ is given by $p(Z_d|\theta_d) = \prod_{g,k} \theta_{dk}^{Z_{dg,k}}$.
The conditional distribution of the data, given the latent variables, is $p(E_d|Z_d, \mu, \beta) = \prod_{g,k} [\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})]^{Z_{dg,k}}$, where $\mathcal{N}$ is the Gaussian distribution with mean $\mu$ and precision $\beta$.

Now we introduce conjugate priors over the parameters $\theta$, $\mu$, $\beta$. Specifically, we choose $p(\theta_d) = \mathrm{Dir}(\theta_d|\alpha)$, $p(\mu) = \prod_{g,k} \mathcal{N}(\mu_{gk}|m_0, v_0)$, and $p(\beta) = \prod_{g,k} \Gamma(\beta_{gk}|a_0, b_0)$, where the Gamma density is defined by $\Gamma(x|a_0, b_0) = x^{a_0 - 1}\exp(-x/b_0)/(b_0^{a_0}\Gamma(a_0))$. We assume the data are i.i.d. and let $\Theta = \{\mu, \beta\}$. The joint distribution is given by

$$p(E, \theta, Z|\Theta) = \prod_d p(\theta_d)\, p(Z_d|\theta_d)\, p(E_d|\mu, \beta, Z_d),$$
(1)

One can easily see that the marginal likelihood p(E|Θ) is the same as that in [5]. It is important to note that, in standard Gaussian mixture models [8], each data point is associated with a single $K$-dimensional latent variable, which restricts the data point to belonging to one cluster. Instead, in LPD each data point $E_d$ is associated with multiple latent variables $Z_d = \{Z_{dg} : g = 1, \ldots, G\}$, and thus $E_d$ is stochastically associated with multiple clusters.
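
To make the generative process concrete, the following sketch draws a synthetic dataset from the LPD model just described. This is an illustrative sketch only; the array sizes, the symmetric Dirichlet parameter and the function name `sample_lpd` are our own choices, not part of the original model specification.

```python
import numpy as np

def sample_lpd(D=100, G=20, K=3, alpha=1.0, m0=0.0, v0=1.0, a0=20.0, b0=0.05, seed=0):
    """Sample a synthetic D x G dataset E from the LPD generative model.

    For each sample d: draw theta_d ~ Dirichlet(alpha); for each gene g draw a
    cluster Z_dg ~ Multinomial(theta_d) and then E_dg ~ N(mu_{g,Z_dg}, 1/beta_{g,Z_dg}).
    """
    rng = np.random.default_rng(seed)
    # Cluster-specific means and precisions drawn from the conjugate priors
    # (v0 is the precision of the Gaussian prior on the means).
    mu = rng.normal(m0, np.sqrt(1.0 / v0), size=(G, K))
    beta = rng.gamma(shape=a0, scale=b0, size=(G, K))
    E = np.empty((D, G))
    Z = np.empty((D, G), dtype=int)
    for d in range(D):
        theta_d = rng.dirichlet(np.full(K, alpha))
        for g in range(G):
            k = rng.choice(K, p=theta_d)
            Z[d, g] = k
            E[d, g] = rng.normal(mu[g, k], 1.0 / np.sqrt(beta[g, k]))
    return E, Z, mu, beta

E, Z, mu, beta = sample_lpd()
print(E.shape)  # (100, 20)
```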

Marginalized variational Bayes

In this section we describe a marginalized variational Bayesian approach for LPD. The target of model inference is to compute the posterior distribution p(θ, Z, Θ|E) = p(E, θ, Z|Θ)p(Θ)/p(E). Unfortunately, this involves computationally intensive estimation of the integral in the evidence p(E). Hence, we approximate the posterior distribution by a member of a hypothesis family whose elements are denoted by q(θ, Z, Θ).

The standard variational Bayesian method [7, 10] uses the equality:

$$\log p(E) = \log \int \sum_Z p(E, \theta, Z, \Theta)\, d\theta\, d\Theta = \mathbb{E}_q\left[\log \frac{p(E, \theta, Z|\Theta)\, p(\Theta)}{q(\theta, Z, \Theta)}\right] + \mathrm{KL}\left(q(\theta, Z, \Theta)\,\|\,p(\theta, Z, \Theta|E)\right).$$
(2)

Our optimization target is to maximize the free energy $\mathbb{E}_q\left[\log \frac{p(E,\theta,Z|\Theta)\, p(\Theta)}{q(\theta,Z,\Theta)}\right]$, which is equivalent to minimizing the KL-divergence. One standard way is to choose the hypothesis family in a factorized form q(θ, Z, Θ) = q(θ)q(Z)q(Θ). In this setting, the free-energy lower bound in equation (2) can be written as:

$$\mathcal{L}(q(\theta), q(Z), q(\Theta)) := \mathbb{E}_q\left[\log \frac{p(E, \theta, Z|\Theta)}{q(\theta)\, q(Z)}\right] - \mathrm{KL}\left(q(\Theta)\,\|\,p(\Theta)\right).$$
(3)

In this paper we study an alternative approach, motivated by [9], which marginalizes the latent variable θ and performs variational inference only with respect to the remaining latent variable Z. In essence, we assume that the latent variable θ can be dependent on Z and Θ, and the hypothesis family is chosen in the form q(θ, Z, Θ) = q(θ|Z, Θ)q(Z)q(Θ). Since the distribution q(θ|Z, Θ) is arbitrary, let it be equal to $p(\theta|E, Z, \Theta) = \frac{p(E, \theta, Z, \Theta)}{p(E, Z, \Theta)}$. Putting this into equation (2) and observing that $\frac{p(E, \theta, Z|\Theta)}{p(\theta|E, Z, \Theta)} = p(E, Z|\Theta)$ gives

$$\log p(E) = \mathbb{E}_q\left[\log \frac{p(E, Z|\Theta)}{q(Z)}\right] - \mathrm{KL}\left(q(\Theta)\,\|\,p(\Theta)\right) + \mathrm{KL}\left(p(\theta|E, Z, \Theta)\, q(\Theta)\, q(Z)\,\|\,p(\theta, Z, \Theta|E)\right)$$
(4)
$$= \mathbb{E}_q\left[\log \frac{p(E, Z|\Theta)}{q(Z)}\right] - \mathrm{KL}\left(q(\Theta)\,\|\,p(\Theta)\right) + \mathrm{KL}\left(q(Z)\, q(\Theta)\,\|\,p(Z, \Theta|E)\right).$$
(5)

Therefore, it is sufficient to maximize the lower bound

$$\mathcal{L}(q(Z), q(\Theta)) := \mathbb{E}_{q(\Theta)q(Z)}\left[\log \frac{p(E, Z|\Theta)}{q(Z)}\right] - \mathrm{KL}\left(q(\Theta)\,\|\,p(\Theta)\right).$$
(6)

Observe that $\log \frac{p(E, Z|\Theta)}{q(Z)} \geq \int q(\theta) \log \frac{p(E, \theta, Z|\Theta)}{q(Z)\, q(\theta)}\, d\theta$ by Jensen's inequality. Consequently, we see that

$$\mathcal{L}(q(\theta), q(Z), q(\Theta)) \leq \mathcal{L}(q(Z), q(\Theta)).$$
(7)

As mentioned above, since θ can be dependent on Z, Θ, marginalized VB (MVB) yields a tighter lower bound for the likelihood than the standard VB approach in [6], thus potentially yielding better clustering results.
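
For completeness, the relations in equations (2)-(7) can be strung together into a single chain showing how the two bounds sit relative to each other and to the evidence. This is a restatement of the steps above in the same notation, using the Jensen step just given:

$$\mathcal{L}(q(\theta), q(Z), q(\Theta)) = \mathbb{E}_{q(Z)q(\Theta)}\!\left[\int q(\theta)\log\frac{p(E,\theta,Z|\Theta)}{q(\theta)\, q(Z)}\,d\theta\right] - \mathrm{KL}(q(\Theta)\,\|\,p(\Theta)) \;\leq\; \mathbb{E}_{q(Z)q(\Theta)}\!\left[\log\frac{p(E,Z|\Theta)}{q(Z)}\right] - \mathrm{KL}(q(\Theta)\,\|\,p(\Theta)) = \mathcal{L}(q(Z), q(\Theta)) \;\leq\; \log p(E).$$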

Model inference and learning

We now turn our attention to the derivation of the updates for marginalized VB, following the inference methodology of [7, 10]. For simplicity, let the posterior distributions q(Z), q(μ), q(β) be indexed by parameters. Specifically, we assume that $q(Z) = \prod_{d,g,k} r_{dg,k}^{Z_{dg,k}}$, $q(\mu) = \prod_{g,k} \mathcal{N}(\mu_{gk}|m_{gk}, v_{gk})$, and $q(\beta) = \prod_{g,k} \Gamma(\beta_{gk}|a_{gk}, b_{gk})$. Correspondingly, the free-energy lower bound $\mathcal{L}(q(Z), q(\Theta))$ in equation (6) becomes a variational functional over these parameters, and hence we write $\mathcal{L}(R; \Theta)$ later on. The model inference can be summarized by the following coordinate ascent updates.

Let $Z^{\backslash dg}$ denote the random variables excluding $Z_{dg}$. For any d, g, let Θ and $Z^{\backslash dg}$ be fixed; we then take the functional derivative of the free energy $\mathcal{L}(q(Z), q(\Theta))$ with respect to $q(Z_{dg})$ and obtain the update:

$$q(Z_{dg}) \propto \exp\left(\mathbb{E}_{q^{\backslash dg}}\left[\log p(E, Z|\Theta)\right]\right).$$
(8)

For the updates for q(Θ), we obtain

$$q(\mu) \propto p(\mu)\exp\left(\mathbb{E}_{q^{\backslash \mu}}\left[\log p(E, Z|\Theta)\right]\right), \qquad q(\beta) \propto p(\beta)\exp\left(\mathbb{E}_{q^{\backslash \beta}}\left[\log p(E, Z|\Theta)\right]\right).$$
(9)

Marginalizing out θ in (1) yields

$$p(E, Z|\Theta) := \prod_d \left[p(Z_d|\alpha)\right] p(E_d|\mu, \beta, Z_d) = \prod_d \left[\frac{\Gamma\left(\sum_k \alpha_k\right)}{\Gamma\left(\sum_k \alpha_k + \sum_{g,k} Z_{dg,k}\right)} \prod_k \frac{\Gamma\left(\alpha_k + \sum_g Z_{dg,k}\right)}{\Gamma(\alpha_k)}\right] \prod_{g,k}\left[\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})\right]^{Z_{dg,k}}.$$
(10)

Estimating the expectations of the log likelihoods in equations (8) and (9), we derive the variational EM-updates as follows. Details are postponed to the Appendix.

E-step: using equation (8) and denoting the digamma function by ψ, we have

$$r_{dg,k} \propto \left(\alpha_k + \sum_{g' \neq g} r_{dg',k}\right) \exp\left(N_{dg,k}\right) \exp\left(-\frac{\sum_{g' \neq g} r_{dg',k}\left(1 - r_{dg',k}\right)}{2\left(\alpha_k + \sum_{g' \neq g} r_{dg',k}\right)^2}\right)$$
(11)

where $N_{dg,k}$ is given by $0.5\left(\psi(a_{gk}) + \log b_{gk}\right) - 0.5\, a_{gk} b_{gk}\left((E_{dg} - m_{gk})^2 + v_{gk}^{-1}\right)$ and $r_{dg,k}$ should be normalized to sum to one over k.

M-step: using equation (9):

$$v_{gk} = v_0 + a_{gk} b_{gk} \sum_d r_{dg,k},$$
(12)
$$m_{gk} = \frac{1}{v_{gk}}\left[v_0 m_0 + a_{gk} b_{gk} \sum_d r_{dg,k} E_{dg}\right],$$
(13)
$$a_{gk} = a_0 + 0.5 \sum_d r_{dg,k},$$
(14)
$$\frac{1}{b_{gk}} = \frac{1}{b_0} + 0.5 \sum_d r_{dg,k}\left[(E_{dg} - m_{gk})^2 + \frac{1}{v_{gk}}\right].$$
(15)

We pursue the above iterative procedure until convergence of the lower bound $\mathcal{L}(R; \Theta)$, whose evaluation is given in the Appendix. Since $Z_{dg,k}$ determines the cluster for the observed data point $E_d$ at attribute g and $r_{dg,k}$ is its expectation, we intuitively assign data point $E_d$ to cluster $\arg\max_k \left\{\sum_g r_{dg,k} : k = 1, \ldots, K\right\}$. We can also perform model selection over the number of clusters based on the free-energy lower bound of marginalized VB. Experiments in the next section show that this approach is reasonable.
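
As a concrete illustration of these updates, the following sketch implements the E-step (11) and the M-step (12)-(15) with NumPy. This is a minimal sketch under our own conventions, not the authors' reference implementation: `E` is a D x G data matrix, `r` stores the responsibilities $r_{dg,k}$ as a D x G x K array, `alpha` is a symmetric Dirichlet parameter, and the update order within the coupled M-step equations (using the current `b` when refreshing `v` and `m`) is our own coordinate-ascent choice.

```python
import numpy as np
from scipy.special import digamma

def mvb_e_step(E, r, m, v, a, b, alpha):
    """E-step: update the responsibilities r_{dg,k} using equation (11)."""
    G = E.shape[1]
    # N_{dg,k} = 0.5*(psi(a_gk)+log b_gk) - 0.5*a_gk*b_gk*((E_dg - m_gk)^2 + 1/v_gk)
    N = 0.5 * (digamma(a) + np.log(b)) - 0.5 * a * b * ((E[:, :, None] - m) ** 2 + 1.0 / v)
    for g in range(G):
        s = r.sum(axis=1) - r[:, g, :]                      # sum_{g' != g} r_{dg',k}
        var = (r * (1 - r)).sum(axis=1) - r[:, g, :] * (1 - r[:, g, :])
        log_r = N[:, g, :] + np.log(alpha + s) - var / (2.0 * (alpha + s) ** 2)
        log_r -= log_r.max(axis=1, keepdims=True)           # numerical stability
        r_g = np.exp(log_r)
        r[:, g, :] = r_g / r_g.sum(axis=1, keepdims=True)   # normalize over k
    return r

def mvb_m_step(E, r, b, m0=0.0, v0=1.0, a0=20.0, b0=0.05):
    """M-step: update q(mu) = N(m, 1/v) and q(beta) = Gamma(a, b), equations (12)-(15)."""
    Rk = r.sum(axis=0)                                      # sum_d r_{dg,k}, shape (G, K)
    a = a0 + 0.5 * Rk                                       # (14)
    v = v0 + a * b * Rk                                     # (12), using the current b
    m = (v0 * m0 + a * b * np.einsum('dgk,dg->gk', r, E)) / v              # (13)
    inv_b = 1.0 / b0 + 0.5 * np.einsum('dgk,dgk->gk', r,
                                       (E[:, :, None] - m) ** 2 + 1.0 / v)  # (15)
    return m, v, a, 1.0 / inv_b

# A small driver (random initialization is our choice).
rng = np.random.default_rng(0)
D, G, K = 100, 20, 3
E_data = rng.standard_normal((D, G))
r = rng.dirichlet(np.ones(K), size=(D, G))
m, v = np.zeros((G, K)), np.ones((G, K))
a, b = 20.0 * np.ones((G, K)), 0.05 * np.ones((G, K))
for _ in range(50):
    r = mvb_e_step(E_data, r, m, v, a, b, alpha=1.0)
    m, v, a, b = mvb_m_step(E_data, r, b)
clusters = np.argmax(r.sum(axis=1), axis=1)  # assign sample d to argmax_k sum_g r_{dg,k}
```

In practice the fixed iteration count above would be replaced by monitoring the free-energy bound $\mathcal{L}(R; \Theta)$, as described in the Appendix.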

Results

We ran marginalized VB on three data sets. The first was the wine data set from the UCI Repository [11]: this has 178 samples and each sample has 13 features. This data set was chosen for the purpose of validating the proposed method since there are 3 distinct clusters present (derived from 3 cultivars). As more biologically relevant examples we then selected two cancer expression array datasets. The first of these was a lung cancer data set [12] consisting of 73 samples and 918 features. The second was a leukemia data set [13] with 90 samples and 500 features. All the data sets were normalized to zero mean and unit variance and the hyper-parameters m0, v0, a0, and b0 were chosen to have the same values in both standard VB and marginalized VB. Since the datasets are normalized and m0, v0 are hyper-parameters of the Gaussian prior distribution over the mean for the data, it is reasonable to choose m0 = 0, v0 = 1. For similar reasons, given a0, b0 are hyper-parameters of the Gamma prior distribution over the precision (inverse variance) of the data and the mean of a Gamma distributed random variable is a0b0, we chose a0 = 20 and b0 = 0.05 throughout these experiments.
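
In code, the preprocessing and hyperparameter settings described above amount to something like the following. This is a small sketch; the per-gene standardization and the variable names are our reading of the text, with v0 interpreted as the precision of the Gaussian prior over the means.

```python
import numpy as np

def standardize(E):
    """Normalize each feature (gene) of the D x G data matrix to zero mean and unit variance."""
    return (E - E.mean(axis=0)) / E.std(axis=0)

# Hyperparameters used throughout the experiments: a Gaussian prior with mean m0 and
# precision v0 on the cluster means, and a Gamma(a0, b0) prior (mean a0*b0 = 1) on the
# precisions of the data.
m0, v0 = 0.0, 1.0
a0, b0 = 20.0, 0.05
```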

First we compared the free-energy lower bounds of marginalized VB and standard VB based on 30 random initializations. In Figure 1 (top row) we observe an improvement in the free energy, as a function of iteration step, for marginalized VB over standard VB. In analogy to standard VB, marginalized VB can determine the appropriate number of soft clusters by estimating the free-energy bound given by equation (6), in contrast to the hold-out cross-validation procedure required for a maximum likelihood approach to LPD [5]. To investigate the effectiveness of this approach to model selection, free energies were averaged over 20 runs from different random initializations. As shown in Figure 1 (middle row), marginalized VB performed well in determining the correct number of clusters (three) in the UCI wine data set. For the cancer array datasets, the peak in the averaged free energy is less marked, with an indication of six soft clusters for the leukemia data set and seven clusters for the lung cancer data set.

Figure 1

Results for the wine data set (left column), lung cancer data set (middle column) and leukemia data set (right column). Top row (a-c): free energy bounds comparison (upper curve: MVB, lower curve: VB). Middle row (d-f): free energy (y-axis) versus K, the number of clusters. Bottom row (g-i): the normalized $\sum_g r_{dg,k}$ gives a confidence measure that sample d belongs to cluster k. For the two cancer datasets, samples separated by dashed lines belong to an identified class, e.g. adenocarcinoma samples or small cell lung cancer samples (figure (h), see text).

In the bottom row of Figure 1, we see that marginalized VB shows quite promising clustering results using the normalized $\sum_g r_{dg,k}$: these peaks indicate the confidence in the allocation of the dth sample to the kth cluster and accord well with known classifications. The lung cancer dataset of Garber et al [12] (middle column, Figure 1) consisted of 73 gene expression profiles from normal and tumour samples, with the tumours labelled as squamous, large cell, small cell and adenocarcinoma. The samples are in the order in which they are presented in the original paper [12], with the dashed lines showing their original principal sample groupings. As with Garber et al [12], we identified seven clusters in the data, with the adenocarcinoma samples falling into three separate clusters that correlate strongly with clinical outcome. For their ordering (which we follow), samples 1–19 belong to adenocarcinoma cluster 1, samples 20–26 belong to adenocarcinoma cluster 2, samples 27–32 are normal tissue samples, samples 33–43 are adenocarcinoma cluster 3, samples 44–60 are squamous cell carcinomas, samples 61–67 are small cell carcinomas and samples 68–73 are from large cell tumours.

As our last example, we applied the proposed MVB method to an oligonucleotide microarray dataset from 360 patients with acute lymphoblastic leukemia (ALL) from Yeoh et al [13]. ALL is known to have a number of subtypes with variable responses to chemotherapy. In many cases fusion genes are implicated in the genesis of the disease. For the Yeoh et al [13] dataset, samples were drawn from leukemias with rearrangements involving BCR-ABL, E2A-PBX1 or TEL-AML1, rearrangements of the MLL gene, a hyperdiploid karyotype (more than 50 chromosomes) and T lineage leukemias (T-ALL). The free energy is plotted in Figure 1 (right column, middle row) with a peak suggesting 6 subtypes. The dashed lines represent the original demarcations of groups according to known genetic rearrangement. Samples 1–15 are BCR-ABL, 16–42 are E2A-PBX1, 43–106 Hyperdiploid > 50, 107–126 MLL, 206–248 T-ALL, 249–327 TEL-AML1, 328–335 Group23 and 127–205 are labelled as Others. Some groupings, such as E2A-PBX1, are very distinct. However, the overall groupings are not as well defined as with the lung cancer data set.

Conclusion

We have proposed an efficient variational Bayesian inference method for LPD probabilistic models. By allowing the latent variables and model parameters to be dependent on each other, the method can provide a more accurate approximation than standard VB. The method also provides a principled approach to model selection via the free-energy bound. Promising clustering results were reported on lung cancer and leukemia data sets. Extensions of this method to semi-supervised clustering will be reported elsewhere.

Appendix

In this appendix we derive the EM-updates and free energy bound for MVB.

Derivation of updates

Noting that, for any d, g, $\sum_k Z_{dg,k} = 1$ and denoting the number of features by $G$, we obtain from equation (10):

$$\log p(E, Z|\Theta) = D \log\Gamma\left(\sum_k \alpha_k\right) - D \sum_k \log\Gamma(\alpha_k) - D \log\Gamma\left(\sum_k \alpha_k + G\right) + \sum_{d,k} \log\Gamma\left(\alpha_k + \sum_g Z_{dg,k}\right) + \sum_{d,g,k} Z_{dg,k} \log\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk}).$$
(16)

Since $\Gamma\left(\alpha_k + \sum_{g' \neq g} Z_{dg',k} + Z_{dg,k}\right) = \left(\alpha_k + \sum_{g' \neq g} Z_{dg',k}\right)^{Z_{dg,k}} \Gamma\left(\alpha_k + \sum_{g' \neq g} Z_{dg',k}\right)$ (using that $Z_{dg,k} \in \{0, 1\}$), putting this observation into $\log p(E, Z|\Theta)$ yields:

$$\mathbb{E}_{q^{\backslash dg}}\left(\log p(E, Z|\Theta)\right) = \sum_k Z_{dg,k}\left(\mathbb{E}_{q^{\backslash dg}}\left[\log\left(\alpha_k + \sum_{g' \neq g} Z_{dg',k}\right)\right] + \mathbb{E}_{q(\Theta)}\left[\log\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})\right]\right) + \text{constant terms},$$

where the constant terms are independent of $Z_{dg,k}$. Hence, substituting this into equation (8) we conclude that

$$r_{dg,k} \propto \exp\left(\mathbb{E}_{q(\Theta)}\left[\log\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})\right] + \mathbb{E}_{q^{\backslash dg}}\left[\log\left(\alpha_k + \sum_{g' \neq g} Z_{dg',k}\right)\right]\right).$$
(17)

To estimate the expectation of the log Normal density, we use the following standard results (e.g. [7]) for the Gamma and Normal distributions:

$$\mathbb{E}_{q(\beta)}[\beta_{gk}] = a_{gk} b_{gk}, \qquad \mathbb{E}_{q(\beta)}[\log\beta_{gk}] = \psi(a_{gk}) + \log b_{gk},$$

and

$$\mathbb{E}_{q(\mu)}[\mu_{gk}^2] = m_{gk}^2 + v_{gk}^{-1}, \qquad \mathbb{E}_{q(\mu)}[\mu_{gk}] = m_{gk}.$$

Consequently, simple manipulation yields:

$$\mathbb{E}_{q(\Theta)}\left[\log\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})\right]$$

equals, up to a constant term:

$$0.5\left(\psi(a_{gk}) + \log b_{gk}\right) - 0.5\, a_{gk} b_{gk}\left((E_{dg} - m_{gk})^2 + v_{gk}^{-1}\right).$$
(18)

We also use an approximation from [9] to estimate $\mathbb{E}_{q^{\backslash dg}}\left[\log\left(\alpha_k + \sum_{g' \neq g} Z_{dg',k}\right)\right]$. For this purpose, we observe, for any positive random variable x, that

$$\mathbb{E}\left(\log(\alpha_k + x)\right) \approx \log(\alpha_k + \mathbb{E}x) - \frac{\mathrm{Var}(x)}{2(\alpha_k + \mathbb{E}x)^2},$$

and $\mathbb{E}_{q^{\backslash dg}}\left[\sum_{g' \neq g} Z_{dg',k}\right] = \sum_{g' \neq g} r_{dg',k}$, $\mathrm{Var}\left(\sum_{g' \neq g} Z_{dg',k}\right) = \sum_{g' \neq g} r_{dg',k}(1 - r_{dg',k})$. Plugging the above observations into equation (17) yields the desired E-step updates.
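
The second-order (delta-method) approximation above is easy to sanity-check numerically; the following sketch compares it against a Monte Carlo estimate for a sum of independent Bernoulli variables (purely illustrative; the specific values of `alpha_k` and `r` are arbitrary choices of ours).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_k = 1.0
r = rng.uniform(0.05, 0.95, size=50)           # responsibilities r_{dg',k} for g' != g

# x = sum_{g'} Z_{dg',k} with Z_{dg',k} ~ Bernoulli(r_{dg',k}) under q(Z)
samples = (rng.uniform(size=(200000, r.size)) < r).sum(axis=1)
mc = np.mean(np.log(alpha_k + samples))        # Monte Carlo estimate of E[log(alpha_k + x)]

mean_x, var_x = r.sum(), (r * (1 - r)).sum()
approx = np.log(alpha_k + mean_x) - var_x / (2 * (alpha_k + mean_x) ** 2)

print(mc, approx)                               # the two values should agree closely
```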

The updates for q(Θ) are essentially the same as those in [6, 7], since the terms involving Θ in $\mathbb{E}_{q^{\backslash \mu}}\left[\log p(E, Z|\Theta)\right]$ are exactly the same; that is, Θ only appears in the Normal distribution. Hence, noting that the product of two Gamma (Normal) distributions is again Gamma (Normal) in form, we can obtain the M-step updates from equations (16) and (9).

Free energy bound

The free-energy lower bound of marginalized VB is defined by equation (6):

$$\mathcal{L}(R; \Theta) = \mathbb{E}_q\left[\log p(E, Z|\Theta)\right] - \mathbb{E}_{q(Z)}\left[\log q(Z)\right] - \mathrm{KL}\left(q(\mu)\,\|\,p(\mu)\right) - \mathrm{KL}\left(q(\beta)\,\|\,p(\beta)\right).$$

From the fact that $\Gamma(x + 1) = x\Gamma(x)$ for any $x > 0$, we know that $\Gamma\left(\alpha_k + \sum_g Z_{dg,k}\right) = \Gamma(\alpha_k) \prod_{g=1}^{G}\left(\alpha_k + \sum_{j=g+1}^{G} Z_{dj,k}\right)^{Z_{dg,k}}$, where we use the convention $\sum_{j=G+1}^{G} = 0$. Putting this equation into the expression (10) for the log likelihood, we obtain:

$$\begin{aligned}
\mathbb{E}_q\left[\log p(E, Z|\Theta)\right] &= D \log\Gamma\left(\sum_k \alpha_k\right) - D \sum_k \log\Gamma(\alpha_k) - D \log\Gamma\left(\sum_k \alpha_k + G\right) \\
&\quad + \sum_{d,k} \mathbb{E}_q\left[\log\Gamma\left(\alpha_k + \sum_g Z_{dg,k}\right)\right] + \sum_{d,g,k} r_{dg,k}\, \mathbb{E}_q\left[\log\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})\right] \\
&= D \log\Gamma\left(\sum_k \alpha_k\right) - D \log\Gamma\left(\sum_k \alpha_k + G\right) \\
&\quad + \sum_{d,g,k} r_{dg,k}\left(\mathbb{E}_q\left[\log\left(\alpha_k + \sum_{j \geq g+1} Z_{dj,k}\right)\right] + \mathbb{E}_q\left[\log\mathcal{N}(E_{dg}|\mu_{gk}, \beta_{gk})\right]\right).
\end{aligned}$$

Since we used the convention $\sum_{j \geq G+1} Z_{dj,k} = 0$, we have $\mathbb{E}_q\left[\log\left(\alpha_k + \sum_{j \geq G+1} Z_{dj,k}\right)\right] = \log\alpha_k$. It remains to estimate the term $\mathbb{E}_q\left[\log\left(\alpha_k + \sum_{j \geq g+1} Z_{dj,k}\right)\right]$ for $g = 1, \ldots, G - 1$. To this end, we use the above approximation again to get:

\[
\mathbb{E}_q\Big[\log\Big(\alpha_k + \sum_{j \ge g+1} Z_{dj,k}\Big)\Big]
\approx \log\Big(\alpha_k + \sum_{j \ge g+1} r_{dj,k}\Big)
- \frac{\sum_{j \ge g+1} r_{dj,k}\,(1 - r_{dj,k})}{2\big(\alpha_k + \sum_{j \ge g+1} r_{dj,k}\big)^2}.
\]
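The tail sums in this approximation can be accumulated in a single reverse pass over the genes. The following is a minimal NumPy sketch of that computation, not taken from the paper; the array layout (`r` of shape D × G × K holding r_{dg,k}, `alpha` of length K) and the function name are illustrative assumptions.

```python
import numpy as np

def expected_log_alpha_plus_tail(r, alpha):
    """Second-order approximation to E_q[log(alpha_k + sum_{j >= g+1} Z_{dj,k})].

    r     : (D, G, K) array of responsibilities r_{dg,k}
    alpha : (K,) Dirichlet parameters alpha_k
    Returns a (D, G, K) array; the entry at the last gene reduces to
    log(alpha_k), matching the empty-sum convention.
    """
    # Suffix sums over genes j >= g+1 (exclusive of gene g itself).
    incl_mean = np.flip(np.cumsum(np.flip(r, axis=1), axis=1), axis=1)
    tail_mean = incl_mean - r
    incl_var = np.flip(np.cumsum(np.flip(r * (1.0 - r), axis=1), axis=1), axis=1)
    tail_var = incl_var - r * (1.0 - r)
    shifted = alpha[None, None, :] + tail_mean
    return np.log(shifted) - tail_var / (2.0 * shifted ** 2)
```

Computing the suffix sums with a reversed cumulative sum keeps the cost of this term at O(DGK) per iteration rather than O(DG²K).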

Consequently, we conclude:

\[
\begin{aligned}
\mathbb{E}_q[\log p(E, Z \mid \Theta)]
&= D \log \Gamma\Big(\textstyle\sum_k \alpha_k\Big) - D \log \Gamma\Big(\textstyle\sum_k \alpha_k + G\Big) \\
&\quad + \sum_{d,g,k} r_{dg,k} \Bigg[ \log\Big(\alpha_k + \sum_{j \ge g+1} r_{dj,k}\Big)
- \frac{\sum_{j \ge g+1} r_{dj,k}\,(1 - r_{dj,k})}{2\big(\alpha_k + \sum_{j \ge g+1} r_{dj,k}\big)^2} \Bigg] \\
&\quad + \sum_{d,g,k} r_{dg,k} \Big[ -0.5 \log 2\pi + 0.5\big(\psi(a_{gk}) + \log b_{gk}\big)
- 0.5\, a_{gk} b_{gk} \Big( \tfrac{1}{v_{gk}} + (E_{dg} - m_{gk})^2 \Big) \Big],
\end{aligned}
\]
(19)

where the convention $\sum_{g \ge G+1}^{G}(\,\cdot\,) = 0$ for empty sums is used again.
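Putting the pieces of (19) together, a possible implementation is sketched below. It is not the authors' code: it assumes the shape-scale Gamma parameterization implied by $\mathbb{E}[\beta_{gk}] = a_{gk} b_{gk}$ and $\mathbb{E}[\log \beta_{gk}] = \psi(a_{gk}) + \log b_{gk}$, and the array names (`E` for the D × G expression matrix, `m`, `v`, `a`, `b` for the G × K variational parameters of $q(\mu)$ and $q(\beta)$) are chosen for illustration.

```python
import numpy as np
from scipy.special import gammaln, digamma

def expected_log_joint(E, r, alpha, m, v, a, b):
    """Approximate E_q[log p(E, Z | Theta)], following the structure of Eq. (19)."""
    D, G, K = r.shape
    # Suffix sums over genes j >= g+1, as in the second-order approximation above.
    tail_mean = np.flip(np.cumsum(np.flip(r, axis=1), axis=1), axis=1) - r
    tail_var = (np.flip(np.cumsum(np.flip(r * (1 - r), axis=1), axis=1), axis=1)
                - r * (1 - r))
    shifted = alpha[None, None, :] + tail_mean
    dirichlet_part = np.log(shifted) - tail_var / (2.0 * shifted ** 2)
    # Expected Gaussian log-likelihood under q(mu_gk) = N(m_gk, 1/v_gk)
    # and q(beta_gk) = Gamma(a_gk, b_gk) (shape-scale).
    gauss_part = (-0.5 * np.log(2.0 * np.pi)
                  + 0.5 * (digamma(a) + np.log(b))
                  - 0.5 * a * b * (1.0 / v + (E[:, :, None] - m[None]) ** 2))
    return (D * gammaln(alpha.sum())
            - D * gammaln(alpha.sum() + G)
            + np.sum(r * (dirichlet_part + gauss_part)))
```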

In addition,

\[
\mathbb{E}_{q(Z)}[\log q(Z)] = \sum_{d,g,k} r_{dg,k} \log r_{dg,k}.
\]
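As a sketch (again with the assumed D × G × K layout for `r`), this term is a one-liner; the small floor guards against $r_{dg,k} = 0$, for which the convention $0 \log 0 = 0$ applies.

```python
import numpy as np

def neg_entropy_qz(r, eps=1e-12):
    """E_{q(Z)}[log q(Z)] = sum_{d,g,k} r_{dg,k} log r_{dg,k}, with 0 log 0 := 0."""
    return np.sum(r * np.log(np.maximum(r, eps)))
```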

For the KL divergences, we have:

\[
\mathrm{KL}\big(q(\mu)\,\|\,p(\mu)\big) = \sum_{g,k} \Big[\, 0.5 \log \frac{v_{gk}}{v_0} + 0.5\, v_0 \big(m_{gk} - m_0\big)^2 + 0.5 \Big( \frac{v_0}{v_{gk}} - 1 \Big) \Big],
\]
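Reading $v_{gk}$ and $v_0$ as precisions (consistent with the $1/v_{gk}$ term in (19)), this divergence can be evaluated as in the sketch below; the function name and the scalar hyperparameters `m0`, `v0` are assumptions for illustration.

```python
import numpy as np

def kl_gaussian(m, v, m0, v0):
    """KL(q(mu) || p(mu)) summed over (g, k); v and v0 are precisions."""
    return np.sum(0.5 * np.log(v / v0)
                  + 0.5 * v0 * (m - m0) ** 2
                  + 0.5 * (v0 / v - 1.0))
```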

and

\[
\begin{aligned}
\mathrm{KL}\big(q(\beta)\,\|\,p(\beta)\big) = \sum_{g,k} \Big[\, & (a_{gk} - 1)\,\psi(a_{gk}) - \log b_{gk} - a_{gk} - \log \Gamma(a_{gk}) \\
& + \log \Gamma(a_0) + a_0 \log b_0 - (a_0 - 1)\big(\psi(a_{gk}) + \log b_{gk}\big) + \frac{a_{gk} b_{gk}}{b_0} \Big].
\end{aligned}
\]
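A corresponding sketch for the Gamma-Gamma divergence, again assuming the shape-scale parameterization used above (the first four terms are $\mathbb{E}_q[\log q(\beta)]$, the last four are $-\mathbb{E}_q[\log p(\beta)]$):

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_gamma(a, b, a0, b0):
    """KL(q(beta) || p(beta)) summed over (g, k), shape-scale parameterization."""
    return np.sum((a - 1.0) * digamma(a) - np.log(b) - a - gammaln(a)
                  + gammaln(a0) + a0 * np.log(b0)
                  - (a0 - 1.0) * (digamma(a) + np.log(b))
                  + a * b / b0)
```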

References

  1. Eisen MB, et al: Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci USA. 1998, 95: 14863-14868. 10.1073/pnas.95.25.14863.


  2. Tavazoie S, et al: Systematic determination of genetic network architecture. Nature Genetics. 1999, 22: 281-285. 10.1038/10343.


  3. Tamayo P, et al: Interpreting patterns of gene expression with self-organizing maps: methods and application to hematopoietic differentiation. Proc Natl Acad Sci USA. 1999, 96: 2907-2912. 10.1073/pnas.96.6.2907.


  4. Blei DM, Ng AY, Jordan MI: Latent Dirichlet Allocation. Journal of Machine Learning Research. 2003, 3: 993-1022. 10.1162/jmlr.2003.3.4-5.993.


  5. Rogers S, Girolami M, Campbell C, Breitling R: The Latent Process Decomposition of cDNA Microarray Datasets. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2005, 2: 143-156. 10.1109/TCBB.2005.29.


  6. Carrivick L, Campbell C: A Bayesian approach to the analysis of microarray datasets using variational inference. Journal submission. 2007


  7. Attias H: A variational Bayesian framework for graphical models. Advances in Neural Information Processing Systems. 2002, 12: 209-215.


  8. Bishop CM: Pattern recognition and machine learning. 2006, (Series: Information Science & Statistics), Springer


  9. Teh YW, Newman D, Welling M: A collapsed variational bayesian inference algorithm for latent dirichlet allocation. Advances in Neural Information Processing Systems. 2006, 19: 1353-1360.


  10. Jordan MI, Ghahramani Z, Jaakkola T, Saul LK: An introduction to variational methods for graphical models. Machine Learning. 1999, 37: 183-233. 10.1023/A:1007665907178.


  11. Blake CL, Newman DJ, Hettich S, Merz CJ: UCI repository of machine learning databases. 1998, [http://www.ics.uci.edu/~mlearn/mlrepository.html]


  12. Garber E, et al: Diversity of gene expression in adenocarcinoma of the lung. Proc Natl Acad Sci U S A. 2001, 98: 13784-13789. 10.1073/pnas.221451398.


  13. Yeoh E-J, et al: Classification, subtype discovery, and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profiling. Cancer Cell. 2002, 1: 133-143. 10.1016/S1535-6108(02)00032-6.



Acknowledgements

This work is supported by EPSRC grant EP/E027296/1.

This article has been published as part of BMC Proceedings Volume 2 Supplement 4, 2008: Selected Proceedings of Machine Learning in Systems Biology: MLSB 2007. The full contents of the supplement are available online at http://www.biomedcentral.com/1753-6561/2?issue=S4.

Author information


Corresponding author

Correspondence to Yiming Ying.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

YY and CC conceived the method and drafted the manuscript. PL and YY coded the algorithm and implemented the experiments. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ying, Y., Li, P. & Campbell, C. A marginalized variational bayesian approach to the analysis of array data. BMC Proc 2 (Suppl 4), S7 (2008). https://doi.org/10.1186/1753-6561-2-s4-s7
