A brief look at conditional distributions

One element of a multivariate normal

Suppose

\[x \sim \mathcal{N}(\mu, Q)\]

where \(Q\) is a precision matrix; throughout, the second parameter of \(\mathcal{N}(\cdot, \cdot)\) denotes a precision rather than a variance.

We’re looking for \(x_i \vert x_{-i} \sim \mathcal{N}(?, ?)\)

Then the log-density contains, up to a factor of \(-\tfrac{1}{2}\) and an additive constant:

\[(x - \mu)^\intercal Q (x - \mu)\]

Extracting the terms involving \(x_i\):

\[\begin{equation} Q_{ii}(x_i - \mu_i)^2 + 2\sum_{j \ne i}Q_{ij}(x_i - \mu_i)(x_j - \mu_j) \end{equation}\]

\[\begin{equation} Q_{ii}x_i^2 - 2Q_{ii}x_i\mu_i + 2\sum_{j \ne i}Q_{ij}x_i(x_j - \mu_j) \end{equation}\]
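
Completing the square in \(x_i\), this is, up to an additive constant,

\[\begin{equation} Q_{ii}\left(x_i - \mu_i + \sum_{j \ne i}\frac{Q_{ij}}{Q_{ii}}(x_j - \mu_j)\right)^2 \end{equation}\]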

So that

\[\begin{equation} x_i \vert x_{-i} \sim \mathcal{N}\left(\mu_i - \sum_{j \ne i}\frac{Q_{ij}}{Q_{ii}}(x_j - \mu_j),\ Q_{ii}\right) \end{equation}\]
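
As a quick sanity check, here is a minimal NumPy sketch comparing the precision-based formula above against the standard covariance-based conditioning formulas; all variable names are illustrative, not from the original.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4
A = rng.normal(size=(d, d))
Q = A @ A.T + d * np.eye(d)   # random symmetric positive-definite precision matrix
mu = rng.normal(size=d)
x = rng.normal(size=d)        # an arbitrary point to condition on
i = 2
idx = np.delete(np.arange(d), i)

# Conditional moments from the precision-based formula above
cond_mean = mu[i] - Q[i, idx] @ (x[idx] - mu[idx]) / Q[i, i]
cond_var = 1.0 / Q[i, i]      # N(mean, precision) => variance is 1/Q_ii

# The same moments via the usual covariance-based conditioning
S = np.linalg.inv(Q)          # covariance matrix
ref_mean = mu[i] + S[i, idx] @ np.linalg.solve(S[np.ix_(idx, idx)], x[idx] - mu[idx])
ref_var = S[i, i] - S[i, idx] @ np.linalg.solve(S[np.ix_(idx, idx)], S[idx, i])

assert np.isclose(cond_mean, ref_mean) and np.isclose(cond_var, ref_var)
```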

In the conjugate setting

We can extend this to the conjugate setting, where we wish to sample one element of \(\mu\):

\[\begin{equation} \begin{split} y &\sim \mathcal{N}(\mu, Q) \\ \mu_i &\sim \mathcal{N}(\mu_0, \tau_0) \quad \text{independently for each } i \end{split} \end{equation}\]

This can be particularly useful when, e.g., some elements of \(\mu\) are observed while others are not, and you wish to impute the missing ones.

The log-density then contains, again up to the same factor and constants:

\[\begin{equation} (y - \mu)^\intercal Q (y - \mu) + \tau_0\sum_i(\mu_i - \mu_0)^2 \end{equation}\]

Extracting the terms involving \(\mu_i\):

\[\begin{equation} Q_{ii}(y_i - \mu_i)^2 + 2\sum_{j \ne i}Q_{ij}(y_i - \mu_i)(y_j - \mu_j) + \tau_0\mu_i^2 - 2\tau_0\mu_i\mu_0 \end{equation}\]

\[\begin{equation} Q_{ii}\mu_i^2 - 2Q_{ii}y_i\mu_i + 2\sum_{j \ne i}Q_{ij}\mu_i(\mu_j - y_j) + \tau_0\mu_i^2 - 2\tau_0\mu_i\mu_0 \end{equation}\]
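
Collecting the quadratic and linear terms in \(\mu_i\), this is

\[\begin{equation} (Q_{ii} + \tau_0)\mu_i^2 - 2\mu_i\left(\tau_0\mu_0 + Q_{ii}y_i - \sum_{j \ne i}Q_{ij}(\mu_j - y_j)\right) \end{equation}\]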

The precision is then \(Q_{ii} + \tau_0\), and completing the square gives

\[\begin{equation} \mu_i \sim \mathcal{N}\left(\frac{1}{Q_{ii} + \tau_0} \left(\tau_0\mu_0 + Q_{ii}y_i - \sum_{j \ne i}Q_{ij}(\mu_j - y_j)\right),\ Q_{ii} + \tau_0\right) \end{equation}\]

This is exactly the “usual” conjugate update for a normal mean, combined with the conditional distribution of \(\mu_i\) given \(\mu_{-i}\) (as above, with \(x\) and \(\mu\) exchanged).
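
For concreteness, here is a minimal sketch of a single-site Gibbs sweep over \(\mu\) under this model, using the \(\mathcal{N}(\text{mean}, \text{precision})\) parameterization above; the function and variable names are illustrative assumptions, not from any particular library.

```python
import numpy as np

def gibbs_sweep_mu(mu, y, Q, mu0, tau0, rng):
    """One systematic-scan Gibbs sweep over the elements of mu,
    under y ~ N(mu, Q^{-1}) with independent N(mu0, 1/tau0) priors."""
    d = len(mu)
    for i in range(d):
        idx = np.delete(np.arange(d), i)
        prec = Q[i, i] + tau0
        mean = (tau0 * mu0 + Q[i, i] * y[i]
                - Q[i, idx] @ (mu[idx] - y[idx])) / prec
        mu[i] = rng.normal(mean, 1.0 / np.sqrt(prec))  # sd = precision^(-1/2)
    return mu
```

Since the full conditional is available in closed form, repeated sweeps sample from the posterior over \(\mu\) without any accept/reject step.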