I want to solve for a matrix $\Omega$ from the equation $\sum_k (\Omega + \Theta_k)^{-1} = Q$. The matrices $Q$ and $\Theta_k$, $k=1,\dots,K$, are known and positive definite, and $\Omega$ must also be positive definite. All matrices are large (a few thousand rows and columns). My questions are:

(1) Is there a closed-form solution? How do I simplify a sum of inverses of matrix sums?

(2) I'm also fine with a numerical solution, but how do I formulate the problem? As an optimization problem minimizing something like $f(\Omega) = \|\sum_k (\Omega + \Theta_k)^{-1} - Q\|$? Should I minimize the Frobenius norm (analogous to minimizing the $L_2$ norm in a least-squares problem)? Given the constraint that $\Omega$ is positive definite, can this be solved by semidefinite programming, and how would I reformulate it as a linear/semidefinite program? I don't have much background in linear programming, so I would prefer plain gradient descent over LP, though I'm willing to use LP if I know how.
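To make the gradient-descent idea concrete, here is a minimal sketch of what I mean (the function name, step size, and iteration count are my own arbitrary choices). Writing $A_k = \Omega + \Theta_k$ and $R = \sum_k A_k^{-1} - Q$, the gradient of the squared Frobenius norm is $\nabla f(\Omega) = -2\sum_k A_k^{-1} R A_k^{-1}$, and positive definiteness is maintained by clipping eigenvalues after each step:

```python
import numpy as np

def solve_omega(thetas, Q, omega0, lr=0.1, iters=500, eps=1e-8):
    """Sketch: gradient descent on f(Omega) = ||sum_k (Omega+Theta_k)^{-1} - Q||_F^2,
    projecting onto the positive-definite cone after each step."""
    Omega = omega0.copy()
    for _ in range(iters):
        invs = [np.linalg.inv(Omega + Th) for Th in thetas]  # A_k^{-1}
        R = sum(invs) - Q                                    # residual
        # df/dOmega = -2 * sum_k A_k^{-1} R A_k^{-1}
        grad = -2 * sum(Ai @ R @ Ai for Ai in invs)
        Omega = Omega - lr * grad
        # symmetrize, then clip eigenvalues to keep Omega positive definite
        w, V = np.linalg.eigh((Omega + Omega.T) / 2)
        Omega = (V * np.clip(w, eps, None)) @ V.T
    return Omega
```

This is only a dense prototype; at a few thousand rows and columns one would exploit the sparsity of $\Theta_k$ and $\Omega$ (sparse factorizations rather than explicit inverses) and a line search rather than a fixed step size.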

This problem comes from estimating the inverse covariance matrix of a multivariate Gaussian distribution.

EDIT: Both $\Theta_k$ and $\Omega$ are sparse, if that helps.
