I have a symmetric matrix $A$ with zeroes all along the diagonal, i.e. $A_{ii}=0$.
I cannot change the off-diagonal elements of this matrix; I can only change the diagonal elements. I need this matrix to be positive definite.
One way to ensure this is as follows:
Let $\lambda'$ be the absolute value of the most negative eigenvalue and transform $A\mapsto A + (\lambda'+\epsilon)I_{na}$ for some small $\epsilon>0$. Notice this leaves the off-diagonal elements unchanged, but the result is now positive definite:
$(A+(\lambda'+\epsilon)I_{na})x_{i}=(\lambda_{i}+\lambda'+\epsilon)x_{i}=\lambda_{i}^{new}x_{i}$,
where $(x_{i},\lambda_{i})$ is an eigenpair of $A$. Since $\lambda_{i}+\lambda'\geq 0$ for every $i$, the eigenvalues of the shifted matrix are all positive. (Without the $\epsilon$, the most negative eigenvalue is shifted exactly to zero, which gives only positive *semi*definiteness.)
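As a quick numerical illustration of this uniform shift (a sketch in Python/NumPy; the particular matrix is an arbitrary stand-in, not from the question):

```python
import numpy as np

# Arbitrary symmetric matrix with zero diagonal (stand-in for A).
A = np.array([[ 0.0,  2.0, -1.0],
              [ 2.0,  0.0,  3.0],
              [-1.0,  3.0,  0.0]])

lam_min = np.linalg.eigvalsh(A).min()   # most negative eigenvalue
lam_prime = abs(lam_min)
eps = 1e-6                              # small margin for strict definiteness
A_shifted = A + (lam_prime + eps) * np.eye(A.shape[0])

# All eigenvalues of A_shifted are now > 0, and since only the diagonal
# was modified, the off-diagonal entries are untouched.
```

Note that without `eps` the smallest eigenvalue of `A_shifted` would be exactly zero (up to round-off), i.e. only semidefinite.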
I fear that this solution is suboptimal: in my application I want to add only as much as I need and no more. I would like to know whether I can ensure positive definiteness more generally by performing
$A+D$
where $D$ is some diagonal matrix.
[Application: Statistics. $A$ is related to the covariance of some augmented data. I want the added diagonal to be as small as possible so as to reduce the leverage of the missing data.]
EDIT: Changed $X_{a}^{T}X_{a}$ to $A$. I was getting ahead of myself; once I get my desired positive definite matrix I want to set $A_{pd}=A+D=X^{T}_{a}X_{a}$ and take the Cholesky decomposition to get $X_{a}$.
Yes, as Rahul stated in the comments, this is a semidefinite program, and a relatively straightforward one at that. In fact, it's very similar to the so-called educational testing problem: $$\begin{array}{ll}\text{maximize} & \textstyle\sum_i D_{ii} \\ \text{subject to} & \Sigma - D \succeq 0 \\ & D \succeq 0 \end{array}$$ In the ETP, $\Sigma$ is already positive semidefinite, and you're subtracting as large a diagonal matrix as possible without changing that. In contrast, your problem (with $\Sigma$ playing the role of your $A$) is $$\begin{array}{ll}\text{minimize} & \textstyle\sum_i D_{ii} \\ \text{subject to} & \Sigma + D \succeq 0 \\ & D \succeq 0\end{array}$$ Not surprisingly, the methods for solving these two problems are extremely similar. Of course, this assumes that you accept $\sum_i D_{ii}$ as a measure of how much you are perturbing the matrix. You could also consider $\max_i D_{ii}$, $\sum_i D_{ii}^2$, etc.; as long as the measure remains convex, the problem is readily solved.
Any solver supporting semidefinite programming can handle this. Some software I wrote for MATLAB, CVX, makes this easy; so would similar packages such as YALMIP (also for MATLAB) and CVXOPT (for Python). In CVX, your model looks like this: