From Boyd & Vandenberghe's *Convex Optimization*, exercise 9.30, we have $$ \min\; f(x) = -\sum_{i=1}^m\log(1 - a_i^Tx) - \sum_{i=1}^n\log(1-x_i^2) $$ where $a_i^T$ is a row of the matrix $A$,
with variables $x \in {\Bbb R}^n$ and $\operatorname{dom} f = \left\{ x \mid a_i^Tx < 1,\: i=1,\dots,m, \: |x_i| < 1, i=1, \dots, n \right\}$.
I am unable to express the gradient in terms of the matrix $A$. So far, using the chain rule, I can compute the partial derivative of $f(x)$ w.r.t. $x_j$: $$ \frac{\partial f}{\partial x_j} = \sum_{i=1}^m\frac{a_{ij}}{1-a_i^Tx} + \frac{1}{1-x_j} - \frac{1}{1+x_j}$$ and I am not sure if it's correct. Summing the $a_{ij}$ terms over $i$ suggests that $A^T$ appears in the gradient, but I am stumped by the $\frac{1}{1 - a_i^Tx}$ factor.
How do I obtain $\nabla f(x)$ in terms of the matrix $A$, i.e., without the explicit $\sum$? Also, can $f(x)$ itself be expressed in terms of $A$?
Let's introduce some notation for the elementwise/Hadamard and trace/Frobenius products $$\eqalign{ A &= B\odot C &\implies A_{ij}=B_{ij} C_{ij} \cr \alpha &= B:C = {\rm tr}(B^TC) &\implies \alpha = \sum_{i,j}B_{ij} C_{ij} \cr }$$ And for Hadamard division: $D=\frac{B}{C}\implies D_{ij}=\frac{B_{ij}}{C_{ij}}$
For typing convenience, let's define the vector variables $$\eqalign{ w &= (1_n-x\odot x) &\implies dw = -2x\odot dx \cr y &= (1_m-Ax) &\implies dy = -A\,dx \cr u &= \frac{1_n}{w},\,\,\,\,\,v = \frac{1_m}{y} \cr }$$ where $1_p\in{\mathbb R}^p$ denotes a vector of all ones. (Note that since the $a_i^T$ are the rows of $A$, the vector with components $a_i^Tx$ is $Ax$.)
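These definitions are easy to check numerically. A minimal NumPy sketch (the sample $A$, $x$, $dx$ are placeholders of my own; rows of $A$ are the $a_i^T$ from the question) confirms that the first-order changes in $w$ and $y$ match the differentials above:

```python
import numpy as np

# Placeholder problem data: A is m x n with rows a_i^T, x strictly interior.
rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n))
x = 0.1 * rng.standard_normal(n)
dx = 1e-6 * rng.standard_normal(n)   # small perturbation

w = 1 - x * x            # w = 1_n - x⊙x
y = 1 - A @ x            # y = 1_m - A x
u, v = 1 / w, 1 / y      # Hadamard reciprocals

# First-order changes agree with dw = -2 x⊙dx and dy = -A dx
w2 = 1 - (x + dx) * (x + dx)
y2 = 1 - A @ (x + dx)
print(np.allclose(w2 - w, -2 * x * dx, atol=1e-10))   # → True  (up to O(|dx|^2))
print(np.allclose(y2 - y, -A @ dx))                    # → True  (y is affine in x)
```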
Using elementwise evaluation of the $\log$ functions, we can write down your function and find its differential and gradient $$\eqalign{ f &= -1_m:\log(y) -1_n:\log(w) \cr df &= -1_m:\frac{dy}{y} -1_n:\frac{dw}{w} \cr &= -v:dy -u:dw \cr &= v:A\,dx + u:(2x\odot dx) \cr &= (A^Tv + 2x\odot u):dx \cr \frac{\partial f}{\partial x} &= A^Tv + 2x\odot u \,\,\,=\,\, A^T\bigg(\frac{1_m}{1_m-Ax}\bigg) + \frac{2x}{1_n-x\odot x} \cr }$$ To find the optimal value, set this gradient to zero and solve for $x$. You could derive a simple fixed-point iteration $$\eqalign{ 2x\odot u &= -A^Tv \cr x_+ &= -\tfrac{1}{2}\,w\odot(A^Tv) \cr }$$ or use the gradient in a more sophisticated method such as nonlinear conjugate gradients.
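As a sanity check on the formula $\nabla f = A^Tv + 2x\odot u$, here is a short NumPy sketch (the names `f_val`/`grad_f` and the random test data are mine, not from the text; $A$ is $m\times n$ with rows $a_i^T$, as in the question) that verifies the gradient against central finite differences:

```python
import numpy as np

def f_val(A, x):
    """Barrier objective f(x) = -sum_i log(1 - a_i^T x) - sum_j log(1 - x_j^2)."""
    return -np.sum(np.log(1 - A @ x)) - np.sum(np.log(1 - x**2))

def grad_f(A, x):
    """Gradient A^T (1_m / (1_m - A x)) + 2x / (1_n - x⊙x)."""
    v = 1.0 / (1 - A @ x)          # reciprocal of y = 1_m - A x
    u = 1.0 / (1 - x**2)           # reciprocal of w = 1_n - x⊙x
    return A.T @ v + 2 * x * u

# Finite-difference check at an interior point (x = 0 is always in dom f)
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
x = np.zeros(n)
g = grad_f(A, x)
eps = 1e-6
g_fd = np.array([
    (f_val(A, x + eps * e) - f_val(A, x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])
print(np.allclose(g, g_fd, atol=1e-4))   # → True
```

The same `grad_f` can be dropped straight into a gradient-descent or nonlinear-CG loop, provided the line search keeps the iterates inside $\operatorname{dom} f$.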