$$ \begin{array}{ll} \underset {x} {\text{maximize}} & \left(\sum\limits_{i=1}^m \dfrac{1}{a_i^T x + b_i}\right)^{-1} \\ \text{subject to} & Ax > b \end{array} $$ where $a_i^T$ is the $i$-th row of the matrix $A$. Transform this optimization problem into a second-order cone program (SOCP).
The given solution is to minimize $1^Tt$ subject to $t_i(a_i^Tx+b_i) \ge 1,\ i = 1,2,\ldots,m$, and $t \ge 0$. (Note it must be *minimize*: maximizing $1^Tt$ under these constraints would be unbounded, since every $t_i$ can grow without limit.)
Can someone explain to me why these problems are equivalent? I don't see how the transformation has been carried out. I think some change of variables has been made, like $t_i = 1/(a_i^Tx+b_i)$. However, if so, how does one get $t_i(a_i^Tx+b_i) \ge 1$? I learned that a change of variables yields an equivalent problem if the transformation is one-to-one and its image covers the domain of the original function, and I don't see that here. Also, how was the $^{-1}$ (inverse) eliminated in the "transformed" problem (SOCP)?
The objective $$\mathrm{maximize}\ \frac{1}{\sum_i \frac{1}{a_i^Tx+b_i}}$$ is equivalent to minimizing the denominator $$\mathrm{minimize}\ {\sum_i \frac{1}{a_i^Tx+b_i}},$$ which in turn is equivalent to $$\mathrm{minimize}\ {\sum_i t_i}\quad \mathrm{subject\ to}\quad t_i\geq\frac{1}{a_i^Tx+b_i},\ i=1,\ldots,m,$$ where the $t_i$ are new auxiliary variables. The last step holds because at the optimum the new constraints are satisfied with equality (if not, one could decrease some $t_i$ and get a better objective). This is just a special case of the more general rule that $$\mathrm{minimize}_{x}\ f(x)$$ is the same as $$\mathrm{minimize}_{t,x}\ t\quad \mathrm{subject\ to}\ t\geq f(x).$$ Finally, multiplying each constraint through by $a_i^Tx+b_i>0$ gives the form $t_i(a_i^Tx+b_i)\geq 1$ quoted in the question.
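To close the gap to an SOCP, one can use a standard fact (not spelled out in the quoted solution): each hyperbolic constraint $t_iu_i \ge 1$ with $t_i, u_i \ge 0$, where $u_i = a_i^Tx + b_i$, is exactly a second-order cone constraint:

$$ t_i u_i \geq 1,\quad t_i \geq 0,\ u_i \geq 0 \quad\Longleftrightarrow\quad \left\| \begin{pmatrix} 2 \\ t_i - u_i \end{pmatrix} \right\|_2 \leq t_i + u_i, $$

which follows from the identity $(t_i+u_i)^2 - (t_i-u_i)^2 = 4\,t_iu_i$.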
Have a look at the MOSEK Modeling Cookbook for examples of this kind in conic quadratic optimization.
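To see the epigraph step concretely, here is a small pure-Python sketch (the instance $A$, $b$, $x$ is made up for illustration): for any fixed feasible $x$, the smallest feasible $t$ in the reformulated problem is $t_i = 1/(a_i^Tx+b_i)$, so minimizing $1^Tt$ recovers the original sum $\sum_i 1/(a_i^Tx+b_i)$.

```python
# Made-up instance: 3 constraints, 2 variables, and a fixed feasible x.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 0.5]
x = [0.5, 0.25]

# Denominators u_i = a_i^T x + b_i; they must be positive.
u = [sum(aij * xj for aij, xj in zip(ai, x)) + bi for ai, bi in zip(A, b)]
assert all(ui > 0 for ui in u)

# For fixed x, the constraint t_i * u_i >= 1 means t_i >= 1/u_i, so the
# smallest feasible t_i (the minimizer of 1^T t over t) is exactly 1/u_i.
t_opt = [1.0 / ui for ui in u]
assert all(ti * ui >= 1 - 1e-12 for ti, ui in zip(t_opt, u))

# Hence min 1^T t equals the original objective's denominator at this x.
print(sum(t_opt))  # same value as sum_i 1/(a_i^T x + b_i)
```

Any larger $t$ is also feasible but gives a strictly worse objective, which is why the inequality constraints are tight at the optimum.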