Given the following convex optimization problem
$\min_{x,p} ||x|| - p$
subject to $p > 0$
Can I convert the above into an unconstrained convex optimization problem by substituting $c = \log(p)$ and instead solving
$\min_{x,c} ||x|| - \exp(c)$?
If this is possible, I would be interested in literature on this topic, but I couldn't find much.
Edit: This is just a minimal example. The problem in the example is of course unbounded, but I think my full optimization problem adds more complexity than necessary to answer my question.
I would suggest that you reconsider sparing us from the complexity of your problem. I think it's clear that we're all missing something here.
First of all: as I stated in the comment above, your proposed reformulation is not convex. Since $\exp(c)$ is convex, $-\exp(c)$ is concave, so your objective becomes the difference of two convex functions, which is not convex in general. Given that you've set the word "convex" in boldface above, I assume that property is important to you (and it should be!). So the short, but likely unhelpful, answer to your question is no, you cannot do that.
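As a quick numerical sanity check (a minimal sketch; the evaluation points are arbitrary choices of mine), the midpoint convexity inequality already fails for $f(x,c) = ||x|| - \exp(c)$ along the $c$ axis:

```python
import math

# The proposed "unconstrained" objective f(x, c) = |x| - exp(c), in one dimension.
def f(x, c):
    return abs(x) - math.exp(c)

# Convexity requires f(midpoint) <= average of the endpoint values.
a, b = (0.0, 0.0), (0.0, 2.0)   # two points differing only in c
mid = (0.0, 1.0)                # their midpoint
avg = 0.5 * (f(*a) + f(*b))

print(f(*mid))        # -e  ~ -2.718
print(avg)            # -(1 + e^2)/2 ~ -4.195
print(f(*mid) <= avg) # False: the midpoint inequality fails, so f is not convex
```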
But why would you want to do this in the first place? If you have just a single nonnegativity constraint to deal with, this seems like an unnecessarily complex way to handle it. There's no need to be afraid of constrained convex optimization, especially for a simple constraint like that.
Still, suppose you have good reason to limit yourself to unconstrained minimizations. Then we can still dispatch a simple model like this easily. Consider the model $$\begin{array}{ll}\text{minimize}_{x,p} & f(x,p) \\ \text{subject to} & p > 0 \end{array}$$ where $f$ is convex in $x$ and $p$. Now consider solving a pair of unconstrained problems: $$\begin{array}{ll}\text{minimize}_{x,p} & f(x,p) \end{array}$$ $$\begin{array}{ll}\text{minimize}_x & f(x,\epsilon) \end{array}$$ for some small $\epsilon>0$ (indeed, $\epsilon=0$ may work under mild continuity conditions on $f$). If the first problem yields a solution with $p>0$, then that is also the solution to the constrained problem. If it yields $p\leq 0$, then the second problem yields the solution to the constrained problem (approximately, if $\epsilon\neq 0$).
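To make the two-step recipe concrete, here is a small sketch in Python with SciPy. The instance $f$, the data `a` and `p0`, and the tolerance `EPS` are all made-up assumptions for illustration, not part of your model; `p0 < 0` is chosen so that the unconstrained minimizer violates $p > 0$ and the second problem is actually needed.

```python
import numpy as np
from scipy.optimize import minimize

EPS = 1e-8  # small positive epsilon standing in for the open constraint p > 0

# A hypothetical convex instance: f(x, p) = ||x - a||^2 + (p - p0)^2.
a = np.array([1.0, -2.0])
p0 = -0.5

def f(z):  # z = (x_1, x_2, p)
    x, p = z[:2], z[2]
    return np.sum((x - a) ** 2) + (p - p0) ** 2

# Step 1: solve the fully unconstrained problem in (x, p).
res1 = minimize(f, np.zeros(3))
if res1.x[2] > 0:
    solution = res1.x  # the constraint is inactive: we're done
else:
    # Step 2: pin p = EPS and minimize over x alone.
    res2 = minimize(lambda x: f(np.append(x, EPS)), np.zeros(2))
    solution = np.append(res2.x, EPS)

print(solution)  # x converges to a; p is clamped to EPS since p0 < 0
```

The same dispatch logic carries over to any convex $f$; only the definition of `f` and the starting points change.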
In practice, we do not actually have to solve the first problem to completion if it is heading toward $p\leq 0$. We can terminate as soon as we can certify (through duality, say) that the optimal solution will have $p\leq 0$. And if you have multiple variables that must be positive, then an active-set method might allow us to generalize this approach.
Now, you probably have a very good reason why you can't do something like this. But because you've not shared the full model, we don't know it. As written, this is the best we can offer. If you're willing to use constrained optimization machinery, of course, then there are all sorts of options available to you.