The following function was shown not to be computable:
$h(x) = \begin{cases} \mu n.\Phi_x(n) \downarrow & \mbox{if } \exists n \Phi_x(n) \downarrow \\ \uparrow & \mbox{otherwise} \end{cases}$
using the following auxiliary function:
$f(x,y) = \begin{cases} 0 & \mbox{if } y > 0 \mbox{ or } \Phi_x(x) \downarrow \\ \uparrow & \mbox{otherwise} \end{cases}$
My question is now: what is the reason behind choosing this particular auxiliary function? In particular, I am not sure about the role of the condition $y > 0$.
Thanks.
Let me first prove that $h$ is not computable and then try to motivate the choice of $f$.
First of all, note that the function $f$ is computable. (Intuition: if $y > 0$, output $0$; otherwise, start computing $\Phi_x(x)$. If that computation never terminates, then the computation of $f$ doesn't terminate either; if it does terminate, output $0$.)
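To make this intuition concrete, here is a toy sketch in Python. The model (a "program" is a function that returns a generator yielding once per computation step and finishing exactly when it halts) is my own illustration of a step-by-step simulator, not standard machinery:

```python
# Toy model (illustration only): a "program" maps an input to a generator
# that yields once per computation step and finishes exactly when it halts.

def f(program, x, y):
    """Sketch of f(x, y): if y > 0, output 0 immediately; otherwise
    simulate program(x) one step at a time.  This loop runs forever
    exactly when that computation diverges, matching f(x, 0) being
    undefined in that case."""
    if y > 0:
        return 0
    comp = program(x)          # start computing Phi_x(x)
    while True:
        try:
            next(comp)         # advance the simulation by one step
        except StopIteration:  # Phi_x(x) halted
            return 0

def halting(n):
    """A toy program that halts on every input, after n steps."""
    def gen():
        for _ in range(n):
            yield
    return gen()

print(f(halting, 3, 1))  # -> 0, without simulating anything
print(f(halting, 3, 0))  # -> 0, after the simulated computation halts
# f(some_diverging_program, x, 0) would loop forever, as intended.
```

The point of the sketch is that computability of $f$ needs no halting test: for $y > 0$ the answer is immediate, and for $y = 0$ we simply run the simulation and inherit its (possible) divergence.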
As in the comment by the OP, let $i$ be an index of $f$, i.e., $f = \Phi_i$. By the s-m-n theorem, $f(x,y) = \Phi_i(x,y) = \Phi_{s(i,x)}(y)$ for all $x, y$.
Now suppose that $h$ is computable. Then the function $\lambda x.\,h(s(i,x))$ would be computable as well. However, $$\begin{align*} h(s(i,x)) & = \begin{cases} \mu y.\Phi_{s(i,x)}(y) \downarrow & \text{if such a $y$ exists} \\ \uparrow & \text{otherwise} \end{cases} \\ & = \mu y. f(x,y)\downarrow \qquad \text{(such a $y$ always exists, since $f(x,1) = 0$)} \\ & = \begin{cases} 0 & \text{if } \Phi_x(x) \downarrow \\ 1 & \text{if } \Phi_x(x)\uparrow. \end{cases} \end{align*}$$ So this would allow you to decide whether or not $\Phi_x(x)\downarrow$, giving a contradiction.
To give some motivation for the choice of $f$, note that although the function $g$ defined by $$g(x) = \begin{cases} 0 & \text{if $\exists n. \Phi_x(n) \downarrow$} \\ \uparrow & \text{otherwise} \end{cases}$$ is computable, finding the smallest $n$ for which $\Phi_x(n) \downarrow$ is not computable.
(Intuition: you can compute $g$ by interleaving the computations of $\Phi_x(0)$, $\Phi_x(1)$, $\dots$. If one of them ever terminates, output $0$; if none of them terminates, then neither does the computation of $g$. For $h$ this trick doesn't work: even if, say, the computation of $\Phi_x(100)$ terminates, you'll never know whether you should output $100$ or keep waiting for one of the computations of $\Phi_x(0)$, $\dots$, $\Phi_x(99)$ to terminate.)
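The interleaving ("dovetailing") trick can be sketched in a toy Python model, where a "program" maps an input to a generator that yields once per step and finishes exactly when it halts. The `max_rounds` cap is an artificial addition of mine so the toy function always returns; the real $g$ simply diverges when no $\Phi_x(n)$ halts:

```python
# Toy model (illustration only): a "program" maps an input n to a
# generator that yields once per step and finishes exactly when it halts.

def g(program, max_rounds=1000):
    """Dovetail program(0), program(1), ...: each round, start the next
    computation and advance every running one by a single step; output 0
    as soon as any of them halts.  The max_rounds cap is artificial (it
    lets this toy version give up); the real g just diverges instead."""
    running = []
    for n in range(max_rounds):
        running.append(program(n))   # start the computation on input n
        for comp in running:
            try:
                next(comp)           # one more step of each computation
            except StopIteration:    # some Phi_x(n) halted
                return 0
    return None                      # gave up within the budget

def halts_only_on(k):
    """A toy program that halts immediately on input k, diverges elsewhere."""
    def program(n):
        def gen():
            while n != k:
                yield
        return gen()
    return program

print(g(halts_only_on(7)))  # -> 0: dovetailing finds the halting input
```

Note that `g` reports only *that* some computation halted, never *which* input is the least one that halts; by the time `program(7)` finishes, the computations on inputs $0$ through $6$ are still pending, which is exactly why the same trick cannot compute $h$.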
So what $f$ does is ensure that the computation of $f(x,1)$ always terminates. However, you can't be sure that $1$ is the smallest $y$ for which the computation of $f(x,y)$ terminates, because termination of the computation of $f(x,0)$ is equivalent to termination of the computation of $\Phi_x(x)$, which is undecidable.