In my computability theory course notes, I have written that, if $f:\mathbb{N}^{n+1} \to \mathbb{N}$ is a partial function, then the function obtained from $f$ by minimisation is the partial function $g(\vec{x}) = \mu y (f(\vec{x}, y) = 0):\mathbb{N}^n \to \mathbb{N}$ defined by $$ g(\vec{x}) = \begin{cases} r & \text{if } f(\vec{x}, r) = 0 \text{ and for all } s < r, \, f(\vec{x}, s) \text{ is defined but not } 0 \\ \text{undefined} & \text{otherwise.} \end{cases} $$
But then underneath, I have also written that $g$ may be partial even if $f$ is total, and vice versa.
I'm a bit confused as to what I meant here (can $g$ be partial if $f$ is total, and can $g$ be total if $f$ is partial?) so I had a look on Wikipedia.
For a given $R(y)$, the unbounded mu operator $\mu y\, R(y)$ (note: no requirement for "$(\exists y)$") is a partial function. Kleene instead makes it a total function (cf. p. 317):
$\varepsilon y\, R(\vec{x}, y) = \begin{cases} \text{the least $y$ such that $R(\vec{x}, y)$ is true} & \text{if } \exists y\, R(\vec{x}, y) \\ 0 & \text{otherwise.} \end{cases}$
What does this mean? I have no idea what this is trying to say, or if it's even related to the $\mu$ operator.
So when is minimisation partial and when is it total?
I'll consider the general case of $g(\vec x)=\mu y\,R(\vec x,y)$; if you prefer the special case involving $f$ as in the question, just replace $R(\vec x,y)$ by $f(\vec x,y)=0$ throughout the following. I'll assume for the time being that $R(\vec x,y)$ makes sense, i.e., is either true or false, for all values of $\vec x$ and $y$.
By definition, $\mu y\,R(\vec x,y)$ is defined and equal to $r$ if and only if $r$ is the smallest natural number making $R(\vec x,r)$ true. Therefore, $\mu y\,R(\vec x,y)$ is undefined if and only if there is no such smallest $r$. By the least number principle (for the natural numbers), that's if and only if there is no $r$ making $R(\vec x,r)$ true.
The function $g$ defined by $g(\vec x)=\mu y\,R(\vec x,y)$ is partial if and only if there is at least one value of $\vec x$ such that $g(\vec x)$ is undefined. So $g$ is partial if and only if there exists some value of $\vec x$ such that all values of $y$ make $R(\vec x,y)$ false.
Equivalently: $g$ is total if and only if $(\forall\vec x)(\exists y)\,R(\vec x,y)$.
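To make the totality criterion concrete, here is a minimal Python sketch of unbounded minimisation over a (total, decidable) predicate $R$. The cutoff is an artificial device so the demo terminates; the true $\mu$ operator would simply diverge when no witness exists. The function name `mu` and the cutoff are my own choices for illustration.

```python
def mu(R, x, limit=10_000):
    """Unbounded search mu y R(x, y), sketched with a cutoff so that
    the demo terminates.  Returns the least y with R(x, y) true.
    Where no witness exists, the real mu operator diverges; here we
    raise an error once the (artificial) cutoff is reached."""
    y = 0
    while y < limit:
        if R(x, y):
            return y
        y += 1
    raise RuntimeError("no witness below cutoff; "
                       "the true mu operator would diverge here")

# g is total here: for every x there is some y with x + y >= 10.
print(mu(lambda x, y: x + y >= 10, 3))   # -> 7

# g would be partial for R(x, y): "x * y > 0", since at x = 0 no y
# works, so mu(lambda x, y: x * y > 0, 0) diverges (here: raises).
```

The demo matches the criterion above: $g$ is total exactly when $(\forall\vec x)(\exists y)\,R(\vec x,y)$, and the search loop halts exactly on the inputs where a witness exists.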
The preceding discussion assumed that $R$ always makes sense. Let me say a few words about the case where $f$ is a partial function, so the relevant $R$, namely $f(\vec x,y)=0$, might not always make sense, because $f(\vec x,y)$ might be undefined. In this situation, it is certainly possible for $g(\vec x)$ to also be undefined; an extreme example is that, if $f(\vec x,y)$ is always undefined, then so is $g(\vec x)$. But perhaps surprisingly, it is possible for $g$ to be total even if $f$ isn't. Suppose, for an easy example, $f(x,y)=x-y$ whenever $x\geq y$, and $f(x,y)$ is undefined whenever $x<y$. Then $f$ is certainly not total, but $\mu y\,(f(x,y)=0)$ is defined and equal to $x$ for all $x$, as can easily be checked using the definition of $g$: for each $s<x$, $f(x,s)=x-s$ is defined and nonzero, and $f(x,x)=0$. So $g$ is the total identity function.
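This example can be checked mechanically. Below is a sketch in Python where `None` stands in for "undefined", following the definition in the question: $g(\vec x)=r$ only if $f(\vec x,r)=0$ and $f(\vec x,s)$ is defined and nonzero for every $s<r$. The names `f` and `g` mirror the example; the use of `None` and the raised error are my own encoding choices.

```python
def f(x, y):
    """Partial f from the example: defined only when x >= y
    (None marks 'undefined')."""
    return x - y if x >= y else None

def g(x):
    """Minimisation mu y (f(x, y) = 0): return the least r with
    f(x, r) = 0, provided f(x, s) is defined and nonzero for all
    s < r; otherwise g(x) is undefined (modeled as an error)."""
    r = 0
    while True:
        v = f(x, r)
        if v is None:
            raise RuntimeError("f(x, r) undefined, so g(x) is undefined")
        if v == 0:
            return r
        r += 1

# f is partial (undefined whenever x < y), yet g is total: g(x) = x.
print([g(x) for x in range(5)])   # -> [0, 1, 2, 3, 4]
```

Note that the search never reaches an undefined value of `f` here: it stops at $r=x$, and every earlier $f(x,s)$ is defined, which is exactly why $g$ comes out total despite $f$ being partial.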
Finally, let me comment on a terminological matter that may be confusing. The phrase "partial function" usually means "function that need not (but might) be total"; so the total functions form a subclass of the partial functions. But when one says that a particular function is partial, it usually means that it is definitely not total. This clash of terminology is unfortunate (to put it mildly), but one just has to get used to it. It would be avoidable if one said that a function is not total, rather than saying that it is partial, but it seems hopeless to get everyone to do this and avoid the clash. So, at least for now, "partial functions" are functions that might fail to be total, not functions that definitely do.