I understand the main idea behind multiplicative noise in signal processing, but I'm struggling to see it expressed in a specific example. Could someone help me?
For example, if I have a system of ODEs to which I intend to add multiplicative Gaussian noise, do I simply add a noise term to each equation?
To answer this question you would, in general, need to understand how the noise enters your system. The point is that you do not add multiplicative noise to a system; rather, there is noise acting on your system whose effect turns out to be a multiplicative term. Consider, for example, the equation \begin{align*} \frac{d}{dt}x(t)=x(1-x)(x-a), \end{align*} where $a$ is some external parameter. Suppose now that $a$ fluctuates around a fixed value $\bar a$, i.e. $a=\bar a+\xi_t$, where $\xi_t$ is Gaussian white noise, formally $\xi_t\,dt=d\beta_t$ for a Brownian motion $\beta_t$. Substituting into the drift gives $x(1-x)(x-\bar a)-x(1-x)\xi_t$, so the ODE above is replaced by the SDE \begin{align*} dx=x(1-x)(x-\bar a)\,dt-x(1-x)\,d\beta_t. \end{align*} (The sign in front of the noise term is immaterial, since Gaussian noise is symmetric.) As you can see, we added Gaussian noise to a parameter, but its effect on the variable you are interested in, $x(t)$, is multiplicative: the noise intensity is proportional to $x(1-x)$.
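To make this concrete, here is a minimal Euler–Maruyama sketch of the SDE above (the step size, noise strength $\sigma$, and initial condition are illustrative choices, not part of the original model). Note how the noise term is multiplied by the state-dependent factor $x(1-x)$, so the fluctuations vanish at the fixed points $x=0$ and $x=1$:

```python
import numpy as np

def euler_maruyama(x0, a_bar, sigma, dt, n_steps, rng):
    """Simulate dx = x(1-x)(x - a_bar) dt - sigma * x(1-x) dB
    with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        xi = x[i]
        drift = xi * (1.0 - xi) * (xi - a_bar)
        # Multiplicative noise: the diffusion coefficient depends on the state.
        diffusion = -sigma * xi * (1.0 - xi)
        dB = np.sqrt(dt) * rng.standard_normal()  # Brownian increment
        x[i + 1] = xi + drift * dt + diffusion * dB
    return x

rng = np.random.default_rng(0)
path = euler_maruyama(x0=0.6, a_bar=0.3, sigma=0.1,
                      dt=1e-3, n_steps=10_000, rng=rng)
```

Because the diffusion coefficient $-\sigma\,x(1-x)$ vanishes at $0$ and $1$, a path started in $(0,1)$ stays there for moderate noise, which is exactly the signature of noise entering multiplicatively rather than additively.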