I have been trying to do some simple system identification. I have some input and output data from a system and I was trying to manually tune a transfer function that would behave in a similar manner.
In the plot below you can see the measured input data (blue), the measured output data (red/orange), and the output that my current model gives (yellow):

The model that I came up with is: $$ G(s)=\frac{-0.5882}{1+0.35\,s} $$
The general behavior is not far off. However, there is a constant offset (of around 35.7) between my model's output and the measured output.
My question is, how can I incorporate that offset input into my transfer function? Or is there something else I need to do?
Transfer functions describe linear time-invariant (LTI) systems. This means that (ignoring the effects of initial conditions) if you scale the input by some scalar, the output scales by that same scalar as well. A constant offset in the output would violate this property.
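A quick numerical check of the scaling property, using your model and `scipy.signal` (the step input here is just an illustration, not your measured data):

```python
import numpy as np
from scipy.signal import lsim, TransferFunction

# The first-order model from the question: G(s) = -0.5882 / (1 + 0.35 s)
G = TransferFunction([-0.5882], [0.35, 1.0])

t = np.linspace(0, 5, 501)
u = np.ones_like(t)                  # unit step input

_, y1, _ = lsim(G, U=u, T=t)         # response to u
_, y2, _ = lsim(G, U=3 * u, T=t)     # response to 3*u

# With zero initial conditions, scaling the input by 3 scales the output by 3:
print(np.allclose(y2, 3 * y1))       # True
# A constant output offset b would break this: 3*y1 + b != 3*(y1 + b)
```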
It is still possible to capture your observed behavior with transfer functions, but it requires you to introduce another input, so you would have a multiple-input single-output system (MISO for short). Namely, the constant offset can be captured by an integrator that always receives an input of zero and has an initial condition equal to your offset.
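As a sketch, the idea above can be written in state-space form: one state is your first-order lag, the second is the integrator whose initial condition holds the offset. The gain, time constant, and offset are the values from the question; the step input is only for illustration:

```python
import numpy as np
from scipy.signal import lsim, StateSpace

K, tau, offset = -0.5882, 0.35, 35.7   # values from the question

# Augmented state-space: x1 is the first-order lag G(s) = K/(1 + tau*s),
# x2 is the integrator (zero input, initial condition = offset).
A = np.array([[-1.0 / tau, 0.0],
              [0.0,        0.0]])
B = np.array([[K / tau],
              [0.0]])
C = np.array([[1.0, 1.0]])             # output is lag state plus offset state
D = np.array([[0.0]])
sys = StateSpace(A, B, C, D)

t = np.linspace(0, 5, 501)
u = np.ones_like(t)                    # example step input
x0 = [0.0, offset]                     # integrator initialised at the offset

_, y, _ = lsim(sys, U=u, T=t, X0=x0)
print(y[0])    # starts at the offset: 35.7
print(y[-1])   # settles near K*1 + offset = 35.1118
```

Because the integrator's input is always zero, its state never changes, so it contributes a constant 35.7 to the output while the first-order part responds to the measured input as before.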