Let $x(t)$ be a non-negative time-dependent random variable. It is known that $x(t)$ is stationary, meaning that its probability distribution is the same at all times. However, it is autocorrelated, meaning that future values of $x$ depend, to some extent, on its past. $x(t)$ can be assumed to be somewhat smooth, but not much else is known about it.
Let $y(t)$ be its convolution with a certain kernel $k(t)$, namely
$$y(t) = k*x = \int_{-\infty}^{\infty} k(\tau) x(t - \tau) d\tau$$
The kernel is known to be causal ($k(t) = 0 \; \forall t < 0$), smooth, and positive ($k(t) > 0 \; \forall t \geq 0$). Interpreting the kernel as a probability distribution, we can define its first two moments
$$\mu_k = \int_{-\infty}^{\infty} t k(t) dt$$ $$\sigma^2_k = \int_{-\infty}^{\infty} (t - \mu_k)^2 k(t) dt$$
The exact shape of the kernel is not known a priori, but the standard deviation $\sigma_k$ is known, and is guaranteed to be positive. It is also known that $\mu_k \approx \sigma_k$, but exact equality cannot be assumed.
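As a concrete sanity check (an assumed example, not part of the question): for an exponential kernel $k(t) = e^{-t/\tau}/\tau$ one has $\mu_k = \sigma_k = \tau$ exactly, which is the $\mu_k \approx \sigma_k$ regime described above. A minimal numerical sketch:

```python
import numpy as np

# Assumed example kernel: exponential with timescale tau, for which
# mu_k = sigma_k = tau exactly (the mu_k ~ sigma_k regime described above).
tau = 0.4                          # kernel timescale, arbitrary choice
dt = 1e-4                          # integration step
t = np.arange(0.0, 20 * tau, dt)   # grid long enough for the tail to vanish
k = np.exp(-t / tau) / tau         # causal, positive, normalized kernel

norm = np.sum(k) * dt                               # ~ 1 (kernel is normalized)
mu_k = np.sum(t * k) * dt                           # first moment, ~ tau
sigma_k = np.sqrt(np.sum((t - mu_k)**2 * k) * dt)   # standard deviation, ~ tau

print(norm, mu_k, sigma_k)   # ≈ 1.0, 0.4, 0.4
```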
$y(t)$ is known, and the goal of this question is to make some statements about the autocorrelation function of $y(t)$. The autocorrelation of a stationary process is defined as
$$ R_y(\Delta t) = E[(y(t) - \mu_y)(y(t+\Delta t) - \mu_y)] = E[y(t)\,y(t+\Delta t)] - \mu_y^2$$
where $\mu_y = E[y(t)] = E[x(t)] = \mu_x$, because $k(t)$ is normalized.
The preliminary step is relating $R_y$ and $R_x$. As has been shown here,
$$R_y = (k \star k) * R_x = R_k * R_x$$
where $k \star k$ denotes the autocorrelation of the kernel with itself, $R_k(\Delta t) = \int_{-\infty}^{\infty} k(t)\,k(t+\Delta t)\,dt$ (for a symmetric kernel this coincides with $k * k$).
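This relation can be checked numerically. A sketch under assumed choices (1 ms timestep, exponential kernel of timescale 0.4 s, discretized as the recursion $y[i] = a\,y[i-1] + (1-a)\,x[i]$ with $a = e^{-\Delta t/\tau}$): for white-noise input, $R_x$ is a delta, so the relation predicts that the correlation coefficient of $y$ decays like the kernel autocorrelation, here $\rho(m) = a^m$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed discrete setup: 1 ms timestep, exponential kernel of timescale tau.
dt, tau, n = 1e-3, 0.4, 2**20
a = np.exp(-dt / tau)              # discrete exponential filter coefficient
x = rng.standard_normal(n)         # white source signal: R_x(m) = delta(m)

# y[i] = a*y[i-1] + (1-a)*x[i] is a discrete version of convolving x with
# k(t) = exp(-t/tau)/tau; its theoretical correlation is rho(m) = a**m.
y = np.empty(n)
y[0] = (1.0 - a) * x[0]            # zero initial condition
for i in range(1, n):
    y[i] = a * y[i - 1] + (1.0 - a) * x[i]

def rho(sig, m):
    """Empirical correlation coefficient at integer lag m (hypothetical helper)."""
    s = sig - sig.mean()
    return np.dot(s[:-m], s[m:]) / np.dot(s, s)

m = 400                            # lag of one kernel timescale (0.4 s)
print(rho(y, m), a**m)             # both ≈ exp(-1) ≈ 0.37
```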
Questions:
- Can it be shown that $R_y(\Delta t) > R_x(\Delta t) \;\; \forall \Delta t > 0$? It would seem logical that a smoothed function has a higher autocorrelation than the original.
- In case there are distributions of $x(t)$ for which Q1 is not true, it would be great to see an example, given that, for example, $k(t)$ is an exponential distribution.
- Most importantly, I am interested in lower-bounding $R_y$ as a function of $\Delta t$ and $\sigma_k$.
EDIT: Here is an example. In the image below we simulate some signals for a duration of 10 s with a timestep of 1 ms. The x-axis will always denote time; pay attention to the numbers on it. Sometimes I only show the very beginning of the dynamics to better resolve it; in that case one may assume that the behaviour for the rest of the time is comparable.
- First row: the source signal $x(t)$, which is white noise in this example, together with its autocorrelation. The AC has a strong peak at $t=0$ and small fluctuations around zero at other times.
- Second row: $x(t)$ convolved with an exponential kernel $k(t)$ of timescale $\tau = 0.4$ s. Its autocorrelation is now a smoothly decaying exponential as well (we see some fluctuations due to finite data size).
- Third row: a different source signal $x'(t)$, constructed by convolving white noise with the inverse of $k(t)$. We compute and plot the inverse kernel, the new source signal, and its autocorrelation. The inverse of the exponential kernel turns out to be a single-timestep "pulse": an oscillation of very high magnitude and very short timescale, something I would consider pathological and implausible for real data.
- Last row: applying $k(t)$ to the source signal $x'(t)$ yields white noise, as expected by design.
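For reference, the inverse-kernel construction above can be sketched in a discrete setting (assumed: exponential smoothing $y[i] = a\,y[i-1] + (1-a)\,x[i]$ with $a = e^{-\Delta t/\tau}$ as the discrete version of $k$; `smooth` and `unsmooth` are hypothetical helper names):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed discrete setup: 1 ms timestep, exponential kernel of timescale tau.
dt, tau, n = 1e-3, 0.4, 2**18
a = np.exp(-dt / tau)

def smooth(x, a):
    """Discrete exponential filter: y[i] = a*y[i-1] + (1-a)*x[i]."""
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = a * y[i - 1] + (1.0 - a) * x[i]
    return y

def unsmooth(y, a):
    """Exact inverse of `smooth`: the two-tap kernel [1, -a] / (1 - a)."""
    x = np.empty_like(y)
    x[0] = y[0]
    x[1:] = (y[1:] - a * y[:-1]) / (1.0 - a)
    return x

w = rng.standard_normal(n)   # white-noise target signal
xp = unsmooth(w, a)          # the source x'(t): huge single-timestep swings
yp = smooth(xp, a)           # applying k to x' gives back white noise

print(xp.std())              # hundreds: roughly sqrt(2)/(1 - a) ≈ 566
print(np.allclose(yp, w))    # True: the round trip is exact
```

The two-tap inverse kernel, a large positive spike immediately followed by a large negative one, is the "pathological" single-timestep oscillation seen in the third row.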
So, in this example, the power spectrum of the convolved signal is narrower than that of the original signal, which contradicts the hypothesis of my Q1. However, the associated source signal is pathological and does not fit the assumption of being at least somewhat smooth (i.e. Lipschitz-continuous).
So, the refined version of Q1 is to find the conditions under which the hypothesis holds. Are there examples of smooth source signals for which Q1 is violated? What if the autocorrelation of the source signal can be assumed to be non-negative?

(1) does not hold in general. For any kernel $k(\tau)$, let $K(\omega)$, $\omega \in \mathbb{R}$, be its Fourier transform (assuming it exists, which is indeed the case for the "well-behaved" kernels you are considering). By textbook theory on filtering (wide-sense) stationary random processes, the power spectral density of $y(t)$, $S_y(\omega)$, equals $$ S_y(\omega)=|K(\omega)|^2 S_x(\omega), $$ where $S_x(\omega)$ is the power spectral density of $x(t)$. Now, assuming that $|K(\omega)|>0 \; \forall \omega$, if $S_x(\omega)=1/|K(\omega)|^2 \; \forall \omega$, it follows that $S_y(\omega)=1 \; \forall \omega$. But this means that $R_y(\Delta t) = 0 \; \forall \Delta t>0$ (i.e., $y(t)$ is a white process) and, therefore, there should be a $\tau$ such that $R_x(\tau) \geq R_y(\tau)=0$, contradicting the strict inequality in Q1. For the exponential kernel, for instance, $1/|K(\omega)|^2 \propto 1+\omega^2\tau_k^2$, whose inverse transform is supported only at $\Delta t = 0$, so in fact $R_x(\Delta t) = 0 = R_y(\Delta t)$ for every $\Delta t > 0$.
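This construction is easy to reproduce numerically. A sketch under assumed choices (discrete signals, exponential kernel, spectra shaped with FFTs; `rho` is a hypothetical helper): colour white noise by $1/|K(\omega)|$ so that $S_x = 1/|K|^2$, pass it through $k$, and observe that $y$ comes out white while $x$ is strongly anticorrelated at the first lag and essentially uncorrelated beyond it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed discrete setup: 1 ms timestep, exponential kernel of timescale tau.
dt, tau, n = 1e-3, 0.4, 2**16
t = np.arange(n) * dt
k = np.exp(-t / tau) / tau * dt       # discretized, normalized kernel samples
K = np.fft.rfft(k)                    # transfer function K(omega)

w = rng.standard_normal(n)
x = np.fft.irfft(np.fft.rfft(w) / np.abs(K), n)   # source with S_x = 1/|K|^2
y = np.fft.irfft(np.fft.rfft(x) * K, n)           # y = k * x (circular)

def rho(sig, m):
    """Empirical correlation coefficient at integer lag m (hypothetical helper)."""
    s = sig - sig.mean()
    return np.dot(s[:-m], s[m:]) / np.dot(s, s)

# y is white by construction (S_y = |K|^2 S_x = 1): rho(y, m) ≈ 0 for m > 0.
# x, by contrast, is strongly anticorrelated at lag 1 (rho ≈ -0.5) and
# uncorrelated at larger lags, so R_y(m) > R_x(m) cannot hold at every lag.
print(rho(y, 1), rho(x, 1))
```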