I am wondering if the following loss function is well known and, if so, whether it has a standard name:
$$ L_{\eta} (\theta, a) = (\theta-a) (\eta - \mathbb{I}_{(-\infty, a)} (\theta) ), \quad \eta \in (0,1). $$ where $\eta$ is fixed.
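For concreteness, the loss can be coded up directly (a minimal Python sketch; the function name `quantile_style_loss` is mine). It makes the asymmetry visible: errors where $a < \theta$ are penalised at rate $\eta$, while errors where $a > \theta$ are penalised at rate $1 - \eta$.

```python
def quantile_style_loss(theta, a, eta):
    """Evaluate L_eta(theta, a) = (theta - a) * (eta - 1{theta < a})."""
    indicator = 1.0 if theta < a else 0.0
    return (theta - a) * (eta - indicator)

# Underestimate (a < theta): penalised at rate eta.
print(quantile_style_loss(2.0, 1.0, 0.3))  # (2 - 1) * 0.3 = 0.3
# Overestimate (a > theta): penalised at rate 1 - eta.
print(quantile_style_loss(1.0, 2.0, 0.3))  # (1 - 2) * (0.3 - 1) = 0.7
```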
Here is the problem: for a single observation $x \sim \text{Uniform}(0, \theta)$, with prior $\tau(\theta) = \theta e^{-\theta} \mathbb{I}_{(0, \infty)}(\theta)$ on $\theta$, the posterior distribution is \begin{align*} h_{\Theta|X}(\theta|x) = e^{x-\theta} \mathbb{I}_{(x, \infty)}(\theta). \end{align*} I am trying to compute the Bayes estimator, $\hat{\theta}$, under $L_{\eta}$, and I would like to know whether my answer is correct. Here is my working:
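As a quick sanity check that this posterior is a proper density, one can integrate it numerically (a sketch using `scipy`; the observed value `x = 1.5` is arbitrary):

```python
import math
from scipy.integrate import quad

x = 1.5  # an arbitrary observed value

def posterior(theta):
    # h(theta | x) = e^{x - theta} for theta > x, and 0 otherwise
    return math.exp(x - theta) if theta > x else 0.0

total, _ = quad(posterior, x, x + 50)  # x + 50 truncates a negligible tail
print(total)  # should be (very close to) 1
```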
We wish to find $\hat{\theta}$ that minimises the expected posterior loss:
\begin{align*} \mathbb{E}[L_{\eta}(\theta, \hat{\theta}) | X] &= \int_{-\infty}^{\infty} (\theta - \hat{\theta}) (\eta - \mathbb{I}_{(-\infty, \hat{\theta})}(\theta)) h(\theta|x) \, d\theta \\ &= (\eta - 1)\int_{-\infty}^{\hat{\theta}} (\theta - \hat{\theta}) h(\theta|x) \, d\theta + \eta \int_{\hat{\theta}}^{\infty} (\theta - \hat{\theta}) h(\theta|x) \, d\theta, \end{align*} where the second line splits the integral according to whether the indicator equals $1$ (for $\theta < \hat{\theta}$) or $0$ (for $\theta \ge \hat{\theta}$).
Differentiating with respect to $\hat{\theta}$ using the Leibniz rule and setting the result to zero (the boundary terms vanish, since the integrand $(\theta - \hat{\theta}) h(\theta|x)$ is zero at $\theta = \hat{\theta}$), we get
\begin{align*} - (\eta -1) \int_{-\infty}^{\hat{\theta}} h(\theta|x) d \theta - \eta \int_{\hat{\theta}}^{\infty} h(\theta|x) d \theta = 0. \end{align*}
Solving for $\hat{\theta}$ (note that this condition rearranges to $\mathbb{P}(\theta \le \hat{\theta} \mid x) = \eta$) gives
\begin{align*} & -(\eta -1)\int_{-\infty}^{\hat{\theta}} e^{x-\theta} \mathbb{I}_{(x, \infty)}(\theta) d \theta - \eta \int_{\hat{\theta}}^{\infty} e^{x-\theta} \mathbb{I}_{(x, \infty)}(\theta) d \theta = 0\\ \implies & -(\eta -1) e^{x} \int_{\min \{ \hat{\theta}, x \} }^{\hat{\theta}} e^{-\theta} d \theta - \eta e^{x} \int_{\max\{\hat{\theta}, x\}}^{\infty} e^{-\theta} d \theta = 0\\ \implies & -(\eta -1) e^{x} \int_{ x }^{\hat{\theta}} e^{-\theta} d \theta - \eta e^{x} \int_{\hat{\theta}}^{\infty} e^{-\theta} d \theta = 0\\ \implies & -(\eta -1) (1-e^{x- \hat{\theta}}) - \eta e^{x - \hat{\theta}} = 0\\ \implies & \hat{\theta}= x-\log(1-\eta). \end{align*}
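To double-check the closed form, one can minimise the expected posterior loss numerically and compare (a sketch using `scipy`; the values `x = 1.5`, `eta = 0.3` are arbitrary test choices):

```python
import math
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

x, eta = 1.5, 0.3  # arbitrary test values

def posterior(theta):
    # h(theta | x) = e^{x - theta} on (x, infinity)
    return math.exp(x - theta) if theta > x else 0.0

def loss(theta, a):
    # L_eta(theta, a) = (theta - a) * (eta - 1{theta < a})
    return (theta - a) * (eta - (1.0 if theta < a else 0.0))

def expected_posterior_loss(a):
    # integrate over the posterior support; x + 50 truncates a negligible tail
    val, _ = quad(lambda t: loss(t, a) * posterior(t), x, x + 50)
    return val

numeric = minimize_scalar(expected_posterior_loss,
                          bounds=(x, x + 10), method="bounded").x
closed_form = x - math.log(1 - eta)
print(numeric, closed_form)  # the two should agree to several decimals
```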
Note that in the third line we used the assumption $\hat{\theta} \ge x$, so that \begin{align*} \min \{\hat{\theta}, x \} = x, \quad \max \{\hat{\theta}, x \} = \hat{\theta}. \end{align*} This assumption is self-consistent: since $\log(1-\eta) < 0$ for $\eta \in (0,1)$, the solution indeed satisfies $\hat{\theta} = x - \log(1-\eta) > x$.