We have a 90%-confidence interval. I want to check if the following statements are correct.
If we double the sample size, the probability that the value we are looking for lies outside the confidence interval is smaller.
The bigger the standard error, the smaller the confidence interval.
Since the confidence interval is $\left (\overline{x}- Z_{\alpha/2}\cdot s_x, \overline{x}+ Z_{\alpha/2}\cdot s_x\right )$, where $s_x$ is the standard error, I think the second statement is wrong: it should be that the bigger the standard error, the wider the confidence interval. Is this correct?
What about the first statement?
The first is a bit tricky. Once a particular interval has been computed, it is hard to say what the probability is that the true value falls inside it, and notably that probability is not $0.90$. The correct interpretation of the confidence level is that if we were to repeat this experiment $100$ times, about $90$ of the resulting intervals would capture the true value. That is not the same thing as saying that the probability that this particular interval contains the true value is $0.90$. This is a subtle, but important, point.
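You can see the coverage interpretation in a quick simulation sketch (the population, sample size, and number of repetitions here are all illustrative, assuming a normal population and the $z$-based interval from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n = 5.0, 2.0, 50
z = 1.6448536269514722  # z_{alpha/2} for a 90% interval (alpha = 0.10)

trials = 10_000
covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)  # standard error of the mean, s_x
    lo = sample.mean() - z * se
    hi = sample.mean() + z * se
    covered += lo <= true_mean <= hi

# The fraction of intervals that capture the true mean is close to 0.90 --
# that is a statement about the procedure over repetitions, not about any
# single interval that has already been computed.
print(covered / trials)
```

Each individual interval either contains the true mean or it doesn't; the $0.90$ describes the long-run hit rate of the construction procedure.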
Intervals that satisfy the statement "this interval has a $\geq 90$% chance of containing the correct value" are called Bayesian confidence intervals, or credible intervals, and are calculated differently. I'm not sure whether the first statement is true or false (I lean toward false in general, and true for "nicely behaved" distributions), but the main takeaway is that it's a bad question.
Discussion of this distinction with a worked out example can be found here.
The second statement is wrong for the reason you stated.
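To illustrate both points about interval width (all numbers are illustrative): since $s_x = s/\sqrt{n}$, doubling $n$ shrinks the standard error by a factor of $\sqrt{2}$, and a larger standard error gives a wider interval, not a smaller one.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
z = 1.6448536269514722  # z_{alpha/2} for a 90% interval

def ci_width(n):
    """Full width of the 90% z-interval for one simulated sample of size n."""
    sample = rng.normal(0.0, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)  # standard error s_x
    return 2 * z * se

w1 = ci_width(100)
w2 = ci_width(200)  # double the sample size: se shrinks by ~sqrt(2)
print(w1, w2)
```

The width is $2\,Z_{\alpha/2}\,s_x$, so it scales linearly with the standard error, confirming the correction to the second statement.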