In carrying out a one-sample t test for a sample mean, how should outliers be dealt with? For example, for the data $\{110, 110, 110, 118, 122, 150\}$ with sample size $n = 6$, evidently $150$ is an outlier, and t tests are not robust against outliers. (In this instance, assume that the random, normal, and independent conditions are met.) Can I assume that because the normal condition is met, the t procedures will remain accurate?
$H_0:\mu=100$
$H_a:\mu>100$
It is possible, but unlikely, that a truly normal process would produce an outlier as extreme as this one.
It is always a judgment call what to do about outliers: either you trust that the data accurately reflect the population or process that produced them, or you don't. Sometimes outliers occur for reasons that are hard to understand. Sometimes they are clearly the result of an error in recording data (you meant to enter 130 and typed 150 instead) or of an error in the analysis itself (the analyst's notes mention 'strange green crud at bottom of sample after analysis'). In the former case you might be able to correct the outlier, and in the latter case you might discard it. But any time you delete an outlier, you must mention the deletion in the report of your statistical analysis.
If you keep the outlier as-is, you are correct that a t test is not the best course of action. Alternatives are nonparametric tests (here 'nonparametric' means not assuming normally distributed data). There are several possibilities: (a) the sign test, (b) the one-sample Wilcoxon signed-rank test, (c) a permutation test.
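For illustration, all three alternatives can be run with SciPy. This is a sketch of my own, not the Minitab analysis; the data and the hypothesized center $100$ are from the question, and the variable names are mine. Note that SciPy may fall back to a normal approximation for the Wilcoxon test because of the tied differences.

```python
import numpy as np
from scipy.stats import binomtest, wilcoxon, permutation_test

x = np.array([110, 110, 110, 118, 122, 150])
d = x - 100  # differences from the hypothesized center of 100

# (a) Sign test: count how many differences are positive (here 6 of 6)
# and compare to a Binomial(6, 1/2) under the null hypothesis.
p_sign = binomtest(int(np.sum(d > 0)), n=len(d), p=0.5,
                   alternative='greater').pvalue

# (b) One-sample Wilcoxon signed-rank test on the differences.
p_wilcoxon = wilcoxon(d, alternative='greater').pvalue

# (c) Permutation (sign-flip) test on the mean difference; with only
# 2^6 = 64 sign patterns, SciPy enumerates them all exactly.
p_perm = permutation_test((d,), np.mean, permutation_type='samples',
                          alternative='greater').pvalue

print(p_sign, p_wilcoxon, p_perm)  # all well below 0.05
```

Because every observation exceeds 100, the sign test and the exact sign-flip permutation test both give $P = 1/2^6 = 0.015625$.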
For your particular null hypothesis, it is clear that you will reject at the 5% level with whatever reasonable test you use. You have six observations, all exceeding the hypothesized mean. If the null hypothesis were true, the probability of that happening by chance alone would be $1/2^6,$ which is much less than 5%.
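The arithmetic is a one-line binomial calculation: under the null hypothesis, each observation independently falls above the hypothesized center with probability $1/2$, so all six landing above it has probability $(1/2)^6$.

```python
# Probability that all 6 of 6 independent observations exceed the
# hypothesized center, if each does so with probability 1/2:
p = 0.5 ** 6
print(p)  # 0.015625, well below the 0.05 cutoff
```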
Below is a Minitab session in which sign and Wilcoxon tests are performed, both with small P-values leading to rejection of $H_0: \eta = 100,$ where $\eta$ denotes the population $median.$