I have been reading about the Sleeping Beauty problem, and I have been considering a slight variation that makes the absurdity of the thirder position more apparent:
Suppose we have a biased coin that comes up heads 99/100 times, but Sleeping Beauty is woken up 1,000,000 times when tails occurs. When performing the experiment, thirders will:

- wake up and declare that they are almost certain the coin came up tails;
- acknowledge, before the experiment, that there is a 99% chance they will be put to sleep and be wrong in their interviews.
I can't shake the intuition that saying "I am basically certain that the coin is tails" is the wrong thing to say upon waking up. I do understand that if this experiment were performed enough times, the statement would make sense probabilistically. When performing the experiment only once, though, it doesn't seem right.
To press the point further, suppose that you would be killed at the end of the experiment if you ever said anything wrong in your interviews. Would you then say that the coin is tails upon waking up? This situation appears exactly the same, and yet I would guess everyone would now bet on heads.
Your example with the death sentence is indeed very informative, but it does not support the halfer position in particular. It just shows that the correct resolution of the problem depends on a value function, which is omitted from the problem statement. Suppose that, instead of killing Sleeping Beauty, the experimenters would pay her at the end of the experiment, and that the reward she gets depends on her answer and the actual outcome of the coin toss. If she would receive one dollar each time she answered correctly, then most people would bet on tails, as they then have a 0.01 probability of winning a million dollars.
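To see why, here is a small sketch (the variable names are my own) computing the expected dollar payout of the two pure answering strategies under this payment scheme:

```python
# Expected dollar payout of the two pure answering strategies, under the
# variation above: P(heads) = 99/100, one awakening on heads,
# 1,000,000 awakenings on tails, and $1 per correct answer per awakening.
p, n_heads, n_tails = 99 / 100, 1, 1_000_000

# "Always answer heads": paid once, and only when the coin lands heads.
expected_heads = p * n_heads * 1

# "Always answer tails": paid on each of the n_tails awakenings,
# but only when the coin lands tails.
expected_tails = (1 - p) * n_tails * 1

print(expected_heads, expected_tails)  # roughly 0.99 vs 10000
```

So even though tails is very unlikely, always answering tails dominates in expectation, which is exactly why this payment scheme pushes people toward the thirder answer.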
This idea can also be expressed more mathematically. Identify heads with $1$ and tails with $0$. Take $p \in (0,1)$ and let $X\sim \mathrm{Ber}(p)$ denote the outcome of the coin toss. Denote Sleeping Beauty's answer by $a \in [0,1]$. Let $L:\{0,1\}\times [0,1] \to [0,1]$ be an arbitrary loss function, i.e. at the end of the experiment Sleeping Beauty receives the (random) payout $1-L(X,a)$. Write $n_x$ for the number of times Sleeping Beauty is awoken given that $X=x$. So, in your variation of the problem we have $p=99/100$, $n_1=1$ and $n_0=1000000$.
If we take the loss function $L(x,y)=\tfrac{n_x}{n_0}(x-y)^2$, then the expected loss is minimized at $a=\tfrac{99}{1000099}$, as $$\mathbf{E}[L(X,a)] = \mathbf{E}\left[\tfrac{n_X}{n_0}(X-a)^2\right] = \frac{99}{100}\cdot\frac{1}{1000000}\cdot(1-a)^2+\frac{1}{100}\cdot 1\cdot a^2.$$ This loss function penalizes you each time you voice a false belief.
If we take the loss function $L(x,y)=(x-y)^2$, then the expected loss is minimized at $a=\tfrac{99}{100}$, as $$\mathbf{E}[L(X,a)] = \mathbf{E}\left[(X-a)^2\right] = \frac{99}{100}\cdot(1-a)^2+\frac{1}{100}\cdot a^2.$$ This loss function penalizes having a false belief regardless of how often you voice this belief.
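Both minimizers can be checked numerically. A weighted quadratic $w_1(1-a)^2 + w_0\,a^2$ is minimized at $a = w_1/(w_1+w_0)$, which recovers both values (a sketch; `n1`, `n0` stand for $n_1$, $n_0$):

```python
# Sanity check of the two minimizers above. A weighted quadratic
# w1*(1 - a)**2 + w0*a**2 is minimized at a = w1 / (w1 + w0).
p, n1, n0 = 99 / 100, 1, 1_000_000

# Per-voicing loss L(x, a) = (n_x / n_0) * (x - a)**2:
# the weights are w1 = p * n1/n0 (heads) and w0 = (1 - p) * n0/n0 (tails).
a_voiced = (p * n1 / n0) / (p * n1 / n0 + (1 - p))

# Per-belief loss L(x, a) = (x - a)**2: the weights are w1 = p, w0 = 1 - p.
a_belief = p / (p + (1 - p))

print(a_voiced)  # ~9.899e-05, i.e. 99/1000099 (the thirder-style answer)
print(a_belief)  # ~0.99 (the halfer-style answer)
```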
Of course, interpolations between the two loss functions above are also possible, and one could get even more imaginative with the loss function.
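For concreteness, here is one such interpolation (my own parametrization, not part of the original problem): $L_\lambda(x,a) = \big((1-\lambda) + \lambda\,\tfrac{n_x}{n_0}\big)(x-a)^2$. Sliding $\lambda$ from $0$ to $1$ moves the optimal answer continuously from $99/100$ to $99/1000099$:

```python
# Sketch of one possible interpolation between the two loss functions:
# L_lam(x, a) = ((1 - lam) + lam * n_x / n_0) * (x - a)**2.
# The minimizer is again w1 / (w1 + w0), with outcome-dependent weights.
p, n1, n0 = 99 / 100, 1, 1_000_000

def optimal_answer(lam):
    w1 = p * ((1 - lam) + lam * n1 / n0)        # weight on the heads term
    w0 = (1 - p) * ((1 - lam) + lam * n0 / n0)  # weight on the tails term
    return w1 / (w1 + w0)

for lam in (0.0, 0.5, 1.0):
    print(lam, optimal_answer(lam))
# lam = 0 recovers 99/100; lam = 1 recovers 99/1000099.
```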
As the problem is usually stated, Sleeping Beauty doesn't receive a reward after the experiment. So we must imagine that her 'reward' is the value she attaches to having a correct belief. Thus we see that a resolution of the Sleeping Beauty problem depends, among other things, on the answers to the following philosophical questions: "Is it bad to have a false belief that you never voice out loud or act upon?" and "Is it worse to voice a wrong belief many times than to voice it only once?"