I have a dataset of bets made in a betting game. I want to rank the different players in the game based on total dollar value of bets they have made and number of bets they have made.
I want to come up with a function that gives a high score to users with a high total amount bet but simultaneously punishes small, crappy bets. For instance, User A, who made ten $1 bets, should have a smaller score than User B, who made just one $10 bet.
I can do that by simply taking the average amount spent per transaction and normalising it against the max of the averages.
However, the dataset I have is very skewed. The maximum mean value is so large that it results in extremely low scores for >95% of the users.
In the scheme I mentioned, I want User A to have a lower score than User B, but not an order of magnitude lower. That is, if User B is given a score of 0.9 by the function, I want User A to have something closer to 0.80.
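To make the problem concrete, here is the naive scheme on some made-up data (the player names and amounts are hypothetical; "whale" stands in for the outlier that skews my real dataset):

```python
# Hypothetical bet data (dollars); "whale" is the extreme outlier
# that dominates the maximum of the averages.
bets = {
    "A": [1.0] * 10,    # ten $1 bets
    "B": [10.0],        # one $10 bet
    "whale": [5000.0],  # one huge bet
}

averages = {p: sum(b) / len(b) for p, b in bets.items()}
max_avg = max(averages.values())

# Naive score: average bet size normalised by the largest average.
naive = {p: a / max_avg for p, a in averages.items()}
print(naive)  # A and B both get tiny scores, crushed by the whale
```

Here A scores 0.0002 and B scores 0.002, both vanishingly small even though B's behaviour is exactly what I want to reward.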
I want to know what thought process you should follow when coming up with such a function (e.g. when to use the log of a variable, when to use an exponential, etc.).
Any help appreciated. Thanks!

Define, for every player $i \in \{1, 2, \ldots, n\}$, the average bet $$ a_i = \frac{\text{total dollar amount bet by player } i}{\text{number of bets placed by player } i}. $$
You want to score players according to the $a_i$ values; the problem, as you stated, is that the $a_i$ can take very different values, ranging over $[0, +\infty)$. When encountering this type of data, one possibility is to transform it into a finite range, e.g., the unit interval $[0,1]$.
To do this, I usually start by defining the requisites the transformation function $\phi(x)$ must satisfy:

- $\phi(0) = 0$ and $\phi(x) \to 1$ as $x \to +\infty$, so every transformed value lands in $[0, 1)$;
- $\phi$ is strictly increasing, so the ordering of the players is preserved;
- $\phi$ is concave, so large values get compressed and a single outlier can no longer crush everyone else's score.
At this point you see that suitable elementary functions are logarithms or exponentials. For instance, we could define: $$ \phi(x) = 1-e^{-k\cdot x}, $$ where $k > 0$ is a parameter you can play with to find the best value for your dataset.
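As a quick sanity check (a sketch, with arbitrary values of $k$), you can verify that this $\phi$ satisfies the requisites above: it starts at 0, increases, and saturates towards 1, so a huge outlier barely moves the top of the scale:

```python
import math

def phi(x, k):
    """phi(x) = 1 - exp(-k * x): maps [0, +inf) into [0, 1)."""
    return 1.0 - math.exp(-k * x)

# phi(0) = 0, phi is strictly increasing, and it flattens out for
# large x, so extreme values are compressed near 1.
for k in (0.1, 0.5, 2.0):
    print(k, [round(phi(x, k), 3) for x in (0, 1, 10, 100, 5000)])
```

Smaller $k$ keeps $\phi$ nearly linear over your typical range of averages; larger $k$ saturates faster.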
With this choice for $\phi(x)$, your score $s_i$ for player $i$ becomes $$ s_i = \displaystyle\frac{1-e^{-k\cdot a_i}}{\displaystyle \max_{j \in \{1, 2, \ldots, n\}} \left(1-e^{-k\cdot a_j}\right)}. $$ Note that, since $\phi$ is increasing, the denominator is simply $\phi\!\left(\max_j a_j\right)$.
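Putting it together, here is a minimal sketch on hypothetical data matching your example (the value $k = 2.2$ is just one I tuned by hand for this toy dataset, not a recommended default):

```python
import math

def phi(x, k):
    """phi(x) = 1 - exp(-k * x): maps [0, +inf) into [0, 1)."""
    return 1.0 - math.exp(-k * x)

# Hypothetical average bet sizes a_i (dollars): User A averages $1,
# User B averages $10, and "whale" is the extreme outlier.
averages = {"A": 1.0, "B": 10.0, "whale": 5000.0}

k = 2.2  # tuning parameter, chosen by hand for this toy data
denom = max(phi(a, k) for a in averages.values())
scores = {p: phi(a, k) / denom for p, a in averages.items()}

# User A now scores in the same ballpark as User B instead of an
# order of magnitude below, even with the whale in the dataset.
print(scores)
```

In practice you would sweep $k$ until the spread of scores looks right for your dataset; with this toy data, A lands around 0.89 while B is close to 1.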