I'm creating a machine learning model to beat the house in a competitive game. In this setup, there is a lot of past data on previous games, where each game has: a winner, w (either A or B); gambling odds (x and y for teams A and B); and a probability that team A wins, p, which I compute. The gambling odds are the payout per dollar bet. For example, if team A is projected by the house to have a much better chance of winning, they may set x = 1.05 and y = 7.5. If I bet \$1 on B and win, I get a net profit of \$7.50 - \$1 = \$6.50. If I bet \$1 on A or B and lose, my net profit is -\$1. I can also choose not to bet, for a net profit of \$0. The gambling odds are always at least 1.0. I'm on mobile, so I can't LaTeX this, but here's a drawing of the objective function I'm trying to maximize for each game:
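The drawing itself isn't reproduced here, but a plausible form of the objective, inferred from the description (the positive-expected-value thresholds below are an assumption, as is the function name), is:

```python
def hard_objective(p, x, y, w):
    """Realized profit of a $1 bet chosen by thresholding on p."""
    if p * x > 1:                # betting on A has positive EV under p
        return x - 1 if w == "A" else -1
    if (1 - p) * y > 1:          # betting on B has positive EV under p
        return y - 1 if w == "B" else -1
    return 0                     # abstain

# The hard threshold makes this a step function of p, so its derivative
# is zero almost everywhere and undefined at the jumps.
```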

Since the probability model is a neural network, I need the derivative of f wrt p, but that isn't defined because f is not continuous wrt p. Alternatively, I could just do cross entropy loss on p, but I don't have access to the actual probabilities, just the winner w, and I can't guarantee that the distribution created is therefore good against the house.
Is there a way to modify my objective function that allows me to obtain the desired partial derivative? If not, what else can I do?
Your objective seems to predict how much you would have won assuming your probability is correct, which seems like a strange choice. You also force the model to choose one side or the other, which obviously makes things harder to differentiate.
To fix both, I would suggest you let the model suggest a strategy rather than forcing an all-or-nothing choice. So if the model returns $p$, you bet on $A$ with probability $p$ and on $B$ with probability $1-p$. This results in the following objective function
$$ f(p) = \begin{cases} p (x-1) + (1-p)(-1) & w = A\\ p (-1) + (1-p) (y-1) & w = B \end{cases} $$
which has a simple derivative with respect to $p$: it equals $x$ when $w = A$ and $-y$ when $w = B$.
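A minimal sketch of this objective and its closed-form derivative (plain Python; the function names are illustrative), with a finite-difference check:

```python
def f(p, x, y, w):
    """Expected net profit of a $1 bet placed on A with probability p."""
    if w == "A":
        return p * (x - 1) + (1 - p) * (-1)
    return p * (-1) + (1 - p) * (y - 1)

def df_dp(p, x, y, w):
    """Derivative of f with respect to p: x if w == A, -y if w == B."""
    return x if w == "A" else -y

# Finite-difference check that the analytic derivative matches.
eps = 1e-6
p, x, y = 0.3, 1.05, 7.5
numeric = (f(p + eps, x, y, "B") - f(p - eps, x, y, "B")) / (2 * eps)
print(abs(numeric - df_dp(p, x, y, "B")) < 1e-4)  # True
```

Since $f$ is linear in $p$, the gradient flows through the probability model without any discontinuities.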
If you want to be even more adventurous you can also try to make the model return betting amounts $b_A$ and $b_B$ and optimize the objective function
$$ f(b_A, b_B) = \begin{cases} b_A (x-1) + b_B(-1) & w = A\\ b_A (-1) + b_B(y-1) & w = B \end{cases} $$
but perhaps it's best to limit it to \$1 per bet.
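A sketch of the two-bet objective, with one possible way to enforce the per-bet cap (squashing the model's raw outputs through a sigmoid is my assumption, not part of the answer above; all names are illustrative):

```python
import math

def f_bets(b_a, b_b, x, y, w):
    """Expected net profit for bets b_a on A and b_b on B (in dollars)."""
    if w == "A":
        return b_a * (x - 1) + b_b * (-1)
    return b_a * (-1) + b_b * (y - 1)

def capped(logit):
    """Map an unconstrained model output into (0, 1), i.e. at most $1."""
    return 1.0 / (1.0 + math.exp(-logit))

b_a, b_b = capped(-2.0), capped(0.5)
print(0.0 < b_a < 1.0 and 0.0 < b_b < 1.0)  # True
```

This stays differentiable end to end, since both `f_bets` and the sigmoid cap are smooth in the model's outputs.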