Suppose that I have a process that runs repeatedly.
Each run is independent and will succeed with probability p.
p is constant across all runs.
You are initially told that p was set at random, drawn from a $U[0,1]$ distribution.
Finally, you run the process N times, and observe S successes.
What does this tell you about p?
I would assume that this allows you to derive an updated conditional (posterior) probability distribution for the value p actually has?
I would further assume that the peak of that distribution falls at S/N?
Is this correct, and if so, what is the expression for the distribution?
How does that distribution relate to the Binomial Distribution, which answers the question asked the other way around (given p, what is the distribution of S)? Is it as simple as being its inverse?
What you are describing is a Bayesian model: $$ p \sim U(0,1)\\ k \mid p \sim \mathrm{Bin}(n, p) $$
BTW, your intuition about the result is correct.
Here the prior for $p$ is uniform, and the conditional distribution of interest is the posterior distribution of $p$: $$f(p \mid n, k) \propto \binom{n}{k} p^k (1-p)^{n-k} \propto p^k (1-p)^{n-k}.$$ This is the binomial probability read as a function of $p$ with $k$ held fixed (the likelihood), so it is not an inverse of the binomial, just the same expression with a different argument. Normalizing it over $p \in [0,1]$ gives a $\mathrm{Beta}(k+1,\, n-k+1)$ distribution, whose mode is exactly $k/n$, as you guessed.
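As a numerical check, here is a short pure-stdlib Python sketch of the posterior under a uniform prior, which is $\mathrm{Beta}(k+1, n-k+1)$; the grid search below confirms its peak sits at $k/n$ (the function name and example numbers are my own, chosen for illustration):

```python
import math

# Posterior density of p after k successes in n independent trials,
# starting from a uniform prior on p: this is Beta(k + 1, n - k + 1).
def posterior_pdf(p, n, k):
    a, b = k + 1, n - k + 1
    # Normalizing constant 1/B(a, b), computed on the log scale.
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

# Locate the peak numerically on a grid; analytically, for a, b > 1 the
# Beta mode is (a - 1)/(a + b - 2) = k/n, the observed success fraction.
n, k = 10, 7
grid = [i / 1000 for i in range(1, 1000)]
mode = max(grid, key=lambda p: posterior_pdf(p, n, k))
print(mode)  # 0.7, i.e. the S/N peak from the question
```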
More generally, a standard Bayesian model uses a Beta distribution for $p$, which has the desired property of combining cleanly with the binomial likelihood (the uniform is the special case $\mathrm{Beta}(1,1)$). Such a prior is called a conjugate prior, and it yields a posterior from the same family: a $\mathrm{Beta}(a, b)$ prior combined with $k$ successes in $n$ trials gives a $\mathrm{Beta}(a + k,\, b + n - k)$ posterior.
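The conjugate update itself is one line of arithmetic; a minimal sketch (the function name `beta_binomial_update` is my own, not a standard API):

```python
# Conjugate update: a Beta(a, b) prior combined with k successes in n
# Binomial trials gives a Beta(a + k, b + n - k) posterior.
def beta_binomial_update(a, b, n, k):
    return a + k, b + n - k

# The uniform prior is Beta(1, 1); after 7 successes in 10 runs the
# posterior is Beta(8, 4), whose mode (a - 1)/(a + b - 2) is 7/10.
a_post, b_post = beta_binomial_update(1, 1, 10, 7)
print(a_post, b_post)                        # 8 4
print((a_post - 1) / (a_post + b_post - 2))  # 0.7
```

Because the posterior is again a Beta, further runs can be folded in by applying the same update repeatedly.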
Here is one of the many sites that address some aspect of this problem: https://www2.stat.duke.edu/courses/Spring12/sta104.1/Lectures/Lec23.pdf