I would expect a uniform prior to be a good example of an uninformed prior and get the same result as the frequentist approach. However, this is not the case.
As an example, let's look at the classical Bernoulli problem: you flip a coin repeatedly, and $s$ of the flips are heads while $f$ are tails.
You want to predict $p$, the probability of heads on any given coin flip.
Frequentist Approach
The frequentist approach is to say that $p$ is simply $s/(s + f)$. In this approach, we assume nothing about the distribution of $p$ and we're simply finding the maximum likelihood estimate. This makes sense to me.
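As a quick sketch (plain Python, nothing library-specific), the maximum likelihood estimate is just the observed proportion of heads:

```python
# Frequentist point estimate: the MLE for a Bernoulli parameter is
# the observed fraction of heads among all flips.
def mle(s, f):
    return s / (s + f)

print(mle(7, 3))  # 0.7
```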
Bayesian Approach
Now, let's say you do the Bayesian approach and you say:
$$ P(p|data) = \dfrac{P(data | p) \cdot P(p)} {P(data)} $$
According to Wikipedia, if you make your prior the beta distribution $\mathrm{Beta}(\alpha, \beta)$, you get:
$$ P(p|data) = \dfrac{p^{s+\alpha-1}(1-p)^{f+\beta-1}}{B(s+\alpha, f+\beta)} $$
Now let's say we want to find the expected value of $p$:
$$ E[p|data] = \int_{0}^{1} P(p|data) \cdot p \cdot dp $$ $$ E[p|data] = \int_{0}^{1} \dfrac{p^{s+\alpha-1}(1-p)^{f+\beta-1}}{B(s+\alpha, f+\beta)} \cdot p \cdot dp $$ $$ E[p|data] = \int_{0}^{1} \dfrac{p^{s+\alpha}(1-p)^{f+\beta-1}}{B(s+\alpha, f+\beta)} \cdot dp $$ $$ E[p|data] = \dfrac{B(s+\alpha + 1, f+\beta)}{B(s+\alpha, f+\beta)} $$
Great! We can continue simplifying this since we know a property of the beta function:
$$ B(s + 1, f) = B(s, f) \cdot \dfrac{s} {s + f} $$
Which implies:
$$ E[p|data] = \dfrac{B(s+\alpha, f+\beta)}{B(s+\alpha, f+\beta)} \cdot \dfrac{s+\alpha}{s+\alpha + f+\beta} $$
$$ E[p|data] = \dfrac{s+\alpha}{s+\alpha + f+\beta} $$
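To sanity-check the closed form, here is a small Python sketch (the function names are my own; the beta function comes from the identity $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$, and the integral is approximated with a midpoint Riemann sum):

```python
import math

def beta_fn(a, b):
    # Beta function via the gamma function: B(a, b) = Γ(a)Γ(b) / Γ(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def posterior_mean(s, f, alpha, beta):
    # Closed form derived above: E[p | data] = (s + α) / (s + α + f + β)
    return (s + alpha) / (s + alpha + f + beta)

def posterior_mean_numeric(s, f, alpha, beta, n=100_000):
    # Midpoint Riemann sum of ∫ p · p^(s+α-1) (1-p)^(f+β-1) / B(s+α, f+β) dp
    a, b = s + alpha, f + beta
    norm = beta_fn(a, b)
    total = 0.0
    for i in range(n):
        p = (i + 0.5) / n
        total += p * p ** (a - 1) * (1 - p) ** (b - 1) / norm
    return total / n

print(posterior_mean(7, 3, 1, 1))          # 0.666...
print(posterior_mean_numeric(7, 3, 1, 1))  # ≈ 0.6667, matching the closed form
```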
Makes sense so far! Now, let's say we want to choose a really "objective" prior to emulate the frequentist approach, like a uniform distribution. This is equivalent to Beta(1,1). In this case, we would get:
$$ E[p|data;\alpha=1,\beta=1] = \dfrac{s+1}{s + f + 2} $$
This hurts my brain. Even though we're being as objective as we can ("p is equally likely to be anything"), we're getting a different result than the frequentist approach.
On the flip side, if we do the improper prior Beta(0,0), we get the same result as the frequentist approach:
$$ E[p|data;\alpha=0,\beta=0] = \dfrac{s}{s + f} $$
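The two priors can be compared directly with the general formula $(s+\alpha)/(s+\alpha+f+\beta)$. A minimal sketch, using two heads and no tails to make the gap obvious:

```python
# Posterior mean of p under a Beta(α, β) prior, from the derivation above.
def post_mean(s, f, alpha, beta):
    return (s + alpha) / (s + alpha + f + beta)

print(post_mean(2, 0, 1, 1))  # 0.75 -- uniform Beta(1,1) prior still hedges
print(post_mean(2, 0, 0, 0))  # 1.0  -- improper Beta(0,0) reproduces the MLE 2/2
```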
What makes Beta(0,0) more similar to the frequentist approach than Beta(1,1)? What assumption are we making in Beta(1,1)? How is Beta(0,0) "less informed"?
The difference lies in the prior expectation: the pure frequentist doesn't make any.
Suppose I show you a coin. Before I even toss it, what would you expect the bias to be?
The pure frequentist approach says that the probability is completely indeterminate: $0/0$. The approach only uses sample data and cannot make claims without it. Divide-by-zero error.
The Bayesian approach says that we already expect the probability to be $1/2$ because we've made assumptions about how coins work.
Next I toss the coin just once and it comes up heads. The pure frequentist now expects that the coin will have a probability of heads of $1/1$, while the Bayesian adjusts the expectation of the probability to be $2/3$.
Toss the coin again and it comes up heads: the pure frequentist persists in measuring the probability as $1/1$, while the Bayesian adjusts it to $3/4$. And so on.
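That sequence of updates is just the Beta(1,1) posterior-mean formula applied after each toss. A minimal sketch:

```python
# Posterior mean under the uniform Beta(1,1) prior ("Laplace's rule of
# succession"): after each head the estimate moves from 1/2 toward 1.
def laplace(s, f):
    return (s + 1) / (s + f + 2)

print(laplace(0, 0))  # 0.5  -- prior expectation, before any tosses
print(laplace(1, 0))  # ≈ 0.667 after one head
print(laplace(2, 0))  # 0.75 after two heads
```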
The pure frequentist always assumes that the sample data is exactly representative of the population of coin tosses, because that's all it can do, while the Bayesian applies a 'correction factor' based on the prior assumptions that can be made about the population.
We could make better prior assumptions if we had more information about coin manufacture or, say, the likelihood of me tampering with the coin. Assuming a uniform distribution of bias is fairly naïve.
Of course, as we take more and more samples the two approaches should converge towards an agreement that I used a two-headed coin, or something.
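A small simulation illustrates that convergence (the true bias 0.9 and the seed are arbitrary choices for illustration):

```python
import random

# As flips accumulate, the MLE s/(s+f) and the Laplace estimate
# (s+1)/(s+f+2) become practically indistinguishable.
random.seed(0)
true_p = 0.9
s = f = 0
for _ in range(100_000):
    if random.random() < true_p:
        s += 1
    else:
        f += 1

mle = s / (s + f)
laplace = (s + 1) / (s + f + 2)
print(mle, laplace)  # both close to 0.9, differing by only ~1e-5
```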