A long-time interest of mine has been trying to determine whether there is some "natural" criterion by which we can characterize various interpolants of functions that are at first defined only for integer or positive-integer inputs, such as exponentiation and the factorial function, which generalize to the exponential and gamma functions. The ultimate aim is to create an explicit and efficient series formula for the interpolation of tetration
$$^n a = \underbrace{a^{a^{a^{\cdots^a}}}}_\text{$n$ copies of $a$}$$
to noninteger values of $n$, in some suitably natural manner. But before we can get there, it seems more profitable to first look at simpler cases, and one of the simplest such cases seems to be interpolating discrete sums. For example, consider
$$f(n) = \sum_{k=1}^{n} k.$$
As written, this definition makes no sense for, say, $f\left(\frac{1}{2}\right)$. However, it is not hard to see that this can be converted into a "closed form" that, even better, doubles as an interpolative extension:
$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$
which allows us to expand to real $x$ "lengths" via
$$\sum_{k=1}^{x} k := \frac{x(x+1)}{2}, \quad x \in \mathbb{R}$$
(or even $x \in \mathbb{C}$!) And then we can find
$$\sum_{k=1}^{1/2} k = f\left(\frac{1}{2}\right) = \frac{3}{8}.$$
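As a quick sanity check, here is a minimal Python sketch (the function name `triangular` is my own) confirming that the closed form reproduces the discrete sum at the integers and yields $3/8$ at $x = \frac{1}{2}$:

```python
from fractions import Fraction

def triangular(x):
    """Closed-form interpolant x(x+1)/2 of the sum 1 + 2 + ... + n."""
    return x * (x + 1) / 2

# At positive integers the closed form reproduces the discrete sum exactly:
for n in range(1, 20):
    assert triangular(Fraction(n)) == sum(range(1, n + 1))

# ...and it happily accepts non-integer input:
print(triangular(Fraction(1, 2)))  # 3/8
```

Using `Fraction` keeps the arithmetic exact, so the value $3/8$ comes out on the nose rather than as a float.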
Another example where we can find such a sum is in the case of exponential functions. Consider
$$\sum_{k=0}^{n-1} 2^k$$
for positive integer $n$. It is easy to see this sums to
$$\sum_{k=0}^{n-1} 2^k = 2^n - 1$$
which lets us find that
$$\sum_{k=0}^{1/2} 2^k\ \text{"should be"}\ \sqrt{8} - 1 \approx 1.8284.$$
And of course for arbitrary $b$, we have the geometric series
$$\sum_{k=0}^{n-1} b^k = \frac{b^n - 1}{b - 1}.$$
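The same kind of check works for the geometric case. A small Python sketch (names are my own), noting that an upper limit of $\frac{1}{2}$ corresponds to $x = \frac{3}{2}$ in the closed form:

```python
import math

def geom_sum(b, x):
    """Interpolant (b**x - 1)/(b - 1) of sum_{k=0}^{x-1} b**k."""
    return (b ** x - 1) / (b - 1)

# Agrees with the discrete geometric sum at positive integer x:
for n in range(1, 15):
    assert math.isclose(geom_sum(2.0, n), sum(2.0 ** k for k in range(n)))

# Upper limit 1/2 means x - 1 = 1/2, i.e. x = 3/2:
print(geom_sum(2.0, 1.5))  # 1.8284271... = sqrt(8) - 1
```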
However, what about more general functions $f$? The fundamental problem is that, strictly speaking, these interpolants are not unique: most generally, we can "connect the dots" with any curves we like. Even if we impose some "natural" restriction, say that
$$\sum_{k=0}^{x} f(k) = f(x) + \sum_{k=0}^{x-1} f(k)$$
it isn't enough: the above merely reduces that freedom to an arbitrary 1-periodic perturbation. Yet, "for some intuitive reason", the above interpolants "seem just right": they are simple, comparable to the functions they came from, and suitably "graceful", while other interpolants are necessarily more "wiggly" owing to the 1-periodic perturbation just mentioned. But is there something rigorous that both singles them out over all others and allows us to do similar interpolations on a very general range of functions $f$, with such interpolations considered their analogues? Or, to make it concrete: what uniquely distinguishes $\frac{x(x+1)}{2}$ from, say, $\frac{x(x+1)}{2} + \sin(2\pi x) - \pi^{\gamma \Gamma\left(\frac{5}{3}\right)} \cos(4\pi x)$ as an interpolant of that particular sum, and what is the analogous property that singles out $2^x - 1$ (a translate of $2^x$) as the interpolant of the geometric sum?
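To make the non-uniqueness concrete, here is a small Python sketch (the amplitude $0.3$ and the particular perturbation are arbitrary choices of mine, purely for illustration): any 1-periodic perturbation vanishing at the integers yields another interpolant satisfying the same recurrence.

```python
import math

def F(x):
    """The "nice" interpolant of 1 + 2 + ... + n."""
    return x * (x + 1) / 2

def F_wiggly(x):
    """Same values at integers, same recurrence, but "wiggly" in between."""
    return F(x) + 0.3 * math.sin(2 * math.pi * x)  # arbitrary 1-periodic term

# Both satisfy the recurrence F(x) = F(x - 1) + x for every real x:
for x in [0.1, 0.5, 1.7, 3.25]:
    assert math.isclose(F(x), F(x - 1) + x)
    assert math.isclose(F_wiggly(x), F_wiggly(x - 1) + x)

# And since sin(2*pi*n) = 0, the two agree at every integer n:
for n in range(5):
    assert math.isclose(F_wiggly(n), F(n))
```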
(Started as a comment, but became too long for one.)
I don't know that there exists a universal "natural" criterion for such choices; rather, there are case-by-case rationales for why a certain choice may be more suitable in a given context.
Taking $\,f(n) = \sum_{k=1}^{n} k\,$ for example, and as noted in the post, the general solution of the functional equation $\,f(x) = f(x-1) + x\,$ is $\,f(x) = \frac{x(x+1)}{2} + c(x)\,$ where $\,c(x)\,$ is any periodic function of period $\,1\,$.
The simplest choice would of course be $\,c(x) \equiv 0\,$.
The $\,3^{rd}\,$ order finite differences of $\,f(n)\,$ are zero, so it would make sense to look for a continuation with identically zero $\,3^{rd}\,$ derivative $\,f'''(x) \equiv 0\,$, which requires $\,c(x)\,$ to be constant: $\,c''' \equiv 0\,$ forces $\,c\,$ to be a polynomial of degree at most $\,2\,$, and a periodic polynomial must be constant.
$\,f(n)\,$ is a polynomial, so it would make sense to look for a polynomial $\,f(x)\,$, which also requires $\,c(x)\,$ to be constant, since the only periodic polynomials are the constant ones.
None of the above requirements is mandatory, of course, yet each of them justifies choosing $\,c(x)\,$ to be a constant, and matching the values at the integers then forces $\,c(x) \equiv 0\,$.
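The finite-difference observation above is easy to verify numerically; a minimal Python sketch:

```python
def f(n):
    """Triangular numbers f(n) = n(n+1)/2."""
    return n * (n + 1) // 2

def diff(seq):
    """First finite differences of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

vals = [f(n) for n in range(10)]   # [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
d3 = diff(diff(diff(vals)))        # third finite differences
print(d3)  # [0, 0, 0, 0, 0, 0, 0]
```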
There are cases, however, where the choice is not as clear-cut. Take for example $\,f(n) = n!\,$. There are (at least) two continuations to the real (or complex) domain: the well-known Gamma function (which is the only log-convex solution to the factorial recursion, per the Bohr–Mollerup theorem), and the lesser-known Hadamard gamma function (which, unlike $\,\Gamma\,$, is an entire function). Peter Luschny's page on Hadamard versus Euler makes for good reading on this, and more.
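Both the factorial interpolation and the log-convexity criterion can be spot-checked with the standard library (`math.gamma`, `math.lgamma`); a sketch, with test points chosen arbitrarily:

```python
import math

# Gamma interpolates the factorial: Gamma(n + 1) == n!
for n in range(1, 10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Bohr-Mollerup: Gamma is the unique log-convex solution on (0, inf).
# Midpoint spot check of convexity of log Gamma at a few points:
for a, b in [(0.5, 2.5), (1.0, 4.0), (3.3, 7.7)]:
    assert math.lgamma((a + b) / 2) <= (math.lgamma(a) + math.lgamma(b)) / 2

print(math.gamma(0.5) ** 2)  # approx. pi, since Gamma(1/2) = sqrt(pi)
```

(Hadamard's gamma function is not in the standard library; `mpmath` or a direct implementation of its digamma-based formula would be needed to compare the two continuations numerically.)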