Does an approximation algorithm exist for maximizing non-monotone submodular functions that may take negative values or be unbounded below?
Fact 1: For monotone submodular functions, Nemhauser, Wolsey, and Fisher proved [1] that the simple greedy algorithm gives a $(1-1/e)$ approximation guarantee under a cardinality constraint.
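For concreteness, here is a minimal sketch of that greedy algorithm under a cardinality constraint $|S| \le k$; the value-oracle interface `f(S)` and the names are illustrative, not taken from [1]:

```python
def greedy(f, V, k):
    """Classic greedy for monotone submodular maximization under |S| <= k.

    f: value oracle, f(S) returns the value of a set S (monotone submodular)
    V: ground set (any iterable of hashable elements)
    k: cardinality budget
    Returns a set S with f(S) >= (1 - 1/e) * f(OPT).
    """
    S = set()
    for _ in range(k):
        # pick the element with the largest marginal gain f(S + e) - f(S)
        best = max((e for e in V if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S
```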
Fact 2: For non-monotone, non-negative submodular functions, Buchbinder et al. proved [2] that a randomized greedy algorithm gives a $1/e$ approximation under a cardinality constraint.
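Here is a minimal sketch of the Random Greedy idea from [2], again assuming a value oracle; the dummy elements from the paper (which let the algorithm skip elements with negative marginal gain) are emulated here with `None` slots:

```python
import random

def random_greedy(f, V, k):
    """Sketch of Random Greedy [2] for a non-negative, possibly
    non-monotone submodular f under |S| <= k.
    Returns S with E[f(S)] >= (1/e) * f(OPT).
    """
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in V if e not in S}
        # top-k elements by marginal gain, keeping only positive gains;
        # unfilled slots act as zero-gain dummies (choosing one adds nothing)
        top = sorted((e for e in gains if gains[e] > 0),
                     key=gains.get, reverse=True)[:k]
        slots = top + [None] * (k - len(top))
        pick = random.choice(slots)
        if pick is not None:
            S.add(pick)
    return S
```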
There has been a recent line of work on algorithms for submodular optimization when the objective may take negative values. So far, the setting we can handle is when the submodular objective decomposes into the sum of a non-negative monotone submodular function and an arbitrary modular function. The optimization problem is $$ \max_{S \in \mathcal{I}} g(S) - c(S) \enspace,$$ where $g:2^V \rightarrow \mathbb{R}_+$ is a non-negative monotone submodular function, $c:2^V \rightarrow \mathbb{R}$ is a modular function, and $\mathcal{I} \subseteq 2^V$ is the family of independent sets of a matroid on the ground set $V$. Note that if $c$ takes positive values, then the objective $f = g - c$ may be non-monotone and may take negative values.

The first approximation result was given by Sviridenko, Vondrák, and Ward in 2014, who showed that a continuous extension and rounding approach yields a set $S$ such that $$ g(S) - c(S) \geq (1 - e^{-1})\, g(OPT) - c(OPT) \enspace. $$ They applied this result to obtain tight approximation guarantees for monotone submodular maximization and monotone supermodular minimization with bounded curvature. They also showed that this guarantee is tight, in the sense that no algorithm making polynomially many queries to $g$ and $c$ can do better.

Then, in 2018, Feldman gave an algorithm that also goes through the continuous domain but is more efficient, as it bypasses a certain "guessing step" of the earlier algorithm.

Most recently, Harshaw, Feldman, Karbasi, and Ward (2019) gave a series of much faster algorithms that avoid the continuous extension entirely; instead, they greedily optimize a "distorted" surrogate objective that changes throughout the algorithm (see the sketch below). Their results, however, apply only when the modular function is non-negative and the constraint is a cardinality constraint. They also extended the analysis to show that when $g$ is $\gamma$-weakly submodular, one obtains the analogous guarantee $$ g(S) - c(S) \geq (1 - e^{-\gamma})\, g(OPT) - c(OPT) \enspace, $$ and this extended guarantee is tight in the value oracle model.
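To make the "distorted surrogate objective" concrete, here is a minimal sketch of that idea; the interface (a value oracle `g` and a per-element cost dict `c`) is an assumption of this sketch, and the paper contains faster stochastic variants:

```python
def distorted_greedy(g, c, V, k):
    """Sketch of the distorted greedy idea of Harshaw et al. (2019) for
    max g(S) - c(S) under |S| <= k, with g monotone submodular (value
    oracle) and c a non-negative modular cost, here a per-element dict.
    Guarantee: g(S) - c(S) >= (1 - 1/e) * g(OPT) - c(OPT).
    """
    S = set()
    for i in range(k):
        # the "distortion" down-weights g in early iterations, so cheap
        # elements with modest gain are not crowded out by costly ones
        w = (1.0 - 1.0 / k) ** (k - i - 1)

        def distorted_gain(e):
            return w * (g(S | {e}) - g(S)) - c[e]

        best = max((e for e in V if e not in S), key=distorted_gain)
        # add the element only if its distorted marginal profit is positive
        if distorted_gain(best) > 0:
            S.add(best)
    return S
```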
As notwatson pointed out, there is a hardness result of Uriel Feige showing that even deciding whether $f(S) \geq 0$ for some $S$ is NP-hard. So the decomposition into a non-negative monotone submodular function and a modular function really is crucial in all of these algorithms. I don't know how much further one can push the decomposition approach.